Software To Help You Turn Your Data Into AI
Forget fragmented workflows, annotation tools, and Notebooks for building AI applications. Encord Data Engine accelerates every step of taking your model into production.
What is a Quality Metric?

When you’re working with datasets or developing a machine learning model, you often find yourself looking for or hypothesizing about subsets of data, labels, or model predictions with certain properties.
Quality metrics form the foundation for finding such data and testing the hypotheses.
The core concept is to use quality metrics to index, slice, and analyze the subject in question in a structured way, so that you can take informed actions each time you crank the active learning cycle.
Concrete example: You hypothesize that object "redness" influences the mAP score of your object detection model. To test this hypothesis, you define a quality metric that captures the redness of each object in the dataset. From the quality metric, you slice the data to compare your model performance on red vs. not red objects.
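To make the example concrete, here is a minimal, library-agnostic sketch of what such a "redness" metric could look like as a per-object function. The bounding-box convention and the scoring formula are illustrative assumptions, not Encord Active's built-in implementation.

import numpy as np
from PIL import Image

def object_redness(image_path, bbox):
    """Score how red an object is: mean red channel minus the mean of the
    green and blue channels, shifted and clipped to [0, 1]."""
    # bbox = (left, top, right, bottom) in pixels -- an assumed convention for this sketch.
    image = np.asarray(Image.open(image_path).convert("RGB"), dtype=np.float32) / 255.0
    left, top, right, bottom = bbox
    crop = image[top:bottom, left:right]
    redness = crop[..., 0].mean() - crop[..., 1:].mean()
    return float(np.clip(0.5 + redness, 0.0, 1.0))

With one score per object, you can split the dataset into "red" and "not red" slices and compute mAP separately for each to test the hypothesis.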
The best way to think of a quality metric in computer vision is as any function that assigns a value to individual data points, labels, or model predictions in a dataset.
By design, quality metrics are a very abstract class of functions because the accompanying methodologies are agnostic to the specific properties each metric expresses. No matter the specific quality metric, you can:

- sort your data according to the metric
- slice your data to inspect specific subsets
- find outliers
- compare training data to production data to detect data drifts
- evaluate your model performance as a function of the metric
- define model test cases

and much more. All of these are possible with Encord Active.
Data quality metrics are those that require only information about the data itself. Within the computer vision domain, this means the raw images or video frames without any labels. This subset of quality metrics is used most frequently at the beginning of a machine learning project, when labels are scarce or do not yet exist.
Below are some examples of data quality metrics ranging from simple to more complex:
Image Brightness as a data quality metric on MS COCO validation dataset on Encord.
Image Singularity as a data quality metric on MS COCO validation dataset on Encord.
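Of the two examples above, image brightness is about the simplest data quality metric possible: one number per image, computed from the pixels alone, with no labels required. A minimal sketch (averaging the RGB channels is just one reasonable convention):

import numpy as np
from PIL import Image

def image_brightness(image_path):
    """Mean pixel intensity in [0, 1]."""
    image = np.asarray(Image.open(image_path).convert("RGB"), dtype=np.float32) / 255.0
    return float(image.mean())

# Rank a dataset from darkest to brightest:
# ranked = sorted(image_paths, key=image_brightness)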
Label quality metrics apply to labels. Some use the image content, while others rely only on the label information. Label quality metrics serve many purposes, but the most frequent are surfacing label errors, revealing model failure modes, and assessing annotator performance.
Here are some concrete examples of label quality metrics ranging from simple to more complex:
Object count as a label quality metric on MS COCO validation dataset on Encord.
Annotation Duplicate as a label quality metric on MS COCO validation dataset on Encord.
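As an illustration, a label quality metric like annotation duplication can be approximated by checking how strongly two boxes of the same class overlap. The sketch below uses intersection over union (IoU); the 0.95 threshold is an assumption for illustration, not the value Encord Active uses internally.

def iou(box_a, box_b):
    """Intersection over union for boxes given as (left, top, right, bottom)."""
    left, top = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    right, bottom = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    intersection = max(0, right - left) * max(0, bottom - top)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - intersection
    return intersection / union if union > 0 else 0.0

def is_probable_duplicate(box, same_class_boxes, threshold=0.95):
    """Flag a label that almost exactly overlaps another label of the same class."""
    return any(iou(box, other) >= threshold for other in same_class_boxes)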
Model quality metrics also take the model's predictions into account. The most obvious use case for these metrics is acquisition functions, which answer the question, "What should I label next?" There are many intelligent ways to leverage model predictions to answer it, from simple confidence-based ranking to more specialized scores. Two examples:
Using Model Confidence as a model quality metric on MS COCO validation dataset on Encord. It shows the predictions where the confidence is between 50% and 80%.
Using Polygon Shape Similarity as a model quality metric on MS COCO validation dataset on Encord. It ranks objects by how similar they are to their instances in previous frames based on Hu moments. The more an object’s shape changes, the lower its score will be.
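For reference, the Model Confidence example above belongs to the simplest family of acquisition functions: rank each unlabeled image by how unsure the model is about its least certain prediction, and label the most uncertain images first. A minimal sketch (the prediction format is an assumption for illustration):

def least_confidence(predictions):
    """predictions: list of dicts like {"label": "car", "confidence": 0.7} for one image.
    Returns a score in [0, 1]; higher means more worth labeling next."""
    if not predictions:
        return 1.0  # No detections at all: treat as maximally uncertain.
    return 1.0 - min(p["confidence"] for p in predictions)

# Label the most uncertain images first:
# queue = sorted(unlabeled, key=lambda item: least_confidence(item["predictions"]), reverse=True)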
We have now reviewed some examples of common quality metrics already in Encord Active.
However, every machine learning project is different, and most likely, you already have an idea of exactly what to compute to surface the data you want to evaluate or analyze.
With Encord Active, you only need to define the per-data-point computation. The tool will handle everything from executing the computation to visualizing your data based on your new metric.
You may want to know when your skeleton predictions are occluded, or in which frames of a video specific annotations are missing. You could also get even smarter and compare your labels with the outputs of foundation models like SAM.
These different use cases are situations where you would build your custom metrics.
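Conceptually, a custom metric is nothing more than a function evaluated once per data point. The sketch below shows that shape in plain Python; the class and method names are purely illustrative and are not the actual Encord Active interface, which is described in the documentation linked below.

class AspectRatioMetric:
    """Illustrative per-data-point metric: width divided by height of each image."""

    name = "Aspect ratio (custom)"

    def score(self, image):
        # image: any object exposing .width and .height, e.g. a PIL.Image.
        return image.width / image.height

def run_metric(metric, images):
    """Apply a per-data-point metric to every image; returns {identifier: score}."""
    return {identifier: metric.score(image) for identifier, image in images.items()}

Once per-data-point scores exist, the same machinery applies as for the built-in metrics: sorting, slicing, outlier detection, and performance breakdowns.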
You can find the documentation for writing custom metrics here or you can follow any of the links provided above to specific quality metrics and find their implementation on GitHub.
Quality Metrics constitute the foundation of systematically exploring, evaluating, and iterating on machine learning datasets and models.
With Encord Active, it’s easy to define, execute, and utilize quality metrics to get the most out of your data, models, and annotators. We use them for slicing data, comparing data, tagging data, finding label errors, and much more. The true power of these metrics is that they can be arbitrarily specific to the problem at hand.
Ready to improve the performance and quality metrics of your CV models?
Sign up for an Encord Free Trial: The Active Learning Platform for Computer Vision, used by the world’s leading computer vision teams.
AI-assisted labeling, model training & diagnostics, find & fix dataset errors and biases, all in one collaborative active learning platform, to get to production AI faster. Try Encord for Free Today.
Want to stay updated?
Join the Encord Developers community to discuss the latest in computer vision, machine learning, and data-centric AI.

Related Blogs
Today, organizations are increasingly deploying artificial intelligence (AI) systems in highly sensitive and critical domains, such as medical diagnosis, autonomous driving, and cybersecurity. Reliance on AI models to perform vital tasks has opened up the possibility of large-scale failure with damaging consequences, such as in the event of malicious attacks or compromised infrastructure. AI incidents are growing significantly, reportedly averaging 79 incidents yearly from 2020 to 2023. For instance, Tessa, a healthcare chatbot, reportedly gave harmful advice to people with eating disorders; Tesla’s autonomous car did not recognize a pedestrian on the crosswalk; and Clearview AI’s security system wrongly identified an innocent person as a criminal. These disasters question the efficacy of AI systems and call for developing robust models resistant to vulnerabilities.

So, what is model robustness in AI? And how can AI practitioners ensure that a model is robust? In this article, you will:

- Understand the significance of robustness in AI applications
- Learn about the challenges of building robustness into AI systems
- Learn how Encord Active can help improve the robustness of your ML models

Model Robustness Explained

Model robustness is a machine-learning (ML) model’s ability to withstand uncertainties and perform accurately in different contexts. A model is robust if it performs strongly on datasets that differ from the training data. For instance, in advanced computer vision (CV) and large language models (LLMs), robustness ensures reliable predictions on unseen textual and image data generated from diverse sources. Real-world images can be blurry, distorted, noisy, etc., interfering with a CV model’s prediction performance and causing fatal accidents in safety-critical applications such as self-driving cars and medical diagnosis. Achieving robustness in such models will help mitigate these issues.

However, robustness may not always lead to high accuracy, as accuracy is usually calculated based on how well the model fits on a validation dataset. This means a highly accurate model may not generalize well to entirely new data that was not present in the validation set. The diagram below illustrates the point.

Robustness vs Accuracy

Optimizing a model for robustness may imply lower accuracy and model complexity than required in the case of optimizing for low variance. That’s because robustness aims to create a model that can perform well on novel data distributions that significantly differ from test data.

Significance of Model Robustness

Ensuring model robustness is necessary as we increase our reliance on AI models to perform critical jobs. Below are a few reasons why model robustness is crucial in today’s highly digitalized world.

- Reduces sensitivity to outliers: Outliers can adversely affect the performance of algorithms like regression, decision trees, k-nearest neighbors, etc. Ensuring model robustness will make these models less sensitive to outliers and improve generalization performance.
- Protects models against malicious attacks: Adversarial attacks distort input data, forcing the model to make wrong predictions. For instance, an attacker can change specific images to trick the model into making a classification error. Robustness allows you to build models that can resist such attacks.
- Fairness: Robustness requires training models on representative datasets without bias. This means robust models generate fairer predictions and perform well on data that may contain inherent biases.
- Increases trust: Multiple domains, such as self-driving cars, security, medical diagnosis, business decision-making, etc., rely on AI to perform mission- and safety-critical tasks. Robustness is essential in these areas to maintain high model performance by eliminating the chance of harmful errors.
- Reduces cost of retraining models: In robust models, data variations (distribution shifts) have minimal effect on performance. Hence, retraining is less frequent, reducing the computational resource load required to collect, preprocess, and train on new data.
- Improves regulatory compliance: As data security and AI fairness laws become more stringent, data science teams must ensure regulatory compliance to avoid costly fines. Robust models help here: they mitigate the effects of adversarial attacks by maintaining stable performance when faced with attempts to exploit model vulnerabilities, and they perform optimally on new data, reducing data collection needs and the chances of a data breach.

Now that we understand the importance of model robustness, let’s explore how you can achieve it in your ML pipelines.

How to Achieve Model Robustness?

Making machine learning models robust involves several techniques to ensure strong performance on unseen data for diverse use cases. The following sections discuss the factors that contribute significantly to achieving model robustness.

Data Quality

High data quality enables efficient model training by ensuring the data is clean, diverse, consistent, and accurate. As such, models can quickly learn underlying data patterns and perform well on unseen samples without exhibiting bias, leading to higher robustness. Automated data pipelines are necessary to improve data quality as they help with data preprocessing to bring raw data into a usable format. The pipelines can include statistical checks to assess diversity and ensure the training data’s representativeness of the real-world population. Moreover, data augmentation, which artificially increases the training set by modifying input samples in a particular way, can also help reduce model overfitting. The illustration below shows how augmentation works in CV.

Examples of Data Augmentation

Lastly, the pipeline must include a rigorous data annotation process, as model performance relies heavily on label quality. Labeling errors can cause the model to generate incorrect predictions and become vulnerable to adversarial attacks. A clear annotation strategy with detailed guidelines and a thorough review process by domain experts can help improve the labeling workflow. Using active learning and consensus-based approaches such as majority voting can also boost quality by ensuring consistent labels across samples.

{{light_callout_start}} Want to know how to increase data quality? Have a look at Mastering Data Cleaning and Data Preprocessing. {{light_callout_end}}

Adversarial Training

Adversarial robustness makes a model resistant to adversarial attacks. Such attacks often involve small perturbations to input data, causing the model to generate incorrect output. The attacker may also aim to steal or copy the model by understanding its inner workings.

Types of adversarial attacks

Adversarial attacks consist of multiple methodologies, such as:

- Evasion attacks involve perturbing inputs to cause incorrect model predictions. For instance, the fast gradient sign method (FGSM) is a popular perturbation technique that adds the sign of the loss function’s gradient to modify an input instance.
- Poisoning attacks occur when an adversary directly manipulates the input by changing labels or injecting harmful data into the training set.
- Model inversion attacks aim to reconstruct the training data samples using a target classifier. Such attacks can cause serious privacy breaches, as attackers can discover sensitive data samples used to train a particular model.
- Model extraction attacks occur when adversaries query a model’s Application Programming Interface (API) to collect output samples and create a synthetic dataset. The adversary can use the fake dataset to train another model that copies the functionality of the original learning algorithm.

Let’s explore some prominent techniques to prevent these adversarial attacks.

Robustness and model security

AI practitioners can use various techniques to prevent adversarial attacks and make models more robust. The following are a few options:

- Adversarial training: This method involves training models on adversarial examples to prevent evasion attacks.
- Gradient masking: Building ML models that do not rely on gradients, such as k-nearest neighbors, can prevent attacks that use gradients to perturb inputs.
- Data cleaning: This simple technique helps prevent poisoning attacks by ensuring that training data does not contain malicious examples or samples with incorrect labels.
- Outlier detection: Identifying and removing outliers can also help make models robust to poisoning attacks.
- Differential privacy: Differential privacy adds noise to data during model training, making it challenging for an attacker to extract information about a specific individual.
- Data encryption: Techniques like homomorphic encryption allow you to train models on encrypted data and prevent breaches.
- Output perturbation: You can avoid data leakage by adding noise to a deep learning model’s output.
- Watermarking: You can add outliers to your data by including watermarks in your input data. The model overfits these outliers, allowing you to identify a replica of your model.

{{Training_data_CTA}}

Domain Adaptation

With domain adaptation, you can tailor a model to perform well on a target domain with limited labeled data, using knowledge from another source domain with sufficient data. For instance, you can have a classifier model that correctly classifies land animal images (source domain). You can then use domain adaptation techniques to fine-tune the model so it also classifies marine animals (target domain). This way, you can improve the model’s generalization performance for new classes and increase its robustness.

Domain Adaptation Illustration

Moreover, domain adaptation techniques make your model robust to domain shifts, which occur when underlying data distributions change. For instance, differences between training and validation sets indicate a domain shift. You can broadly categorize domain adaptation as follows:

- Supervised, semi-supervised, and unsupervised domain adaptation: In supervised domain adaptation, the data in the target domain is completely labeled. In semi-supervised domain adaptation, only a few data samples have labels, while in unsupervised domain adaptation, no labels exist in the target domain.
- Heterogeneous and homogeneous domain adaptation: In heterogeneous domain adaptation, the target and source feature spaces are different, while they are the same in homogeneous domain adaptation.
- One-step and multi-step domain adaptation: In one-step domain adaptation, you can directly transfer the knowledge from the source to the target domain due to the similarity between the two. In multi-step adaptation, you introduce additional knowledge transfer steps to smooth the transition; multi-step techniques help when the target and source domains differ significantly.

Lastly, domain adaptation techniques include feature-based learning, where deep learning models learn domain-invariant features and use that knowledge to make predictions on the target domain. Other methods map the source domain to the target domain using generative adversarial networks (GANs); the technique works by learning to map a source image to another domain using a target domain label.

Regularization

Regularization helps prevent your model from overfitting and makes it more robust by reducing the generalization error.

The Effect of Regularization on the Model

Common regularization techniques include:

- Ridge regression: In ridge regression, you add a penalty to the loss function that equals the sum of the squares of the weights.
- Lasso regression: In lasso regression, the penalty term is the sum of the absolute values of the weights.
- Entropy: The penalty term equals the entropy of the output distribution.
- Dropout: You can use the dropout technique in neural networks to randomly turn off or drop nodes and layers to reduce model complexity and improve generalization.

Explainability

Explainable AI (XAI) is a recent concept that allows you to understand how a machine learning system behaves and enhances model interpretability.

Explainable Model vs. Black Box Model Illustration

XAI techniques help make a model robust by letting you see the inner workings of a model and identify and fix any biases in its decision-making process. XAI includes the following techniques:

- SHAP: Shapley Additive Explanations (SHAP) computes Shapley values for features to represent their importance in a particular prediction.
- LIME: Local Interpretable Model-agnostic Explanations (LIME) perturbs input data and analyzes the effect on the output to compute feature importance.
- Integrated gradients: This technique establishes feature importance by integrating gradients of the model’s output with respect to the input features.
- Permutation importance: You can evaluate a feature’s importance by randomly shuffling its values and observing the effect on a particular performance metric, such as F1-score, precision, recall, etc.
- Partial dependence plot: This plot shows the marginal effect of features on a model’s output. It helps interpret whether the feature and the output have a simple or more complex relationship.

Evaluation Strategies

Model evaluation techniques help increase a model’s robustness by allowing you to assess performance and quickly identify issues during model development. While traditional evaluation metrics, such as the F1-score, precision, recall, etc., let you evaluate the performance of simple models against established benchmarks, more complex methods are necessary for modern LLMs and other foundation models. For instance, you can evaluate an LLM’s output using various automated scores, such as BLEU, ROUGE, CIDEr, etc., and complement that with human feedback for a more robust assessment. In contrast, intersection-over-union (IoU), panoptic quality, mean average precision (mAP), etc., are some common methods for evaluating CV models.
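To make the evaluation idea concrete for robustness specifically, the sketch below compares a classifier's accuracy on clean inputs against the same inputs with a mild perturbation applied. It is a minimal illustration; the `model` callable, the image format (float arrays in [0, 1]), and the Gaussian-noise perturbation are assumptions, not a prescribed protocol.

import numpy as np

def accuracy(model, images, labels):
    # model(image) is assumed to return a predicted class label.
    predictions = [model(image) for image in images]
    return float(np.mean([pred == label for pred, label in zip(predictions, labels)]))

def robustness_gap(model, images, labels, noise_std=0.05):
    """Accuracy on clean images minus accuracy on noisy copies.
    A large gap suggests the model is sensitive to this perturbation."""
    noisy = [np.clip(img + np.random.normal(0.0, noise_std, img.shape), 0.0, 1.0) for img in images]
    return accuracy(model, images, labels) - accuracy(model, noisy, labels)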
{{light_callout_start}} Learn more about model evaluation by reading our comprehensive guide on Model Test Cases: A Practical Approach to Evaluating ML Models. {{light_callout_end}}

Challenges of Model Robustness

While model robustness is essential for high performance, maintaining it is significantly challenging. The list below mentions some issues you can encounter when building robust models:

- Data volume and variety: Modern data comes from multiple sources in high volumes. Preprocessing these extensive datasets demands robust data pipelines and expert staff to identify issues during the collection phase.
- Increased model complexity: Recent advancements in natural language processing and computer vision modeling call for more sophisticated explainability techniques to understand how models process input data.
- Feature volatility: Model decay is a recurrent issue in dynamic domains with frequent changes in feature distribution. Keeping track of these distributional shifts calls for complex monitoring infrastructure.
- Evaluation methods: Developing the perfect evaluation strategy is tedious, as you must consider several factors, such as the nature of a model’s output, ground-truth availability, the need for domain experts, etc.

Achieving Model Robustness with Encord Active

You can mitigate the above challenges by using an appropriate ML platform like Encord Active that helps you increase model robustness through automated evaluation features and development tools.

Encord Active

Encord Active automatically identifies labeling errors and boosts data quality through relevant quality metrics and vector embeddings. It also helps you debug models through comprehensive explainability reports, robustness tests, and model error analysis. In addition, the platform features active learning pipelines to help you identify data samples that are crucial for your model and streamline the data curation process.

Evaluate the Quality of the Data

You can use Encord Active to improve the quality of your data and, subsequently, enhance the robustness of vision models through several key features. Encord Active offers data exploration, label exploration, similarity search, quality metrics (both off-the-shelf and custom), data and label tagging, image duplication detection, label error detection, and outlier detection. It supports various data types and labels and integrates seamlessly with Encord Annotate.

Data curation workflow: The platform supports curating images using embeddings and quality metrics to find low-quality data you may not want your model to learn from, or low-quality samples you might want to test your model on. Here is an example using the Embeddings View within Encord Active to surface images that are too bright from the COCO 2017 dataset. You can also explore the embedding plots and filter the images by a quality metric like "Area" for instances where you might want to find the largest or smallest images in your set, among other off-the-shelf or custom quality metrics.

Finding and flagging label errors: Within Encord Active, you can surface duplicate labels that could cause overfitting or lead to misleading high performance metrics during training and validation, because the model may recognize repeated instances rather than learn generalizable patterns. After identifying such images, you can add them to a “Collection” and send them to Encord Annotate for re-labeling or removing the duplicates.
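To give a feel for how duplicate detection of this kind can work under the hood, here is a library-agnostic sketch that flags near-duplicate images by cosine similarity of their embeddings. The embedding source and the 0.98 threshold are assumptions for illustration; this is not Encord Active's internal implementation.

import numpy as np

def near_duplicate_pairs(embeddings, threshold=0.98):
    """embeddings: dict mapping image id -> 1-D feature vector (e.g., from a pretrained CNN).
    Returns (id_a, id_b, similarity) for pairs whose cosine similarity exceeds the threshold."""
    ids = list(embeddings)
    matrix = np.stack([embeddings[i] / np.linalg.norm(embeddings[i]) for i in ids])
    similarity = matrix @ matrix.T
    pairs = []
    for a in range(len(ids)):
        for b in range(a + 1, len(ids)):
            if similarity[a, b] >= threshold:
                pairs.append((ids[a], ids[b], float(similarity[a, b])))
    return pairs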
{{gray_callout_start}} 💡 Recommended: Exploring the Quality of Hugging Face Image Datasets with Encord Active. {{gray_callout_end}}

Evaluating Model Quality

Encord Active also allows you to determine which metrics influence your model's performance the most. You can import your model’s predictions to get a 360° view of your model's quality across performance metrics and data slices. You can also inspect each metric's impact on your model's performance. This helps you better understand how the model performs across metrics like the diversity of the data, label duplicates, brightness, and so on.

These features collectively ensure that data quality is significantly improved, contributing to the development of more robust and accurate vision models. The focus on active learning and the ability to handle various stages of the data and model lifecycle make Encord Active a comprehensive tool for improving data quality in computer vision applications.

{{light_callout_start}} Interested in learning more about Encord Active? Check out the documentation. {{light_callout_end}}

Model Robustness: Key Takeaways

Building robust models is the only way to leverage AI’s full potential to boost profitability. A few important things to remember about model robustness are:

- A robust model can handle uncertain real-world scenarios appropriately and increase trust in the AI system.
- Achieving model robustness can mean slightly compromising accuracy to reduce generalization error.
- Ensuring model robustness helps you prevent adversaries from stealing your model or data.
- Improved data quality, domain adaptation techniques, and regularization (which reduces generalization error) can all contribute to model robustness.
- Model explainability is essential for building robust models, as it helps you understand a model’s behavior in detail.
- A specialized ML platform can help you overcome model robustness challenges such as increased model complexity and feature volatility.

{{Active_CTA}}
Computer vision engineers, data scientists, and machine learning engineers face a pervasive issue: the prevalence of low-quality images within datasets. You have likely encountered this problem through incorrect labels, varied image resolutions, noise, and other distortions. Poor data quality can lead to models learning incorrect features, misclassifications, and unreliable or incorrect outputs. In a domain where accuracy and reliability are paramount, this issue can significantly impede the progress and success of projects, resulting in wasted resources and extended timelines.

Take a look at the following image collage of Chihuahuas and muffins, for example:

Chihuahua or muffin? My search for the best computer vision API

How fast could you tell which images are Chihuahuas vs. muffins? Fast? Slow? Were you correct in 100% of the images? I passed the collage to GPT-4V because, why not? 😂 And as you can see, even the best-in-class foundation model misclassified some muffins as Chihuahuas! (I pointed out a few.)

So, how do you make your models perform better? The sauce lies in a systematic approach to exploring, evaluating, and fixing the quality of images. Enter Encord Active! It provides a platform to identify and tag problematic images, and offers features to improve the dataset's quality.

This article will show you how to use Encord Active to explore images, visualize potential issues, and take next steps to rectify low-quality images. In particular, you will:

- Use a dog-food dataset from the Hugging Face Datasets library.
- Delve into the steps of creating an Encord Active project.
- Define and run quality metrics on the dataset.
- Visualize the quality metrics.
- Indicate strategies to fix the issues you identified.

Ready? Let’s delve right in! 🚀

Using Encord Active to Explore the Quality of Your Images

The Encord Active toolkit helps you find and fix wrong labels through data exploration, model-assisted quality metrics, and one-click labeling integration. It takes a data-centric approach to improving model performance. With Encord Active, you can:

- Slice your visual data across metrics functions to identify data slices with low performance.
- Flag poor-performing slices and send them for review.
- Export your new data set and labels.
- Visually explore your data through interactive embeddings, precision/recall curves, and other advanced visualizations.

Check out the project on GitHub, and hey, if you like it, leave a 🌟🫡.

Demo: Explore the quality of 'dog' and 'food' images for ML models

In this article, you will use Encord Active to explore the quality of the `sasha/dog-food` images. You’ll access the dataset through the Hugging Face Datasets library. You can use this dataset to build a binary classifier that categorizes images into the "dog" and "food" classes. The 'dog' class has images of canines that resemble fried chicken and some that resemble muffins, and the 'food' class has images of, you guessed it, fried chicken and muffins.

The complete code is hosted on Colab. Open the Colab notebook side by side with this blog post.

{{light_callout_start}} Interested in more computer vision, visual foundation models, active learning, and data quality notebooks?
Check out the Encord Notebook repository. {{light_callout_end}}

Use Hugging Face Datasets to Download and Generate the Dataset

Whatever machine learning, deep learning, or AI task you are working on, the Hugging Face Datasets library provides easy access to, sharing of, and processing of datasets, particularly those catering to the audio, computer vision, and natural language processing (NLP) domains. The 🤗 Datasets library backs the datasets with an on-disk cache that is memory-mapped for quick lookups.

Explore the Hugging Face Hub for the datasets directory

You can browse and explore over 20,000 datasets housed in the library on the Hugging Face Hub. The Hub is a centralized platform for discovering and choosing datasets pertinent to your projects. In the search bar at the top, enter keywords related to the dataset you're interested in, e.g., "sentiment analysis," "image classification," etc. You should be able to:

- Filter datasets by domain, license, language, and so on.
- Find information such as the size, download count, and download link on the dataset card.
- Engage with the community by contributing to discussions, providing feedback, or suggesting improvements to the dataset.

Load the ‘sasha/dog-food’ dataset

Loading the `sasha/dog-food` dataset is pretty straightforward: install the 🤗 Datasets library and download the dataset. To install Hugging Face Datasets, run the following command:

pip install datasets

Use the `load_dataset` function to load the 'sasha/dog-food' dataset from Hugging Face:

from datasets import load_dataset, concatenate_datasets

dataset_dict = load_dataset('sasha/dog-food')

`load_dataset` returns a dictionary object (`DatasetDict`). You can iterate through the train and test split keys in the `DatasetDict` object. The keys map to a `Dataset` object containing the images for that particular split.

You will explore the entire dataset rather than the separate splits. This should provide a comprehensive understanding of the data distribution, characteristics, and potential issues. To do that, merge the different splits into a single dataset using the `concatenate_datasets` function:

dataset = concatenate_datasets([d for d in dataset_dict.values()])

Perfect! Now you have an entire dataset to explore with Encord Active in the subsequent sections. If you have not done so already, create a dataset directory to store the downloaded images:

import shutil
from pathlib import Path

# Create a new directory "huggingface_dataset" in the current working dir
huggingface_dataset_path = Path.cwd() / "huggingface_dataset"

# Delete the dir if it already exists and recreate it
if huggingface_dataset_path.exists():
    shutil.rmtree(huggingface_dataset_path)
huggingface_dataset_path.mkdir()

Use a loop to iterate through the images in the ‘sasha/dog-food’ dataset and save them to the directory you created:

from tqdm import tqdm

for counter, item in tqdm(enumerate(dataset)):
    image = item['image']
    image.save(f'./huggingface_dataset/{counter}.{image.format}')

If your code throws errors, make sure you run the cells in the Colab notebook in the correct order.

Super! You have prepared the groundwork for exploring your dataset with Encord Active.

Create an Encord Active Project

You must specify the directory containing your datasets when using Encord Active for exploration. You will initialize a local project with the image files; there are different ways to import and work with projects in Encord. Encord Active provides functions and utilities to load all your images, compute embeddings, and, based on those, evaluate the embeddings using pre-defined metrics. The metrics will help you search for and find images with errors or quality issues.
Before initializing the Encord Active project, define a function, `collect_all_images`, that takes a root folder path as input and returns a list of `Path` objects representing the image files within the `huggingface_dataset_path` directory:

def collect_all_images(root_folder: Path) -> list[Path]:
    image_extensions = {".jpg", ".jpeg", ".png", ".bmp"}
    image_paths = []
    for file_path in root_folder.glob("**/*"):
        if file_path.suffix.lower() in image_extensions:
            image_paths.append(file_path)
    return image_paths

Remember to access and run the complete code in this cell.

Initialize Encord Active project

Next, initialize a local project using Encord Active's `init_local_project` function. This function provides the same functionality as running the `init` command in the CLI. If you prefer using the CLI, please refer to the “Quick import data & labels” guide.

try:
    project_path: Path = init_local_project(
        files=image_files,
        target=projects_dir,
        project_name="sample_ea_project",
        symlinks=False,
    )
except ProjectExistsError as e:
    project_path = Path("./sample_ea_project")
    print(e)  # A project already exists with that name at the given path.

Compute image embeddings and analyze them with metrics

Analyzing raw image data directly in computer vision can often be impractical due to the high dimensionality of images. A common practice is to compute embeddings for the images to compress the dimensions, then run metrics on these embeddings to glean insights and evaluate the images. Ideally, you compute the embeddings using pre-trained (convolutional neural network) models. The pre-trained models capture the essential features of the images while reducing the data dimensionality. Once you obtain the embeddings, you can run similarity, clustering, and classification metrics to analyze different aspects of the dataset.

Computing embeddings and running metrics on them can take quite a bit of manual effort. Enter Encord Active! Encord Active provides utility functions to run predefined subsets of metrics, or you can import your own sets of metrics. It computes the image embeddings and runs the metrics by the type of embeddings. Encord Active has three different types of embeddings:

- Image embeddings - general embeddings for each image or frame in the dataset
- Classification embeddings - associated with specific frame-level classifications
- Object embeddings - associated with specific objects, like polygons or bounding boxes

Use the `run_metrics_by_embedding_type` function to execute quality metrics on the images, specifying the embedding type as `IMAGE`:

run_metrics_by_embedding_type(
    EmbeddingType.IMAGE,
    data_dir=project_path,
    use_cache_only=True
)

The `use_cache_only=True` parameter tells Encord Active to use only cached data when executing the metrics rather than recomputing values or fetching fresh data. This can be a useful feature for saving computational resources and time, especially when working with large datasets or expensive computations.

Create a `Project` object using the `project_path`; you will use this for further interactions with the project:

ea_project = Project(project_path)

Exploring the Quality of Images from the Hugging Face Datasets Library

Now that you have set up your project, it’s time to explore the images! There are typically two ways to visualize images with Encord Active (EA):

- Through the web application (Encord Active UI)
- Combining EA with visualization libraries to display the embeddings based on the metrics

We’ll use the latter in this article.
You will import helper functions and modules from Encord Active along with visualization libraries (`matplotlib` and `plotly`). This code cell contains the list of modules and helper functions.

Pre-defined subset of metrics in Encord Active

Next, iterate through the data quality metrics in Encord Active to see the list of available metrics, accessing the name attribute of each metric object to construct a list of names:

[metric.name for metric in available_metrics]

You should get a similar output:

There are several quality metrics to explore, so let’s define helper functions to visualize the embeddings.

Helper functions for displaying images and visualizing the metrics

Define the `plot_top_k_images` function to plot the top k images for a metric:

def plot_top_k_images(metric_name: str, metrics_data_summary: MetricsSeverity, project: Project, k: int, show_description: bool = False, ascending: bool = True):
    metric_df = metrics_data_summary.metrics[metric_name].df
    metric_df.sort_values(by='score', ascending=ascending, inplace=True)
    for _, row in metric_df.head(k).iterrows():
        image = load_or_fill_image(row, project.file_structure)
        plt.imshow(image)
        plt.show()
        print(f"{metric_name} score: {row['score']}")
        if show_description:
            print(f"{row['description']}")

The function sorts the DataFrame of metric scores, iterates through the top `k` images in your dataset, loads each image, and plots it using Matplotlib. It also prints the metric score and, optionally, the description of each image. You will use this function to plot all the images based on the metrics you define.

Next, define a `plot_metric_distribution` function that creates a histogram of the specified metric scores using Plotly:

def plot_metric_distribution(metric_name: str, metrics_data_summary: MetricsSeverity):
    fig = px.histogram(metrics_data_summary.metrics[metric_name].df, x="score", nbins=50)
    fig.update_layout(title=f"{metric_name} score distribution", bargap=0.2)
    fig.show()

Run the function to visualize the score distribution based on the “Aspect Ratio” metric:

plot_metric_distribution("Aspect Ratio", metrics_data_summary)

Most images in the dataset have aspect ratios close to 1.5, roughly following a normal distribution. The set has only a few images with extremely small or large proportions.

Use EA’s `create_image_size_distribution_chart` function to plot the size distribution of your images:

image_sizes = get_all_image_sizes(ea_project.file_structure)
fig = create_image_size_distribution_chart(image_sizes)
fig.show()

As you probably expected for an open-source dataset for computer vision applications, there is a dense cluster of points in the lower-left corner of the graph, indicating that many images have smaller resolutions, mostly below 2000 pixels in width and height. A few points are scattered further to the right, indicating images with a much larger width but not necessarily a proportional increase in height. These could be panoramic images or images with unique aspect ratios. You’ll identify such images in subsequent sections.

Inspect the Problematic Images

What are the severe and moderate outliers in the image set? You might also need insights into the distribution and severity of outliers across various imaging attributes, such as green values, blue values, area, etc. Use the `create_outlier_distribution_chart` utility to plot image outliers based on all the available metrics in EA.
The outliers are categorized into two levels: "severe outliers" (represented in red, “tomato”) and "moderate outliers" (represented in orange):

available_metrics = load_available_metrics(ea_project.file_structure.metrics)
metrics_data_summary = get_metric_summary(available_metrics)
all_metrics_outliers = get_all_metrics_outliers(metrics_data_summary)

fig = create_outlier_distribution_chart(all_metrics_outliers, "tomato", "orange")
fig.show()

Here’s the result:

"Green Values," "Blue Values," and "Area" appear to be the most susceptible to outliers, while attributes like "Random Values on Images" have the fewest in the ‘sasha/dog-food’ dataset. This primarily means there are lots of images with abnormally high green and blue tints, which could be due to the cameras' white-balance settings or low-quality sensors. If your model trains on this set, more balanced images may well perturb its performance.

What are the blurry images in the image set?

Depending on your use case, you might discover that blurry images can deter your model. A model trained on clear images and then tested or used on blurry ones may not perform well. If the blur could lead to misinterpretations and errors with significant consequences, you might want to explore the blurry images to remove or enhance them.

plot_top_k_images('Blur', metrics_data_summary, ea_project, k=5, ascending=False)

Based on a "Blur score" of -9.473 calculated by Encord Active, here is the output with one of the five blurriest images:

What are the darkest images in the image set?

Next, surface images with poor lighting or low visibility. Dark images can indicate quality issues. These could result from poor lighting during capture, incorrect exposure settings, or equipment malfunctions. A model might also struggle to recognize patterns in such images, which could reduce accuracy. Identify and correct these images to improve the overall training data quality.

plot_top_k_images('Brightness', metrics_data_summary, ea_project, k=5, ascending=True)

The resulting image reflects a low brightness score of 0.164:

What are the duplicate or nearly similar images in the set?

Image singularity, in the context of image quality, is when images have unique or atypical characteristics compared to most images in a dataset. Duplicate images can highlight potential issues in the data collection or processing pipeline. For instance, they could result from artifacts from a malfunctioning sensor or a flawed image processing step. In computer vision tasks, duplicate images can disproportionately influence the trained model, especially if the dataset is small. Identify and address these images to improve the robustness of your model. Use the “Image Singularity” metric to determine the score and the images that are near duplicates:

plot_top_k_images('Image Singularity', metrics_data_summary, ea_project, k=15, show_description=True)

Here, you can see two nearly identical images with similar “Image Singularity” scores. The tiny difference between the singularity scores of the two images (0.01299857 for the left and 0.012998693 for the right) shows how similar they are. Check out other similar or duplicate images by running this code cell.

Awesome! You have played with a few pre-defined quality metrics. See the complete code to run other data quality metrics on the images.

Next Steps: Fixing Data Quality Issues

Identifying problematic images is half the battle.
Ideally, the next step is to act on those insights and fix the issues. Encord Active (EA) can help you tag problematic images, which may otherwise skew model performance downstream. Post-identification, you can employ various strategies to rectify these issues. Below are some ways to fix problematic image issues.

Tagging and annotation

Once you identify the problematic images, you can tag them within EA. One of the most common workflows we see from our users at Encord is identifying image quality issues at scale with Encord Active, tagging problematic images, and sending them upstream for annotation with Annotate.

Re-labeling

Incorrect labels can significantly hamper model performance. EA facilitates the re-labeling process by exporting the incorrectly labeled images to an annotation platform like Encord Annotate, where you can correct the labels.

Active learning

Use active learning techniques to improve the quality of the dataset iteratively. You can establish a continuous improvement cycle by training the model on good-quality datasets and then evaluating it on low-quality datasets to suggest which data to improve.

Active learning (encord.com)

{{light_callout_start}} Check out our practical guide to active learning for computer vision to learn more about active learning, its tradeoffs, alternatives, and a comprehensive explanation of active learning pipelines. {{light_callout_end}}

Image augmentation and correction

Image augmentation techniques enhance the diversity and size of the dataset to improve model robustness. Consider augmenting the data using techniques like rotation, scaling, cropping, and flipping. Some images may require corrections like brightness adjustment, noise reduction, or other image processing techniques to meet the desired quality standards.

Image quality is not a one-time task but a continuous process. Regularly monitoring and evaluating your image quality will help maintain a high-quality dataset, which is pivotal for achieving superior model performance.

Key Takeaways

In this article, you defined the objective of training a binary classification model for your use case. Technically, you “gathered” human labels, since the open 'sasha/dog-food' dataset was already labeled on Hugging Face. Finally, using Encord Active, you computed image embeddings, ran metrics on the embeddings, and inspected the problematic images by exploring the dataset based on objective quality metrics. Identifying and fixing the errors in the dataset will set up your downstream model training and ML application for success.

If you are interested in exploring this topic further, there’s an excellent article from Aliaksei Mikhailiuk that describes the task of image quality assessment in three stages:

- Define an objective
- Gather the human labels for your dataset
- Train objective quality metrics on the data

{{Active_CTA}}