
How to Find and Fix Label Errors with Encord Active

December 19, 2022 | 8 mins

Introduction

Are you trying to improve your model performance by finding label errors and correcting them? 

You’re probably spending countless hours manually debugging your datasets, hunting for data and label errors with one-off scripts in Jupyter Notebooks.

Encord Active, a new open-source active learning framework, makes it easy to find and fix label errors in your computer vision datasets. With Encord Active, you can quickly identify label errors and fix them with just a few clicks. Plus, with a user-friendly UI and a range of visualizations for slicing your data, Encord Active makes it easier than ever to investigate and understand the failure modes of your computer vision models.

When to use Encord Active

In this guide, we will show you how to use Encord Active to find and fix label errors in the COCO validation dataset.

Before we begin, let us quickly recap the three types of label errors in computer vision.

Label errors in computer vision

Incorrect labels in your training data can significantly impact the performance of your computer vision models. While it's possible to manually identify label errors in small datasets, it quickly becomes impractical when working with large datasets containing hundreds of thousands or millions of images. It’s basically like finding a needle in a haystack.

  


The three types of labeling errors in computer vision are:

  • Mislabeled objects: An object that has the wrong class attached to it.
  • Missing labels: An object that should have been labeled but has no label at all.
  • Inaccurate labels: A label that is too tight, too loose, or overlaps with other objects.

Below you can see examples of the three types of errors on a Bengal tiger: 

Label errors

Tip! If you’d like to read more about label errors, we recommend you check out Data errors in Computer Vision.

How to find label errors with a pre-trained model

As your computer vision activities mature, you can use a trained model to spot label errors in your data annotation pipelines. You will need to follow a simple 4-step approach:

  1. Run a pre-trained model on your newly annotated samples to obtain model predictions. 
  2. Visualize your model predictions and ground truth labels on top of each other.
  3. Sort for high-confidence false positive predictions and compare them with the ground truth labels.
  4. Flag missing or wrong labels and send them for re-labeling.

Tip! It is important that the computer vision model you use to get predictions has not been trained on the newly annotated samples you are investigating; otherwise, it will tend to reproduce the very labels you want to audit.
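To make steps 1–3 concrete, here is a minimal sketch, assuming torchvision 0.13+, that runs the same kind of pre-trained Mask R-CNN used later in this guide and flags high-confidence predictions that either match no ground-truth box (a possible missing label) or disagree with the matched box's class (a possible wrong label). The flag_label_errors helper, its thresholds, and the shape of the ground-truth inputs are illustrative assumptions, not Encord Active's API:

import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    maskrcnn_resnet50_fpn_v2,
    MaskRCNN_ResNet50_FPN_V2_Weights,
)
from torchvision.ops import box_iou

weights = MaskRCNN_ResNet50_FPN_V2_Weights.DEFAULT
model = maskrcnn_resnet50_fpn_v2(weights=weights).eval()
preprocess = weights.transforms()

def flag_label_errors(image_path, gt_boxes, gt_labels,
                      score_thresh=0.75, iou_thresh=0.5):
    # Step 1: obtain predictions from the pre-trained model.
    image = preprocess(read_image(image_path))
    with torch.no_grad():
        pred = model([image])[0]

    # Steps 2-3: compare high-confidence predictions with ground truth.
    flags = []
    keep = pred["scores"] > score_thresh
    for box, label, score in zip(
        pred["boxes"][keep], pred["labels"][keep], pred["scores"][keep]
    ):
        ious = box_iou(box.unsqueeze(0), gt_boxes).squeeze(0)
        if ious.numel() == 0 or ious.max() < iou_thresh:
            flags.append(("possible missing label", label.item(), score.item()))
        elif gt_labels[ious.argmax()] != label:
            flags.append(("possible wrong class", label.item(), score.item()))
    return flags  # step 4: send flagged samples for re-labeling

Here gt_boxes is an (N, 4) tensor of ground-truth boxes in (x1, y1, x2, y2) format and gt_labels the matching class ids; how you load them depends on your annotation format.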

How to fix label errors with Encord Active

Getting started

The sandbox dataset used in this example is the COCO validation dataset combined with model predictions from a pre-trained Mask R-CNN ResNet50 FPN v2 model. The sandbox dataset with labels and predictions can be downloaded directly from Encord Active.

Tip! The quality of your model greatly impacts how effectively it identifies label errors: the better the model, the more accurate its predictions. So be sure to select your model carefully to get the best results. 

First, we install Encord Active using pip:

$ pip install encord-active 

Hereafter, we download a sandbox dataset:

$ encord-active download
Loading prebuilt projects ...
[?] Choose a project: [open-source][validation]-coco-2017-dataset (1145.2 mb)
 > [open-source][validation]-coco-2017-dataset (1145.2 mb)
   [open-source]-covid-19-segmentations (55.6 mb)
   [open-source][validation]-bdd-dataset (229.8 mb)
   quickstart (48.2 mb)

Downloading sandbox project: 100%|################################################| 1.15G/1.15G [00:22<00:00, 50.0MB/s]
Unpacking zip file. May take a bit.
╭───────────────────────────── 🌟 Success 🌟 ─────────────────────────────╮
│                                                                         │
│     Successfully downloaded sandbox dataset. To view the data, run:     │
│                                                                         │
│  cd "C:/path/to/[open-source][validation]-coco-2017-dataset" │
│  encord-active visualise                                                │
│                                                                         │
╰─────────────────────────────────────────────────────────────────────────╯

Lastly, we launch the Encord Active app:

cd "[open-source][validation]-coco-2017-dataset"
$ encord-active visualise


In the UI, we navigate to the false positive page. A false positive prediction is one where the model assigns the wrong class to an object, or where the predicted box's IoU with the ground truth falls below the chosen threshold. For example, if a model is trained to recognize tigers and mistakenly identifies a cat as a tiger, that would be a false positive prediction.
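To make that definition concrete, here is a toy check (illustrative only, not Encord Active code) that marks a prediction as a false positive when the class is wrong or the box overlap is too low:

def iou(box_a, box_b):
    # Boxes as (x1, y1, x2, y2); returns intersection-over-union in [0, 1].
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def is_false_positive(pred_cls, gt_cls, pred_box, gt_box, iou_thresh=0.5):
    return pred_cls != gt_cls or iou(pred_box, gt_box) < iou_thresh

# Right class but a loose box (IoU = 0.36) still counts as a false positive:
print(is_false_positive("bus", "bus", (0, 0, 100, 100), (40, 40, 100, 100)))  # True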

Next, we select the metric “Model confidence” and filter for predictions with >75% confidence.

Encord Active - False positives

Using the UI, we can then sort by the highest-confidence false positives to find images with possible label errors.

In the example below, we can see that the model has surfaced four missing labels in the selected image: a backpack, a handbag, and two people. The predictions are marked with purple bounding boxes.

False positive predictions - missing labels

As all four predictions are correct, the label errors can be sent straight back to the label editor to be corrected immediately.

Encord label editor

Similarly, we can use the false positive predictions to find mislabeled objects and send them for re-labeling in the label editor. The vehicle below is predicted, with 99.4% confidence, to be a bus but is currently mislabeled as a truck.

False positive prediction - wrong label

Using Encord’s label editor, we can quickly correct the label.

Encord label editor

To find and fix any remaining incorrect labels in the dataset, we simply repeat this process until we are satisfied.

If you're curious about identifying label errors in your own training data, you can try using Encord Active, the open-source active learning framework. Simply upload your data, labels, and model predictions to get started.

Conclusion

  • Finding and fixing label errors is a tedious manual process that can take countless hours. It is often done by sifting through one image at a time or by writing one-off scripts in Jupyter notebooks.
  • The three label error types are 1) mislabeled objects, 2) missing labels, and 3) inaccurate labels.
  • The easiest way to find and fix label errors is to run a pre-trained model on your dataset and inspect its high-confidence false positive predictions. 

Want to test your own models?

"I want to get started right away" - You can find Encord Active on Github here.

"Can you show me an example first?" - Check out this Colab Notebook.

"I am new, and want a step-by-step guide" - Try out the getting started tutorial.

If you want to support the project, you can help us out by giving it a star on GitHub :)

Written by Nikolaj Buhl