How to Onboard 100s of Annotators for High-Quality Labels

Nikolaj Buhl
March 12, 2023
5 min read

Introduction

Over the last 2 years, we have helped hundreds of computer vision companies onboard and train thousands of annotators for their data labeling projects. The main takeaway? It’s a time-consuming and tedious process.

The same questions kept popping up:

  • “How do we ensure the annotators deliver high-quality labels?”  
  • “What is the most efficient way to onboard them onto annotation projects?”
  • “How long does it typically take for annotators to be qualified and ready to start labeling training data?”
  • "Should we be retraining our annotators?"

Our answer was often "It depends", which wasn't very satisfying to us or to our customers. That's why, over the last year, our team has been hard at work building the Annotator Training platform we wish existed!

Onboarding and training new annotators can be a daunting task, especially when dealing with complex datasets and specific use cases. But with Encord's Annotator Training Module, you can streamline the process, provide clear and concise training materials, and measure annotator performance before allowing them to annotate images that your models are trained on.

Accurate labeling ensures that your models can properly identify and classify objects. However, producing high-quality labels is challenging, especially when dealing with large datasets.

In this article, we will explore how you can onboard annotators using the Annotator Training Module to improve both the speed at which your annotators work and the quality of the labels they produce.


Why High-Quality Labels are Critical for Machine Learning Models

As you know, machine learning models rely on high-quality training data to make accurate predictions, and thus decisions. In computer vision applications, the quality of the training data depends on the quality of the annotations. Annotations are labels applied to images or videos to identify objects, regions, or other features of interest. For example, in an image of a street scene, annotations may include the locations of vehicles, pedestrians, and traffic signs, as well as classifications for the time of day, weather, or action taking place in the image.
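To make that concrete, here is what a single annotation might look like as a simple data record. This is a hypothetical, simplified structure for illustration only, not Encord's label format.

```python
# A simplified, hypothetical annotation record for the street-scene example.
# Field names are illustrative only and do not reflect Encord's label format.
annotation = {
    "image": "street_scene_001.jpg",
    "objects": [
        {"label": "vehicle", "bbox": {"x": 0.12, "y": 0.40, "w": 0.25, "h": 0.18}},
        {"label": "pedestrian", "bbox": {"x": 0.55, "y": 0.35, "w": 0.08, "h": 0.30}},
        {"label": "traffic_sign", "bbox": {"x": 0.80, "y": 0.10, "w": 0.05, "h": 0.12}},
    ],
    "classifications": {"time_of_day": "dusk", "weather": "rain", "action": "crossing"},
}
```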

Inaccurate or inconsistent annotations can lead to incorrect predictions and decisions, which can have serious consequences further down the line when you deploy your models to production.

To ensure high-quality annotations, it is essential to have well-trained and experienced annotators who follow best practices and guidelines. 

However, onboarding and training thousands of annotators can be a challenge, especially when dealing with ever-changing personnel, complex domains, and different use cases.

Existing Practices for Annotator Onboarding

Traditional methods for annotator onboarding typically involve providing annotators with written guidelines and instructions, and then relying on them to apply those guidelines consistently.

However, this approach can quickly lead to variations in annotation quality and inconsistencies between annotators.

Another common approach is to have a small group of expert annotators perform the annotations and then use their annotations as a ground truth library for your annotators to refer to. The downside of this approach is that it is expensive, time-consuming, and doesn't scale very well.

To address these challenges, a growing number of companies are turning to specialized annotation tools that help ensure consistency and quality in the annotation training process. These tools provide a more structured and efficient way to onboard new annotators.

Be aware, though, that with the majority of these tools it can still be difficult to efficiently onboard and train your annotators. That’s where Encord’s Annotator Training Module comes in.

Measuring Annotation Quality

I think we can agree that high-quality annotation is critical for the success of your computer vision models. Therefore, measuring the quality of annotations is an essential step in ensuring that the data is reliable, accurate, and unbiased. In this section, we will discuss the importance of measuring annotation quality and the different methods used to assess it. Skip ahead if you want to read about the Annotator Training Module.

Overview of Different Methods to Measure Annotation Quality

There are different methods to measure the quality of annotations. Some of the most common methods are:

  • Benchmark IOU: The degree of agreement between two labels of the same object, measured with intersection-over-union (IOU) scores. IOU measures the overlap between the bounding boxes created by different annotators, or between an annotator's box and a benchmark box. The higher the IOU score, the greater the agreement (see the sketch after this list).
  • Accuracy: Accuracy measures the proportion of annotations that are correctly labeled. It is calculated by dividing the number of correctly labeled annotations by the total number of annotations.
  • Ground truth benchmark: The last approach is to have a small group of expert annotators perform the annotations and then use their annotations as the ground truth to benchmark quality against. Ground truth benchmark labels are the most reliable method for measuring annotation quality, but they can be time-consuming to create.
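To make the Benchmark IOU and accuracy metrics concrete, here is a minimal Python sketch that computes the IOU between an annotator's bounding box and a benchmark box, along with a simple accuracy ratio over a batch of classifications. It is tool-agnostic and illustrative only; the (x, y, width, height) box format with normalized coordinates is an assumption.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x, y, width, height)."""
    ax1, ay1, ax2, ay2 = box_a[0], box_a[1], box_a[0] + box_a[2], box_a[1] + box_a[3]
    bx1, by1, bx2, by2 = box_b[0], box_b[1], box_b[0] + box_b[2], box_b[1] + box_b[3]

    # Overlapping region (zero if the boxes do not intersect)
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    intersection = inter_w * inter_h

    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - intersection
    return intersection / union if union > 0 else 0.0


def accuracy(submitted_labels, ground_truth_labels):
    """Share of annotations whose class label matches the ground truth."""
    correct = sum(s == g for s, g in zip(submitted_labels, ground_truth_labels))
    return correct / len(ground_truth_labels) if ground_truth_labels else 0.0


# Example: an annotator's box vs. the expert benchmark box
print(iou((0.10, 0.20, 0.30, 0.30), (0.12, 0.22, 0.30, 0.30)))      # ~0.77
print(accuracy(["vehicle", "pedestrian"], ["vehicle", "cyclist"]))  # 0.5
```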


Comparison of Different Methods

Each method for measuring annotation quality has its strengths and weaknesses.


Benchmark IOU is a good measure of the degree of agreement between annotations, but it can be affected by the size and shape of the object being annotated.

Accuracy is a good measure of the proportion of annotations that are correct, but it does not take into account the degree of agreement between annotators.

Ground truth benchmark labels are the most reliable method for measuring annotation quality, but they can be time-consuming to create.

Encord’s Annotator Training Module combines all three methods into one and automates the evaluation process (Benchmark IOU is, of course, only applicable to tasks with bounding boxes, polygons, or segmentations).

Introducing Encord's Annotator Training Module

The Annotator Training Module has been designed to integrate seamlessly into your existing data operations workflows. The module can be customized to meet the specific needs and requirements of each use case and project, with the ability to adjust the evaluation score for each project.

With the Annotator Training Module, onboarding and evaluating annotators becomes a breeze. The module is designed to ensure that annotators receive the proper training and support they need to produce high-quality annotations consistently.

The module includes the option to add annotator training instructions directly in the UI. Such instructions can range from detailed guidance on how to use the annotation tool to best practices for specific annotation tasks.

You can customize the training instructions according to your specific use cases and workflows, making it easier for your annotators to understand the project's requirements and guidelines.

Your Data Operations team (or you) can monitor the performance of your annotators and identify areas for improvement.

Step-by-Step Guide on How to Use the Module to Onboard Annotators

Using Encord's Annotator Training Module is a straightforward and easy process. Here is a step-by-step guide on how to use the module to onboard annotators:

If you want to view the full guide, with a video and examples, see our documentation.

Step 1: Upload Data

First, you upload the data to Encord and create a new dataset. This dataset will contain the data on which the ground truth labels are drawn. To do this, you need to choose the appropriate dataset for your specific use case. Once the dataset is chosen, it needs to be uploaded to the annotation platform, either by selecting the files from your local folder or by uploading them via your cloud bucket.
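If you prefer to script this step, the sketch below shows roughly how it could look with the Encord Python SDK. Treat it as a sketch under assumptions: the SSH key path, dataset title, and file name are placeholders, and the exact method names and signatures should be checked against the current SDK reference, since they can change between versions.

```python
from pathlib import Path

from encord import EncordUserClient
from encord.orm.dataset import StorageLocation

# Authenticate with the SSH key registered on your Encord account
# (the key path below is a placeholder).
private_key = Path("~/.ssh/encord_private_key").expanduser().read_text()
user_client = EncordUserClient.create_with_ssh_private_key(private_key)

# Create a dataset hosted on Encord storage.
created = user_client.create_dataset(
    "Annotator training - ground truth data", StorageLocation.CORD_STORAGE
)

# Attach a local image to the new dataset (file name is a placeholder).
dataset = user_client.get_dataset(created.dataset_hash)
dataset.upload_image("flowers_0001.jpg")
```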

Step 2: Set up Benchmark Project

The next step in the process is to set up a benchmark project. The benchmark project is used to evaluate the quality of the annotations created by the annotators, so it is important to set it up correctly to ensure that the evaluation is accurate and reliable. To set up the benchmark project, you need to create a new standard project. Once the project is created, an ontology needs to be defined. The ontology is a set of rules and guidelines that dictate how the annotations should be created. This ensures consistency across all annotations and makes it easier to evaluate their quality.
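As a concrete illustration, an ontology for the example used in this guide might specify which objects annotators may draw, with which shapes, and which classification questions they must answer per image. The structure below is a hypothetical, simplified representation, not Encord's ontology format; in practice you define the ontology in the project settings or via the SDK.

```python
# A hypothetical, simplified ontology definition -- not Encord's internal format.
# It captures the idea: allowed objects and their shapes, plus the
# classification questions annotators must answer for each image.
ontology = {
    "objects": [
        {"name": "Flower", "shape": "bounding_box"},
        {"name": "Leaf", "shape": "polygon"},
    ],
    "classifications": [
        {"name": "Time of day", "options": ["morning", "midday", "evening"]},
        {"name": "In focus", "options": ["yes", "no"]},
    ],
}
```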

Step 3: Create Ground Truth Labels

After the benchmark project is set up, it is time to create the ground truth labels. This can be done manually or programmatically. The ground truth labels are the labels that will be used to evaluate the accuracy of the annotations created by the annotators. 

Manually creating the ground truth labels involves having subject matter experts use the annotation app to annotate data units, for example by drawing bounding boxes around the flowers in the dataset. Alternatively, you can use the SDK to programmatically upload labels that were generated outside Encord, as sketched below.
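The snippet below sketches the programmatic route using the Encord SDK's label row API: it fetches a label row from the benchmark project, adds a bounding box instance for a "Flower" object defined in the ontology, and saves it. The project hash, object title, and coordinates are placeholders, and the class and method names should be verified against the SDK documentation for the version you are using.

```python
from pathlib import Path

from encord import EncordUserClient
from encord.objects.coordinates import BoundingBoxCoordinates

private_key = Path("~/.ssh/encord_private_key").expanduser().read_text()
user_client = EncordUserClient.create_with_ssh_private_key(private_key)
project = user_client.get_project("<benchmark-project-hash>")  # placeholder hash

# Take the first data unit in the benchmark project and load its labels.
label_row = project.list_label_rows_v2()[0]
label_row.initialise_labels()

# Look up the "Flower" object from the ontology and create an instance of it.
flower_object = label_row.ontology_structure.get_child_by_title(title="Flower")
flower_instance = flower_object.create_instance()
flower_instance.set_for_frames(
    coordinates=BoundingBoxCoordinates(
        top_left_x=0.25, top_left_y=0.30, width=0.20, height=0.25  # normalized
    ),
    frames=0,
)

# Attach the box to the label row and persist it as a ground truth label.
label_row.add_object_instance(flower_instance)
label_row.save()
```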

Step 4: Set up and Assign Training Projects

Once the ground truth labels are created, it is time to set up and assign a training project that uses the same ontology. After the training project is created, the scoring functions need to be set up. These assign scores to the annotator submissions and determine the relative weights of the different components of the annotations.

With the module set up, you can now invite annotators to participate in the training. Encord provides a pool of trained annotators that can be added to your project, or you can invite your own annotators. Once the annotators have been added to the project, they will be provided with the training tasks to complete.
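To give a feel for how the scoring functions mentioned above fit together, here is a conceptual sketch: per-task component scores (for example, Benchmark IOU against the ground truth boxes and classification accuracy) are combined into a single number using configurable relative weights. This is not the module's actual implementation; the component names and weights are assumptions you would tune per project.

```python
# Conceptual sketch of a weighted annotator score -- not Encord's implementation.
# Each component score is assumed to lie in [0, 1].
SCORE_WEIGHTS = {"benchmark_iou": 0.7, "classification_accuracy": 0.3}


def overall_score(component_scores, weights=SCORE_WEIGHTS):
    """Weighted average of the component scores for a single training task."""
    total_weight = sum(weights.values())
    return sum(weights[name] * component_scores[name] for name in weights) / total_weight


# Example: strong bounding boxes, weaker classifications
print(overall_score({"benchmark_iou": 0.82, "classification_accuracy": 0.60}))  # 0.754
```

Adjusting the weights lets you reflect what matters most for the project; for a segmentation-heavy use case, for instance, you might weight geometric agreement more heavily than the classifications.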

Step 5: Annotator Training

With the training project set up and the scoring functions assigned, it is time to train the annotators using the assigned tasks. Each annotator will see the labeling tasks assigned to them and how many tasks are left. The progress of the annotators can be monitored by the admin of the training module. This allows the admin to see the performance of the annotators as they progress through the training and to evaluate their overall score at the end.

Step 6: Evaluate Annotator Performance

After the annotators have completed their assigned tasks, it is time to evaluate their performance using the scoring function. 

This function assigns scores to the annotations created by the annotators and calculates the overall score. If necessary, modifications can be made to the scoring function to adjust the relative weights of different components of the annotations.

This ensures that the scoring function accurately reflects the importance of each component and that the overall score accurately reflects the quality of the annotations. Finally, the annotators can be provided with feedback on their performance and given additional training if necessary.
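To close the loop, evaluation can be thought of as applying a pass threshold to each annotator's overall score and flagging anyone below it for feedback and retraining. Again, this is a conceptual sketch rather than the module's built-in behavior: the threshold, scores, and decision rule below are made up, and in the module itself you configure the evaluation score per project.

```python
# Conceptual sketch of the evaluation step -- threshold and scores are made up.
PASS_THRESHOLD = 0.80

annotator_scores = {"annotator_a": 0.91, "annotator_b": 0.74, "annotator_c": 0.83}

for annotator, score in sorted(annotator_scores.items()):
    status = "qualified" if score >= PASS_THRESHOLD else "needs retraining"
    print(f"{annotator}: {score:.2f} -> {status}")
```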

Conclusion

Annotating large datasets is a complex and time-consuming process, but it is a crucial step in developing high-quality machine learning models. Without accurate and consistent annotations, machine learning algorithms will produce inaccurate or unreliable results.

Encord's Annotator Training Module provides a powerful solution for data operation teams and computer vision engineers who need to onboard thousands of annotators quickly and efficiently. With the module, you can ensure that your annotators receive the proper training and support they need to produce high-quality annotations consistently.

Want to stay updated?

  • Follow us on Twitter and LinkedIn for more content on computer vision, training data, and active learning.
  • Join the Slack community to chat and connect.

Written by Nikolaj Buhl
Nikolaj is a Product Manager at Encord and a computer vision enthusiast. At Encord he oversees the development of Encord Active. Nikolaj holds an M.Sc. in Management from London Business School and Copenhagen Business School. In a previous life, he lived in China working at the Danish Embassy.