How to Build the Ultimate Automated Data Labeling Workflow Using Superb AI

James Kim

Growth Manager | 2023/1/10 | 10 min read

Optimizing and streamlining the data labeling process is a goal for many practitioners. Though it may seem straightforward, it involves many steps and factors that must be taken into account. Part 3 covered resolving labeling roadblocks, reviewing labels, building a standard dataset, and exporting it for automation. In this section, we'll explore the technology underpinning the Superb AI Suite and the most effective strategies and procedures for your machine learning workflow.

In Part 4 we discuss:

  • Custom Auto-Label and Auto-Label AI

  • Building Your Own Custom Auto-Label

  • Applying Custom Automation to Your Ground Truth

  • Uncertainty Estimation and Active Learning

  • Assigning Your Team Review Tasks

  • Performance Metrics and Understanding the Results

  • Retraining Your Model

Automated Labeling

Automated labeling and custom automation use pre-trained models to identify specific objects and classes within a dataset. With only a small portion of the data labeled for training, the model can apply what it has learned to the remaining data, reducing the need for manual annotation and cutting labeling time. Integrating automation into computer vision workflows saves hours of manual annotation and quality control and lowers cost. Superb AI combines automation with active learning to simplify your team's project and produce better results.
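To make the idea concrete, here is a minimal sketch of pre-labeling with scikit-learn rather than the Superb AI platform itself; the stand-in embeddings, the three classes, and the 0.8 confidence threshold are all illustrative assumptions.

```python
# Pre-labeling sketch: train on a small labeled subset, then propose
# labels for the remaining unlabeled pool. Illustrative only; this is
# not the Superb AI API.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))      # stand-in image embeddings
y = rng.integers(0, 3, size=1000)    # three hypothetical object classes

labeled = slice(0, 100)              # only 10% labeled manually
unlabeled = slice(100, None)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X[labeled], y[labeled])

proba = model.predict_proba(X[unlabeled])
pre_labels = proba.argmax(axis=1)    # proposed labels for human review
confidence = proba.max(axis=1)
auto_accepted = confidence >= 0.8    # keep only confident proposals
print(f"auto-labeled {auto_accepted.sum()} of {len(pre_labels)} items")
```

Everything below the confidence threshold would go back to human annotators, which is exactly the division of labor the rest of this section builds on.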

Custom Auto-Label and Auto-Label: What’s the Difference?

Superb AI's system comprises two distinct automation techniques: Custom Auto-Label (CAL) and Auto-Label. Both apply labels to an unlabeled dataset by learning from pre-labeled data. Which method to use depends on the project and the object classes involved.

Custom Auto-Label

Customers often work with specialized datasets, such as medical imaging or plant disease detection, that require outside domain expertise. In such circumstances, generic automated labeling will not suffice. Instead, Superb AI's CAL draws on that relevant skill and experience to create a unique automated labeling model tailored to the individual use case.

Using a pre-labeled ground truth dataset, project administrators can train a model, apply it to batches of unlabeled data, evaluate its performance, and make adjustments until the model meets the desired standard. The advantages of using CAL in your labeling workflow are as follows:

  • Customization: CAL is easily adaptable to your use case and can be tweaked and adjusted by your team as they see fit.

  • Edge cases: For machine learning models to be successful, unique examples and edge cases must be taken into consideration. Using CAL, you can be sure to incorporate edge cases that will help improve your model.

  • Adaptability: New labels can be added to your CAL as your team continues to train your model, boosting performance.

Note: Custom Auto-Label is only available through the purchase of a team plan. To speak to one of our sales representatives about a product demonstration, fill out our contact form at https://www.superb-ai.com/contact.

Creating your CAL

With the Superb AI Suite, building your own CAL is straightforward; however, make sure you are acquainted with the process of creating a data export, covered in the preceding part of this tutorial. To recap, data exports consist of manually labeled training and validation sets that are then used to train your model. Now, let us consider how to build a Custom Auto-Label.

For a step-by-step approach on building your CAL, consider these two methods:

Approach 1: Create and Name a New Dataset

1. Navigate to your Label Exports tab.

2. Choose “Create Custom Auto-Label AI.”

3. Name your CAL. We recommend being very specific and including the names of your datasets, so that you can keep track of how well your model performs with each iteration. Because our first CAL consists of ground truth and validation sets, we suggest calling it GT1 + VAL.

4. Wait for Superb AI to create your CAL; this typically takes 45 minutes to an hour.

Approach 2: Export a Labeled Dataset

1. Choose “Custom Auto-Label” from your project sidebar. A list of your exports will appear to the right of the sidebar.

2. Select “Create Custom Auto-Label” on the top right of your screen.

3. Your export history will appear, and you will be prompted to choose a labeled dataset.

4. Name your dataset, again keeping the name specific.

5. Click “Confirm” and your CAL will be created.

The initial iteration of your CAL will be assessed by its performance against your validation set. We suggested naming the validation set __VALIDATION__ in the previous segment of this series; because the system recognizes it as a fixed validation set, it will compare each iteration against it.

With this control as a starting point, you will be able to tell how your CAL's performance improves over time. Your team will also be offered useful information, including precision, recall, and expected efficiency gains, which can be seen by clicking the small arrow at the bottom of the export.

Applying your Custom Auto-Label

If you'd like to tweak the classes and parameters of your CAL before applying it, you can do so in the Auto-Label settings, found in the sidebar on the left side of your screen. Note that this step is optional: many users simply click the "Apply" button without making any changes.

In Auto-Label settings, you can arrange your classes through Auto-Label Mapping. This lets you select the categories you want to keep and exclude certain elements, which is convenient when you only want to work with particular classes or sub-categories.

When ready, pick the images you want the auto-label applied to. From your label list, choose the test set images that have not yet been labeled, then click the auto-label option in the right-hand corner. You'll be asked to confirm the auto-label settings and apply them to the selected images. Once auto-labeling is finished, your team can review the results.

Auto-Label AI

Our auto-label tool is set up with more than 100 frequently used objects and categories that can be added to your dataset. It's mainly suitable for projects that involve common objects, like people and vehicles. Instead of having to develop your own auto-labeler, you can use our auto-label AI immediately to make your workflow simpler. Custom Auto-Label, on the other hand, is made to work with your own data and identified classes.

Not every project is limited to one method or the other. Superb AI lets you combine Auto-Label and Custom Auto-Label to get the best results for your project.

Active Learning and Uncertainty Estimation

Once a CAL has been applied to a dataset, it's important to assess the quality of the generated labels. To minimize the time needed to verify each label, our patented Uncertainty Estimation tool provides a focused strategy.

How does it work?

Our estimation system applies both uncertainty sampling and an entropy-based query technique to determine which labels are the most uncertain. In machine learning, entropy measures how spread out a model's predicted probabilities are: a prediction concentrated on one class has low entropy, while one spread across several classes has high entropy.

Diverse data usually produces better outcomes, but the model must be able to adapt to that variation and still make precise predictions. When the data is highly non-uniform, it can be difficult for the model to achieve the desired result; nevertheless, the model can be retrained to handle difficult samples.
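As a rough illustration of the entropy idea (a generic calculation, not Superb AI's patented implementation), here is how the entropy of a model's predicted class probabilities can be computed; high-entropy items are the ones the model is least sure about.

```python
import numpy as np

def prediction_entropy(proba: np.ndarray) -> np.ndarray:
    """Shannon entropy of each row of class probabilities."""
    eps = 1e-12  # guard against log(0)
    return -(proba * np.log(proba + eps)).sum(axis=1)

# A confident prediction has low entropy; a spread-out one has high entropy.
confident = np.array([[0.95, 0.03, 0.02]])
unsure = np.array([[0.40, 0.35, 0.25]])
print(prediction_entropy(confident))  # about 0.23
print(prediction_entropy(unsure))     # about 1.08
```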

Superb AI uses uncertainty sampling to identify the examples your model is least sure about. Applying this principle through active learning, the Uncertainty Estimation tool highlights examples with the highest output variance. Output variance refers to how much your model's predictions change given the data; when output variance is high, those predictions tend to fluctuate.

With varying predicted outcomes, your model will fail to give you the results you desire; knowing this, you can improve your model through a querying process known as variance reduction, thus training your model to make more consistent predictions.
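One common way to estimate output variance, sketched below under the assumption that a small ensemble of independently seeded models stands in for whatever the platform uses internally, is to score the same items with several models and measure how much their predictions disagree.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X_train = rng.normal(size=(200, 16))
y_train = rng.integers(0, 2, size=200)
X_pool = rng.normal(size=(50, 16))   # unlabeled items to score

# Train a small ensemble with different seeds so predictions vary.
ensemble = [
    RandomForestClassifier(n_estimators=25, random_state=seed).fit(X_train, y_train)
    for seed in range(5)
]

# Positive-class probability from each ensemble member: shape (5, 50).
probs = np.stack([m.predict_proba(X_pool)[:, 1] for m in ensemble])
output_variance = probs.var(axis=0)  # high variance = unstable prediction
most_uncertain = output_variance.argsort()[::-1][:10]
print("items to audit first:", most_uncertain)
```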

The Uncertainty Estimation tool uses output variance to assign each submitted auto-labeled item a difficulty score based on its level of uncertainty. As part of the platform, we've made those labels easy to recognize through a traffic-light visualization system.

When you visit the labels page, an icon and bar graph will appear in the "Auto-Label" column for any labels that have been processed with your auto-label or Custom Auto-Label AI. Each label is assigned a color - green, yellow, or red - which indicates the difficulty your model has in making a prediction based on the level of uncertainty. Green is the easiest, yellow is medium, and red is the most difficult.
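Superb AI does not publish the exact cutoffs behind these colors, so the thresholds below are placeholders, but bucketing a normalized difficulty score into the traffic-light colors might look something like this:

```python
def difficulty_color(score: float) -> str:
    """Map a difficulty score in [0, 1] to a traffic-light bucket.
    The 0.33 and 0.66 cutoffs are illustrative, not Superb AI's values."""
    if score < 0.33:
        return "green"   # easy: likely correct, light review
    if score < 0.66:
        return "yellow"  # medium: worth a spot-check
    return "red"         # difficult: prioritize for audit

print([difficulty_color(s) for s in (0.1, 0.5, 0.9)])
# ['green', 'yellow', 'red']
```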

The purpose of producing a difficulty score associated with each label is to highlight which instances need to be evaluated and audited by your team. Unlike other querying strategies, Uncertainty Estimation highlights which particular labels are worth auditing and which have a high chance of being correct.

In addition, your team gains valuable information on edge cases and difficult scenarios, which they can use to add more data as they see fit. This saves hours of QA work and lets your team focus on the examples your model has the most difficulty with.

Assigning Review Tasks to Your Team

Based on the difficulty scoring assigned to your labels, your team can concentrate solely on auditing those examples to best improve your model. Superb AI's interface includes helpful filters that isolate your data by particular qualities or statuses, and we recommend using them to streamline the auditing and evaluation process. To isolate your labels, take the following steps (a programmatic sketch of the same filter follows the list):

1. Navigate to your Labels page.

2. At the top of the screen, notice that there are tools to filter and isolate your data.

3. Click on “Filter by” and choose “Auto-Label Difficulty.” Only data that has been auto-labeled should now be visible.

4. Next, where it prompts you to select further options, choose “Difficult” and “Medium.” This will exclude any data classified as easy.

5. You want to be sure that the labels you’ll be working with have been submitted, so you’ll need to add an additional filter. You can do this by clicking on the “+” icon to the right of your filtering tool.

6. Another row of filtering options will populate. Here, under “Filter by,” select “Status is any one of.”

7. Then, where you are prompted to choose the status of your data, select “Submitted.” This will only populate data that has been properly submitted and none that are in progress or skipped.
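If you export your label metadata, the same filter can be expressed in a few lines. The sketch below assumes hypothetical records with "difficulty" and "status" fields; the field names are illustrative, not the Suite's actual schema.

```python
# Hypothetical label records; the field names are illustrative.
labels = [
    {"id": "img_001", "difficulty": "easy", "status": "submitted"},
    {"id": "img_002", "difficulty": "difficult", "status": "submitted"},
    {"id": "img_003", "difficulty": "medium", "status": "in_progress"},
    {"id": "img_004", "difficulty": "medium", "status": "submitted"},
]

# Mirror the UI filters: difficulty is medium or difficult AND status is submitted.
to_review = [
    rec for rec in labels
    if rec["difficulty"] in {"medium", "difficult"} and rec["status"] == "submitted"
]
print([rec["id"] for rec in to_review])  # ['img_002', 'img_004']
```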

Note: After analyzing the data, you'll likely need to re-run the model multiple times. Teams generally don't settle for the results of the first trial; they repeat the process until they get the desired outcome. To prepare your model for subsequent runs and make sure your labels are organized and classified correctly, we advise using our tagging system at this stage.

Just as we named our ground truth and validation sets, we're going to use the same process to segment our high-difficulty data and call it GT2, or ground truth 2. There are also likely to be some images that your auto-labeler skipped; it can be helpful to add these to your GT2 subset as well.

Now that you've properly sectioned off the labels your team will be working with and optionally tagged them, you'll need to assign specific labels to each reviewer. This way, you can easily divide tasks among your team and later track progress. We've touched on splitting labels earlier in this series; assigning them is a similar process (a sketch of an equal split follows these steps):

1. Remain on your Labels page, and click the selection box on the right-hand side above your labels. This selects all of the labels currently shown on that page.

2. You will be prompted to select every label within your filtered parameters in the middle of your page. Click that option.

3. Above your labels is the option to “Request Review.” Select this option, and a list of your team members will appear.

4. Decide which reviewers you would like to have audit your labels, and check off their names. Click next.

5. On the following page, you can choose how to divide your labels, either equally or proportionally. An equal distribution splits the labels evenly among your selected reviewers, while a proportional distribution assigns labels so that everyone ends up completing a similar amount of work. Each method has its upsides and drawbacks depending on your team structure.

6. Have your team make the necessary revisions to their assigned images, send them out for approval, and ensure their accuracy before continuing.
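For intuition, here is what an equal split looks like in code: a simple round-robin over the filtered labels. The reviewer names are placeholders; the actual assignment happens inside the Suite.

```python
from collections import defaultdict

def split_equally(label_ids: list[str], reviewers: list[str]) -> dict[str, list[str]]:
    """Round-robin the labels so each reviewer gets an (almost) equal share."""
    assignments = defaultdict(list)
    for i, label_id in enumerate(label_ids):
        assignments[reviewers[i % len(reviewers)]].append(label_id)
    return dict(assignments)

ids = [f"img_{n:03d}" for n in range(7)]
print(split_equally(ids, ["reviewer_a", "reviewer_b"]))
# {'reviewer_a': ['img_000', 'img_002', 'img_004', 'img_006'],
#  'reviewer_b': ['img_001', 'img_003', 'img_005']}
```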

Retrain Your Model

Uncertainty estimation is a valuable tool for detecting any issues that your model may have had difficulty with. If you focus on these scenarios, you can enhance the effectiveness of your model and its ability to work in the real world.

To keep improving your model, look at the cases where it had trouble making correct predictions; otherwise, it won't reliably produce consistent results in similar situations.

So, how do we overcome these identification challenges? We can repeat the Custom Auto-Label training process multiple times to iron out issues and surface unusual scenarios. With each repetition, we can anticipate a rise in productivity and gains in precision and recall, resulting in a more refined model. So, how do we use the data we have to refine our model?

Superb AI was designed to easily integrate and isolate specific datasets and images, making it simple to adjust and edit training data. For this next iteration, it's imperative that we include our difficult examples. Here's how it works (a tag-filtering sketch follows these steps):

1. Populate your labels list.

2. Utilize your filter feature to isolate your data with the following tags: GT1, __VALIDATION__, and GT2. This will group your previous ground truth set with the added labels that your team manually corrected.

3. Create a new export based on the above data, and repeat the process for building your Custom Auto-Label.
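In code terms, the new training set is simply the union of everything carrying those three tags. A sketch using a hypothetical tagged-record list:

```python
# Hypothetical records tagged in earlier steps; the schema is illustrative.
records = [
    {"id": "img_010", "tags": {"GT1"}},
    {"id": "img_011", "tags": {"__VALIDATION__"}},
    {"id": "img_012", "tags": {"GT2"}},
    {"id": "img_013", "tags": set()},  # untagged: excluded from the export
]

wanted = {"GT1", "__VALIDATION__", "GT2"}
export = [rec["id"] for rec in records if rec["tags"] & wanted]
print(export)  # ['img_010', 'img_011', 'img_012']
```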

Compare your results

Grouping images with higher difficulty ratings and utilizing them in the next cycle should help your model gain a better understanding of comparable instances. Once you apply this data to a new Custom Auto-Label, you should be able to recognize the major progress your model has made.

Important analytics

As mentioned previously, Superb AI supplies your team with important analytics to help measure efficiency gains and model performance. Being able to understand these numbers can help project managers assess overall performance as well as accuracy gains by class. And with each iteration, it’s possible to work on specific areas of improvement, meaning that your model will be able to incorporate those improvements into real-world applications.

Reading your results

To understand how well your model performed, Superb AI supplies its users with performance breakdowns with each iteration of your Custom Auto-Label.

To see these results, simply navigate to your Custom Auto-Label page. Each CAL has a small arrow labeled “See all available classes.” Toggling that arrow gives you a full list of each class with its precision and recall measurements.

Precision

In computer vision, precision is the proportion of correct positive identifications relative to all positive identifications the model made, both true and false. To calculate precision, we can plug our numbers into the following formula:

Precision = TP / (TP + FP)

In this instance, TP is equivalent to true positives, and FP is equivalent to false positives. Precision, then, answers how many of the model's positive classifications are actually correct. In essence, this measurement compares what is actually true against what the model classifies as true.

Recall

Recall, on the other hand, measures how many of the instances that are actually positive the model correctly classified as positive. We calculate recall with the following formula:

Recall = TP / (TP + FN)

Again, TP = true positives, and FN = false negatives. Recall aims to find out what percentage of the actual positives were correctly identified.

Both precision and recall are essential measurements in computer vision, but the two typically trade off against each other: improving precision tends to reduce recall, and vice versa. Your team should evaluate which measurement yields the most value for your model and use case, and find the proper balance.
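Both formulas reduce to a few lines of code. A minimal sketch computing each from raw counts:

```python
def precision(tp: int, fp: int) -> float:
    """Of everything predicted positive, how much was actually positive?"""
    return tp / (tp + fp) if (tp + fp) else 0.0

def recall(tp: int, fn: int) -> float:
    """Of everything actually positive, how much did the model find?"""
    return tp / (tp + fn) if (tp + fn) else 0.0

# Example: 80 true positives, 20 false positives, 10 false negatives.
print(precision(80, 20))  # 0.8
print(recall(80, 10))     # about 0.889
```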

Next Steps

After repeating the CAL procedure and evaluating the outcomes, your team can decide on next steps. If you continue to refine the automated labeling engine, the results should continue to improve.

Moreover, by analyzing performance by category, it's possible to customize the model further by adding examples that strengthen it. The versatility of the Superb AI Suite allows for adjustments tailored to your application.

If you’ve found this series valuable thus far, then stay tuned for part 5. We’ll be breaking down the best strategies for monitoring team performance, mitigating data bias, and tracking labeling time. We’re excited to continue educating you on the various nuances of computer vision, data labeling, and the Superb AI Suite.



