Building a Dataset: Object Detector

Object Detector

Now that you have determined an Object Detector is the right option for your use case, the next step is to collect and prepare the data you will use to train it. For the steps below, we will use the example of detecting when someone is not wearing proper protective equipment, from the Use Cases & AI Architectures document.

Define the Classes

Start by identifying the classes you want your model to predict.

A class could be a large object in the frame or something small. For this example use case of checking whether people are wearing the proper protective equipment, you may want to detect whether they are wearing a hard hat, high visibility vest, safety glasses, etc. For this document, we will focus on detecting people not wearing a high visibility vest.

NOTE: How fine-grained the detection can be depends on how the model is trained, as well as the resolution and clarity of the images fed to it. A larger object, like a high visibility vest, will be easier to detect than a smaller one, like safety glasses.
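
As a rough sketch in Python, it can help to write the classes down up front so your annotation labels stay consistent. The names and IDs below are illustrative assumptions; use whatever your annotation tool exports.

    # Illustrative class list for the protective-equipment example.
    # IDs and names are assumptions; match them to your annotation tool's export.
    CLASSES = {
        1: "vest",     # person wearing a high visibility vest
        2: "no_vest",  # person not wearing a high visibility vest
    }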

Collect the Data

Now you will need to collect your data. If you already have a dataset, use this time to review it and make sure it is ready to be annotated.

Before you start capturing images, make sure the dataset you are curating represents the real-world scenarios of your use case; this is extremely important for model performance. If you are working from an existing dataset, review the images against the same criteria.

When collecting data, you will need to take several variables into account to ensure that your model has the best possible data to train on:

  • Lighting settings

  • Image resolution quality

  • Camera angles

  • Backgrounds and environments

  • Whether images without the target object are needed

  • Whether images that contain both the target object and examples without it in the same frame are needed

For the example use case, in order for the model to train properly, we will want images of people wearing a high visibility vest and images of people not wearing a high visibility vest.
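
As a rough sketch in Python, you can audit the resolutions in your collected dataset before moving on. This assumes the Pillow library is installed and that your images sit in a single folder named "images"; both are illustrative assumptions.

    # Report the image resolutions present in the collected dataset so that
    # low-quality or inconsistent captures can be spotted before annotation.
    from collections import Counter
    from pathlib import Path

    from PIL import Image  # assumes the Pillow library is installed

    resolutions = Counter()
    for path in Path("images").glob("*.jpg"):  # "images" folder is illustrative
        with Image.open(path) as img:
            resolutions[img.size] += 1         # (width, height)

    for (width, height), count in resolutions.most_common():
        print(f"{width}x{height}: {count} images")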

Organize the Data

Once you have your data collected, you will want to organize and distribute the images between two folders (a small script can automate this; see the sketch after this list).

  • 80% Training images

    • Create a subfolder inside your Training images folder and call it "annotations". This is needed for the annotation step below.

  • 20% Testing images
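
A short Python script can handle this split. The sketch below assumes the collected images sit in a single folder named "images" and copies them into 80% training and 20% testing folders; the folder names and random seed are illustrative.

    # Shuffle the collected images and copy them into an 80/20 split,
    # creating the "annotations" subfolder needed for the annotation step.
    import random
    import shutil
    from pathlib import Path

    source_dir = Path("images")            # illustrative folder of collected images
    train_dir = Path("training_images")
    test_dir = Path("testing_images")
    (train_dir / "annotations").mkdir(parents=True, exist_ok=True)
    test_dir.mkdir(parents=True, exist_ok=True)

    image_paths = sorted(source_dir.glob("*.jpg"))
    random.seed(42)                        # fixed seed so the split is reproducible
    random.shuffle(image_paths)

    split_index = int(len(image_paths) * 0.8)    # 80% training, 20% testing
    for path in image_paths[:split_index]:
        shutil.copy(path, train_dir / path.name)
    for path in image_paths[split_index:]:
        shutil.copy(path, test_dir / path.name)

    print(f"Training: {split_index}, Testing: {len(image_paths) - split_index}")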

Unlike the Image Classifier dataset, you do not need to organize separate Validation images; Navigator will automatically "create" them from your training images. To control this, adjust the Validation Split slider found in the settings panel for the Object Detection Trainer Element. We recommend starting with a 20% Validation Split.

Annotate the Data

To train an object detector model, you will need to annotate your training dataset so the model knows what to learn and what to look for in each image.

To annotate your training images, you will need an annotation tool that supports bounding boxes and can export the annotations in COCO JSON format. We recommend the macOS application RectLabel Pro.

Draw a bounding box around each object in the frame of your training images that you want to assign a class to. For this example, you will want two classes, people wearing a high visibility vest and people not wearing one, so the model learns the difference. Draw your bounding boxes as close to the edges of the object as possible; this ensures the model trains on just the object you want to detect and not the background between the object's edge and the edge of the bounding box.
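
After exporting, it can help to sanity-check the annotation file before training. The Python sketch below assumes the export was saved as training_images/annotations/annotations.json and uses the illustrative class names from earlier; it prints how many bounding boxes were drawn for each class.

    # Load the exported COCO JSON file and summarize the annotations.
    import json
    from collections import Counter
    from pathlib import Path

    annotations_path = Path("training_images/annotations/annotations.json")
    with annotations_path.open() as f:
        coco = json.load(f)

    # COCO JSON stores "images", "annotations", and "categories" at the top level.
    categories = {c["id"]: c["name"] for c in coco["categories"]}
    box_counts = Counter(categories[a["category_id"]] for a in coco["annotations"])

    print("Images listed:", len(coco["images"]))
    for name, count in box_counts.items():
        print(f"{name}: {count} bounding boxes")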

Once you have annotated your training images, you can move on to training your Object Detector.