
Classification of cargo train carts

What?

The solution detects and classifies cargo train carts and their identification labels, then extracts the label text to categorize each cart for further processing. This helps reduce manual inspection steps.

How?

Images of the carts were collected, the text areas on them were labeled, and a detection model was trained on this labeled data. The trained model runs either in the cloud or behind an API that receives the camera's video stream and classifies carts in real time.
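
The real-time loop can be pictured roughly as below. This is a minimal sketch assuming an ultralytics-style YOLO API and an RTSP camera stream; the weights file, stream URL and class name are illustrative placeholders, not the production configuration.

```python
import cv2
from ultralytics import YOLO

model = YOLO("cart_label_detector.pt")                       # hypothetical trained weights
stream = cv2.VideoCapture("rtsp://camera.example/stream")    # placeholder stream URL

while True:
    ok, frame = stream.read()
    if not ok:
        break
    # Detect carts and their identification-label regions in the current frame
    results = model(frame, verbose=False)
    for box in results[0].boxes:
        cls_name = model.names[int(box.cls)]
        x1, y1, x2, y2 = map(int, box.xyxy[0])
        if cls_name == "id_label":                            # illustrative class name
            label_crop = frame[y1:y2, x1:x2]                  # handed to the OCR step below
```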

With what?

We use the latest YOLOv12 model for detection in the video stream and Google's OCR solution for text extraction from complex backgrounds.
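
For the text-extraction step, the sketch below assumes the Google Cloud Vision client library is the OCR service in question; the helper name is illustrative and the crop is the label region produced by the detector above.

```python
import cv2
from google.cloud import vision

ocr_client = vision.ImageAnnotatorClient()

def read_label_text(label_crop) -> str:
    """Run Google Cloud Vision OCR on a cropped identification-label image."""
    ok, encoded = cv2.imencode(".png", label_crop)
    if not ok:
        return ""
    image = vision.Image(content=encoded.tobytes())
    response = ocr_client.text_detection(image=image)
    annotations = response.text_annotations
    # The first annotation holds the full detected text for the crop
    return annotations[0].description.strip() if annotations else ""
```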

Optimizing and tracking time of container loading

What?

Terminal tractors are classified into two statuses depending on whether a container has been loaded onto the vehicle: waiting to be loaded (marked with a yellow box) and loaded (marked with a red box). A designated loading area, also marked with a yellow box, is defined, and the loading time of each tractor inside it is measured. This makes it possible to measure and optimize container loading times: vehicles can be redirected to another loading area in advance when loading is already taking place in the designated area, and the system automatically records when each vehicle was loaded with cargo.

How?

The model is trained to detect and track two object classes: containers and terminal tractors. Tractors are further classified by status, indicated with bounding box colors. A designated loading area is defined within the camera's field of view, and the time each tractor spends in this area is recorded to monitor loading durations.
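
The dwell-time logic can be sketched as follows, assuming ultralytics-style tracking IDs; the weights file, stream URL, zone polygon and class name are illustrative assumptions.

```python
import time
import cv2
import numpy as np
from ultralytics import YOLO

model = YOLO("terminal_detector.pt")                         # hypothetical trained weights
LOADING_ZONE = np.array([[100, 400], [800, 400],             # example zone polygon (pixels)
                         [800, 900], [100, 900]], dtype=np.int32).reshape(-1, 1, 2)
entered_at = {}                                              # track_id -> entry timestamp

stream = cv2.VideoCapture("rtsp://camera.example/yard")      # placeholder stream URL
while True:
    ok, frame = stream.read()
    if not ok:
        break
    results = model.track(frame, persist=True, verbose=False)
    for box in results[0].boxes:
        if box.id is None or model.names[int(box.cls)] != "tractor":
            continue
        track_id = int(box.id)
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        center = (float((x1 + x2) / 2), float((y1 + y2) / 2))
        inside = cv2.pointPolygonTest(LOADING_ZONE, center, False) >= 0
        if inside and track_id not in entered_at:
            entered_at[track_id] = time.time()               # tractor entered the loading area
        elif not inside and track_id in entered_at:
            loading_seconds = time.time() - entered_at.pop(track_id)
            print(f"tractor {track_id} loaded in {loading_seconds:.1f}s")
```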

With what?

Object detection is performed with the YOLOv12 model, while the vehicle status determination and loading-time tracking logic are implemented in scripts on our server.
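
One way to picture the status-determination part of that server-side logic: a tractor is treated as loaded when a detected container box overlaps its own box sufficiently. The functions, box format and threshold below are illustrative assumptions, not the exact production rules.

```python
def box_overlap_ratio(a, b):
    """Fraction of box b's area that falls inside box a; boxes are (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    b_area = max(1e-6, (b[2] - b[0]) * (b[3] - b[1]))
    return inter / b_area

def tractor_status(tractor_box, container_boxes, threshold=0.5):
    """Return 'loaded' (red box) or 'waiting' (yellow box) for one tractor."""
    loaded = any(box_overlap_ratio(tractor_box, c) >= threshold for c in container_boxes)
    return "loaded" if loaded else "waiting"
```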

Recognizing defects on the manufacturing line

What?

The first model identifies the products it was trained on as they pass along the production line, and a second model then analyzes each product's surface and shape to detect defects. The validated model achieves an accuracy of 97.5%, making it well suited for optimizing production-line processes.

How?

The first model is trained to locate specific products in the camera image, and a second, deeper classification model is trained to analyze details of the object's appearance and categorize the product as either defective or meeting standards.
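
The two-stage flow can be sketched as below: a detector localizes each product and the crop is passed to the defect classifier. The model paths, class indices and preprocessing are illustrative assumptions rather than the production setup.

```python
import cv2
import torch
from torchvision import models, transforms
from ultralytics import YOLO
from PIL import Image

detector = YOLO("product_detector.pt")                        # hypothetical stage-1 weights
classifier = models.resnet18()                                # stage-2 backbone (see next section)
classifier.fc = torch.nn.Linear(classifier.fc.in_features, 2)
classifier.load_state_dict(torch.load("defect_resnet18.pth", map_location="cpu"))
classifier.eval()

preprocess = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

def inspect(frame):
    """Return a 'defective' / 'ok' decision for each product detected in a BGR frame."""
    decisions = []
    for box in detector(frame, verbose=False)[0].boxes:
        x1, y1, x2, y2 = map(int, box.xyxy[0])
        crop = cv2.cvtColor(frame[y1:y2, x1:x2], cv2.COLOR_BGR2RGB)
        with torch.no_grad():
            logits = classifier(preprocess(Image.fromarray(crop)).unsqueeze(0))
        decisions.append("defective" if logits.argmax(1).item() == 1 else "ok")
    return decisions
```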

With what?

For object detection, the versatile YOLOv12 model is used again; for defect determination we use an 18-layer ResNet (ResNet-18) convolutional network, which has proven well suited to extracting features from images.
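
A minimal sketch of such a classifier, assuming a torchvision ResNet-18 backbone whose final fully connected layer is replaced with a two-class head; the class mapping (0 = meets standard, 1 = defective) is an illustrative assumption.

```python
import torch.nn as nn
from torchvision import models

def build_defect_classifier(pretrained: bool = True) -> nn.Module:
    """ResNet-18 backbone with a binary (defective / meets-standard) output head."""
    weights = models.ResNet18_Weights.DEFAULT if pretrained else None
    model = models.resnet18(weights=weights)
    model.fc = nn.Linear(model.fc.in_features, 2)
    return model
```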