Testing AI in a Digital Twin
Virtual commissioning and testing of the entire AI-driven process
After training an AI model, the next (optional) step is to test the model within a digital twin environment. This step uses the AI model to perform object detection in real time, evaluating its accuracy and performance in recognizing objects or patterns. Testing can be conducted in a comprehensive digital twin environment that simulates various industrial and automation components, including PLC interfaces, robotics, drives, sensors, and more.
In the provided Lego example, the AI detection system is used to track Lego bricks and sort them into designated boxes. The testing process begins with the AI performing object detection to identify Lego bricks in the scene, recognizing their position and classifying their type or color. The system then continuously tracks the detected bricks across frames, maintaining information about their positions and identities.
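Conceptually, this is a two-stage loop: per-frame detection followed by identity-preserving tracking. The sketch below illustrates that flow only; the names (detect_bricks, Tracker) are hypothetical stand-ins and do not reflect the product's actual API.

```python
def detect_bricks(frame):
    # Stand-in for the Object Detection script: returns a list of
    # detections, each with a label, center position, size, and confidence.
    return []

class Tracker:
    # Stand-in for the Object Tracker script: matches detections to known
    # tracks so every brick keeps a stable identity across frames.
    def __init__(self):
        self.tracks = {}

    def update(self, detections):
        # Matching logic omitted; see the Object Tracker section below.
        return self.tracks

tracker = Tracker()
for frame in []:  # camera frames produced by the digital twin
    detections = detect_bricks(frame)     # 1. per-frame detection
    tracked = tracker.update(detections)  # 2. identity-preserving tracking
```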
Testing within a digital twin environment offers several benefits. It provides a safe, simulated space to test AI algorithms before real-world deployment, ensuring that the trained model performs as expected. It also allows for integration testing, verifying that the AI system works seamlessly with other components, such as PLCs, robots, and sensors. Additionally, it provides an opportunity to fine-tune the AI model and system parameters to optimize performance before actual deployment.
The Object Detection script is responsible for identifying objects within the digital twin environment using a trained AI model. The configuration properties for this component include:
Model: The trained AI model asset used for detection. In this example, the model is set to lego-demo-n, which was trained earlier in AI Training.
Labels: The label definitions associated with the model, such as LegoLabels. These labels specify the classes that the model can recognize (e.g., different types of objects or components) and the colors used for each class in the Detection preview.
Backend: Specifies the computational backend used for running the detection. It can be set to CPU or GPU, depending on the available hardware and performance requirements. Note that CUDA must be installed to use the GPU backend.
Confidence: The confidence threshold for detecting objects. The AI only recognizes objects with a confidence score equal to or above this value. For example, if set to 0.25, only detections with at least 25% confidence are considered valid.
Margin: Defines additional margin settings for the detection area:
X: The horizontal margin.
Y: The vertical margin.
These margins can be adjusted to expand or contract the area considered for detection. A margin of 0.1 corresponds to 10% of the camera's side length (see the sketch after this list).
Detections: Indicates the number of detected objects. You can open the table for more information about the detections.
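To make the interplay of Confidence and Margin concrete, the following sketch shows how such a filter is typically applied to raw model output. The Detection structure, its field names, and the normalized coordinates are assumptions for illustration, not the script's internal API.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # class name from the label definitions
    cx: float          # center x, normalized to 0..1 of the image width
    cy: float          # center y, normalized to 0..1 of the image height
    confidence: float  # model score, 0..1

def filter_detections(raw, confidence=0.25, margin_x=0.1, margin_y=0.1):
    """Keep detections above the confidence threshold and inside the
    margin-adjusted detection area (0.1 = 10% of the camera side length)."""
    kept = []
    for d in raw:
        if d.confidence < confidence:
            continue  # below the threshold: not considered valid
        if not (margin_x <= d.cx <= 1.0 - margin_x):
            continue  # outside the horizontal detection area
        if not (margin_y <= d.cy <= 1.0 - margin_y):
            continue  # outside the vertical detection area
        kept.append(d)
    return kept

# A brick at 20% confidence and one in the outer 10% of the image are dropped.
raw = [Detection("brick_red", 0.50, 0.50, 0.20),
       Detection("brick_blue", 0.05, 0.50, 0.90),
       Detection("brick_blue", 0.40, 0.60, 0.80)]
print(filter_detections(raw))  # only the last detection survives
```

Under these assumptions, with Confidence set to 0.25 and Margin X/Y set to 0.1, a brick detected at 20% confidence, or one centered in the outer 10% of the image, would be ignored.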
The Object Tracker script manages the tracking of detected objects across multiple frames, maintaining their identity over time. The configuration settings include:
Tracking Parameters: Several parameters control the tracking algorithm (see the sketch below):
Required Frames: The minimum number of frames an object must be detected to be considered a valid tracked object.
Max Missing Frames: The maximum number of frames an object can be undetected before it is considered lost.
Max Distance X / Max Distance Y: The maximum distance (in X and Y directions) that an object can move between frames to still be considered the same object.
Box Loss Weight: A weight factor used in the tracking algorithm for the bounding box size.
Horizontal Loss Weight / Vertical Loss Weight: Weight factors used to prioritize tracking accuracy in horizontal and vertical directions, respectively.
Tracked Objects: Displays information about currently tracked objects, including:
Time Stamp: The current time frame or step in the simulation.
Ids: Unique identifiers assigned to tracked objects.
Centers: The center positions of the tracked objects in the scene.
Sizes: The size of each detected object's bounding box.
Labels: The class label of each detected object.
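The exact tracking algorithm is internal to the Object Tracker, but the parameters above suggest a cost-based nearest-neighbor matcher. The sketch below shows one plausible way the distance limits, loss weights, and frame counters could combine; the function names, dictionary fields, and default values are illustrative assumptions.

```python
def match_cost(track, det,
               max_dx=0.2, max_dy=0.2,        # Max Distance X / Y
               w_h=1.0, w_v=1.0, w_box=0.5):  # loss weights (illustrative)
    """Return a matching cost, or None if the movement exceeds the limits."""
    dx = abs(det["cx"] - track["cx"])
    dy = abs(det["cy"] - track["cy"])
    if dx > max_dx or dy > max_dy:
        return None  # moved too far between frames to be the same object
    dbox = abs(det["size"] - track["size"])  # bounding-box size change
    return w_h * dx + w_v * dy + w_box * dbox

def update_track_state(track, matched,
                       required_frames=3, max_missing_frames=5):
    """Apply the Required Frames / Max Missing Frames rules to one track."""
    if matched:
        track["seen"] += 1
        track["missing"] = 0
        track["valid"] = track["seen"] >= required_frames  # now a valid track
    else:
        track["missing"] += 1
        track["lost"] = track["missing"] > max_missing_frames  # considered lost
    return track

# Example: a detection close to an existing track keeps the same identity.
track = {"cx": 0.40, "cy": 0.60, "size": 0.05, "seen": 2, "missing": 0}
det = {"cx": 0.42, "cy": 0.61, "size": 0.05}
cost = match_cost(track, det)            # small cost: likely the same brick
update_track_state(track, cost is not None)
```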
The AI Tracking PLC Interface script integrates AI-based tracking with a Programmable Logic Controller (PLC) to enable real-time automation control based on visual data.
Active: Indicates when the AI Tracking PLC Interface is active. In this example, it is set to Always, meaning the interface continuously monitors and sends tracking information.
Signal Tracking: Specifies the PLC signal used to communicate tracking data. For example, DemoSignalTrackingToPLC represents a signal that sends object tracking data to the PLC. In this example, all tracking data is sent as JSON to the PLCInputText signal (a plausible payload is sketched below).
Debug Mode: Enables or disables debugging logs. When enabled, additional information about the tracking and PLC interface is logged, which can help with troubleshooting.
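Since all tracking data is sent as JSON to the PLCInputText signal, a payload along the following lines is plausible. The exact schema is not documented here; the field names below simply mirror the Tracked Objects properties above and are illustrative assumptions.

```python
import json

def build_tracking_payload(timestamp, tracked_objects):
    """Assemble one tracking update as a JSON string for the PLC signal."""
    payload = {
        "timeStamp": timestamp,  # current time frame or simulation step
        "ids":     [o["id"] for o in tracked_objects],
        "centers": [o["center"] for o in tracked_objects],
        "sizes":   [o["size"] for o in tracked_objects],
        "labels":  [o["label"] for o in tracked_objects],
    }
    return json.dumps(payload)

# Example: one tracked Lego brick in the current frame.
print(build_tracking_payload(
    timestamp=1234,
    tracked_objects=[{"id": 1, "center": [0.4, 0.6],
                      "size": [0.05, 0.03], "label": "brick_red"}]))
```

On the PLC side, this string would be parsed from the text signal each cycle to drive the sorting logic.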