
Testing AI in a Digital Twin

Virtual commissioning and testing of the entire AI-driven process


Introduction

After training an AI model, the next (optional) step is to test it within a digital twin environment. This step involves using the AI model to perform object detection in real time, evaluating its accuracy and performance in recognizing objects or patterns. Testing can be conducted in a comprehensive digital twin environment that simulates various industrial and automation components, including PLC interfaces, robotics, drives, sensors, and more.

In the provided Lego example, the AI detection system is used to track Lego bricks and sort them into designated boxes. The testing process begins with the AI performing object detection to identify Lego bricks in the scene, recognizing their position and classifying their type or color. The system then continuously tracks the detected bricks across frames, maintaining information about their positions and identities.

Testing within a digital twin environment offers several benefits. It provides a safe, simulated space to test AI algorithms before real-world deployment, ensuring that the trained model performs as expected. It also allows for integration testing, verifying that the AI system works seamlessly with other components, such as PLCs, robots, and sensors. Additionally, it provides an opportunity to fine-tune the AI model and system parameters to optimize performance before actual deployment.

Object Detection

The Object Detection script is responsible for identifying objects within the digital twin environment using a trained AI model. The configuration properties for this component include:

  • Labels: The label definitions associated with the model, such as LegoLabels. These labels specify the classes the model can recognize (e.g., different types of objects or components) and the colors used for each class in the detection preview.

  • Backend: Specifies the computational backend used for running the detection. It can be set to CPU or GPU, depending on the available hardware and performance requirements.

Please note that CUDA needs to be installed in order to use the GPU backend.

  • Confidence: The confidence threshold for detecting objects. The AI will only recognize objects with a confidence score equal to or above this value. For example, if set to 0.25, only detections with at least 25% confidence will be considered valid.

  • Margin: Defines additional margin settings for the detection area:

    • X: The horizontal margin.

    • Y: The vertical margin.

    These margins can be adjusted to expand or contract the area considered for detection. A margin of 0.1 corresponds to 10% of the camera's side length (see the sketch after this list).

  • Detections: Indicates the number of detected objects. You can open the table for more information about the detections.
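
To make the Confidence and Margin settings concrete, the following is a minimal, hypothetical C# sketch. It is not the realvirtual.io implementation; the type and member names (Detection, DetectionFilter) are illustrative assumptions. It only shows how a confidence threshold (e.g. 0.25) and relative margins (e.g. 0.1 = 10% of the camera's side length) could be applied to a list of raw detections.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical detection record; field names are illustrative only.
public struct Detection
{
    public Rect Box;          // bounding box in camera pixels
    public float Confidence;  // detection confidence, 0..1
    public string Label;      // class label
}

public static class DetectionFilter
{
    // Keeps only detections at or above the confidence threshold and
    // overlapping the camera image reduced by the relative margins.
    public static List<Detection> Apply(List<Detection> raw,
        float confidence, float marginX, float marginY,
        int cameraWidth, int cameraHeight)
    {
        float px = marginX * cameraWidth;   // horizontal margin in pixels (0.1 -> 10%)
        float py = marginY * cameraHeight;  // vertical margin in pixels
        var area = new Rect(px, py, cameraWidth - 2f * px, cameraHeight - 2f * py);

        var result = new List<Detection>();
        foreach (var d in raw)
        {
            if (d.Confidence < confidence) continue; // e.g. 0.25 -> keep >= 25% confidence
            if (!area.Overlaps(d.Box)) continue;     // outside the considered detection area
            result.Add(d);
        }
        return result;
    }
}
```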

Object Tracker

The Object Tracker script manages the tracking of detected objects across multiple frames, maintaining their identity over time. The configuration settings include:

  • Tracking Parameters: Several parameters control the tracking algorithm (a sketch of how they interact follows this list):

    • Required Frames: The minimum number of frames an object must be detected to be considered a valid tracked object.

    • Max Missing Frames: The maximum number of frames an object can be undetected before it is considered lost.

    • Max Distance X / Max Distance Y: The maximum distance (in X and Y directions) that an object can move between frames to still be considered the same object.

    • Box Loss Weight: A weight factor used in the tracking algorithm for the bounding box size.

    • Horizontal Loss Weight / Vertical Loss Weight: Weight factors used to prioritize tracking accuracy in horizontal and vertical directions, respectively.

  • Tracked Objects: Displays information about currently tracked objects, including:

    • Time Stamp: The current time frame or step in the simulation.

    • Ids: Unique identifiers assigned to tracked objects.

    • Centers: The central position of the tracked objects in the scene.

    • Sizes: The size of each detected object's bounding box.

    • Labels: The class label of each detected object.
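
As an illustration of how these parameters could work together, here is a hedged C# sketch of a simple frame-to-frame matching cost and the validity rules. The class and member names (Track, TrackerSketch) are assumptions for illustration and do not reflect the actual Object Tracker implementation.

```csharp
using UnityEngine;

// Hypothetical tracked-object state; names are illustrative only.
public class Track
{
    public int Id;              // unique identifier
    public Vector2 Center;      // last known center position
    public Vector2 Size;        // last known bounding-box size
    public int SeenFrames;      // consecutive frames with a match
    public int MissingFrames;   // consecutive frames without a match
}

public static class TrackerSketch
{
    // Matching cost between an existing track and a new detection.
    // Lower is better; infinity means it cannot be the same object.
    public static float Cost(Track track, Vector2 center, Vector2 size,
        float maxDistanceX, float maxDistanceY,
        float boxLossWeight, float horizontalLossWeight, float verticalLossWeight)
    {
        float dx = Mathf.Abs(center.x - track.Center.x);
        float dy = Mathf.Abs(center.y - track.Center.y);
        if (dx > maxDistanceX || dy > maxDistanceY)
            return float.PositiveInfinity;             // moved too far between frames

        float boxLoss = (size - track.Size).magnitude; // change in bounding-box size
        return horizontalLossWeight * dx
             + verticalLossWeight * dy
             + boxLossWeight * boxLoss;
    }

    // A track is reported only after Required Frames detections and is
    // dropped after more than Max Missing Frames consecutive misses.
    public static bool IsValid(Track track, int requiredFrames)
        => track.SeenFrames >= requiredFrames;

    public static bool IsLost(Track track, int maxMissingFrames)
        => track.MissingFrames > maxMissingFrames;
}
```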

AI Tracking PLC Interface

The AI Tracking PLC Interface script integrates AI-based tracking with a Programmable Logic Controller (PLC) to enable real-time automation control based on visual data.

  • Active: Indicates when the AI Tracking PLC Interface is active. In this case, it is set to Always, meaning the interface is continuously monitoring and sending tracking information.

  • Signal Tracking: Specifies the PLC input or signal that is used for tracking purposes. For example, DemoSignalTrackingToPLC represents a signal that communicates object tracking data to the PLC. In the example, all tracking data is sent as JSON to the PLCInputText signal (an illustrative payload is sketched after this list).

  • Debug Mode: Enables or disables debugging logs. When enabled, additional information about the tracking and PLC interface is logged, which can help with troubleshooting.

  • Model: The trained AI model asset used for detection. In this example, the model is set to lego-demo-n, which was trained in the previous step, AI Training.
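
The exact JSON schema of the tracking data is not specified here. The sketch below is a hypothetical example, based on the fields listed under Tracked Objects, of how such a payload could be assembled and serialized before being written to the PLCInputText signal; the class names (TrackingPayload, TrackingJsonSketch) are assumptions for illustration.

```csharp
using System;
using UnityEngine;

// Hypothetical payload mirroring the Tracked Objects fields listed above.
[Serializable]
public class TrackingPayload
{
    public float TimeStamp;    // current time frame or simulation step
    public int[] Ids;          // unique identifiers of tracked objects
    public Vector2[] Centers;  // central positions of the tracked objects
    public Vector2[] Sizes;    // bounding-box sizes
    public string[] Labels;    // class labels
}

public static class TrackingJsonSketch
{
    // Serializes the payload to JSON; in the real setup this string is
    // written to the PLCInputText signal of the AI Tracking PLC Interface.
    public static string ToJson(TrackingPayload payload)
    {
        return JsonUtility.ToJson(payload);
    }
}
```
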
Object detection and tracking in the Digital Twin
Labels used for object detection