Deploying the AI

Deploy the AI to external systems with ONNX or run it within a Unity build

After training and testing an AI model within a digital twin environment, the final step is deployment. Deployment involves making the trained model available for real-time inference, where it can be used to perform tasks such as object detection, classification, or decision-making in production environments. There are two primary ways to deploy the AI model: as part of a Unity application or in an external inference system.

  1. Unity Application Deployment: In this approach, the trained AI model is converted to an ONNX (Open Neural Network Exchange) format and included directly in a Unity executable. The Unity app performs the inference using the embedded ONNX model, allowing the AI to process data in real time within the simulation or a deployed Unity application. This method is suitable for scenarios where the entire solution, including AI inference, needs to run as a standalone Unity application, offering an integrated experience with the digital twin.

    The Unity application can be built for any available Unity target platform via the standard Unity build process. This enables the AI-powered application to be deployed on various platforms, including Windows, Linux, macOS, Android, iOS, and many more. This cross-platform flexibility ensures that the AI model can be used in a wide range of environments, from desktop applications to mobile devices and specialized hardware.

  2. External Inference System Deployment: Alternatively, the ONNX model can be exported from Unity and used in an external inference system. This setup involves deploying the model on a separate inference platform, such as a cloud-based AI service, an edge device, or a custom-built inference server. The external system can perform the AI computations and send the results back to the Unity application or other control systems. This approach is beneficial for leveraging specialized hardware, integrating with existing AI infrastructure, or scaling inference across multiple devices.
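The request/response loop of such an external inference system can be sketched with only the Python standard library. This is a minimal illustration, not the product's actual server: the HTTP route, JSON payload shape, and the softmax stand-in for the model call are assumptions. A real deployment would load the exported ONNX model with an inference runtime such as ONNX Runtime inside `run_inference`.

```python
import json
import math
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def run_inference(values):
    """Stand-in for the real model call. A production handler would
    instead run the exported ONNX model, e.g. with ONNX Runtime:
        session = onnxruntime.InferenceSession("model.onnx")
        outputs = session.run(None, {"input": tensor})
    Here we just softmax the incoming values and pick the arg-max."""
    m = max(values)
    exps = [math.exp(v - m) for v in values]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = probs.index(max(probs))
    return {"class": best, "confidence": probs[best]}

class InferenceHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON request body sent by the client (e.g. a Unity app).
        body = self.rfile.read(int(self.headers["Content-Length"]))
        result = run_inference(json.loads(body)["input"])
        payload = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):
        pass  # silence per-request logging

def query(port, values):
    """Client side of the round trip: POST input data, get results back."""
    req = urllib.request.Request(
        f"http://127.0.0.1:{port}/infer",
        data=json.dumps({"input": values}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    server = HTTPServer(("127.0.0.1", 0), InferenceHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    print(query(server.server_address[1], [0.1, 2.5, 0.3]))  # class index 1 wins
    server.shutdown()
```

The Unity application (or any other control system) plays the role of `query` here: it serializes the input, sends it to the inference endpoint, and consumes the returned result, while the heavy computation stays on the external system.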

Deployment blueprint - Camera Device Demo

We provide a simple example application called CameraDeviceDemo, which can be found in the main AI Builder folder. This app demonstrates how to use a trained ONNX model for real-time inference together with a webcam or an integrated camera on a mobile device, such as a phone or tablet. The CameraDeviceDemo app showcases the deployment process by allowing the AI model to perform object detection or classification using live camera feeds.
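The camera-to-model hand-off inside such an app can be sketched in Python. This is an illustrative assumption, not the demo's actual code: the 224x224 input size and the NCHW float32 layout are typical for ONNX vision models, but the model shipped with CameraDeviceDemo may expect something different.

```python
import numpy as np

# Hypothetical model input resolution; the demo's actual model may differ.
MODEL_SIZE = 224

def preprocess(frame: np.ndarray) -> np.ndarray:
    """Convert an HxWx3 uint8 camera frame into the 1x3xHxW float32
    tensor layout commonly expected by ONNX vision models."""
    h, w, _ = frame.shape
    # Nearest-neighbour resize via integer index arrays
    # (avoids any external image library).
    rows = np.arange(MODEL_SIZE) * h // MODEL_SIZE
    cols = np.arange(MODEL_SIZE) * w // MODEL_SIZE
    resized = frame[rows[:, None], cols[None, :], :]
    # Scale pixel values to [0, 1], reorder HWC -> CHW, add a batch dim.
    tensor = resized.astype(np.float32) / 255.0
    return np.expand_dims(tensor.transpose(2, 0, 1), axis=0)

if __name__ == "__main__":
    # Fake 640x480 RGB frame standing in for a live camera capture.
    fake_frame = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)
    batch = preprocess(fake_frame)
    print(batch.shape, batch.dtype)  # (1, 3, 224, 224) float32
```

The resulting tensor is what would be handed to the ONNX runtime for each captured frame; the detection or classification output then drives the app's on-screen overlay.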

The example app is versatile: it supports real-time inference on desktop systems with webcams as well as on mobile devices with built-in cameras. This flexibility makes it easy to test, validate, and apply the trained model in practical applications across diverse settings.
