Simulation Reports and Artifacts
Overview
Chassy's example simulation containers generate comprehensive reports and artifacts that provide deep insight into your simulation runs. Simulation outputs go beyond simple pass/fail metrics, offering visual documentation, detailed test results, and even complete simulation replay capabilities depending on your chosen simulator. All reports are organized in a standardized directory structure at /opt/chassy/reports, making them easy to collect, process, and integrate into your existing workflows.
Test Results and Metrics
Every simulation container generates JUnit XML reports that provide detailed test execution metrics compatible with standard CI/CD platforms. These reports capture test suite organization, individual test outcomes, execution times, and failure messages when tests don't pass. The hierarchical naming convention reflects your test organization, from unit tests that validate basic functionality through integration tests that verify complex multi-component interactions. Each XML file contains granular timing data, allowing you to identify performance bottlenecks and track test execution trends over time. Chassy's example code repository includes examples showing how to generate a JUnit XML report from each unit test; a minimal sketch appears after the directory listing below.
The standardized JUnit format ensures these results integrate seamlessly with test aggregation platforms and dashboard tools. These reports are primarily ingested by Chassy SLAM, but they are equally useful in Jenkins, GitLab CI, or custom reporting infrastructure: the XML files provide the structured data needed for test trend analysis, failure tracking, and quality metrics. The consistent naming patterns across different simulators make it straightforward to build unified reporting dashboards that aggregate results from multiple simulation types.
/opt/chassy/reports/
├── junit_setup_validation.xml
├── test-results-minimal.xml
├── test-results-gazebo.xml
├── test-results-flight.xml
└── test-results-integration.xml # Combined test results
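As an illustrative sketch of how one of these files can be produced (the test directory and suite name below are assumptions, not taken from Chassy's example repository), a pytest-based runner can write its JUnit XML report directly into the reports directory:

# run_unit_tests.py - illustrative sketch; the test directory and suite name
# are assumptions, not taken from Chassy's example repository.
import os
import sys

import pytest

REPORT_DIR = "/opt/chassy/reports"

def main() -> int:
    os.makedirs(REPORT_DIR, exist_ok=True)
    # --junitxml tells pytest to write a JUnit-compatible XML report that
    # Chassy SLAM, Jenkins, or GitLab CI can ingest alongside the other files.
    return pytest.main([
        "tests/unit",                                 # hypothetical test directory
        f"--junitxml={REPORT_DIR}/test-results-minimal.xml",
        "-o", "junit_suite_name=minimal_unit_tests",  # suite name shown in dashboards
    ])

if __name__ == "__main__":
    sys.exit(main())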
Visual Documentation and Debugging
For simulations involving visual components, Chassy's example Gazebo container automatically captures screenshots throughout test execution, providing visual documentation of simulation states. Chassy SLAM can display this visual data when a test fails, or simply as a record of simulation status. These captures are particularly valuable for debugging physics interactions, validating sensor placements, and verifying that visual elements render correctly in headless environments. The system generates both individual frame captures and animated GIFs that show simulation progression over time, making it easy to identify when and how issues occur during complex test scenarios.
The visual artifacts are organized by test name and component. This visual documentation proves invaluable when debugging intermittent failures or validating that simulated scenarios match expected behavior. The frame-by-frame captures allow you to precisely identify the moment when unexpected behavior occurs, while the animated GIFs provide quick visual summaries perfect for including in bug reports or design reviews.
/opt/chassy/reports/
├── gazebo-test-results-integration.xml # Test Suite 03-04 JUnit results
├── gazebo-test-results-manipulation.xml # Test Suite 05 JUnit results
├── gazebo-test-results-simulation.xml # Test Suite 02 JUnit results
├── gazebo-test-results-unit.xml # Test Suite 01 JUnit results
└── screenshots/
    └── manipulation/ # Test Suite 05 visual docs
        ├── observer_camera::link::scene_camera_0.png # Frame 0
        ├── observer_camera::link::scene_camera_1.png # Frame 1
        ├── ... # Frames 2-21 (22 total)
        ├── observer_camera::link::scene_camera_21.png # Frame 21
        └── unknown_test_animation.gif # 303KB animated GIF
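As a rough sketch of how such an animation could be produced (the example container's internal tooling may differ), the per-frame captures can be stitched into an animated GIF with the Pillow library:

# make_gif.py - illustrative sketch; assumes the Pillow library and the
# screenshot layout shown above. Chassy's Gazebo container may differ.
from pathlib import Path

from PIL import Image

FRAMES_DIR = Path("/opt/chassy/reports/screenshots/manipulation")

def build_gif(out_name: str = "manipulation_animation.gif", frame_ms: int = 200) -> Path:
    # Sort captures numerically by the trailing frame index (..._0.png, ..._1.png, ...).
    frames = sorted(
        FRAMES_DIR.glob("observer_camera::link::scene_camera_*.png"),
        key=lambda p: int(p.stem.rsplit("_", 1)[-1]),
    )
    if not frames:
        raise FileNotFoundError(f"No frame captures found in {FRAMES_DIR}")
    images = [Image.open(p) for p in frames]
    out_path = FRAMES_DIR / out_name
    # save_all/append_images writes every frame into a single animated GIF.
    images[0].save(out_path, save_all=True, append_images=images[1:],
                   duration=frame_ms, loop=0)
    return out_path

if __name__ == "__main__":
    print(f"Wrote {build_gif()}")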
Advanced Replay and Analysis
It is also possible to record simulations and replay them later, which is particularly useful for investigating failures in non-deterministic simulations that are otherwise difficult to reproduce.
Chassy’s example Isaac Sim container takes simulation artifacts to the next level by generating complete USD (Universal Scene Description) files and accompanying JSON state files for each significant simulation event. These files capture the entire simulation state, including object positions, physics parameters, and sensor configurations, enabling you to replay non-deterministic simulations exactly as they occurred. This capability transforms debugging from guesswork into precise analysis, as you can load any saved state into Isaac Sim or your preferred Nvidia Omniverse Visual Debugger and inspect every aspect of the simulation at that moment. In the future, Chassy SLAM will also be able to render these scenes in your web browser.
The OVD (Omniverse Validation Data) directory structure organizes these replay files by test scenario and timestamp, with each test generating multiple state captures at critical points during execution. The summary.json files provide metadata about the test run, including configuration parameters and high-level outcomes, while the numbered state files allow you to step through the simulation chronologically. This comprehensive capture approach is particularly valuable for scenarios involving complex physics interactions, multi-robot coordination, or AI behavior validation where understanding the exact sequence of events is crucial.
/opt/chassy
└── reports
    ├── junit_isaac_environment.xml
    ├── OVD
    │   ├── articulated_robot_20250907_171459_856
    │   │   ├── state_001.json
    │   │   ├── state_001.usd
    │   │   ├── state_002.json
    │   │   ├── state_002.usd
    │   │   ├── state_003.json
    │   │   ├── state_003.usd
    │   │   ├── state_004.json
    │   │   ├── state_004.usd
    │   │   └── summary.json
    │   ├── collision_detection_20250907_171333_867
    │   │   ├── state_001.json
    │   │   ├── state_001.usd
    │   │   ├── state_002.json
    │   │   ├── state_002.usd
    │   │   ├── state_003.json
    │   │   ├── state_003.usd
    │   │   ├── state_004.json
    │   │   ├── state_004.usd
    │   │   ├── state_005.json
    │   │   ├── state_005.usd
    │   │   ├── state_006.json
    │   │   ├── state_006.usd
    │   │   └── summary.json
    │   ├── rigid_body_physics_20250907_171315_612
    │   │   ├── state_001.json
    │   │   ├── state_001.usd
    │   │   ├── state_002.json
    │   │   ├── state_002.usd
    │   │   ├── state_003.json
    │   │   ├── state_003.usd
    │   │   ├── state_004.json
    │   │   ├── state_004.usd
    │   │   ├── state_005.json
    │   │   ├── state_005.usd
    │   │   └── summary.json
    │   └── robot_loading_20250907_171440_848
    │       ├── state_001.json
    │       ├── state_001.usd
    │       ├── state_002.json
    │       ├── state_002.usd
    │       └── summary.json
    ├── test-results-isaac-core.xml
    ├── test-results-isaac-init.xml
    ├── test-results-isaac-physics.xml
    └── test-results-isaac-robot.xml
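As a hedged example of post-processing this layout (the fields inside summary.json are not specified here, so the script simply prints whatever metadata it finds), a small script can walk each OVD scenario and list its state captures in chronological order:

# inspect_ovd.py - illustrative sketch; assumes only the directory layout
# shown above. Field names inside summary.json are not assumed.
import json
from pathlib import Path

OVD_ROOT = Path("/opt/chassy/reports/OVD")

def inspect_run(run_dir: Path) -> None:
    summary_path = run_dir / "summary.json"
    if summary_path.exists():
        summary = json.loads(summary_path.read_text())
        print(f"{run_dir.name}: {summary}")
    # State captures are numbered, so sorting steps through the run chronologically.
    for state_json in sorted(run_dir.glob("state_*.json")):
        usd_file = state_json.with_suffix(".usd")
        print(f"  {state_json.name} (replay scene: {usd_file.name}, "
              f"present: {usd_file.exists()})")

if __name__ == "__main__":
    for run_dir in sorted(OVD_ROOT.iterdir()):
        if run_dir.is_dir():
            inspect_run(run_dir)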
Artifact Collection and Processing
The consistent report structure across all simulation containers simplifies artifact collection in automated workflows. The predictable organization - with JUnit XMLs at the root level and additional artifacts in subdirectories - makes it straightforward to build post-processing pipelines that extract relevant data based on your specific needs.
For continuous integration workflows, the combination of structured test results and rich debugging artifacts provides a complete picture of simulation outcomes. Failed tests can be quickly diagnosed using visual documentation or simulation replays, while successful runs generate artifacts that serve as validation evidence for certification or compliance requirements. This comprehensive approach to simulation reporting ensures that every simulation run contributes valuable data to your development and validation process.
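As a minimal sketch of such a pipeline (the archive location and aggregation format are assumptions, not part of Chassy's documented tooling), a post-processing step might aggregate the JUnit results at the root of the reports directory and copy the debugging artifacts for archival:

# collect_artifacts.py - illustrative post-processing sketch; the archive
# path and aggregation format are assumptions.
import shutil
import xml.etree.ElementTree as ET
from pathlib import Path

REPORTS_DIR = Path("/opt/chassy/reports")
ARCHIVE_DIR = Path("/tmp/simulation-artifacts")  # hypothetical archive location

def aggregate_junit(reports_dir: Path) -> dict:
    totals = {"tests": 0, "failures": 0, "errors": 0, "skipped": 0}
    for xml_file in reports_dir.glob("*.xml"):
        root = ET.parse(xml_file).getroot()
        # JUnit files use either a single <testsuite> or a <testsuites> wrapper.
        suites = [root] if root.tag == "testsuite" else root.findall("testsuite")
        for suite in suites:
            for key in totals:
                totals[key] += int(suite.get(key, 0))
    return totals

def archive_debug_artifacts(reports_dir: Path, archive_dir: Path) -> None:
    # Screenshots, OVD captures, and other subdirectories hold the debugging artifacts.
    for subdir in (p for p in reports_dir.iterdir() if p.is_dir()):
        shutil.copytree(subdir, archive_dir / subdir.name, dirs_exist_ok=True)

if __name__ == "__main__":
    print("JUnit totals:", aggregate_junit(REPORTS_DIR))
    archive_debug_artifacts(REPORTS_DIR, ARCHIVE_DIR)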