lzbinden committed (verified) · Commit a1203c3 · 1 Parent(s): b551298

Add model sub cards

Files changed (4)
  1. BIAS.md +6 -0
  2. EXPLAINABILITY.md +38 -0
  3. PRIVACY.md +36 -0
  4. SAFETY_and_SECURITY.md +13 -0
BIAS.md ADDED
@@ -0,0 +1,6 @@
+ # Bias Subcard
+ ## Participation considerations from adversely impacted groups protected classes in model design and testing:
+ None
+
+ ## Measures taken to mitigate against unwanted bias:
+ None
EXPLAINABILITY.md ADDED
@@ -0,0 +1,38 @@
+ # Explainability Subcard
+ ## Intended Domain
+ Surgical policy online evaluation and synthetic data generation.
+
+ ## Model Type
+ Diffusion Transformer
+
+ ## Intended Users
+ Medical Robotics Engineers, Surgeons
+
+ ## Output
+ Types: A sequence of 12 video frames. Formats: Red, Green, Blue (RGB)
+
+ ## Describe how the model works:
+ The model accepts a 28-dimensional action vector (14 dimensions per arm) alongside the current video frame and predicts the subsequent 12 frames.
+ Through autoregressive rollout, it can generate videos of complete surgical trajectories from either learned policies or manually designed action sequences.
+
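The action-conditioned rollout described above can be sketched as follows. This is a minimal illustration, not the model's actual implementation: the function names, the stub standing in for the diffusion transformer, and the frame resolution are all assumptions.

```python
import numpy as np

ACTION_DIM = 28              # 14 dimensions per arm, as stated in the card
CHUNK_LEN = 12               # frames predicted per model call
FRAME_SHAPE = (256, 256, 3)  # hypothetical RGB frame resolution

def predict_chunk(frame: np.ndarray, action: np.ndarray) -> np.ndarray:
    """Placeholder for the diffusion-transformer call: one conditioning
    frame plus a 28-D action vector in, 12 RGB frames out."""
    assert action.shape == (ACTION_DIM,)
    # A real model would denoise latents here; this stub just repeats the input frame.
    return np.repeat(frame[None], CHUNK_LEN, axis=0)

def rollout(initial_frame: np.ndarray, actions: list) -> np.ndarray:
    """Autoregressive rollout: the last predicted frame becomes the
    conditioning frame for the next 12-frame chunk."""
    frames, current = [], initial_frame
    for action in actions:
        chunk = predict_chunk(current, action)
        frames.append(chunk)
        current = chunk[-1]  # feed the final generated frame back in
    return np.concatenate(frames, axis=0)

video = rollout(np.zeros(FRAME_SHAPE, dtype=np.uint8),
                [np.zeros(ACTION_DIM) for _ in range(4)])
print(video.shape)  # 4 chunks of 12 frames each: (48, 256, 256, 3)
```

Note how this structure explains the drift limitation discussed below: each chunk conditions only on a generated frame, so prediction errors compound over long rollouts.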
+ ## Name the adversely impacted groups this has been tested to deliver comparable outcomes regardless of:
+ None
+
+ ## Technical Limitations & Mitigation:
+ The model may underperform under poor or variable lighting, occlusions from instruments or blood, and specular reflections, all of which can degrade visual predictions. It may also perform poorly in out-of-distribution scenarios, including novel procedures, unusual anatomies, or emergency situations not well represented in the training data; rapid motions or long-horizon predictions where autoregressive drift accumulates errors; actions beyond the trained kinematic range or near joint limits; and generalization across different camera placements, surgical sites, or surgeon styles.
+
+ **Mitigation:**
+ To mitigate these limitations, we recommend data augmentation with lighting and occlusion variations; uncertainty estimation and out-of-distribution detection to flag anomalous states; limiting autoregressive rollout length with periodic ground-truth re-initialization; enforcing kinematic and safety constraints; collecting multi-site training data; and maintaining strict human oversight with multiple safety layers.
+ ## Verified to have met prescribed NVIDIA quality standards:
+ Yes
+
+ ## Performance Metrics:
+ Robust L1 and SSIM vs. number of generated frames.
+
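Metrics of this shape can be computed per generated-frame index roughly as below. This is a sketch under stated assumptions, not the evaluation code used for the card: the function names are illustrative, and the SSIM here uses a single global window rather than the usual sliding-window variant.

```python
import numpy as np

def per_frame_l1(pred: np.ndarray, ref: np.ndarray) -> np.ndarray:
    """Mean absolute error for each generated frame index.
    pred, ref: (num_frames, H, W, C) arrays."""
    diff = np.abs(pred.astype(np.float64) - ref.astype(np.float64))
    return diff.mean(axis=(1, 2, 3))

def global_ssim(x: np.ndarray, y: np.ndarray, data_range: float = 255.0) -> float:
    """SSIM over one whole frame (simplified: one global window)."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    x, y = x.astype(np.float64), y.astype(np.float64)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

# Toy comparison: 12 predicted frames vs. 12 reference frames.
pred = np.zeros((12, 64, 64, 3), dtype=np.uint8)
ref = np.full((12, 64, 64, 3), 10, dtype=np.uint8)
l1 = per_frame_l1(pred, ref)   # one L1 value per generated frame index
ssim0 = global_ssim(pred[0], ref[0])
```

Plotting `l1` (and SSIM) against the frame index then shows how quality degrades as autoregressive rollout lengthens.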
+ ## Potential Known Risks:
+ The model may generate videos that contain artifacts. It may inaccurately represent 3D space, 4D space-time, or physical laws, leading to artifacts such as disappearing or morphing objects, unrealistic interactions, implausible motions, and physically inconsistent outcomes.
+
+ ## Licensing:
+ Governing Terms: Use of this model is governed by the [NVIDIA Open Model License Agreement](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/).
PRIVACY.md ADDED
@@ -0,0 +1,36 @@
+ # Privacy Subcard
+ ## Generatable or reverse engineerable personal data?
+ No
+
+ ## Personal data used to create this model?
+ Yes
+
+ ## How often is dataset reviewed?
+ The dataset is initially reviewed upon addition, and subsequent reviews are conducted as needed or upon request for changes.
+
+ ## Is a mechanism in place to honor data subject right of access or deletion of personal data?
+ Yes
+
+ ## If personal data was collected for the development of the model, was it collected directly by NVIDIA?
+ No
+
+ ## If personal data was collected for the development of the model by NVIDIA, do you maintain or have access to disclosures made to data subjects?
+ N/A
+
+ ## If personal data was collected for the development of this AI model, was it minimized to only what was required?
+ Yes
+
+ ## Was data from user interactions with the AI model (e.g. user input and prompts) used to train the model?
+ No
+
+ ## Is there provenance for all datasets used in training?
+ Yes
+
+ ## Does data labeling (annotation, metadata) comply with privacy laws?
+ Yes
+
+ ## Is data compliant with data subject requests for data correction or removal, if such a request was made?
+ Yes
+
+ ## Applicable Privacy Policy
+ https://www.nvidia.com/en-us/about-nvidia/privacy-policy/
SAFETY_and_SECURITY.md ADDED
@@ -0,0 +1,13 @@
+ # Safety & Security Subcard
+ ## Model Application Field(s):
+ Surgical Robotics
+
+ ## Describe the life critical impact (if present).
+ This model performs surgical robotics simulation; it is not intended for diagnostic purposes. Additional testing and evaluation are
+ recommended prior to use in clinical settings and non-experimental downstream applications.
+
+ ## Use Case Restrictions:
+ Abide by the [NVIDIA Open Model License Agreement](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/).
+
+ ## Model and dataset restrictions:
+ The Principle of Least Privilege (PoLP) is applied to limit access for dataset generation and model development. Access restrictions on the dataset are enforced during training, and dataset license constraints are adhered to.