sabato-nocera committed · Commit 6c470c6 · verified · 1 Parent(s): 852965e

Dear model owner(s),
We are a group of researchers investigating the usefulness of sharing AIBOMs (Artificial Intelligence Bills of Materials) to document AI models. AIBOMs are machine-readable, structured lists of the components (e.g., datasets and models) that make up an AI model, intended to enhance transparency in AI-model supply chains.

To pursue the above-mentioned objective, we identified popular models on HuggingFace and, based on your model card (and some configuration information available on HuggingFace), we generated your AIBOM according to the CycloneDX (v1.6) standard (see https://cyclonedx.org/docs/1.6/json/). AIBOMs are generated as JSON files using the following open-source supporting tool: https://github.com/MSR4SBOM/ALOHA (technical details are available in the research paper: https://github.com/MSR4SBOM/ALOHA/blob/main/ALOHA.pdf).
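
As an illustration of the kind of Hub metadata such a generator draws on, here is a minimal Python sketch using the `huggingface_hub` client; the fields shown are an assumption about what is read, not a description of ALOHA's actual pipeline:

```python
# Minimal sketch: fetch the Hub metadata an AIBOM generator might read.
# Assumes `huggingface_hub` is installed (pip install huggingface_hub);
# ALOHA's actual extraction pipeline may differ.
from huggingface_hub import HfApi

info = HfApi().model_info("OpenGVLab/InternVL3-78B")
print(info.pipeline_tag)  # task, e.g. "image-text-to-text"
print(info.tags)          # tags mirrored in the AIBOM's "tags" field below
```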

The JSON file in this pull request is your AIBOM (see https://github.com/MSR4SBOM/ALOHA/blob/main/documentation.json for details on its structure).
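
Because the AIBOM is plain CycloneDX JSON, it can be inspected with standard tooling. For example, a minimal standard-library Python sketch (the file name is the one committed in this pull request):

```python
import json

# Load the AIBOM committed in this pull request.
with open("OpenGVLab_InternVL3-78B.json") as f:
    bom = json.load(f)

model = bom["metadata"]["component"]
print(model["name"])                                  # OpenGVLab/InternVL3-78B
print(model["modelCard"]["modelParameters"]["task"])  # image-text-to-text

# Training datasets appear as top-level "data" components, cross-referenced
# from the model card via their "bom-ref".
for component in bom.get("components", []):
    if component["type"] == "data":
        print("dataset:", component["name"])          # OpenGVLab/MMPR-v1.2
```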

The submitted AIBOM reflects the model's current information; when the model evolves, it can easily be regenerated with the aforementioned AIBOM generator tool.

We are opening this pull request containing an AIBOM for your AI model and hope you will consider it. We would also like to hear your opinion on the usefulness (or not) of AIBOMs through a 3-minute anonymous survey: https://forms.gle/WGffSQD5dLoWttEe7.

Thanks in advance, and regards,
Riccardo D’Avino, Fatima Ahmed, Sabato Nocera, Simone Romano, Giuseppe Scanniello (University of Salerno, Italy),
Massimiliano Di Penta (University of Sannio, Italy),
The MSR4SBOM team

Files changed (1)
  1. OpenGVLab_InternVL3-78B.json +136 -0
OpenGVLab_InternVL3-78B.json ADDED
@@ -0,0 +1,136 @@
+ {
+   "bomFormat": "CycloneDX",
+   "specVersion": "1.6",
+   "serialNumber": "urn:uuid:f3d4b532-6fad-4b9a-b4cb-24dc7ba6e74a",
+   "version": 1,
+   "metadata": {
+     "timestamp": "2025-06-05T09:39:11.687113+00:00",
+     "component": {
+       "type": "machine-learning-model",
+       "bom-ref": "OpenGVLab/InternVL3-78B-e9d921a6-e013-5632-9136-d28da087616e",
+       "name": "OpenGVLab/InternVL3-78B",
+       "externalReferences": [
+         {
+           "url": "https://huggingface.co/OpenGVLab/InternVL3-78B",
+           "type": "documentation"
+         }
+       ],
+       "modelCard": {
+         "modelParameters": {
+           "task": "image-text-to-text",
+           "architectureFamily": "internvl_chat",
+           "modelArchitecture": "InternVLChatModel",
+           "datasets": [
+             {
+               "ref": "OpenGVLab/MMPR-v1.2-f5ad7f01-75b1-5539-aff3-747fe24b14f6"
+             }
+           ]
+         },
+         "properties": [
+           {
+             "name": "library_name",
+             "value": "transformers"
+           },
+           {
+             "name": "base_model",
+             "value": "OpenGVLab/InternVL3-78B-Instruct"
+           },
+           {
+             "name": "base_model_relation",
+             "value": "finetune"
+           }
+         ]
+       },
+       "authors": [
+         {
+           "name": "OpenGVLab"
+         }
+       ],
+       "licenses": [
+         {
+           "license": {
+             "name": "qwen",
+             "url": "https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE"
+           }
+         }
+       ],
+       "description": "We introduce InternVL3, an advanced multimodal large language model (MLLM) series that demonstrates superior overall performance. Compared to InternVL 2.5, InternVL3 exhibits superior multimodal perception and reasoning capabilities, while further extending its multimodal capabilities to encompass tool usage, GUI agents, industrial image analysis, 3D vision perception, and more. Additionally, we compare InternVL3 with Qwen2.5 Chat models, whose corresponding pre-trained base models are employed as the initialization of the language component in InternVL3. Benefitting from Native Multimodal Pre-Training, the InternVL3 series achieves even better overall text performance than the Qwen2.5 series. ![image/png](https://huggingface.co/datasets/Weiyun1025/InternVL-Performance/resolve/main/internvl3/overall.png)",
+       "tags": [
+         "transformers",
+         "safetensors",
+         "internvl_chat",
+         "feature-extraction",
+         "internvl",
+         "custom_code",
+         "image-text-to-text",
+         "conversational",
+         "multilingual",
+         "dataset:OpenGVLab/MMPR-v1.2",
+         "arxiv:2312.14238",
+         "arxiv:2404.16821",
+         "arxiv:2412.05271",
+         "arxiv:2411.10442",
+         "arxiv:2504.10479",
+         "arxiv:2412.09616",
+         "base_model:OpenGVLab/InternVL3-78B-Instruct",
+         "base_model:finetune:OpenGVLab/InternVL3-78B-Instruct",
+         "license:other",
+         "region:us"
+       ]
+     }
+   },
+   "components": [
+     {
+       "type": "data",
+       "bom-ref": "OpenGVLab/MMPR-v1.2-f5ad7f01-75b1-5539-aff3-747fe24b14f6",
+       "name": "OpenGVLab/MMPR-v1.2",
+       "data": [
+         {
+           "type": "dataset",
+           "bom-ref": "OpenGVLab/MMPR-v1.2-f5ad7f01-75b1-5539-aff3-747fe24b14f6",
+           "name": "OpenGVLab/MMPR-v1.2",
+           "contents": {
+             "url": "https://huggingface.co/datasets/OpenGVLab/MMPR-v1.2",
+             "properties": [
+               {
+                 "name": "task_categories",
+                 "value": "visual-question-answering"
+               },
+               {
+                 "name": "language",
+                 "value": "en"
+               },
+               {
+                 "name": "size_categories",
+                 "value": "1M<n<10M"
+               },
+               {
+                 "name": "pretty_name",
+                 "value": "MMPR-v1.2"
+               },
+               {
+                 "name": "configs",
+                 "value": "Name of the dataset subset: default {\"split\": \"train\", \"path\": \"annotations.zip\"}"
+               },
+               {
+                 "name": "license",
+                 "value": "mit"
+               }
+             ]
+           },
+           "governance": {
+             "owners": [
+               {
+                 "organization": {
+                   "name": "OpenGVLab",
+                   "url": "https://huggingface.co/OpenGVLab"
+                 }
+               }
+             ]
+           },
+           "description": "\n\t\n\t\t\n\t\tMMPR-v1.2\n\t\n\n[\ud83d\udcc2 GitHub] [\ud83c\udd95 Blog] [\ud83d\udcdc Paper] [\ud83d\udcd6 Documents]\nThis is a newer version of MMPR and MMPR-v1.1, which includes additional data sources to enhance the data diversity and greatly improves the overall performance of InternVL3 across all scales. The prompts used to build this dataset are released in MMPR-v1.2-prompts.\nTo unzip the archive of images, please first run cat images.zip_* > images.zip and then run unzip images.zip.\n\n\n\t\n\t\t\n\t\tIntroduction\n\t\n\nMMPR is a large-scale and\u2026 See the full description on the dataset page: https://huggingface.co/datasets/OpenGVLab/MMPR-v1.2."
+         }
+       ]
+     }
+   ]
+ }