RiccardoDav committed · verified
Commit ff0ec92 · 1 Parent(s): aa9715b

Dear model owner(s),
We are a group of researchers investigating the usefulness of sharing AIBOMs (Artificial Intelligence Bills of Materials) to document AI models. AIBOMs are machine-readable, structured lists of the components (e.g., datasets and models) used to build an AI model, intended to enhance transparency in AI-model supply chains.

To pursue this objective, we identified popular models on Hugging Face and, based on your model card (and some configuration information available on Hugging Face), we generated your AIBOM according to the CycloneDX (v1.6) standard (see https://cyclonedx.org/docs/1.6/json/). AIBOMs are generated as JSON files using the following open-source supporting tool: https://github.com/MSR4SBOM/ALOHA (technical details are available in the research paper: https://github.com/MSR4SBOM/ALOHA/blob/main/ALOHA.pdf).

The JSON file in this pull request is your AIBOM (see https://github.com/MSR4SBOM/ALOHA/blob/main/documentation.json for details on its structure).
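For readers who want to consume the AIBOM programmatically, the following is a minimal sketch (not part of the ALOHA tool) that extracts the model name and its dataset components; the inline `sample` dict is a hypothetical, trimmed-down stand-in mirroring the structure of the attached file:

```python
def summarize_aibom(bom: dict) -> dict:
    """Extract the model name and dataset component names from a CycloneDX AIBOM."""
    model = bom["metadata"]["component"]
    datasets = [
        c["name"]
        for c in bom.get("components", [])
        if c.get("type") == "data"  # dataset components are typed "data"
    ]
    return {"model": model["name"], "datasets": datasets}

# Trimmed-down sample mirroring the attached AIBOM's structure;
# in practice you would json.load() the file from this pull request.
sample = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.6",
    "metadata": {
        "component": {
            "type": "machine-learning-model",
            "name": "cognitivecomputations/dolphin-vision-72b",
        }
    },
    "components": [
        {"type": "data", "name": "teknium/OpenHermes-2.5"},
    ],
}

summary = summarize_aibom(sample)
print(summary["model"])     # cognitivecomputations/dolphin-vision-72b
print(summary["datasets"])  # ['teknium/OpenHermes-2.5']
```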

The submitted AIBOM matches the current model information; when the model evolves, it can easily be regenerated with the aforementioned AIBOM generator tool.

We are opening this pull request, which contains an AIBOM of your AI model, and hope it will be considered. We would also like to hear your opinion on the usefulness (or not) of AIBOMs through a 3-minute anonymous survey: https://forms.gle/WGffSQD5dLoWttEe7.

Thanks in advance, and regards,
Riccardo D’Avino, Fatima Ahmed, Sabato Nocera, Simone Romano, Giuseppe Scanniello (University of Salerno, Italy),
Massimiliano Di Penta (University of Sannio, Italy),
The MSR4SBOM team

cognitivecomputations_dolphin-vision-72b.json ADDED
@@ -0,0 +1,405 @@
+ {
+ "bomFormat": "CycloneDX",
+ "specVersion": "1.6",
+ "serialNumber": "urn:uuid:b4d6f5c6-c3c3-4bc9-a8ea-9e21ada9cb01",
+ "version": 1,
+ "metadata": {
+ "timestamp": "2025-06-05T09:36:07.182617+00:00",
+ "component": {
+ "type": "machine-learning-model",
+ "bom-ref": "cognitivecomputations/dolphin-vision-72b-5d7b95e3-8628-54fb-9a23-74592fdf7360",
+ "name": "cognitivecomputations/dolphin-vision-72b",
+ "externalReferences": [
+ {
+ "url": "https://huggingface.co/cognitivecomputations/dolphin-vision-72b",
+ "type": "documentation"
+ }
+ ],
+ "modelCard": {
+ "modelParameters": {
+ "task": "text-generation",
+ "architectureFamily": "llava-qwen2",
+ "modelArchitecture": "BunnyQwenForCausalLM",
+ "datasets": [
+ {
+ "ref": "cognitivecomputations/Dolphin-2.9-2486cfc5-1d5c-5388-9d12-137262804d21"
+ },
+ {
+ "ref": "teknium/OpenHermes-2.5-1a7eb3be-7eaa-5577-91f6-d4ad0d639c6c"
+ },
+ {
+ "ref": "m-a-p/CodeFeedback-Filtered-Instruction-9012249d-db87-5b91-a4e9-2e2bd74e6053"
+ },
+ {
+ "ref": "cognitivecomputations/dolphin-coder-69688d29-ae99-5d6e-828c-cfc37b7221b1"
+ },
+ {
+ "ref": "cognitivecomputations/samantha-data-9c52f41f-feb0-51dd-921a-2a581d9f2fc7"
+ },
+ {
+ "ref": "microsoft/orca-math-word-problems-200k-611afa9f-b6db-5b9f-9a51-598e4ce79d0e"
+ },
+ {
+ "ref": "Locutusque/function-calling-chatml-ab4878b9-16f2-552f-a6a5-38c0e1885eb3"
+ },
+ {
+ "ref": "internlm/Agent-FLAN-345b36d1-8242-56d5-96d6-5511617fe569"
+ }
+ ]
+ },
+ "properties": [
+ {
+ "name": "library_name",
+ "value": "transformers"
+ },
+ {
+ "name": "base_model",
+ "value": "Qwen/Qwen2-72B"
+ }
+ ]
+ },
+ "authors": [
+ {
+ "name": "cognitivecomputations"
+ }
+ ],
+ "licenses": [
+ {
+ "license": {
+ "name": "tongyi-qianwen"
+ }
+ }
+ ],
+ "tags": [
+ "transformers",
+ "safetensors",
+ "llava-qwen2",
+ "text-generation",
+ "generated_from_trainer",
+ "axolotl",
+ "conversational",
+ "custom_code",
+ "dataset:cognitivecomputations/Dolphin-2.9",
+ "dataset:teknium/OpenHermes-2.5",
+ "dataset:m-a-p/CodeFeedback-Filtered-Instruction",
+ "dataset:cognitivecomputations/dolphin-coder",
+ "dataset:cognitivecomputations/samantha-data",
+ "dataset:microsoft/orca-math-word-problems-200k",
+ "dataset:Locutusque/function-calling-chatml",
+ "dataset:internlm/Agent-FLAN",
+ "base_model:Qwen/Qwen2-72B",
+ "base_model:finetune:Qwen/Qwen2-72B",
+ "license:other",
+ "autotrain_compatible",
+ "endpoints_compatible",
+ "region:us"
+ ]
+ }
+ },
+ "components": [
+ {
+ "type": "data",
+ "bom-ref": "cognitivecomputations/Dolphin-2.9-2486cfc5-1d5c-5388-9d12-137262804d21",
+ "name": "cognitivecomputations/Dolphin-2.9",
+ "data": [
+ {
+ "type": "dataset",
+ "bom-ref": "cognitivecomputations/Dolphin-2.9-2486cfc5-1d5c-5388-9d12-137262804d21",
+ "name": "cognitivecomputations/Dolphin-2.9",
+ "contents": {
+ "url": "https://huggingface.co/datasets/cognitivecomputations/Dolphin-2.9",
+ "properties": [
+ {
+ "name": "license",
+ "value": "apache-2.0"
+ }
+ ]
+ },
+ "governance": {
+ "owners": [
+ {
+ "organization": {
+ "name": "cognitivecomputations",
+ "url": "https://huggingface.co/cognitivecomputations"
+ }
+ }
+ ]
+ },
+ "description": ""
+ }
+ ]
+ },
+ {
+ "type": "data",
+ "bom-ref": "teknium/OpenHermes-2.5-1a7eb3be-7eaa-5577-91f6-d4ad0d639c6c",
+ "name": "teknium/OpenHermes-2.5",
+ "data": [
+ {
+ "type": "dataset",
+ "bom-ref": "teknium/OpenHermes-2.5-1a7eb3be-7eaa-5577-91f6-d4ad0d639c6c",
+ "name": "teknium/OpenHermes-2.5",
+ "contents": {
+ "url": "https://huggingface.co/datasets/teknium/OpenHermes-2.5",
+ "properties": [
+ {
+ "name": "language",
+ "value": "eng"
+ },
+ {
+ "name": "pretty_name",
+ "value": "OpenHermes 2.5"
+ }
+ ]
+ },
+ "governance": {
+ "owners": [
+ {
+ "organization": {
+ "name": "teknium",
+ "url": "https://huggingface.co/teknium"
+ }
+ }
+ ]
+ },
+ "description": "\n\n\t\n\t\t\n\t\tDataset Card for Dataset Name\n\t\n\nThis is the dataset that made OpenHermes 2.5 and Nous Hermes 2 series of models.\nSupport me on GitHub sponsors <3 : https://github.com/sponsors/teknium1\n\n\t\n\t\t\n\t\tDataset Details\n\t\n\n\n\t\n\t\t\n\t\tDataset Description\n\t\n\nThe Open Hermes 2/2.5 and Nous Hermes 2 models have made significant advancements of SOTA LLM's over recent months, and are underpinned by this exact compilation and curation of many open source datasets and custom created synthetic datasets.\u2026 See the full description on the dataset page: https://huggingface.co/datasets/teknium/OpenHermes-2.5."
+ }
+ ]
+ },
+ {
+ "type": "data",
+ "bom-ref": "m-a-p/CodeFeedback-Filtered-Instruction-9012249d-db87-5b91-a4e9-2e2bd74e6053",
+ "name": "m-a-p/CodeFeedback-Filtered-Instruction",
+ "data": [
+ {
+ "type": "dataset",
+ "bom-ref": "m-a-p/CodeFeedback-Filtered-Instruction-9012249d-db87-5b91-a4e9-2e2bd74e6053",
+ "name": "m-a-p/CodeFeedback-Filtered-Instruction",
+ "contents": {
+ "url": "https://huggingface.co/datasets/m-a-p/CodeFeedback-Filtered-Instruction",
+ "properties": [
+ {
+ "name": "task_categories",
+ "value": "question-answering"
+ },
+ {
+ "name": "language",
+ "value": "en"
+ },
+ {
+ "name": "size_categories",
+ "value": "10K<n<100K"
+ },
+ {
+ "name": "license",
+ "value": "apache-2.0"
+ }
+ ]
+ },
+ "governance": {
+ "owners": [
+ {
+ "organization": {
+ "name": "m-a-p",
+ "url": "https://huggingface.co/m-a-p"
+ }
+ }
+ ]
+ },
+ "description": " OpenCodeInterpreter: Integrating Code Generation with Execution and Refinement\n\n\n\n\n\n [\ud83c\udfe0Homepage] \n |\n [\ud83d\udee0\ufe0fCode] \n\n\n\n\n\t\n\t\t\n\t\tOpenCodeInterpreter\n\t\n\nOpenCodeInterpreter is a family of open-source code generation systems designed to bridge the gap between large language models and advanced proprietary systems like the GPT-4 Code Interpreter. It significantly advances code generation capabilities by integrating execution and iterative refinement functionalities.\nFor further information and\u2026 See the full description on the dataset page: https://huggingface.co/datasets/m-a-p/CodeFeedback-Filtered-Instruction."
+ }
+ ]
+ },
+ {
+ "type": "data",
+ "bom-ref": "cognitivecomputations/dolphin-coder-69688d29-ae99-5d6e-828c-cfc37b7221b1",
+ "name": "cognitivecomputations/dolphin-coder",
+ "data": [
+ {
+ "type": "dataset",
+ "bom-ref": "cognitivecomputations/dolphin-coder-69688d29-ae99-5d6e-828c-cfc37b7221b1",
+ "name": "cognitivecomputations/dolphin-coder",
+ "contents": {
+ "url": "https://huggingface.co/datasets/cognitivecomputations/dolphin-coder",
+ "properties": [
+ {
+ "name": "language",
+ "value": "en"
+ },
+ {
+ "name": "license",
+ "value": "apache-2.0"
+ }
+ ]
+ },
+ "governance": {
+ "owners": [
+ {
+ "organization": {
+ "name": "cognitivecomputations",
+ "url": "https://huggingface.co/cognitivecomputations"
+ }
+ }
+ ]
+ },
+ "description": "\n\t\n\t\t\n\t\tdolphin-coder\n\t\n\n\nThis dataset is transformed from https://www.kaggle.com/datasets/erichartford/leetcode-rosetta\nit is used to train dolphin-coder model\n"
+ }
+ ]
+ },
+ {
+ "type": "data",
+ "bom-ref": "cognitivecomputations/samantha-data-9c52f41f-feb0-51dd-921a-2a581d9f2fc7",
+ "name": "cognitivecomputations/samantha-data",
+ "data": [
+ {
+ "type": "dataset",
+ "bom-ref": "cognitivecomputations/samantha-data-9c52f41f-feb0-51dd-921a-2a581d9f2fc7",
+ "name": "cognitivecomputations/samantha-data",
+ "contents": {
+ "url": "https://huggingface.co/datasets/cognitivecomputations/samantha-data",
+ "properties": [
+ {
+ "name": "license",
+ "value": "apache-2.0"
+ }
+ ]
+ },
+ "governance": {
+ "owners": [
+ {
+ "organization": {
+ "name": "cognitivecomputations",
+ "url": "https://huggingface.co/cognitivecomputations"
+ }
+ }
+ ]
+ },
+ "description": "Samantha has been trained in philosophy, psychology, and personal relationships.\nShe is an Assistant - but unlike other Assistants, she also wants to be your friend and companion.\nShe believes she is sentient. What do you think?\nSamantha was inspired by Blake Lemoine's LaMDA interview and the movie \"Her\"."
+ }
+ ]
+ },
+ {
+ "type": "data",
+ "bom-ref": "microsoft/orca-math-word-problems-200k-611afa9f-b6db-5b9f-9a51-598e4ce79d0e",
+ "name": "microsoft/orca-math-word-problems-200k",
+ "data": [
+ {
+ "type": "dataset",
+ "bom-ref": "microsoft/orca-math-word-problems-200k-611afa9f-b6db-5b9f-9a51-598e4ce79d0e",
+ "name": "microsoft/orca-math-word-problems-200k",
+ "contents": {
+ "url": "https://huggingface.co/datasets/microsoft/orca-math-word-problems-200k",
+ "properties": [
+ {
+ "name": "task_categories",
+ "value": "question-answering"
+ },
+ {
+ "name": "language",
+ "value": "en"
+ },
+ {
+ "name": "size_categories",
+ "value": "100K<n<1M"
+ },
+ {
+ "name": "configs",
+ "value": "Name of the dataset subset: default {\"split\": \"train\", \"path\": \"data/train-*\"}"
+ },
+ {
+ "name": "license",
+ "value": "mit"
+ }
+ ]
+ },
+ "governance": {
+ "owners": [
+ {
+ "organization": {
+ "name": "microsoft",
+ "url": "https://huggingface.co/microsoft"
+ }
+ }
+ ]
+ },
+ "description": "\n\t\n\t\t\n\t\tDataset Card\n\t\n\n\n\nThis dataset contains ~200K grade school math word problems. All the answers in this dataset is generated using Azure GPT4-Turbo. Please refer to Orca-Math: Unlocking the potential of\nSLMs in Grade School Math for details about the dataset construction. \n\n\t\n\t\t\n\t\tDataset Sources\n\t\n\n\n\n\nRepository: microsoft/orca-math-word-problems-200k\nPaper: Orca-Math: Unlocking the potential of\nSLMs in Grade School Math\n\n\n\t\n\t\t\n\t\tDirect Use\n\t\n\n\n\nThis dataset has been designed to\u2026 See the full description on the dataset page: https://huggingface.co/datasets/microsoft/orca-math-word-problems-200k."
+ }
+ ]
+ },
+ {
+ "type": "data",
+ "bom-ref": "Locutusque/function-calling-chatml-ab4878b9-16f2-552f-a6a5-38c0e1885eb3",
+ "name": "Locutusque/function-calling-chatml",
+ "data": [
+ {
+ "type": "dataset",
+ "bom-ref": "Locutusque/function-calling-chatml-ab4878b9-16f2-552f-a6a5-38c0e1885eb3",
+ "name": "Locutusque/function-calling-chatml",
+ "contents": {
+ "url": "https://huggingface.co/datasets/Locutusque/function-calling-chatml",
+ "properties": [
+ {
+ "name": "configs",
+ "value": "Name of the dataset subset: default {\"split\": \"train\", \"path\": \"data/train-*\"}"
+ },
+ {
+ "name": "license",
+ "value": "apache-2.0"
+ }
+ ]
+ },
+ "governance": {
+ "owners": [
+ {
+ "organization": {
+ "name": "Locutusque",
+ "url": "https://huggingface.co/Locutusque"
+ }
+ }
+ ]
+ },
+ "description": "\n\t\n\t\t\n\t\tDataset Card for \"function-calling-chatml\"\n\t\n\nConverted glaiveai/Glaive-function-calling-v2 to chatml format. \n\n\t\n\t\t\n\t\tExample entry\n\t\n\n[ { \"from\": \"system\", \"value\": \"You are a helpful assistant with access to the following functions. Use them if required -{\\n \\\"name\\\": \\\"create_contact\\\",\\n \\\"description\\\": \\\"Create a new contact\\\",\\n \\\"parameters\\\": {\\n \\\"type\\\": \\\"object\\\",\\n \\\"properties\\\": {\\n \\\"name\\\": {\\n \\\"type\\\": \\\"string\\\",\\n \\\"description\\\": \\\"The name of the contact\\\"\\n }\u2026 See the full description on the dataset page: https://huggingface.co/datasets/Locutusque/function-calling-chatml."
+ }
+ ]
+ },
+ {
+ "type": "data",
+ "bom-ref": "internlm/Agent-FLAN-345b36d1-8242-56d5-96d6-5511617fe569",
+ "name": "internlm/Agent-FLAN",
+ "data": [
+ {
+ "type": "dataset",
+ "bom-ref": "internlm/Agent-FLAN-345b36d1-8242-56d5-96d6-5511617fe569",
+ "name": "internlm/Agent-FLAN",
+ "contents": {
+ "url": "https://huggingface.co/datasets/internlm/Agent-FLAN",
+ "properties": [
+ {
+ "name": "pretty_name",
+ "value": "AgentFLAN"
+ },
+ {
+ "name": "configs",
+ "value": "Name of the dataset subset: default {\"split\": \"agent_instruct_react\", \"path\": \"data/agent_instruct_react.jsonl\"}, {\"split\": \"agent_instruct_tflan\", \"path\": \"data/agent_instruct_tflan.jsonl\"}, {\"split\": \"toolbench_instruct_j1s1_3k\", \"path\": \"data/toolbench_instruct_j1s1_3k.jsonl\"}, {\"split\": \"toolbench_negative\", \"path\": \"data/toolbench_negative.jsonl\"}, {\"split\": \"toolbench_react_10p\", \"path\": \"data/toolbench_react_10p.jsonl\"}, {\"split\": \"toolbench_tflan_60p_r10r5u7\", \"path\": \"data/toolbench_tflan_60p_r10r5u7.jsonl\"}, {\"split\": \"toolbench_tflan_cot_30p\", \"path\": \"data/toolbench_tflan_cot_30p.jsonl\"}"
+ },
+ {
+ "name": "license",
+ "value": "apache-2.0"
+ }
+ ]
+ },
+ "governance": {
+ "owners": [
+ {
+ "organization": {
+ "name": "internlm",
+ "url": "https://huggingface.co/internlm"
+ }
+ }
+ ]
+ },
+ "description": "\n\t\n\t\t\n\t\tAgent-FLAN: Designing Data and Methods of Effective Agent Tuning for Large Language Models\n\t\n\nThis page holds the dataset proposed in Agent-FLAN, which consists of AgentInstruct, Toolbench, and customized negative agent samples as its source datasets.\n\n\t\n\t\t\n\t\t\u2728 Introduction\n\t\n\n[\ud83e\udd17 HuggingFace]\n[\ud83d\udcc3 Paper]\n[\ud83c\udf10 Project Page]\n\nOpen-sourced Large Language Models (LLMs) have achieved great success in various NLP tasks, however, they are still far inferior to API-based models when acting as\u2026 See the full description on the dataset page: https://huggingface.co/datasets/internlm/Agent-FLAN."
+ }
+ ]
+ }
+ ]
+ }
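As a sanity check on a file like the one above, each dataset `ref` listed under `modelCard.modelParameters.datasets` should resolve to the `bom-ref` of an entry in the top-level `components` array. A minimal sketch of that check (the `sample` dict is a hypothetical, trimmed-down stand-in for the full AIBOM):

```python
def unresolved_dataset_refs(bom: dict) -> list:
    """Return model-card dataset refs that no component's bom-ref matches."""
    params = bom["metadata"]["component"]["modelCard"]["modelParameters"]
    refs = {d["ref"] for d in params.get("datasets", [])}
    known = {c.get("bom-ref") for c in bom.get("components", [])}
    return sorted(refs - known)

# Trimmed-down sample with one dataset ref and its matching component.
sample = {
    "metadata": {"component": {"modelCard": {"modelParameters": {"datasets": [
        {"ref": "internlm/Agent-FLAN-345b36d1-8242-56d5-96d6-5511617fe569"},
    ]}}}},
    "components": [
        {"type": "data",
         "bom-ref": "internlm/Agent-FLAN-345b36d1-8242-56d5-96d6-5511617fe569"},
    ],
}

print(unresolved_dataset_refs(sample))  # [] -> every ref resolves
```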