sabato-nocera committed
Commit c7108f5 · verified · 1 Parent(s): 9d9aa88

Dear model owner(s),
We are a group of researchers investigating the usefulness of sharing AIBOMs (Artificial Intelligence Bills of Materials) to document AI models. AIBOMs are machine-readable, structured lists of components (e.g., datasets and models) used to enhance transparency in AI-model supply chains.

To pursue the above-mentioned objective, we identified popular models on HuggingFace and, based on your model card (and some configuration information available on HuggingFace), generated your AIBOM according to the CycloneDX (v1.6) standard (see https://cyclonedx.org/docs/1.6/json/). AIBOMs are generated as JSON files using the following open-source supporting tool: https://github.com/MSR4SBOM/ALOHA (technical details are available in the research paper: https://github.com/MSR4SBOM/ALOHA/blob/main/ALOHA.pdf).

The JSON file in this pull request is your AIBOM (see https://github.com/MSR4SBOM/ALOHA/blob/main/documentation.json for details on its structure).
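For instance, a minimal Python sketch (assuming the AIBOM has been saved locally under the filename used in this pull request) can read the top-level CycloneDX fields of the generated file:

```python
import json

# Load the AIBOM submitted in this pull request
# (hypothetical local path; adjust to wherever you saved the file).
with open("OpenGVLab_InternVL2-Llama3-76B.json") as f:
    aibom = json.load(f)

# Top-level CycloneDX 1.6 fields.
print(aibom["bomFormat"])    # "CycloneDX"
print(aibom["specVersion"])  # "1.6"

# The documented model is the "component" under "metadata".
component = aibom["metadata"]["component"]
print(component["type"], component["name"])
# machine-learning-model OpenGVLab/InternVL2-Llama3-76B
```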

The submitted AIBOM matches the current model information; when the model evolves, it can easily be regenerated with the aforementioned AIBOM generator tool.

We are opening this pull request, which contains the AIBOM of your AI model, and hope you will consider it. We would also like to hear your opinion on the usefulness (or not) of AIBOMs through a 3-minute anonymous survey: https://forms.gle/WGffSQD5dLoWttEe7.

Thanks in advance, and regards,
Riccardo D’Avino, Fatima Ahmed, Sabato Nocera, Simone Romano, Giuseppe Scanniello (University of Salerno, Italy),
Massimiliano Di Penta (University of Sannio, Italy),
The MSR4SBOM team

Files changed (1)
  1. OpenGVLab_InternVL2-Llama3-76B.json +75 -0
OpenGVLab_InternVL2-Llama3-76B.json ADDED
@@ -0,0 +1,75 @@
+ {
+ "bomFormat": "CycloneDX",
+ "specVersion": "1.6",
+ "serialNumber": "urn:uuid:45c0f496-c4cd-4ccf-997e-a8be48a1bace",
+ "version": 1,
+ "metadata": {
+ "timestamp": "2025-06-05T09:36:14.979943+00:00",
+ "component": {
+ "type": "machine-learning-model",
+ "bom-ref": "OpenGVLab/InternVL2-Llama3-76B-db72f665-71fd-5c8c-99f9-e23be02c9399",
+ "name": "OpenGVLab/InternVL2-Llama3-76B",
+ "externalReferences": [
+ {
+ "url": "https://huggingface.co/OpenGVLab/InternVL2-Llama3-76B",
+ "type": "documentation"
+ }
+ ],
+ "modelCard": {
+ "modelParameters": {
+ "task": "image-text-to-text",
+ "architectureFamily": "internvl_chat",
+ "modelArchitecture": "InternVLChatModel"
+ },
+ "properties": [
+ {
+ "name": "library_name",
+ "value": "transformers"
+ },
+ {
+ "name": "base_model",
+ "value": "OpenGVLab/InternViT-6B-448px-V1-5, NousResearch/Hermes-2-Theta-Llama-3-70B"
+ },
+ {
+ "name": "base_model_relation",
+ "value": "merge"
+ }
+ ]
+ },
+ "authors": [
+ {
+ "name": "OpenGVLab"
+ }
+ ],
+ "licenses": [
+ {
+ "license": {
+ "name": "llama3"
+ }
+ }
+ ],
+ "description": "We are excited to announce the release of InternVL 2.0, the latest addition to the InternVL series of multimodal large language models. InternVL 2.0 features a variety of **instruction-tuned models**, ranging from 1 billion to 108 billion parameters. This repository contains the instruction-tuned InternVL2-Llama3-76B model.Compared to the state-of-the-art open-source multimodal large language models, InternVL 2.0 surpasses most open-source models. It demonstrates competitive performance on par with proprietary commercial models across various capabilities, including document and chart comprehension, infographics QA, scene text understanding and OCR tasks, scientific and mathematical problem solving, as well as cultural understanding and integrated multimodal capabilities.InternVL 2.0 is trained with an 8k context window and utilizes training data consisting of long texts, multiple images, and videos, significantly improving its ability to handle these types of inputs compared to InternVL 1.5. For more details, please refer to our [blog](https://internvl.github.io/blog/2024-07-02-InternVL-2.0/) and [GitHub](https://github.com/OpenGVLab/InternVL).| Model Name | Vision Part | Language Part | HF Link | MS Link || :------------------: | :---------------------------------------------------------------------------------: | :------------------------------------------------------------------------------------------: | :--------------------------------------------------------------: | :--------------------------------------------------------------------: || InternVL2-1B | [InternViT-300M-448px](https://huggingface.co/OpenGVLab/InternViT-300M-448px) | [Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) | [\ud83e\udd17 link](https://huggingface.co/OpenGVLab/InternVL2-1B) | [\ud83e\udd16 link](https://modelscope.cn/models/OpenGVLab/InternVL2-1B) || InternVL2-2B | [InternViT-300M-448px](https://huggingface.co/OpenGVLab/InternViT-300M-448px) | [internlm2-chat-1_8b](https://huggingface.co/internlm/internlm2-chat-1_8b) | [\ud83e\udd17 link](https://huggingface.co/OpenGVLab/InternVL2-2B) | [\ud83e\udd16 link](https://modelscope.cn/models/OpenGVLab/InternVL2-2B) || InternVL2-4B | [InternViT-300M-448px](https://huggingface.co/OpenGVLab/InternViT-300M-448px) | [Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) | [\ud83e\udd17 link](https://huggingface.co/OpenGVLab/InternVL2-4B) | [\ud83e\udd16 link](https://modelscope.cn/models/OpenGVLab/InternVL2-4B) || InternVL2-8B | [InternViT-300M-448px](https://huggingface.co/OpenGVLab/InternViT-300M-448px) | [internlm2_5-7b-chat](https://huggingface.co/internlm/internlm2_5-7b-chat) | [\ud83e\udd17 link](https://huggingface.co/OpenGVLab/InternVL2-8B) | [\ud83e\udd16 link](https://modelscope.cn/models/OpenGVLab/InternVL2-8B) || InternVL2-26B | [InternViT-6B-448px-V1-5](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-5) | [internlm2-chat-20b](https://huggingface.co/internlm/internlm2-chat-20b) | [\ud83e\udd17 link](https://huggingface.co/OpenGVLab/InternVL2-26B) | [\ud83e\udd16 link](https://modelscope.cn/models/OpenGVLab/InternVL2-26B) || InternVL2-40B | [InternViT-6B-448px-V1-5](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-5) | [Nous-Hermes-2-Yi-34B](https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B) | [\ud83e\udd17 link](https://huggingface.co/OpenGVLab/InternVL2-40B) | [\ud83e\udd16 link](https://modelscope.cn/models/OpenGVLab/InternVL2-40B) || InternVL2-Llama3-76B | [InternViT-6B-448px-V1-5](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-5) | [Hermes-2-Theta-Llama-3-70B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-70B) | [\ud83e\udd17 link](https://huggingface.co/OpenGVLab/InternVL2-Llama3-76B) | [\ud83e\udd16 link](https://modelscope.cn/models/OpenGVLab/InternVL2-Llama3-76B) |",
+ "tags": [
+ "transformers",
+ "safetensors",
+ "internvl_chat",
+ "feature-extraction",
+ "internvl",
+ "custom_code",
+ "image-text-to-text",
+ "conversational",
+ "multilingual",
+ "arxiv:2312.14238",
+ "arxiv:2404.16821",
+ "arxiv:2410.16261",
+ "arxiv:2412.05271",
+ "base_model:NousResearch/Hermes-2-Theta-Llama-3-70B",
+ "base_model:merge:NousResearch/Hermes-2-Theta-Llama-3-70B",
+ "base_model:OpenGVLab/InternViT-6B-448px-V1-5",
+ "base_model:merge:OpenGVLab/InternViT-6B-448px-V1-5",
+ "license:llama3",
+ "region:us"
+ ]
+ }
+ }
+ }
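As a quick illustration of how such an AIBOM can be consumed downstream, the sketch below (reusing the hypothetical local filename from the earlier example) extracts the declared base models and license from the file:

```python
import json

with open("OpenGVLab_InternVL2-Llama3-76B.json") as f:
    aibom = json.load(f)

component = aibom["metadata"]["component"]

# Model-card properties are name/value pairs; collect them into a dict.
props = {p["name"]: p["value"] for p in component["modelCard"]["properties"]}

# The two base models merged into InternVL2-Llama3-76B, as declared above.
base_models = [m.strip() for m in props["base_model"].split(",")]
print(base_models)
# ['OpenGVLab/InternViT-6B-448px-V1-5', 'NousResearch/Hermes-2-Theta-Llama-3-70B']

# Declared license of the model.
print(component["licenses"][0]["license"]["name"])  # llama3
```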