Commit 4de2cdf (verified) · 0 parent(s)

Super-squash history to reclaim storage
Files changed:

- .gitattributes +81 -0
- InternVL3-8B-bf16.gguf +3 -0
- InternVL3-8B-bf16.mmproj +3 -0
- InternVL3-8B-bf16_q8_0.gguf +3 -0
- InternVL3-8B-f16.mmproj +3 -0
- InternVL3-8B-f16_q8_0.gguf +3 -0
- InternVL3-8B-f32.mmproj +3 -0
- InternVL3-8B-iq2_m.gguf +3 -0
- InternVL3-8B-iq2_s.gguf +3 -0
- InternVL3-8B-iq2_xs.gguf +3 -0
- InternVL3-8B-iq2_xxs.gguf +3 -0
- InternVL3-8B-iq3_m.gguf +3 -0
- InternVL3-8B-iq3_s.gguf +3 -0
- InternVL3-8B-iq3_xs.gguf +3 -0
- InternVL3-8B-iq3_xxs.gguf +3 -0
- InternVL3-8B-iq4_nl.gguf +3 -0
- InternVL3-8B-iq4_xs.gguf +3 -0
- InternVL3-8B-q2_k_m.gguf +3 -0
- InternVL3-8B-q2_k_s.gguf +3 -0
- InternVL3-8B-q3_k_m.gguf +3 -0
- InternVL3-8B-q3_k_s.gguf +3 -0
- InternVL3-8B-q4_0.gguf +3 -0
- InternVL3-8B-q4_1.gguf +3 -0
- InternVL3-8B-q4_k_m.gguf +3 -0
- InternVL3-8B-q4_k_s.gguf +3 -0
- InternVL3-8B-q5_0.gguf +3 -0
- InternVL3-8B-q5_1.gguf +3 -0
- InternVL3-8B-q5_k_m.gguf +3 -0
- InternVL3-8B-q5_k_s.gguf +3 -0
- InternVL3-8B-q6_k_m.gguf +3 -0
- InternVL3-8B-q8_0.gguf +3 -0
- InternVL3-8B-q8_0.mmproj +3 -0
- InternVL3-8B.imatrix +3 -0
- README.md +802 -0
.gitattributes ADDED
@@ -0,0 +1,81 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
InternVL3-8B-f16.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-8B-f16_q8_0.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-8B-bf16_q8_0.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-8B-f16_q6_k.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-8B-bf16_q6_k.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-8B-f16_q4_k.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-8B-bf16_q4_k.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-8B-q2_k_l.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-8B-q3_k_l.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-8B-q4_k_l.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-8B-q5_k_l.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-8B-q6_k_l.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-8B-q2_k_m.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-8B-q2_k_s.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-8B-q3_k_m.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-8B-q3_k_s.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-8B-q4_k_m.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-8B-q4_k_s.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-8B-q5_k_m.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-8B-q5_k_s.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-8B-q6_k_m.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-8B-q8_0.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-8B-q4_0.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-8B-q4_1.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-8B-q4_0_l.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-8B-q4_1_l.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-8B-q5_0.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-8B-q5_1.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-8B-q5_0_l.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-8B-q5_1_l.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-8B-iq2_xs.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-8B-iq2_xxs.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-8B-iq2_s.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-8B-iq2_m.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-8B-iq3_xs.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-8B-iq3_xxs.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-8B-iq3_s.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-8B-iq3_m.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-8B-iq4_xs.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-8B-iq4_nl.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-8B.imatrix filter=lfs diff=lfs merge=lfs -text
InternVL3-8B-q8_0.mmproj filter=lfs diff=lfs merge=lfs -text
InternVL3-8B-f32.mmproj filter=lfs diff=lfs merge=lfs -text
InternVL3-8B-bf16.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-8B-bf16.mmproj filter=lfs diff=lfs merge=lfs -text
InternVL3-8B-f16.mmproj filter=lfs diff=lfs merge=lfs -text
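
These tracking rules are the kind that `git lfs track` writes into `.gitattributes`; adding another pattern appends an equivalent line (the pattern below is just an example, not one from this commit):

```sh
git lfs track "*.gguf"
# appends: *.gguf filter=lfs diff=lfs merge=lfs -text
```
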
InternVL3-8B-bf16.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c4edefe0520cec0ca5c2e3e26483b60d6ef21356c70d85c3c917898e4b40c4f3
size 15232254112

InternVL3-8B-bf16.mmproj ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d6640356b365b69105aa1fc83bcd567f419f0c83ea81d4da1009c816465b87f6
size 666002976

InternVL3-8B-bf16_q8_0.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:39bab9d23f6bac869308458b4bc2146005206037d689925d37958b6ce789d70e
size 11282399392

InternVL3-8B-f16.mmproj ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d20a18c3755226ecff3bbacf4f7baf2014adef687bfe444b76309300045cadb9
size 666002976

InternVL3-8B-f16_q8_0.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:014ff2b32f4beb893afb3af3f73cbfd89a5b98978e1a0f5d42fe663c4382fdec
size 11282399392

InternVL3-8B-f32.mmproj ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:061abb8af8797fd84160de10f6cf79fecd09f18e21875a20ee7e270f7404e313
size 1325032992

InternVL3-8B-iq2_m.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6d0c5c82c9032155bb79858fa51d7b71abfa52a97af9c4fdc0d98f391b3aa876
size 3037191744

InternVL3-8B-iq2_s.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:57b30467754169a5886d021849e5319b7169fdbde8bfb37672447f43df3609b2
size 2911034944

InternVL3-8B-iq2_xs.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c0da5b46cae87a62ced65bc60c95ea6848d3ce7e6e2ad3249f9793397c092014
size 2837405248

InternVL3-8B-iq2_xxs.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8974776a90e6c9b0ef29393ca361a773fcd38cd2fea9f4a788edee504f039c86
size 2648972864

InternVL3-8B-iq3_m.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ab4daf1f5ef78b31051a1bfb43404426cf7a898eb9993f8f8446c659a23938c5
size 3601456704

InternVL3-8B-iq3_s.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7ffa8168f2bbe47aba3e16e99d381fdbc1693705eb302e5b06dae1af4ea819f7
size 3563925056

InternVL3-8B-iq3_xs.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2a9d7cf25d5aaa4cf9f3a2bde899b646990652984bfbfdf90413c1a3e74e4d11
size 3410988608

InternVL3-8B-iq3_xxs.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c116e402e1db3c55471e335cfa26d017bcb980a0d00feabc3b20dc2668186544
size 3270725184

InternVL3-8B-iq4_nl.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1a2c20621f6d6e9f32a1b77cb1e4028386444c9001a8fb34d3dd9e3b0ea3e411
size 4435872672

InternVL3-8B-iq4_xs.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3661a3f5491118167b110ec5e03502c595ef5f89b964bf9166f90d7e291a7dd3
size 4216575552

InternVL3-8B-q2_k_m.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1320b4b717d899c1455a79c424baa2a83817e748a84c00a455913af060892744
size 3257193472

InternVL3-8B-q2_k_s.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6a780b7c8e8ea4d3299de75095dba23a2f42a797ddd30387121610ed1a338fa8
size 2932395584

InternVL3-8B-q3_k_m.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:62b91e3b16c272e77cf470b6273513449fede87bd2dae1d94fe718d754ae9327
size 3994551296

InternVL3-8B-q3_k_s.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b371065dca0e3a3a5c36f341c0882312127eb79d8701cbe19c9edba42ec857f0
size 3608882752

InternVL3-8B-q4_0.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:054ff8b613b846a0426370c83a5b01b8de0d289863b5fbf6d2677c9660b940f4
size 4289303360

InternVL3-8B-q4_1.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a3adec1fbece0095bd6a0eee687ad61b5bf3a18c1224cc5d9305d48dbd19a026
size 4765083840

InternVL3-8B-q4_k_m.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e3c3b35567da9c30a45b610094f9eef538cf605f32cbe182b46c7ba27dd2c4c1
size 4775347200

InternVL3-8B-q4_k_s.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3577f6d044fa58ac67003a09ea4c8f3aa07e8157c4e256e4ab735e3a17e51077
size 4631757824

InternVL3-8B-q5_0.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e55ba9794ef9a38e365d03507757861d77cb069d8bba47d40a21fef166f617d4
size 5240864320

InternVL3-8B-q5_1.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d30ecc3b842e05ca2ba6123911baae440a4dada04539735a4f249ae02e78de35
size 5716644800

InternVL3-8B-q5_k_m.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e0a05d22116503da7be9b55a0a476efb637fe596f991351071512845fb3c5851
size 5525148672

InternVL3-8B-q5_k_s.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:df110153dc77b0c4d1778cc1871d140e3babc95917f364c4df6c54280aade211
size 5451060224

InternVL3-8B-q6_k_m.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:924f7d62178dbf201d23f9f53480a3b01b0e328fb3b4b5e7f0881bda7d614fc1
size 6251897856

InternVL3-8B-q8_0.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1bd5467e180e9a7d47f37c6c3c201d2bec07bc5d508aac5769f006a9117d831c
size 8095546912

InternVL3-8B-q8_0.mmproj ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6af2280ff68490a660faf762145760dd11895f47996a798926b849ddb442a765
size 357082656

InternVL3-8B.imatrix ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6010680c2a8f67f2d407b11062e4582c76ebadede2ad0a1a85bb279b240f218a
size 4536712
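
Every weight file in this commit is stored as a Git LFS pointer like the ones above; the binaries themselves live in LFS storage. To fetch a single quant instead of cloning the whole repository, one option is `huggingface_hub` (the `repo_id` below is a placeholder for wherever this repository is hosted):

```python
from huggingface_hub import hf_hub_download

# repo_id is a placeholder; substitute the repository this commit belongs to.
path = hf_hub_download(repo_id="your-org/InternVL3-8B-GGUF",
                       filename="InternVL3-8B-q4_k_m.gguf")
print(path)  # local cache path of the downloaded GGUF file
```
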
README.md ADDED
@@ -0,0 +1,802 @@
---
license: apache-2.0
license_name: qwen
license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
pipeline_tag: image-text-to-text
library_name: transformers
base_model:
- OpenGVLab/InternVL3-8B-Instruct
base_model_relation: finetune
datasets:
- OpenGVLab/MMPR-v1.2
language:
- multilingual
tags:
- internvl
- custom_code
---

# <span style="color: #7FFF7F;">InternVL3-8B GGUF Models</span>

## <span style="color: #7F7FFF;">Model Generation Details</span>

This model was generated using [llama.cpp](https://github.com/ggerganov/llama.cpp) at commit [`6adc3c3e`](https://github.com/ggerganov/llama.cpp/commit/6adc3c3ebc029af058ac950a8e2a825fdf18ecc6).

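Since this commit ships GGUF weights together with `.mmproj` vision projectors, a quick way to smoke-test a quant is llama.cpp's multimodal CLI. This is a minimal sketch, assuming a llama.cpp build recent enough to include the `mtmd` tools and InternVL support; the file names come from this commit, while the image path is a placeholder:

```sh
# Minimal vision-chat run (illustrative): pair a language GGUF with a vision projector.
./llama-mtmd-cli \
  -m InternVL3-8B-q4_k_m.gguf \
  --mmproj InternVL3-8B-f16.mmproj \
  --image ./test.jpg \
  -p "Describe this image."
```
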
---

## <span style="color: #7FFF7F;">Quantization Beyond the IMatrix</span>

I've been experimenting with a new quantization approach that selectively elevates the precision of key layers beyond what the default IMatrix configuration provides.

In my testing, standard IMatrix quantization underperforms at lower bit depths, especially with Mixture of Experts (MoE) models. To address this, I'm using the `--tensor-type` option in `llama.cpp` to manually "bump" important layers to higher precision. You can see the implementation here:
👉 [Layer bumping with llama.cpp](https://github.com/Mungert69/GGUFModelBuilder/blob/main/model-converter/tensor_list_builder.py)

While this does increase model file size, it significantly improves precision for a given quantization level.

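For reference, a layer bump of this kind looks roughly like the following on the command line. This is a sketch only: the `--tensor-type` flag requires a recent llama.cpp and its exact syntax may vary between versions, and the tensor names and target types here are illustrative rather than the settings used to produce these files:

```sh
# Re-quantize with an importance matrix, forcing selected tensors to higher precision.
./llama-quantize --imatrix InternVL3-8B.imatrix \
  --tensor-type attn_v=q8_0 --tensor-type ffn_down=q6_k \
  InternVL3-8B-bf16.gguf InternVL3-8B-q4_k_m.gguf q4_k_m
```
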
### **I'd love your feedback—have you tried this? How does it perform for you?**

---

<a href="https://readyforquantum.com/huggingface_gguf_selection_guide.html" style="color: #7FFF7F;">
Click here to get info on choosing the right GGUF model format
</a>

---

<!--Begin Original Model Card-->

# InternVL3-8B

[\[📂 GitHub\]](https://github.com/OpenGVLab/InternVL) [\[📜 InternVL 1.0\]](https://huggingface.co/papers/2312.14238) [\[📜 InternVL 1.5\]](https://huggingface.co/papers/2404.16821) [\[📜 InternVL 2.5\]](https://huggingface.co/papers/2412.05271) [\[📜 InternVL2.5-MPO\]](https://huggingface.co/papers/2411.10442) [\[📜 InternVL3\]](https://huggingface.co/papers/2504.10479)

[\[🆕 Blog\]](https://internvl.github.io/blog/) [\[🗨️ Chat Demo\]](https://internvl.opengvlab.com/) [\[🤗 HF Demo\]](https://huggingface.co/spaces/OpenGVLab/InternVL) [\[🚀 Quick Start\]](#quick-start) [\[📖 Documents\]](https://internvl.readthedocs.io/en/latest/)

<div align="center">
<img width="500" alt="image" src="https://cdn-uploads.huggingface.co/production/uploads/64006c09330a45b03605bba3/zJsd2hqd3EevgXo6fNgC-.png">
</div>

## Introduction

We introduce InternVL3, an advanced multimodal large language model (MLLM) series that demonstrates superior overall performance.
Compared to InternVL 2.5, InternVL3 exhibits superior multimodal perception and reasoning capabilities, while further extending its multimodal capabilities to encompass tool usage, GUI agents, industrial image analysis, 3D vision perception, and more.
Additionally, we compare InternVL3 with Qwen2.5 Chat models, whose corresponding pre-trained base models are employed as the initialization of the language component in InternVL3. Benefiting from Native Multimodal Pre-Training, the InternVL3 series achieves even better overall text performance than the Qwen2.5 series.



## InternVL3 Family

In the following table, we provide an overview of the InternVL3 series.

| Model Name | Vision Part | Language Part | HF Link |
| :-----------: | :-------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------: | :------------------------------------------------------: |
| InternVL3-1B | [InternViT-300M-448px-V2_5](https://huggingface.co/OpenGVLab/InternViT-300M-448px-V2_5) | [Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL3-1B) |
| InternVL3-2B | [InternViT-300M-448px-V2_5](https://huggingface.co/OpenGVLab/InternViT-300M-448px-V2_5) | [Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL3-2B) |
| InternVL3-8B | [InternViT-300M-448px-V2_5](https://huggingface.co/OpenGVLab/InternViT-300M-448px-V2_5) | [Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL3-8B) |
| InternVL3-9B | [InternViT-300M-448px-V2_5](https://huggingface.co/OpenGVLab/InternViT-300M-448px-V2_5) | [internlm3-8b-instruct](https://huggingface.co/internlm/internlm3-8b-instruct) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL3-9B) |
| InternVL3-14B | [InternViT-300M-448px-V2_5](https://huggingface.co/OpenGVLab/InternViT-300M-448px-V2_5) | [Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL3-14B) |
| InternVL3-38B | [InternViT-6B-448px-V2_5](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V2_5) | [Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL3-38B) |
| InternVL3-78B | [InternViT-6B-448px-V2_5](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V2_5) | [Qwen2.5-72B](https://huggingface.co/Qwen/Qwen2.5-72B) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL3-78B) |



## Model Architecture

As shown in the following figure, [InternVL3](https://internvl.github.io/blog/2025-04-11-InternVL-3/) retains the same model architecture as [InternVL 2.5](https://internvl.github.io/blog/2024-12-05-InternVL-2.5/) and its predecessors, InternVL 1.5 and 2.0, following the "ViT-MLP-LLM" paradigm. In this new version, we integrate a newly incrementally pre-trained InternViT with various pre-trained LLMs, including InternLM 3 and Qwen 2.5, using a randomly initialized MLP projector.



As in the previous version, we applied a pixel unshuffle operation, reducing the number of visual tokens to one-quarter of the original. Besides, we adopted a similar dynamic resolution strategy as InternVL 1.5, dividing images into tiles of 448×448 pixels. The key difference, starting from InternVL 2.0, is that we additionally introduced support for multi-image and video data.

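To make the 4× reduction concrete, here is a minimal sketch of the pixel-unshuffle idea (an illustration, not the exact InternVL implementation): each 2×2 neighborhood of ViT patch features is folded into the channel dimension, so the 1,024 patch tokens of a 448×448 tile become 256 visual tokens.

```python
import torch

def pixel_unshuffle(x: torch.Tensor, scale: int = 2) -> torch.Tensor:
    # x: (batch, H, W, C) grid of patch features from the vision encoder.
    # Each scale x scale neighborhood is merged into one token with scale^2 * C
    # channels, so the token count drops by a factor of scale^2 (4x for scale=2).
    b, h, w, c = x.shape
    x = x.view(b, h // scale, scale, w // scale, scale, c)
    x = x.permute(0, 1, 3, 2, 4, 5).contiguous()
    return x.view(b, h // scale, w // scale, scale * scale * c)

feats = torch.randn(1, 32, 32, 1024)   # 1024 patch tokens from one 448x448 tile
print(pixel_unshuffle(feats).shape)    # torch.Size([1, 16, 16, 4096]) -> 256 tokens
```
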
Notably, in InternVL3, we integrate the [Variable Visual Position Encoding (V2PE)](https://arxiv.org/abs/2412.09616), which utilizes smaller, more flexible position increments for visual tokens. Benefiting from V2PE, InternVL3 exhibits better long context understanding capabilities compared to its predecessors.

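The core idea of V2PE can be sketched with a toy position calculator: text tokens advance the position index by 1, while visual tokens advance it by a small fraction delta, so long interleaved inputs consume far less of the positional range. The increment value below is an arbitrary choice for illustration:

```python
import torch

def v2pe_positions(is_visual: torch.Tensor, delta: float = 0.25) -> torch.Tensor:
    """is_visual: bool tensor (seq_len,). Text tokens step the position by 1,
    visual tokens by the smaller increment delta (the gist of V2PE)."""
    inc = torch.where(is_visual, torch.tensor(delta), torch.tensor(1.0))
    return torch.cumsum(inc, dim=0) - inc  # position of each token, starting at 0

mask = torch.tensor([False, True, True, True, True, False])  # text, 4 visual tokens, text
print(v2pe_positions(mask))  # tensor([0.0000, 1.0000, 1.2500, 1.5000, 1.7500, 2.0000])
```
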
## Training Strategy

### Native Multimodal Pre-Training

We propose a [Native Multimodal Pre-Training](https://huggingface.co/papers/2504.10479) approach that consolidates language and vision learning into a single pre-training stage.
In contrast to standard paradigms that first train a language-only model and subsequently adapt it to handle additional modalities, our method interleaves multimodal data (e.g., image-text, video-text, or image-text interleaved sequences) with large-scale textual corpora. This unified training scheme allows the model to learn both linguistic and multimodal representations simultaneously, ultimately enhancing its capability to handle vision-language tasks without the need for separate alignment or bridging modules.
Please see [our paper](https://huggingface.co/papers/2504.10479) for more details.

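Schematically, the single-stage recipe means every optimization step may draw either a pure-text batch or a multimodal batch from one mixture. A toy sketch with made-up loader names and an arbitrary mixing ratio, purely to illustrate the idea:

```python
import random

def mixed_pretraining_batches(text_loader, multimodal_loader, text_ratio=0.5):
    """Single-stage mixing sketch: each step yields either a language-only batch
    or an interleaved image/video-text batch, so both kinds of representation
    are learned jointly rather than in separate stages."""
    text_iter, mm_iter = iter(text_loader), iter(multimodal_loader)
    while True:
        if random.random() < text_ratio:
            yield next(text_iter)   # pure-text batch
        else:
            yield next(mm_iter)     # multimodal batch
```
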
### Supervised Fine-Tuning

In this phase, the techniques of random JPEG compression, square loss re-weighting, and multimodal data packing proposed in [InternVL2.5](https://arxiv.org/abs/2412.05271) are also employed in the InternVL3 series.
The main advancement of the SFT phase in InternVL3 compared to InternVL2.5 lies in the use of higher-quality and more diverse training data.
Specifically, we further extend training samples for tool use, 3D scene understanding, GUI operations, long context tasks, video understanding, scientific diagrams, creative writing, and multimodal reasoning.

### Mixed Preference Optimization

During Pre-training and SFT, the model is trained to predict the next token conditioned on previous ground-truth tokens.
However, during inference, the model predicts each token based on its own prior outputs.
This discrepancy between ground-truth tokens and model-predicted tokens introduces a distribution shift, which can impair the model’s Chain-of-Thought (CoT) reasoning capabilities.
To mitigate this issue, we employ [MPO](https://arxiv.org/abs/2411.10442), which introduces additional supervision from both positive and negative samples to align the model response distribution with the ground-truth distribution, thereby improving reasoning performance.
Specifically, the training objective of MPO is a combination of
preference loss \\(\mathcal{L}_{\text{p}}\\),
quality loss \\(\mathcal{L}_{\text{q}}\\),
and generation loss \\(\mathcal{L}_{\text{g}}\\),
which can be formulated as follows:

$$
\mathcal{L}=w_{p}\cdot\mathcal{L}_{\text{p}} + w_{q}\cdot\mathcal{L}_{\text{q}} + w_{g}\cdot\mathcal{L}_{\text{g}},
$$

where \\(w_{*}\\) represents the weight assigned to each loss component. Please see [our paper](https://arxiv.org/abs/2411.10442) for more details about MPO.

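As a concrete reading of the objective, here is a minimal PyTorch sketch of the weighted combination. The component losses follow the spirit of the MPO paper (a DPO-style preference term, a per-response quality term, and the usual generation term), but the exact forms and weights below are illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn.functional as F

def mpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected,
             sft_nll, w_p=0.8, w_q=0.2, w_g=1.0, beta=0.1):
    # Preference loss L_p: rank the chosen response above the rejected one.
    margin = beta * ((logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected))
    l_p = -F.logsigmoid(margin).mean()
    # Quality loss L_q: judge each response on its own absolute quality.
    l_q = -(F.logsigmoid(beta * (logp_chosen - ref_logp_chosen)).mean() +
            F.logsigmoid(-beta * (logp_rejected - ref_logp_rejected)).mean()) / 2
    # Generation loss L_g: ordinary next-token NLL on the positive response.
    l_g = sft_nll
    # Weighted combination L = w_p * L_p + w_q * L_q + w_g * L_g.
    return w_p * l_p + w_q * l_q + w_g * l_g
```
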
### Test-Time Scaling

Test-Time Scaling has been shown to be an effective method to enhance the reasoning abilities of LLMs and MLLMs.
In this work, we use the Best-of-N evaluation strategy and employ [VisualPRM-8B](https://huggingface.co/OpenGVLab/VisualPRM-8B) as the critic model to select the best response for reasoning and mathematics evaluation.

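In sketch form, Best-of-N is simply sampling followed by reranking with the critic; `policy.sample` and `critic.score` below are stand-in names rather than real APIs:

```python
def best_of_n(question, image, policy, critic, n=8):
    """Sample N candidate answers and keep the one the critic model
    (e.g. VisualPRM-8B) scores highest."""
    candidates = [policy.sample(question, image) for _ in range(n)]
    return max(candidates, key=lambda ans: critic.score(question, image, ans))
```
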
## Evaluation on Multimodal Capability

### Multimodal Reasoning and Mathematics



### OCR, Chart, and Document Understanding



### Multi-Image & Real-World Comprehension



### Comprehensive Multimodal & Hallucination Evaluation



### Visual Grounding



### Multimodal Multilingual Understanding



### Video Understanding



### GUI Grounding



### Spatial Reasoning



## Evaluation on Language Capability

We compare InternVL3 with Qwen2.5 Chat models, whose corresponding pre-trained base models are employed as the initialization of the language component in InternVL3.
Benefiting from Native Multimodal Pre-Training, the InternVL3 series achieves even better overall text performance than the Qwen2.5 series.
Please note that the evaluation scores of the Qwen2.5 series may differ from those officially reported, as we have adopted the prompt versions provided in the table across all datasets for OpenCompass evaluation.



## Ablation Study

### Native Multimodal Pre-Training

We conduct experiments on the InternVL2-8B model while keeping its architecture, initialization parameters, and training data entirely unchanged. Traditionally, InternVL2-8B employs a training pipeline that begins with an MLP warmup phase for feature alignment followed by an Instruction Tuning stage. In our experiments, we substitute the conventional MLP warmup phase with a native multimodal pre-training process. This modification isolates the contribution of native multimodal pre-training to the overall multimodal capability of the model.

The evaluation results in the figure below show that the model with native multimodal pre-training exhibits performance on most benchmarks that is comparable to the fully multi-stage-trained InternVL2-8B baseline. Furthermore, when followed by instruction tuning on higher-quality data, the model demonstrates further performance gains across evaluated multimodal tasks. These findings underscore the efficiency of native multimodal pre-training in imparting powerful multimodal capabilities to MLLMs.



### Mixed Preference Optimization

As shown in the table below, models fine-tuned with MPO demonstrate superior reasoning performance across seven multimodal reasoning benchmarks compared to their counterparts without MPO. Specifically, InternVL3-78B and InternVL3-38B outperform their counterparts by 4.1 and 4.5 points, respectively. Notably, the training data used for MPO is a subset of that used for SFT, indicating that the performance improvements primarily stem from the training algorithm rather than the training data.



### Variable Visual Position Encoding

As reported in the table below, the introduction of V2PE leads to significant performance gains across most evaluation metrics. In addition, our ablation studies—by varying the positional increment \\( \delta \\)—reveal that even for tasks primarily involving conventional contexts, relatively small \\( \delta \\) values can achieve optimal performance. These findings provide important insights for future efforts aimed at refining position encoding strategies for visual tokens in MLLMs.



## Quick Start

We provide example code to run `InternVL3-8B` using `transformers`.

> Please use transformers>=4.37.2 to ensure the model works normally.

### Model Loading

#### 16-bit (bf16 / fp16)

```python
import torch
from transformers import AutoTokenizer, AutoModel
path = "OpenGVLab/InternVL3-8B"
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    use_flash_attn=True,
    trust_remote_code=True).eval().cuda()
```

#### BNB 8-bit Quantization

```python
import torch
from transformers import AutoTokenizer, AutoModel
path = "OpenGVLab/InternVL3-8B"
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,
    load_in_8bit=True,
    low_cpu_mem_usage=True,
    use_flash_attn=True,
    trust_remote_code=True).eval()
```

#### Multiple GPUs

The reason for writing the code this way is to avoid errors that occur during multi-GPU inference due to tensors not being on the same device. By ensuring that the first and last layers of the large language model (LLM) are on the same device, we prevent such errors.

```python
import math
import torch
from transformers import AutoConfig, AutoModel, AutoTokenizer

def split_model(model_name):
    device_map = {}
    world_size = torch.cuda.device_count()
    config = AutoConfig.from_pretrained(model_name, trust_remote_code=True)
    num_layers = config.llm_config.num_hidden_layers
    # Since the first GPU will be used for ViT, treat it as half a GPU.
    num_layers_per_gpu = math.ceil(num_layers / (world_size - 0.5))
    num_layers_per_gpu = [num_layers_per_gpu] * world_size
    num_layers_per_gpu[0] = math.ceil(num_layers_per_gpu[0] * 0.5)
    layer_cnt = 0
    for i, num_layer in enumerate(num_layers_per_gpu):
        for j in range(num_layer):
            device_map[f'language_model.model.layers.{layer_cnt}'] = i
            layer_cnt += 1
    device_map['vision_model'] = 0
    device_map['mlp1'] = 0
    device_map['language_model.model.tok_embeddings'] = 0
    device_map['language_model.model.embed_tokens'] = 0
    device_map['language_model.output'] = 0
    device_map['language_model.model.norm'] = 0
    device_map['language_model.model.rotary_emb'] = 0
    device_map['language_model.lm_head'] = 0
    device_map[f'language_model.model.layers.{num_layers - 1}'] = 0

    return device_map

path = "OpenGVLab/InternVL3-8B"
device_map = split_model(path)
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    use_flash_attn=True,
    trust_remote_code=True,
    device_map=device_map).eval()
```

### Inference with Transformers

```python
import math
import numpy as np
import torch
import torchvision.transforms as T
from decord import VideoReader, cpu
from PIL import Image
from torchvision.transforms.functional import InterpolationMode
from transformers import AutoConfig, AutoModel, AutoTokenizer

IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)

def build_transform(input_size):
    MEAN, STD = IMAGENET_MEAN, IMAGENET_STD
    transform = T.Compose([
        T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
        T.Resize((input_size, input_size), interpolation=InterpolationMode.BICUBIC),
        T.ToTensor(),
        T.Normalize(mean=MEAN, std=STD)
    ])
    return transform

def find_closest_aspect_ratio(aspect_ratio, target_ratios, width, height, image_size):
    best_ratio_diff = float('inf')
    best_ratio = (1, 1)
    area = width * height
    for ratio in target_ratios:
        target_aspect_ratio = ratio[0] / ratio[1]
        ratio_diff = abs(aspect_ratio - target_aspect_ratio)
        if ratio_diff < best_ratio_diff:
            best_ratio_diff = ratio_diff
            best_ratio = ratio
        elif ratio_diff == best_ratio_diff:
            if area > 0.5 * image_size * image_size * ratio[0] * ratio[1]:
                best_ratio = ratio
    return best_ratio

def dynamic_preprocess(image, min_num=1, max_num=12, image_size=448, use_thumbnail=False):
    orig_width, orig_height = image.size
    aspect_ratio = orig_width / orig_height

    # calculate the existing image aspect ratio
    target_ratios = set(
        (i, j) for n in range(min_num, max_num + 1) for i in range(1, n + 1) for j in range(1, n + 1) if
        i * j <= max_num and i * j >= min_num)
    target_ratios = sorted(target_ratios, key=lambda x: x[0] * x[1])

    # find the closest aspect ratio to the target
    target_aspect_ratio = find_closest_aspect_ratio(
        aspect_ratio, target_ratios, orig_width, orig_height, image_size)

    # calculate the target width and height
    target_width = image_size * target_aspect_ratio[0]
    target_height = image_size * target_aspect_ratio[1]
    blocks = target_aspect_ratio[0] * target_aspect_ratio[1]

    # resize the image
    resized_img = image.resize((target_width, target_height))
    processed_images = []
    for i in range(blocks):
        box = (
            (i % (target_width // image_size)) * image_size,
            (i // (target_width // image_size)) * image_size,
            ((i % (target_width // image_size)) + 1) * image_size,
            ((i // (target_width // image_size)) + 1) * image_size
        )
        # split the image
        split_img = resized_img.crop(box)
        processed_images.append(split_img)
    assert len(processed_images) == blocks
    if use_thumbnail and len(processed_images) != 1:
        thumbnail_img = image.resize((image_size, image_size))
        processed_images.append(thumbnail_img)
    return processed_images

def load_image(image_file, input_size=448, max_num=12):
    image = Image.open(image_file).convert('RGB')
    transform = build_transform(input_size=input_size)
    images = dynamic_preprocess(image, image_size=input_size, use_thumbnail=True, max_num=max_num)
    pixel_values = [transform(image) for image in images]
    pixel_values = torch.stack(pixel_values)
    return pixel_values

def split_model(model_name):
    device_map = {}
    world_size = torch.cuda.device_count()
    config = AutoConfig.from_pretrained(model_name, trust_remote_code=True)
    num_layers = config.llm_config.num_hidden_layers
    # Since the first GPU will be used for ViT, treat it as half a GPU.
    num_layers_per_gpu = math.ceil(num_layers / (world_size - 0.5))
    num_layers_per_gpu = [num_layers_per_gpu] * world_size
    num_layers_per_gpu[0] = math.ceil(num_layers_per_gpu[0] * 0.5)
    layer_cnt = 0
    for i, num_layer in enumerate(num_layers_per_gpu):
        for j in range(num_layer):
            device_map[f'language_model.model.layers.{layer_cnt}'] = i
            layer_cnt += 1
    device_map['vision_model'] = 0
    device_map['mlp1'] = 0
    device_map['language_model.model.tok_embeddings'] = 0
    device_map['language_model.model.embed_tokens'] = 0
    device_map['language_model.output'] = 0
    device_map['language_model.model.norm'] = 0
    device_map['language_model.model.rotary_emb'] = 0
    device_map['language_model.lm_head'] = 0
    device_map[f'language_model.model.layers.{num_layers - 1}'] = 0

    return device_map

# If you set `load_in_8bit=True`, you will need two 80GB GPUs.
# If you set `load_in_8bit=False`, you will need at least three 80GB GPUs.
path = 'OpenGVLab/InternVL3-8B'
device_map = split_model(path)
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,
    load_in_8bit=False,
    low_cpu_mem_usage=True,
    use_flash_attn=True,
    trust_remote_code=True,
    device_map=device_map).eval()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True, use_fast=False)

# set the max number of tiles in `max_num`
pixel_values = load_image('./examples/image1.jpg', max_num=12).to(torch.bfloat16).cuda()
generation_config = dict(max_new_tokens=1024, do_sample=True)

# pure-text conversation
question = 'Hello, who are you?'
response, history = model.chat(tokenizer, None, question, generation_config, history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')

question = 'Can you tell me a story?'
response, history = model.chat(tokenizer, None, question, generation_config, history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')

# single-image single-round conversation
question = '<image>\nPlease describe the image shortly.'
response = model.chat(tokenizer, pixel_values, question, generation_config)
print(f'User: {question}\nAssistant: {response}')

# single-image multi-round conversation
question = '<image>\nPlease describe the image in detail.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')

question = 'Please write a poem according to the image.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')

# multi-image multi-round conversation, combined images
pixel_values1 = load_image('./examples/image1.jpg', max_num=12).to(torch.bfloat16).cuda()
pixel_values2 = load_image('./examples/image2.jpg', max_num=12).to(torch.bfloat16).cuda()
pixel_values = torch.cat((pixel_values1, pixel_values2), dim=0)

question = '<image>\nDescribe the two images in detail.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
                               history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')

question = 'What are the similarities and differences between these two images.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
                               history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')

# multi-image multi-round conversation, separate images
pixel_values1 = load_image('./examples/image1.jpg', max_num=12).to(torch.bfloat16).cuda()
pixel_values2 = load_image('./examples/image2.jpg', max_num=12).to(torch.bfloat16).cuda()
pixel_values = torch.cat((pixel_values1, pixel_values2), dim=0)
num_patches_list = [pixel_values1.size(0), pixel_values2.size(0)]

question = 'Image-1: <image>\nImage-2: <image>\nDescribe the two images in detail.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
                               num_patches_list=num_patches_list,
                               history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')

question = 'What are the similarities and differences between these two images.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
                               num_patches_list=num_patches_list,
                               history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')

# batch inference, single image per sample
pixel_values1 = load_image('./examples/image1.jpg', max_num=12).to(torch.bfloat16).cuda()
pixel_values2 = load_image('./examples/image2.jpg', max_num=12).to(torch.bfloat16).cuda()
num_patches_list = [pixel_values1.size(0), pixel_values2.size(0)]
pixel_values = torch.cat((pixel_values1, pixel_values2), dim=0)

questions = ['<image>\nDescribe the image in detail.'] * len(num_patches_list)
responses = model.batch_chat(tokenizer, pixel_values,
                             num_patches_list=num_patches_list,
                             questions=questions,
                             generation_config=generation_config)
for question, response in zip(questions, responses):
    print(f'User: {question}\nAssistant: {response}')

# video multi-round conversation
def get_index(bound, fps, max_frame, first_idx=0, num_segments=32):
    if bound:
        start, end = bound[0], bound[1]
    else:
        start, end = -100000, 100000
    start_idx = max(first_idx, round(start * fps))
    end_idx = min(round(end * fps), max_frame)
    seg_size = float(end_idx - start_idx) / num_segments
    frame_indices = np.array([
        int(start_idx + (seg_size / 2) + np.round(seg_size * idx))
        for idx in range(num_segments)
    ])
    return frame_indices

def load_video(video_path, bound=None, input_size=448, max_num=1, num_segments=32):
    vr = VideoReader(video_path, ctx=cpu(0), num_threads=1)
    max_frame = len(vr) - 1
    fps = float(vr.get_avg_fps())

    pixel_values_list, num_patches_list = [], []
    transform = build_transform(input_size=input_size)
    frame_indices = get_index(bound, fps, max_frame, first_idx=0, num_segments=num_segments)
    for frame_index in frame_indices:
        img = Image.fromarray(vr[frame_index].asnumpy()).convert('RGB')
        img = dynamic_preprocess(img, image_size=input_size, use_thumbnail=True, max_num=max_num)
        pixel_values = [transform(tile) for tile in img]
        pixel_values = torch.stack(pixel_values)
        num_patches_list.append(pixel_values.shape[0])
        pixel_values_list.append(pixel_values)
    pixel_values = torch.cat(pixel_values_list)
    return pixel_values, num_patches_list

video_path = './examples/red-panda.mp4'
pixel_values, num_patches_list = load_video(video_path, num_segments=8, max_num=1)
pixel_values = pixel_values.to(torch.bfloat16).cuda()
video_prefix = ''.join([f'Frame{i+1}: <image>\n' for i in range(len(num_patches_list))])
question = video_prefix + 'What is the red panda doing?'
# Frame1: <image>\nFrame2: <image>\n...\nFrame8: <image>\n{question}
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
                               num_patches_list=num_patches_list, history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')

question = 'Describe this video in detail.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
                               num_patches_list=num_patches_list, history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')
```

#### Streaming Output

Besides this method, you can also use the following code to get streamed output.

```python
from transformers import TextIteratorStreamer
from threading import Thread

# Initialize the streamer
streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True, timeout=10)
# Define the generation configuration
generation_config = dict(max_new_tokens=1024, do_sample=False, streamer=streamer)
# Start the model chat in a separate thread
thread = Thread(target=model.chat, kwargs=dict(
    tokenizer=tokenizer, pixel_values=pixel_values, question=question,
    history=None, return_history=False, generation_config=generation_config,
))
thread.start()

# Initialize an empty string to store the generated text
generated_text = ''
# Loop through the streamer to get the new text as it is generated
for new_text in streamer:
    if new_text == model.conv_template.sep:
        break
    generated_text += new_text
    print(new_text, end='', flush=True)  # Print each new chunk of generated text on the same line
```

| 572 |
+
|
| 573 |
+
## Finetune
|
| 574 |
+
|
| 575 |
+
Many repositories now support fine-tuning of the InternVL series models, including [InternVL](https://github.com/OpenGVLab/InternVL), [SWIFT](https://github.com/modelscope/ms-swift), [XTurner](https://github.com/InternLM/xtuner), and others. Please refer to their documentation for more details on fine-tuning.
|
| 576 |
+
|
| 577 |
+
## Deployment
|
| 578 |
+
|
| 579 |
+
### LMDeploy
|
| 580 |
+
|
| 581 |
+
LMDeploy is a toolkit for compressing, deploying, and serving LLMs & VLMs.
|
| 582 |
+
|
| 583 |
+
```sh
|
| 584 |
+
# if lmdeploy<0.7.3, you need to explicitly set chat_template_config=ChatTemplateConfig(model_name='internvl2_5')
|
| 585 |
+
pip install lmdeploy>=0.7.3
|
| 586 |
+
```
|
| 587 |
+
|
| 588 |
+
LMDeploy abstracts the complex inference process of multi-modal Vision-Language Models (VLM) into an easy-to-use pipeline, similar to the Large Language Model (LLM) inference pipeline.
|
| 589 |
+
|
| 590 |
+
#### A 'Hello, world' Example
|
| 591 |
+
|
| 592 |
+
```python
|
| 593 |
+
from lmdeploy import pipeline, TurbomindEngineConfig, ChatTemplateConfig
|
| 594 |
+
from lmdeploy.vl import load_image
|
| 595 |
+
|
| 596 |
+
model = 'OpenGVLab/InternVL3-8B'
|
| 597 |
+
image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg')
|
| 598 |
+
pipe = pipeline(model, backend_config=TurbomindEngineConfig(session_len=16384, tp=1), chat_template_config=ChatTemplateConfig(model_name='internvl2_5'))
|
| 599 |
+
response = pipe(('describe this image', image))
|
| 600 |
+
print(response.text)
|
| 601 |
+
```
|
| 602 |
+
|
| 603 |
+
If `ImportError` occurs while executing this case, please install the required dependency packages as prompted.
|
| 604 |
+
|
| 605 |
+
#### Multi-images Inference
|
| 606 |
+
|
| 607 |
+
When dealing with multiple images, you can put them all in one list. Keep in mind that multiple images will lead to a higher number of input tokens, and as a result, the size of the context window typically needs to be increased.
|
| 608 |
+
|
| 609 |
+
```python
|
| 610 |
+
from lmdeploy import pipeline, TurbomindEngineConfig, ChatTemplateConfig
|
| 611 |
+
from lmdeploy.vl import load_image
|
| 612 |
+
from lmdeploy.vl.constants import IMAGE_TOKEN
|
| 613 |
+
|
| 614 |
+
model = 'OpenGVLab/InternVL3-8B'
|
| 615 |
+
pipe = pipeline(model, backend_config=TurbomindEngineConfig(session_len=16384, tp=1), chat_template_config=ChatTemplateConfig(model_name='internvl2_5'))
|
| 616 |
+
|
| 617 |
+
image_urls=[
|
| 618 |
+
'https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/human-pose.jpg',
|
| 619 |
+
'https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/det.jpg'
|
| 620 |
+
]
|
| 621 |
+
|
| 622 |
+
images = [load_image(img_url) for img_url in image_urls]
|
| 623 |
+
# Numbering images improves multi-image conversations
|
| 624 |
+
response = pipe((f'Image-1: {IMAGE_TOKEN}\nImage-2: {IMAGE_TOKEN}\ndescribe these two images', images))
|
| 625 |
+
print(response.text)
|
| 626 |
+
```
|
| 627 |
+
|
| 628 |
+
#### Batch Prompts Inference
|
| 629 |
+
|
| 630 |
+
Conducting inference with batch prompts is quite straightforward; just place them within a list structure:
|
| 631 |
+
|
| 632 |
+
```python
|
| 633 |
+
from lmdeploy import pipeline, TurbomindEngineConfig, ChatTemplateConfig
|
| 634 |
+
from lmdeploy.vl import load_image
|
| 635 |
+
|
| 636 |
+
model = 'OpenGVLab/InternVL3-8B'
|
| 637 |
+
pipe = pipeline(model, backend_config=TurbomindEngineConfig(session_len=16384, tp=1), chat_template_config=ChatTemplateConfig(model_name='internvl2_5'))
|
| 638 |
+
|
| 639 |
+
image_urls=[
|
| 640 |
+
"https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/human-pose.jpg",
|
| 641 |
+
"https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/det.jpg"
|
| 642 |
+
]
|
| 643 |
+
prompts = [('describe this image', load_image(img_url)) for img_url in image_urls]
|
| 644 |
+
response = pipe(prompts)
|
| 645 |
+
print(response)
|
| 646 |
+
```
|
| 647 |
+
|
| 648 |
+
#### Multi-turn Conversation

There are two ways to run multi-turn conversations with the pipeline. One is to construct messages in the OpenAI format and use the method introduced above; the other is to use the `pipeline.chat` interface, shown in the example below (a sketch of the message-based approach follows it).

```python
from lmdeploy import pipeline, TurbomindEngineConfig, GenerationConfig, ChatTemplateConfig
from lmdeploy.vl import load_image

model = 'OpenGVLab/InternVL3-8B'
pipe = pipeline(model, backend_config=TurbomindEngineConfig(session_len=16384, tp=1), chat_template_config=ChatTemplateConfig(model_name='internvl2_5'))

image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/human-pose.jpg')
gen_config = GenerationConfig(top_k=40, top_p=0.8, temperature=0.8)
sess = pipe.chat(('describe this image', image), gen_config=gen_config)
print(sess.response.text)
sess = pipe.chat('What is the woman doing?', session=sess, gen_config=gen_config)
print(sess.response.text)
```

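For the message-based approach, you maintain the conversation history yourself: build an OpenAI-style message list, run it through the pipeline, append the assistant reply together with the next user turn, and run it again. A minimal sketch, assuming the pipeline accepts GPT-4V-style message lists (the field names mirror the service example further below); the URL is the same demo image:

```python
# Sketch only: reuses `pipe` and `gen_config` from the example above.
image_url = 'https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/human-pose.jpg'
messages = [{
    'role': 'user',
    'content': [
        {'type': 'text', 'text': 'describe this image'},
        {'type': 'image_url', 'image_url': {'url': image_url}},
    ],
}]
first = pipe(messages, gen_config=gen_config)
print(first.text)

# Append the assistant reply and the follow-up question, then resend the full history.
messages.append({'role': 'assistant', 'content': first.text})
messages.append({'role': 'user', 'content': [{'type': 'text', 'text': 'What is the woman doing?'}]})
second = pipe(messages, gen_config=gen_config)
print(second.text)
```
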
#### Service

LMDeploy's `api_server` enables models to be easily packed into services with a single command. The provided RESTful APIs are compatible with OpenAI's interfaces. Below is an example of starting the service:

```shell
lmdeploy serve api_server OpenGVLab/InternVL3-8B --chat-template internvl2_5 --server-port 23333 --tp 1
```

To use the OpenAI-style interface, you need to install the OpenAI Python package:

```shell
pip install openai
```

Then, use the code below to make the API call:

```python
from openai import OpenAI

client = OpenAI(api_key='YOUR_API_KEY', base_url='http://0.0.0.0:23333/v1')
model_name = client.models.list().data[0].id
response = client.chat.completions.create(
    model=model_name,
    messages=[{
        'role': 'user',
        'content': [{
            'type': 'text',
            'text': 'describe this image',
        }, {
            'type': 'image_url',
            'image_url': {
                'url': 'https://modelscope.oss-cn-beijing.aliyuncs.com/resource/tiger.jpeg',
            },
        }],
    }],
    temperature=0.8,
    top_p=0.8)
print(response)
```

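The same endpoint can also stream tokens as they are generated, which is convenient for interactive front ends. A minimal sketch using the standard OpenAI streaming interface, assuming the running `api_server` supports streaming (recent LMDeploy releases do):

```python
# Reuses `client` and `model_name` from the example above.
stream = client.chat.completions.create(
    model=model_name,
    messages=[{'role': 'user', 'content': 'Describe a tiger in one sentence.'}],
    temperature=0.8,
    top_p=0.8,
    stream=True,
)
for chunk in stream:
    if not chunk.choices:
        continue
    delta = chunk.choices[0].delta.content
    if delta:
        # Print tokens as they arrive instead of waiting for the full reply.
        print(delta, end='', flush=True)
print()
```
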
## License

This project is released under the MIT License. It uses the pre-trained Qwen2.5 as a component, which is licensed under the Apache License 2.0.

## Citation

If you find this project useful in your research, please consider citing:

```BibTeX
@article{chen2024expanding,
  title={Expanding Performance Boundaries of Open-Source Multimodal Models with Model, Data, and Test-Time Scaling},
  author={Chen, Zhe and Wang, Weiyun and Cao, Yue and Liu, Yangzhou and Gao, Zhangwei and Cui, Erfei and Zhu, Jinguo and Ye, Shenglong and Tian, Hao and Liu, Zhaoyang and others},
  journal={arXiv preprint arXiv:2412.05271},
  year={2024}
}
@article{wang2024mpo,
  title={Enhancing the Reasoning Ability of Multimodal Large Language Models via Mixed Preference Optimization},
  author={Wang, Weiyun and Chen, Zhe and Wang, Wenhai and Cao, Yue and Liu, Yangzhou and Gao, Zhangwei and Zhu, Jinguo and Zhu, Xizhou and Lu, Lewei and Qiao, Yu and Dai, Jifeng},
  journal={arXiv preprint arXiv:2411.10442},
  year={2024}
}
@article{chen2024far,
  title={How Far Are We to GPT-4V? Closing the Gap to Commercial Multimodal Models with Open-Source Suites},
  author={Chen, Zhe and Wang, Weiyun and Tian, Hao and Ye, Shenglong and Gao, Zhangwei and Cui, Erfei and Tong, Wenwen and Hu, Kongzhi and Luo, Jiapeng and Ma, Zheng and others},
  journal={arXiv preprint arXiv:2404.16821},
  year={2024}
}
@inproceedings{chen2024internvl,
  title={Internvl: Scaling up vision foundation models and aligning for generic visual-linguistic tasks},
  author={Chen, Zhe and Wu, Jiannan and Wang, Wenhai and Su, Weijie and Chen, Guo and Xing, Sen and Zhong, Muyan and Zhang, Qinglong and Zhu, Xizhou and Lu, Lewei and others},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={24185--24198},
  year={2024}
}
```

<!--End Original Model Card-->

---

# <span id="testllm" style="color: #7F7FFF;">🚀 If you find these models useful</span>

Help me test my **AI-Powered Quantum Network Monitor Assistant** with **quantum-ready security checks**:

👉 [Quantum Network Monitor](https://readyforquantum.com/?assistant=open&utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme)


The full open-source code for the Quantum Network Monitor service is available at my GitHub repos (repos with NetworkMonitor in the name): [Source Code Quantum Network Monitor](https://github.com/Mungert69). You will also find the code I use to quantize the models in [GGUFModelBuilder](https://github.com/Mungert69/GGUFModelBuilder), if you want to do it yourself.

💬 **How to test**:
Choose an **AI assistant type**:
- `TurboLLM` (GPT-4.1-mini)
- `HugLLM` (Hugging Face open-source models)
- `TestLLM` (Experimental CPU-only)

### **What I’m Testing**
I’m pushing the limits of **small open-source models for AI network monitoring**, specifically:
- **Function calling** against live network services
- **How small can a model go** while still handling:
  - Automated **Nmap security scans**
  - **Quantum-readiness checks**
  - **Network Monitoring tasks**

🟡 **TestLLM** – Current experimental model (llama.cpp on 2 CPU threads in a Hugging Face Docker space):
- ✅ **Zero-configuration setup**
- ⏳ ~30s load time (slow inference but **no API costs**). No token limit, since the cost is low.
- 🔧 **Help wanted!** If you’re into **edge-device AI**, let’s collaborate!

### **Other Assistants**
🟢 **TurboLLM** – Uses **gpt-4.1-mini**:
- It performs very well, but unfortunately OpenAI charges per token, so token usage is limited.
- **Create custom cmd processors to run .NET code on Quantum Network Monitor Agents**
- **Real-time network diagnostics and monitoring**
- **Security Audits**
- **Penetration testing** (Nmap/Metasploit)

🔵 **HugLLM** – Latest open-source models:
- 🌐 Runs on the Hugging Face Inference API. Performs pretty well using the latest models hosted on Novita.

### 💡 **Example commands you could test**:
1. `"Give me info on my website's SSL certificate"`
2. `"Check if my server is using quantum-safe encryption for communication"`
3. `"Run a comprehensive security audit on my server"`
4. `"Create a cmd processor to ... (whatever you want)"` – Note: you need to install a [Quantum Network Monitor Agent](https://readyforquantum.com/Download/?utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme) to run the .NET code on. This is a very flexible and powerful feature. Use with caution!

### Final Word

I fund the servers used to create these model files, run the Quantum Network Monitor service, and pay for inference from Novita and OpenAI—all out of my own pocket. All the code behind the model creation and the Quantum Network Monitor project is [open source](https://github.com/Mungert69). Feel free to use whatever you find helpful.

If you appreciate the work, please consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva) ☕. Your support helps cover service costs and allows me to raise token limits for everyone.

I'm also open to job opportunities or sponsorship.

Thank you! 😊