XumengWen committed
Commit · b8afa0d
Parent(s): 5b1193b

update model card
README.md CHANGED
@@ -1,5 +1,12 @@
---
license: mit
+license_link: https://github.com/microsoft/Industrial-Foundation-Models/blob/main/LICENSE
+
+tags:
+- llm
+- transfer learning
+- in-context learning
+- tabular data
---

## Model Summary

@@ -21,7 +28,7 @@ This model is designed to process and analyze diverse tabular data from various

### Tokenizer

-
+LLaMA-2-GTL supports a vocabulary size of up to `32000` tokens, which is the same as the base model LLaMA-2.

### Prompt Examples

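The tokenizer note added in the hunk above is easy to sanity-check once the checkpoint is available locally. A minimal sketch, assuming the full checkpoint has already been recovered (see the "Recover full model checkpoint" hunk below) into `./llama-2-gtl`, a hypothetical local path rather than a published model id:

```python
# Sanity-check the vocabulary-size note above. Assumption:
# the full checkpoint was recovered into ./llama-2-gtl
# (a hypothetical local path, not a published model id).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("./llama-2-gtl")

# The base LLaMA-2 tokenizer defines a 32000-token vocabulary.
print(tokenizer.vocab_size)  # expected: 32000
```
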
@@ -85,7 +92,7 @@ Answer: 2

### Recover full model checkpoint

-Please follow the document
+Please follow the document to [prepare the model checkpoint](https://github.com/xumwen/Industrial-Foundation-Models/tree/merge_refactor?tab=readme-ov-file#prepare-the-model-checkpoint).

### Sample inference code
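
The card's own sample inference code is not shown in this diff. As a stand-in, here is a hedged sketch of a generic generation loop for the recovered checkpoint; it is not the repository's official sample. `./llama-2-gtl` is a placeholder local path, and the prompt should be built as shown in the card's Prompt Examples section.

```python
# Hedged sketch of generic inference with the recovered checkpoint;
# NOT the repository's official sample code. "./llama-2-gtl" is a
# placeholder for the locally recovered checkpoint directory.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./llama-2-gtl"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,  # half precision to fit a single GPU
    device_map="auto",          # requires the `accelerate` package
)

# Build the prompt following the "Prompt Examples" section of the card.
prompt = "..."  # placeholder: a GTL-style tabular prompt

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=16, do_sample=False)

# Print only the newly generated tokens (e.g. the predicted answer).
new_tokens = out[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```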