Update README.md
README.md CHANGED
```diff
@@ -27,53 +27,44 @@ This is the model card of a 🤗 transformers model that has been pushed on the
 - **Funded by [optional]:** [More Information Needed]
 - **Shared by [optional]:** [More Information Needed]
 - **Model type:** [More Information Needed]
-- **Language(s) (NLP):** [More Information Needed]
-- **License:** [More Information Needed]
-- **Finetuned from model [optional]:** [More Information Needed]
+- **Language(s) (NLP):** [English]
+- **License:** [CC-BY-NC-SA](https://creativecommons.org/licenses/by-nc-sa/4.0/)
+- **Finetuned from model [optional]:** [Intel/neural-chat-7b-v3-1](https://huggingface.co/Intel/neural-chat-7b-v3-1)
 
 ### Model Sources [optional]
 
 <!-- Provide the basic links for the model. -->
 
 - **Repository:** [More Information Needed]
-- **Paper [optional]:** [More Information Needed]
+- **Paper [optional]:** [InMD-X](http://arxiv.org/abs/2402.11883)
 - **Demo [optional]:** [More Information Needed]
 
 ## Uses
-
-<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
-
-### Direct Use
-
-<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
-
-[More Information Needed]
-
-### Downstream Use [optional]
-
-<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
-
-[More Information Needed]
-
-## Bias, Risks, and Limitations
-
-<!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
-### Recommendations
-
-Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
-
+```python
+import torch
+from peft import PeftModel, PeftConfig
+from transformers import AutoModelForCausalLM, AutoTokenizer
+
+peft_model_id = "InMedData/InMD-X-CAR"
+config = PeftConfig.from_pretrained(peft_model_id)
+model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, return_dict=True, load_in_8bit=True, device_map='auto')
+tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
+
+# Load the LoRA adapter
+model = PeftModel.from_pretrained(model, peft_model_id)
+```
+<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+### Experimental setup
+- **OS**: Ubuntu 22.04.3 LTS
+- **GPU**: NVIDIA A100 (40 GB)
+- **Python**: 3.10.12
+- **PyTorch**: 2.1.1+cu118
+- **Transformers**: 4.37.0.dev0
 
 ## How to Get Started with the Model
```
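The loading snippet in the updated Uses section stops after attaching the adapter. As a minimal sketch of actually querying the model, the helpers below assume the InMD-X adapters keep the base Intel/neural-chat-7b-v3-1 chat format (`### System:` / `### User:` / `### Assistant:`); the names `build_prompt` and `generate_answer` are illustrative, not part of the repository:

```python
def build_prompt(user_msg: str,
                 system_msg: str = "You are a helpful assistant.") -> str:
    # neural-chat-7b-v3-1 chat format; assumption: the InMD-X LoRA
    # adapters were trained with the base model's prompt template.
    return (f"### System:\n{system_msg}\n"
            f"### User:\n{user_msg}\n"
            f"### Assistant:\n")

def generate_answer(model, tokenizer, user_msg: str,
                    max_new_tokens: int = 256) -> str:
    # `model` and `tokenizer` are the objects built in the README snippet.
    inputs = tokenizer(build_prompt(user_msg), return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens,
                                do_sample=False)
    # Strip the prompt tokens so only the completion is returned.
    return tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:],
                            skip_special_tokens=True)
```

Usage would look like `generate_answer(model, tokenizer, "What are common causes of chest pain?")`, with greedy decoding swapped for sampling as needed.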
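For deployment without the `peft` runtime, PEFT can fold the LoRA deltas back into the base weights with `merge_and_unload()`. A sketch under stated assumptions: `peft_model` and `tokenizer` come from the loading snippet, the output directory name is illustrative, and since many `peft` versions refuse to merge into an 8-bit-quantized base, the base model may need to be reloaded in fp16 before merging:

```python
def merge_and_save(peft_model, tokenizer, out_dir: str = "inmd-x-car-merged"):
    # Fold the LoRA update matrices into the base weights; the result is a
    # plain transformers model that no longer needs peft at inference time.
    merged = peft_model.merge_and_unload()
    merged.save_pretrained(out_dir)
    tokenizer.save_pretrained(out_dir)
    return merged
```

The merged checkpoint can then be loaded with `AutoModelForCausalLM.from_pretrained(out_dir)` alone.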