---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
---
# Model Card for Constantinople Real Estate Agent (Llama 2 LoRA)
A LoRA adapter for Llama 2 7B Chat that role-plays a real estate agent in Constantinople in the year 570 AD.
## Model Details
### Model Description
- **Model type:** LoRA
- **Language(s) (NLP):** [More Information Needed]
- **License:** MIT
- **Finetuned from model:** Llama 2 7B Chat
- **Demo:** https://colab.research.google.com/drive/1cM5BNCa0SYkhqPlQ20vXhnywq3eQV5DU?usp=sharing
- **Training Colab:** https://colab.research.google.com/drive/17FmxTAXt8zRw004m-HlpjvzgOpPeYOWq?usp=sharing
## Uses
This model was created for a school project in Social Studies. The assignment was to role-play a real estate agent in Constantinople in the year 570 AD, and the model does exactly that.
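Since this is a PEFT LoRA adapter, it is loaded on top of the base `meta-llama/Llama-2-7b-chat-hf` model rather than used standalone. A minimal loading sketch is below; the adapter repo path is a placeholder assumption (substitute the actual Hugging Face repo id or local directory), and the prompt helper assumes the standard Llama 2 chat `[INST] ... [/INST]` format.

```python
def build_prompt(user_message: str) -> str:
    """Wrap a user message in the Llama 2 chat instruction format."""
    return f"[INST] {user_message} [/INST]"


def load_model(adapter_path: str, base_model: str = "meta-llama/Llama-2-7b-chat-hf"):
    """Load the base chat model and attach the LoRA adapter on top.

    `adapter_path` is a placeholder: point it at the adapter repo or a
    local checkout. Imports are deferred so the prompt helper above can
    be used without peft/transformers installed.
    """
    from peft import PeftModel
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(base_model)
    model = AutoModelForCausalLM.from_pretrained(base_model)
    model = PeftModel.from_pretrained(model, adapter_path)
    return model, tokenizer
```

Note that access to the Llama 2 base weights requires accepting Meta's license on the Hugging Face Hub first.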
## Training Details
### Training Data
We wrote all of the training conversations ourselves. The data can be viewed in the training Colab notebook linked above.
### Model Architecture and Objective
A finetuned Llama 2 model that acts as if it were a real estate agent in Constantinople in the year 570 AD. It tries to convince prospective clients to move to the city.
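For context on the "LoRA" model type listed above: instead of updating the full weight matrix W, LoRA trains a low-rank update B·A so the effective weight becomes W + (alpha/r)·B·A. A toy NumPy sketch (sizes and scaling factor are illustrative assumptions, not this model's actual hyperparameters):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2  # toy hidden size and LoRA rank

W = rng.standard_normal((d, d))   # frozen base weight
A = rng.standard_normal((r, d))   # trainable down-projection
B = np.zeros((d, r))              # trainable up-projection, zero-initialized
alpha = 4                         # LoRA scaling factor

# Effective weight seen at inference time:
W_eff = W + (alpha / r) * (B @ A)

# Because B starts at zero, the adapter is initially a no-op:
assert np.allclose(W_eff, W)
```

Only A and B (2·d·r parameters per adapted matrix, versus d² for full fine-tuning) are trained, which is why the adapter checkpoint is so small.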
### Compute Infrastructure
GCP (via Google Colab)
#### Hardware
Standard Google Colab GPU runtime.
### Framework versions
- PEFT 0.7.2.dev0