---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
---

# Model Card for a Constantinople Real Estate Agent (Llama 2 7B Chat LoRA)

A LoRA adapter for Llama 2 7B Chat that role-plays a real estate agent in Constantinople in the year 570 AD, built for a Social Studies school project.

## Model Details

### Model Description

A LoRA adapter, trained with PEFT, that fine-tunes Llama 2 7B Chat to speak as a sixth-century Constantinopolitan real estate agent.

- **Model type:** LoRA adapter (PEFT)
- **Language(s) (NLP):** [More Information Needed]
- **License:** MIT
- **Finetuned from model:** meta-llama/Llama-2-7b-chat-hf (Llama 2 7B Chat)
- **Demo:** https://colab.research.google.com/drive/1cM5BNCa0SYkhqPlQ20vXhnywq3eQV5DU?usp=sharing
- **Training Colab:** https://colab.research.google.com/drive/17FmxTAXt8zRw004m-HlpjvzgOpPeYOWq?usp=sharing

## Uses

This model was created for a school project in Social Studies. The assignment was to role-play a real estate agent in Constantinople in the year 570 AD, and the model plays that role in conversation.

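Assuming the standard transformers + PEFT loading pattern (the adapter repo id below is a placeholder; the gated meta-llama base model requires access approval), inference might look like:

```python
# Sketch: load the base model, attach this LoRA adapter, and generate.
# Names marked as placeholders are assumptions, not taken from this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-chat-hf"
adapter_id = "path/or/hub-id-of-this-adapter"  # placeholder: replace with this repo's id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # merges in the LoRA weights at runtime

prompt = "[INST] Why should I move to Constantinople? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

The hosted demo Colab linked above shows the same flow end to end.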
## Training Details

### Training Data

We wrote all of the training conversations ourselves; the data is visible in the training Colab notebook linked above.

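As a hypothetical illustration (the actual serialization lives in the training Colab), a hand-written conversation can be wrapped in Llama 2's chat prompt format before fine-tuning:

```python
# Hypothetical helper showing Llama 2's chat special tokens; the real
# dataset and preprocessing are in the training Colab linked above.
def to_llama2_prompt(system: str, user: str, assistant: str) -> str:
    """Wrap one conversational turn in Llama 2 chat formatting."""
    return (
        f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n"
        f"{user} [/INST] {assistant} </s>"
    )

example = to_llama2_prompt(
    system="You are a real estate agent in Constantinople in the year 570 AD.",
    user="Why should I move to Constantinople?",
    assistant="The city sits astride the trade routes of the whole world...",
)
print(example)
```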
### Model Architecture and Objective

A LoRA fine-tune of Llama 2 7B Chat that acts as a real estate agent in Constantinople in the year 570 AD, trying to convince prospective clients to move to the city.

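The card does not record the LoRA hyperparameters, so the configuration below is only a typical sketch for a Llama 2 7B chat fine-tune; every value shown is an assumption, not taken from this project:

```python
# Assumed LoRA configuration (rank, alpha, target modules are NOT from
# this card); shown only to illustrate how the adapter type is set up.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

config = LoraConfig(
    r=16,                 # assumed rank
    lora_alpha=32,        # assumed scaling factor
    target_modules=["q_proj", "v_proj"],  # common choice for Llama models
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```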
### Compute Infrastructure

GCP (via Google Colab)

#### Hardware

A standard Google Colab GPU runtime.

### Framework versions

- PEFT 0.7.2.dev0