---
language: en
license: mit
pipeline_tag: text-generation
tags:
- gpt-neo
- text-generation
- conversational
---

# Model Card for danilo
## Model Details
- **Model Name**: danilo
- **Model Type**: GPT-Neo (text generation)
- **Base Model**: EleutherAI/gpt-neo-125M
- **Fine-Tuned**: Yes (custom dataset)
- **License**: MIT
## Intended Use
This model is designed for **text generation** tasks such as answering questions, generating conversational responses, and completing text prompts.
## Training Data
The model was fine-tuned on a custom dataset of question-answer pairs to mimic a specific style of responses.
## Limitations
- The model may generate incorrect or nonsensical answers when the input is ambiguous or falls outside its training scope.
- It may reproduce biases present in the training data.
## Usage
You can use this model with the Hugging Face `transformers` library:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Replace with the model's repository id on the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("your-username/your-model-name")
model = AutoModelForCausalLM.from_pretrained("your-username/your-model-name")

# Tokenize a prompt and generate a completion of up to 100 tokens.
inputs = tokenizer("How do you prioritize tasks when you’re overwhelmed?", return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
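The call above uses greedy decoding; for conversational output you will often want sampling, e.g. `model.generate(**inputs, do_sample=True, top_k=50, temperature=0.7, max_length=100)`. To make those parameters concrete, here is a minimal dependency-free sketch (not part of the model card's API; the function name and toy logits are illustrative) of what temperature scaling plus top-k filtering do at each decoding step:

```python
import math
import random

def top_k_temperature_sample(logits, k=2, temperature=0.7, rng=None):
    """Sample a token id from `logits` after temperature scaling and
    top-k filtering, mirroring one step of sampled decoding."""
    # Temperature scaling: values below 1.0 sharpen the distribution.
    scaled = [x / temperature for x in logits]
    # Top-k filtering: keep only the k highest-scoring token ids.
    top = sorted(range(len(scaled)), key=lambda i: scaled[i], reverse=True)[:k]
    # Softmax over the surviving logits (subtract max for stability).
    m = max(scaled[i] for i in top)
    exps = {i: math.exp(scaled[i] - m) for i in top}
    total = sum(exps.values())
    # Draw one token id from the renormalized distribution.
    r = (rng or random).random()
    acc = 0.0
    for i, e in exps.items():
        acc += e / total
        if r < acc:
            return i
    return top[-1]

# Toy logits over a 4-token vocabulary: only ids 0 and 1 survive top-2.
token_id = top_k_temperature_sample([2.0, 1.0, 0.5, -1.0], k=2, temperature=0.7)
print(token_id)  # always 0 or 1
```

Lower `temperature` and smaller `top_k` make output more deterministic; higher values make it more varied but less focused.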