|
|
--- |
|
|
datasets: |
|
|
- laion/gpt4v-dataset |
|
|
library_name: transformers
license: cc-by-nc-sa-4.0
|
|
--- |
|
|
# Model Card: AdemGPT
|
|
## 1. General Information
|
|
**Model Name:** AdemGPT
|
|
**Description:** AdemGPT is a pre-trained generative language model designed to produce coherent, relevant text across a wide range of language tasks.
|
|
|
|
|
## 2. Authors and Affiliations
|
|
**Authors:** Trat80
|
|
**Affiliations:** N/A
|
|
|
|
|
## 3. Model Functionality
|
|
**Supported Tasks:** text generation, question answering, text-to-text tasks, etc.
|
|
**Supported Languages:** primarily Spanish.
|
|
**Examples of Use:** summarization, creative writing, informal conversation, among others.
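Since the card lists `library_name: transformers`, the model can presumably also be loaded locally. The following is a minimal sketch, assuming the `Trat80/AdemGPT` checkpoint on the Hub is compatible with the `text-generation` pipeline (downloads the model weights on first run):

```python
from transformers import pipeline

# Assumption: the checkpoint works with the standard text-generation pipeline.
generator = pipeline("text-generation", model="Trat80/AdemGPT")

result = generator("Hola, ¿cómo estás?", max_new_tokens=50)
print(result[0]["generated_text"])
```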
|
|
|
|
|
## 4. Dataset and Training
|
|
**Dataset Origin:** compiled from multiple Spanish-language text sources (books, online articles, conversations, etc.).
|
|
**Dataset Size:** millions of Spanish text examples.
|
|
**Training Procedures:** based on the GPT-3 architecture; trained for several weeks in a high-performance computing environment.
|
|
|
|
|
## 5. Model Performance
|
|
**Evaluation Metrics:** text coherence, question-answering accuracy, language fluency, etc.
|
|
**Results:** achieved high scores on text-generation tests and language-processing tasks.
|
|
|
|
|
## 6. Ethical Considerations
|
|
**Bias Considerations:** efforts have been made to mitigate bias, but the training data may still contain inherent biases.
|
|
**Privacy and Security:** the model does not store user information; exercise caution when using it with sensitive data.
|
|
|
|
|
## 7. Limitations of the Model
|
|
**Known Limitations:** cannot reliably respond in languages other than Spanish, and may struggle with highly specialized or technical concepts.
|
|
|
|
|
## 8. License and Conditions of Use
|
|
**License:** cc-by-nc-sa-4.0
|
|
**Conditions of Use:** the model is available for non-commercial and educational use. Review the full license terms before use.
|
|
|
|
|
## Example |
|
|
|
|
|
Install the `requests` library:
|
|
|
|
|
```bash
pip install requests
```
|
|
|
|
|
Then run the following Python script:
|
|
|
|
|
```python
import requests

model_name = 'Trat80/AdemGPT'
api_token = 'your_api_token'  # Your Hugging Face API token

input_text = "Hi! My Name Is AdemGPT!"

headers = {
    'Authorization': f'Bearer {api_token}',
    'Content-Type': 'application/json'
}

data = {
    'inputs': input_text,
    'parameters': {
        'max_new_tokens': 100
    }
}

response = requests.post(
    f'https://api-inference.huggingface.co/models/{model_name}',
    headers=headers,
    json=data
)

if response.status_code == 200:
    # On success, the Inference API returns a list of results,
    # e.g. [{"generated_text": "..."}].
    generated_text = response.json()[0].get('generated_text')
    print(generated_text)
else:
    print("Request error:", response.status_code, response.text)
```
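The JSON shape returned by the Inference API can vary: successful text-generation calls return a list of result dicts, while error states (for example, while the model is loading) return a single dict. A small helper like the one below, a sketch not part of the original card, handles both shapes defensively:

```python
def extract_generated_text(payload):
    """Extract generated text from a Hugging Face Inference API response.

    Successful text-generation calls return a list of dicts, e.g.
    [{"generated_text": "..."}]; error responses are a single dict.
    Returns an empty string when no text is present.
    """
    if isinstance(payload, list) and payload:
        return payload[0].get("generated_text", "")
    if isinstance(payload, dict):
        return payload.get("generated_text", "")
    return ""


print(extract_generated_text([{"generated_text": "¡Hola!"}]))  # → ¡Hola!
print(extract_generated_text({"error": "model is loading"}))   # → (empty)
```

Calling `extract_generated_text(response.json())` instead of indexing the response directly avoids an `IndexError` or `KeyError` when the API returns an unexpected shape.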
|
|
|