# Model Card for AllyArc
This model card describes AllyArc, an educational chatbot designed to support autistic students with personalized learning experiences. AllyArc uses a fine-tuned Large Language Model to interact with users and provide educational content.
## Model Details

### Model Description
AllyArc is an innovative chatbot tailored for the educational support of autistic students. It leverages a fine-tuned LLM to provide interactive learning experiences, emotional support, and a platform for students to engage in conversational learning.
- Developed by: Zainab, a computer science student and MLH Top 50 honoree.
- Model type: Conversational Large Language Model
- Language(s) (NLP): Primarily English, with potential multilingual support.
- Finetuned from model: Mistral 7B.
## Uses

### Direct Use
AllyArc can be directly interacted with by students and educators through a conversational interface, providing instant responses to queries and aiding in learning.
### Downstream Use
The model can be integrated into educational platforms or applications as a support tool for autistic students, offering personalized assistance.
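As a sketch of such an integration, the wrapper below shows one way a platform might expose AllyArc behind a single entry point. The `AllyArcAssistant` class and the stubbed generator are hypothetical illustrations, not part of the released model; a real deployment would pass in a transformers pipeline instead of the stub.

```python
from typing import Callable

class AllyArcAssistant:
    """Hypothetical integration wrapper: a platform supplies any
    text-generation callable (e.g. a transformers pipeline) and gets
    a single ask() entry point to wire into its own UI."""

    def __init__(self, generate: Callable[[str], str]):
        self.generate = generate

    def ask(self, student_message: str) -> str:
        # A real deployment would add safety filtering and logging here
        prompt = f"Student: {student_message}\nAllyArc:"
        return self.generate(prompt)

# Stand-in generator so this sketch runs without downloading the model
def echo_generate(prompt: str) -> str:
    return prompt + " [model reply goes here]"

assistant = AllyArcAssistant(echo_generate)
print(assistant.ask("Can you explain fractions?"))
```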
### Out-of-Scope Use
AllyArc is not designed for high-stakes decisions, medical advice, or any context outside of educational support.
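A deployment might screen messages for out-of-scope requests before the model ever sees them. The keyword check below is a deliberately simple hypothetical sketch; a production system would need something far more robust than term matching.

```python
# Hypothetical screen: terms suggesting medical or other out-of-scope requests
OUT_OF_SCOPE_TERMS = ("diagnose", "medication", "medical", "prescription")

def in_scope(message: str) -> bool:
    """Return False for messages that look like requests for
    medical or other out-of-scope advice."""
    lowered = message.lower()
    return not any(term in lowered for term in OUT_OF_SCOPE_TERMS)

print(in_scope("Can you help me with my reading homework?"))  # expected: True
print(in_scope("What medication should I take?"))             # expected: False
```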
## Bias, Risks, and Limitations
While designed to be inclusive, there is a risk of unintended bias in responses due to the training data. The model may not fully understand or appropriately respond to all nuances of human emotion and communication.
### Recommendations
Educators should monitor interactions and provide regular feedback to improve AllyArc's accuracy and sensitivity. Users should be aware of the model's limitations and not rely on it for critical decisions.
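One lightweight way to support this kind of monitoring is to record each exchange for later educator review. The `log_interaction` helper below is a hypothetical sketch, not part of AllyArc itself; it appends one JSON line per exchange to a log.

```python
import json
from datetime import datetime, timezone

def log_interaction(log: list, student_message: str, model_reply: str) -> None:
    """Hypothetical sketch: append one JSON record per exchange so an
    educator can review conversations afterwards."""
    record = {
        "time": datetime.now(timezone.utc).isoformat(),
        "student": student_message,
        "allyarc": model_reply,
    }
    log.append(json.dumps(record))

review_log = []
log_interaction(review_log, "I got a low mark in math.",
                "That's okay, let's practice together.")
print(review_log[0])
```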
## How to Get Started with the Model on Google Colab
To explore and interact with AllyArc using Google Colab:
- Open the AllyArc Interactive Colab Notebook.
- Go to `File > Save a copy in Drive` to create a personal copy of the notebook.
- Obtain a Hugging Face API token by creating an account or logging in at Hugging Face.
- In your copied notebook, replace `YOUR_HUGGING_FACE_TOKEN_HERE` with your actual Hugging Face token.
- Follow the instructions in the notebook to install necessary libraries and dependencies.
- Run the cells step by step to initialize and interact with the AllyArc model.
Please ensure you have the appropriate permissions and quotas on Google Colab to run the model without interruption.
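One way to handle the token in a Colab cell is to set it as an environment variable once per session rather than pasting it into multiple cells. The variable name below mirrors the local setup later in this card; the placeholder value is yours to replace.

```python
import os

# Paste your token in place of the placeholder before running this cell
os.environ["HUGGING_FACE_API_KEY"] = "YOUR_HUGGING_FACE_TOKEN_HERE"

# Later cells can read it back instead of hard-coding the token
token = os.environ.get("HUGGING_FACE_API_KEY")
print("Token loaded:", bool(token))
```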
## How to Get Started with the AllyArc Model Locally
To run the AllyArc model on your local machine, follow these steps:
- Ensure you have Python installed on your system.
- Install the necessary Python packages by running:

```shell
pip install transformers tokenizers sentencepiece
```
- Obtain a Hugging Face API token by creating an account or logging in at Hugging Face.
- Set an environment variable for your Hugging Face token. You can do this by running the following command in your terminal (replace `<your_hugging_face_token>` with your actual token):

```shell
export HUGGING_FACE_API_KEY=<your_hugging_face_token>
```
- Create a new Python script or open a Python interactive shell and input the following code:
```python
import os
from huggingface_hub import hf_hub_download
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

# Read the token from the environment variable set in the previous step
HUGGING_FACE_API_KEY = os.environ.get("HUGGING_FACE_API_KEY")

model_id = "ZainabF/allyarc_finetune_model_sample"
filenames = [
    "pytorch_model.bin", "added_tokens.json", "config.json", "generation_config.json",
    "special_tokens_map.json", "spiece.model", "tokenizer_config.json", "pytorch_model.bin.index.json"
]

# Download each model file into the local Hugging Face cache
for filename in filenames:
    downloaded_model_path = hf_hub_download(
        repo_id=model_id,
        filename=filename,
        token=HUGGING_FACE_API_KEY
    )
    print(f"Downloaded {filename} to {downloaded_model_path}")

# Initialize the tokenizer and model (Mistral 7B is a causal language model)
tokenizer = AutoTokenizer.from_pretrained(model_id, legacy=False, token=HUGGING_FACE_API_KEY)
model = AutoModelForCausalLM.from_pretrained(model_id, token=HUGGING_FACE_API_KEY)

# Set up the pipeline for text generation
text_gen_pipeline = pipeline("text-generation", model=model, tokenizer=tokenizer, max_length=1000)

# Generate a response
response = text_gen_pipeline("I'm upset that I got a low mark in math, please help me")
print(response)
```
- Execute the script to download the model and interact with it.
Please ensure that your environment variables are correctly set, and that the necessary packages are installed before running the script. The script will download the model files and then initialize the model for text generation, allowing you to input prompts and receive responses.
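The `text-generation` pipeline returns a list with one dict per generated sequence, each carrying a `generated_text` key (which, for this task, includes the prompt). The snippet below shows how a script might extract the reply; the response contents here are a made-up stand-in, not real model output.

```python
# Hypothetical stand-in for what the pipeline call returns;
# real output text will differ.
response = [{"generated_text": "I'm sorry you're upset. One low mark is a chance to learn..."}]

# Each element of the list is a dict with a "generated_text" key
reply = response[0]["generated_text"]
print(reply)
```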