Instructions to use justsomerandomdude264/SocialScience_Homework_Solver_Llama318B with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use justsomerandomdude264/SocialScience_Homework_Solver_Llama318B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="justsomerandomdude264/SocialScience_Homework_Solver_Llama318B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("justsomerandomdude264/SocialScience_Homework_Solver_Llama318B")
model = AutoModelForCausalLM.from_pretrained("justsomerandomdude264/SocialScience_Homework_Solver_Llama318B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use justsomerandomdude264/SocialScience_Homework_Solver_Llama318B with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "justsomerandomdude264/SocialScience_Homework_Solver_Llama318B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "justsomerandomdude264/SocialScience_Homework_Solver_Llama318B",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
```

Use Docker

```shell
docker model run hf.co/justsomerandomdude264/SocialScience_Homework_Solver_Llama318B
```
- SGLang
How to use justsomerandomdude264/SocialScience_Homework_Solver_Llama318B with SGLang:
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "justsomerandomdude264/SocialScience_Homework_Solver_Llama318B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "justsomerandomdude264/SocialScience_Homework_Solver_Llama318B",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
```

Use Docker images

```shell
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "justsomerandomdude264/SocialScience_Homework_Solver_Llama318B" \
        --host 0.0.0.0 \
        --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "justsomerandomdude264/SocialScience_Homework_Solver_Llama318B",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
```

- Unsloth Studio
How to use justsomerandomdude264/SocialScience_Homework_Solver_Llama318B with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```shell
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for justsomerandomdude264/SocialScience_Homework_Solver_Llama318B to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for justsomerandomdude264/SocialScience_Homework_Solver_Llama318B to start chatting
```
Using HuggingFace Spaces for Unsloth
No setup required: open https://huggingface.co/spaces/unsloth/studio in your browser and search for justsomerandomdude264/SocialScience_Homework_Solver_Llama318B to start chatting.
Load model with FastModel
```shell
pip install unsloth
```

```python
from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="justsomerandomdude264/SocialScience_Homework_Solver_Llama318B",
    max_seq_length=2048,
)
```

- Docker Model Runner
How to use justsomerandomdude264/SocialScience_Homework_Solver_Llama318B with Docker Model Runner:
```shell
docker model run hf.co/justsomerandomdude264/SocialScience_Homework_Solver_Llama318B
```
Model Card: Social Science Homework Solver
This is a Large Language Model (LLM) fine-tuned to solve social science (SST) problems with detailed, step-by-step explanations and accurate answers. The base model is Llama 3.1 with 8 billion parameters, quantized to 4-bit and fine-tuned using QLoRA (Quantized Low-Rank Adaptation) and PEFT (Parameter-Efficient Fine-Tuning) through the Unsloth framework.
Other Homework Solver models include Math_Homework_Solver_Llama318B and Science_Homework_Solver_Llama318B.
Model Details
- Base Model: Llama 3.1 (8 Billion parameters)
- Fine-tuning Method: PEFT (Parameter-Efficient Fine-Tuning) with QLoRA
- Quantization: 4-bit quantization for reduced memory usage
- Training Framework: Unsloth, optimized for efficient fine-tuning of large language models
- Training Environment: Google Colab (free tier), NVIDIA T4 GPU (16GB VRAM), 12GB RAM
- Dataset Used: Combination of ambrosfitz/10k_history_data_v4, adamo1139/basic_economics_questions_ts_test_1, adamo1139/basic_economics_questions_ts_test_2, adamo1139/basic_economics_questions_ts_test_3, adamo1139/basic_economics_questions_ts_test_4
- Git Repo: justsomerandomdude264/Homework_Solver_LLM (on GitHub)
Capabilities
The Social Science Homework Solver model is designed to assist with a broad spectrum of environmental studies (EVS) and social science (SST) problems, from medieval history to advanced economics. It provides clear, detailed explanations, making it an excellent resource for students, educators, and anyone looking to deepen their understanding of social science concepts.
By leveraging the Llama 3.1 base model and fine-tuning it using PEFT and QLoRA, this model achieves high-quality performance while maintaining a relatively small computational footprint, making it accessible even on limited hardware.
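As a rough, illustrative back-of-the-envelope calculation (these are weight-only estimates, not measured figures, and they ignore activations, the KV cache, and quantization overhead), 4-bit quantization cuts the memory needed to store 8 billion parameters to roughly a quarter of the fp16 requirement, which is what makes the model loadable on a 16 GB T4:

```python
# Weight-only memory estimate for an 8B-parameter model.
params = 8_000_000_000

fp16_gb = params * 2 / 1024**3    # 2 bytes per weight in fp16
int4_gb = params * 0.5 / 1024**3  # 0.5 bytes per weight in 4-bit

print(f"fp16: ~{fp16_gb:.1f} GiB, 4-bit: ~{int4_gb:.1f} GiB")
```

In practice the real footprint is somewhat higher than the 4-bit figure because of activations and generation-time caches, but the ratio explains why the free-tier Colab setup above is feasible.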
Getting Started
To start using the Social Science Homework Solver model, follow these steps:
Clone the repo
```shell
git clone https://huggingface.co/justsomerandomdude264/SocialScience_Homework_Solver_Llama318B
```

Run inference
- This method is recommended, as it is reliable and accurate:
```python
from unsloth import FastLanguageModel
import torch

# Define your question
question = "Analyze the socio-political and economic factors that contributed to the rise and fall of the Byzantine Empire from the reign of Justinian I to the fall of Constantinople in 1453. How did internal conflicts, religious controversies, and external pressures from both Islamic caliphates and Western European powers shape the trajectory of the empire over this period?"  # Example question; replace it with one of your own

# Load the model
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "SocialScience_Homework_Solver_Llama318B/model_adapters",  # The dir where the repo is cloned
    max_seq_length = 2048,
    dtype = None,
    load_in_4bit = True,
)

# Put the model in inference mode
FastLanguageModel.for_inference(model)

# QA template
qa_template = """Question: {}

Answer: {}"""

# Tokenize inputs
inputs = tokenizer(
    [
        qa_template.format(
            question,  # Question
            "",        # Answer - left blank for generation
        )
    ],
    return_tensors = "pt",
).to("cuda")

# Stream the answer/output of the model
from transformers import TextStreamer

text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 512)
```

- Another way to run inference is to use the merged adapters (not recommended, as it can give inaccurate/different answers):
```python
from transformers import LlamaForCausalLM, AutoTokenizer

# Load the model
model = LlamaForCausalLM.from_pretrained(
    "justsomerandomdude264/SocialScience_Homework_Solver_Llama318B",
    device_map="auto",
)

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained("justsomerandomdude264/SocialScience_Homework_Solver_Llama318B")

# Set the inputs up
qa_template = """Question: {}

Answer: {}"""

inputs = tokenizer(
    [
        qa_template.format(
            "Who was Akbar?",  # Question
            "",                # Answer - leave this blank for generation!
        )
    ],
    return_tensors = "pt",
).to("cuda")

# Do a forward pass
outputs = model.generate(**inputs, max_new_tokens = 128, use_cache = True)
raw_output = str(tokenizer.batch_decode(outputs))

# Formatting the string:
# remove the list brackets and special tokens, then split on newline characters
formatted_string = (
    raw_output.strip("[]")
    .replace("<|begin_of_text|>", "")
    .replace("<|eot_id|>", "")
    .strip("''")
    .split("\\n")
)

# Print the lines one by one
for line in formatted_string:
    print(line)
```
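The string clean-up at the end of that snippet is plain Python and can be checked without loading the model. The sample decoded output below is made up for illustration; real model output will differ:

```python
# Hypothetical raw output, shaped like str(tokenizer.batch_decode(outputs)):
# a stringified list whose repr escapes newlines as literal backslash-n.
raw_output = str(["<|begin_of_text|>Question: Who was Akbar?\nAnswer: Akbar was a Mughal emperor.<|eot_id|>"])

# Same clean-up as above: strip list brackets and special tokens,
# remove the surrounding quotes, then split on the escaped newlines.
formatted = (
    raw_output.strip("[]")
    .replace("<|begin_of_text|>", "")
    .replace("<|eot_id|>", "")
    .strip("''")
    .split("\\n")
)

for line in formatted:
    print(line)
```

Note that `split("\\n")` splits on the two-character sequence backslash-`n`, which is what `str(...)` produces when it renders the decoded list, rather than on actual newline characters.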
Citation
Please use the following citation if you reference the Social Science Homework Solver model:
BibTeX Citation
```bibtex
@misc{paliwal2024,
  author = {Krishna Paliwal},
  title  = {Contributions to SocialScience_Homework_Solver},
  year   = {2024},
  email  = {krishna.plwl264@gmail.com}
}
```
APA Citation
Paliwal, Krishna (2024). Contributions to SocialScience_Homework_Solver. Email: krishna.plwl264@gmail.com.