---
library_name: transformers
tags:
- SkillTree
- mistral
license: apache-2.0
---
# Model Card for SkillTree Enhanced Model
## Model Details
This model has been enhanced with the SkillTree approach: a specific skill, extracted from an advanced training or fine-tuning run, is applied to a base model to improve its capabilities in the targeted area (see the sketch after the list below).
- Base Model: [tokyotech-llm/Swallow-MS-7b-v0.1](https://huggingface.co/tokyotech-llm/Swallow-MS-7b-v0.1)
- Skill Tree Applied: MathSkill-OpenMath
- Language(s) (NLP): Japanese
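
The merging code itself is not part of this card, but the general pattern behind this kind of skill transfer can be sketched as task-vector arithmetic: take the weight delta between a fine-tuned model and its base, and add it to a compatible target model. The sketch below is an illustrative assumption, not the official SkillTree implementation; the `apply_skill` helper and the commented model IDs are hypothetical.

```python
import torch
from transformers import AutoModelForCausalLM

def apply_skill(target_id: str, skill_base_id: str, skill_tuned_id: str):
    """Add the weight delta (the "skill") between a tuned model and its base
    onto a target model. All three models must share the same architecture
    and tensor shapes. Illustrative sketch only."""
    target = AutoModelForCausalLM.from_pretrained(target_id, torch_dtype=torch.bfloat16)
    base = AutoModelForCausalLM.from_pretrained(skill_base_id, torch_dtype=torch.bfloat16)
    tuned = AutoModelForCausalLM.from_pretrained(skill_tuned_id, torch_dtype=torch.bfloat16)

    base_sd = base.state_dict()
    tuned_sd = tuned.state_dict()
    with torch.no_grad():
        for name, param in target.state_dict().items():
            if name in base_sd and base_sd[name].shape == param.shape:
                # skill vector = tuned - base; add it onto the target weights
                param.add_(tuned_sd[name] - base_sd[name])
    return target

# Hypothetical usage: graft a math skill onto a Japanese base model
# enhanced = apply_skill(
#     target_id="tokyotech-llm/Swallow-MS-7b-v0.1",   # Japanese base (assumed)
#     skill_base_id="mistralai/Mistral-7B-v0.1",      # base of the skill donor (assumed)
#     skill_tuned_id="<math-tuned Mistral model>",    # skill donor (placeholder)
# )
```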
### Model Description
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- Developed by: [More Information Needed]
- Funded by [optional]: [More Information Needed]
- Shared by [optional]: [More Information Needed]
- Model type: [More Information Needed]
- Language(s) (NLP): Japanese
- License: apache-2.0
- Finetuned from model [optional]: [More Information Needed]
### Model Sources [optional]
- Repository: [More Information Needed]
- Paper [optional]: [More Information Needed]
- Demo [optional]: [More Information Needed]
## Uses
This model is enhanced for solving math word problems: given a problem statement, it generates short Python code that computes the answer (see the response examples below). The enhancement targets mathematical reasoning specifically, so improvements may not carry over to unrelated tasks, and generated answers should be verified before use. The following snippet loads the model and runs inference on the same problem in English and in Japanese:
```python
# Import libraries
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load model
model_name = "HachiML/Swallow-MS-7b-v0.1-MathSkill-OpenMath"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")

# Inference 1: English prompt
prompt = "Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May?\n"
input_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
tokens = model.generate(input_ids.to(device=model.device), max_new_tokens=256, temperature=0.99, top_p=0.95, do_sample=True)
out = tokenizer.decode(tokens[0], skip_special_tokens=True)
print(out)

# Inference 2: the same problem in Japanese
prompt = "ナタリアは4月に48人の友人にクリップを売り、5月にはその半分の数のクリップを売った。ナタリアが4月と5月に売ったクリップの数は?:\n"
input_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
tokens = model.generate(input_ids.to(device=model.device), max_new_tokens=256, temperature=0.99, top_p=0.95, do_sample=True)
out = tokenizer.decode(tokens[0], skip_special_tokens=True)
print(out)
```
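
Note that `do_sample=True` with `temperature=0.99` produces varied generations from run to run; for more reproducible answers on math problems, greedy decoding (`do_sample=False`) is a common alternative.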
## Response Examples
Below are examples of responses from the model, demonstrating how the SkillTree enhancement has improved its performance.
**Example 1:**
- Input: `Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May?\n`
- Output:

```
Let's solve this problem using Python code.
friends_sold_in_April = 48
friends_sold_in_May = friends_sold_in_April / 2
clips_sold_in_April_and_May = friends_sold_in_April + friends_sold_in_May
clips_sold_in_April_and_May
72.0
Thus, in April and May, Natalia sold \boxed{72} clips in total.
```
**Example 2:**
- Input (Example 1's problem in Japanese): `ナタリアは4月に48人の友人にクリップを売り、5月にはその半分の数のクリップを売った。ナタリアが4月と5月に売ったクリップの数は?:\n`
- Output:

```
Let's solve this problem using Python code.
clip_count = 48
clip_count_sold_4th_month = clip_count
clip_count_sold_5th_month = clip_count_sold_4th_month / 2
clip_count_sold_in_both_months = clip_count_sold_4th_month + clip_count_sold_5th_month
clip_count_sold_in_both_months
72.0
So, the total number of clip sales is \textbf{72}.
```
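
As a quick sanity check, the arithmetic in both generated solutions (48 clips sold in April plus half as many in May) can be verified directly:

```python
# Verify the arithmetic from both example outputs
april = 48
may = april / 2           # half as many clips in May
total = april + may
print(total)              # 72.0, matching the model's boxed answer of 72
```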