Instructions for using Chanjeans/scriptgenerate_musicrecommend with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use Chanjeans/scriptgenerate_musicrecommend with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Chanjeans/scriptgenerate_musicrecommend")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Chanjeans/scriptgenerate_musicrecommend")
model = AutoModelForCausalLM.from_pretrained("Chanjeans/scriptgenerate_musicrecommend")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use Chanjeans/scriptgenerate_musicrecommend with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Chanjeans/scriptgenerate_musicrecommend"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Chanjeans/scriptgenerate_musicrecommend",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
Use Docker
```shell
docker model run hf.co/Chanjeans/scriptgenerate_musicrecommend
```
- SGLang
How to use Chanjeans/scriptgenerate_musicrecommend with SGLang:
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Chanjeans/scriptgenerate_musicrecommend" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Chanjeans/scriptgenerate_musicrecommend",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "Chanjeans/scriptgenerate_musicrecommend" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Chanjeans/scriptgenerate_musicrecommend",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
- Docker Model Runner
How to use Chanjeans/scriptgenerate_musicrecommend with Docker Model Runner:
```shell
docker model run hf.co/Chanjeans/scriptgenerate_musicrecommend
```
Model Card for ScriptWave Gemma-2-2b-it
This model is designed to generate scripts based on user-provided scene descriptions and character names. It not only creates dialogues between characters but also analyzes the emotions within the generated script. After determining the emotional tone, the model recommends music that fits the identified emotions. These music suggestions make the tool useful for creative writing and content production by aligning dialogues with appropriate soundtracks.
Model Details
- Developed by: Chanjeans, mind22
- Model type: Causal Language Model (AutoModelForCausalLM)
- Language(s) (NLP): English
- Finetuned from model: google/gemma-2-2b-it
Model Sources
- Repository: https://github.com/minj22/scriptwave
Uses
Direct Use
Script Generation: Generates dialogue scripts based on user inputs including scene description, character names, and tone or genre.
Music Recommendation: Analyzes generated scripts to recommend music tracks that align with the emotional tone of the dialogue.
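The recommendation step maps a detected emotion to candidate tracks. The sketch below is purely illustrative: the `EMOTION_TO_TRACKS` table and `recommend_music` helper are assumptions for demonstration, not part of the released model, which does not publish its mapping logic.

```python
# Hypothetical emotion-to-music mapping; labels and track styles here are
# illustrative only and do not reflect the model's actual internals.
EMOTION_TO_TRACKS = {
    "joy": ["upbeat pop", "swing jazz"],
    "sadness": ["slow piano", "ambient strings"],
    "anger": ["heavy rock", "industrial"],
    "fear": ["tense orchestral", "dark ambient"],
    "neutral": ["lo-fi instrumental"],
}

def recommend_music(emotion: str) -> list[str]:
    """Return candidate track styles for a detected emotion label,
    falling back to neutral suggestions for unknown labels."""
    return EMOTION_TO_TRACKS.get(emotion.lower(), EMOTION_TO_TRACKS["neutral"])

print(recommend_music("Sadness"))  # ['slow piano', 'ambient strings']
```

In practice the emotion label would come from analyzing the generated script, and the table could be replaced by a lookup against a tagged music catalog.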
Downstream Use
Creative Writing: Can be utilized by writers for brainstorming and drafting scripts.
Content Creation: Useful in video production or gaming for character dialogue and scene settings.
Out-of-Scope Use
- The model should not be used to create harmful or misleading content, including hate speech, disinformation, or any adult content.
Bias, Risks, and Limitations
- Bias in Output: The model may reflect biases present in the training data, leading to stereotypical representations of characters or scenarios.
- Limitations in Context Understanding: The model may struggle with understanding nuanced emotional tones or context, impacting script quality.
- Music Recommendation Accuracy: Recommendations may not always align with user expectations, as they are based solely on emotion analysis.
Recommendations
Users should critically evaluate the generated content and be aware of the potential biases in character representations and emotional analyses. Manual oversight is recommended for sensitive topics.
How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Chanjeans/scriptgenerate_musicrecommend"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Collect scene details from the user
scene_description = input("Describe the scene (e.g., A heated argument at a dinner party): ")
character_1 = input("Enter the name of the first character: ")
character_2 = input("Enter the name of the second character: ")
genre_or_tone = input("Describe the genre or tone (e.g., Romantic, Thriller, Comedy): ")

# Build a screenplay-style prompt from the user's inputs
test_input = f"""
INT. LOCATION - DAY

{scene_description}

{character_1.upper()}
(in a {genre_or_tone.lower()} tone)
I never thought it would come to this...

{character_2.upper()}
(reacting in a {genre_or_tone.lower()} manner)
Well, here we are. What are you going to do about it?

{character_1.upper()}
(pausing, thinking)
I don't know... maybe it's time I finally did something about this.
"""

# Generate a continuation of the script
input_ids = tokenizer.encode(test_input, return_tensors="pt")
output = model.generate(
    input_ids,
    max_length=400,
    num_return_sequences=1,
    pad_token_id=tokenizer.eos_token_id,
)
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
print("Generated script:\n", generated_text)
```
Training Details
Training Data
https://huggingface.co/datasets/li2017dailydialog/daily_dialog
Training Procedure
The model was fine-tuned with a LoRA adapter and the following hyperparameters:

```python
from peft import LoraConfig, TaskType
from transformers import TrainingArguments

# LoRA adapter configuration (applied to the MLP projection layers)
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["gate_proj", "up_proj", "down_proj"],
    lora_dropout=0.2,
    bias="none",
    task_type=TaskType.CAUSAL_LM,
)

training_args = TrainingArguments(
    output_dir="./results",
    per_device_train_batch_size=2,
    num_train_epochs=1,
    gradient_accumulation_steps=16,
    fp16=True,
    logging_steps=100,
    save_steps=500,
    save_total_limit=2,
    learning_rate=5e-5,
    warmup_steps=500,
    lr_scheduler_type="linear",
)
```
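These hyperparameters are typically wired together with PEFT's `get_peft_model` and a standard `Trainer`. The snippet below is a minimal sketch of that wiring, not the authors' training script: the `train_dataset` variable (a tokenized DailyDialog split) and dataset preprocessing are assumed and omitted.

```python
from peft import get_peft_model
from transformers import AutoModelForCausalLM, Trainer

# Load the base model and attach the LoRA adapter so that only the
# adapter weights (a small fraction of parameters) are trained.
base_model = AutoModelForCausalLM.from_pretrained("google/gemma-2-2b-it")
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()

trainer = Trainer(
    model=model,
    args=training_args,           # the TrainingArguments defined above
    train_dataset=train_dataset,  # tokenized DailyDialog split (assumed)
)
trainer.train()
```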
Summary
The model demonstrates capability in generating contextually relevant scripts and making music recommendations based on emotional analysis, making it a valuable tool for creative writers and content creators.