marcoonorato91 committed on
Commit 81902ca · verified · 1 Parent(s): d68bb20

Update README.md

Files changed (1):
  1. README.md +86 -3
---
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
library_name: transformers
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- llamusic
- marcoonorato91
- smog98
license: mit
---

## Model Information

LLAMUsic is a fine-tuned version of the Llama 3.2 instruction-tuned generative model in the 3B size (text in/text out).

**Model Developers:** Marco Onorato, Riccardo Preite, Niccolò Monaco

**Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.

**Supported Languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported.

**Llama 3.2 Model Family:** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.

**Model Release Date:** Dec 20, 2024

**Status:** This is a static model trained on an offline dataset. Future versions may be released to improve model capabilities and safety.

**License:** MIT License; please use this model responsibly.

**Feedback:** You can contact us at info.llamusic@gmail.com.

## Intended Use

**Intended Use Cases:** Llama 3.2 is intended for personal and research use in multiple languages. Instruction-tuned, text-only models are intended for assistant-like chat and agentic applications such as knowledge retrieval and summarization, mobile AI-powered writing assistants, and query and prompt rewriting. Pretrained models can be adapted for a variety of additional natural language generation tasks. Similarly, quantized models can be adapted for a variety of on-device use cases with limited compute resources.

**Out of Scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.2 Community License. Use in languages beyond those explicitly referenced as supported in this model card.

## How to use

### Use with transformers

Starting with `transformers >= 4.43.0`, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function.

Make sure to update your transformers installation via `pip install --upgrade transformers`.

```python
import torch
from transformers import pipeline

model_id = "marcoonorato91/LLAMUsic"
pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
messages = [
    {"role": "system", "content": "You are LLAMUsic, an artificial intelligence expert of music."},
    {"role": "user", "content": "Who are you?"},
]
outputs = pipe(
    messages,
    max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```
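
As mentioned above, the same conversation can also be run without `pipeline`, using the Auto classes with `generate()` directly. The following is a minimal sketch; the generation parameters mirror the pipeline example and are illustrative, not prescriptive:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "marcoonorato91/LLAMUsic"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
messages = [
    {"role": "system", "content": "You are LLAMUsic, an artificial intelligence expert of music."},
    {"role": "user", "content": "Who are you?"},
]
# apply_chat_template renders the messages with the model's chat template
# and appends the generation prompt for the assistant turn.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```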

### Use with `ollama`

Please follow the instructions on the [ollama website](https://ollama.com/) to install ollama.

Then you can pull the model from the public [ollama hub repository](https://ollama.com/llamusic).

Two models are available: the standard version and the Q4_K_M quantized version.
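
Once ollama is installed, pulling and chatting with the model might look like the following. The exact model tag is an assumption based on the hub namespace above; check the [hub repository](https://ollama.com/llamusic) for the published names of the standard and Q4_K_M variants:

```shell
# Pull the model from the public hub (tag assumed; see https://ollama.com/llamusic)
ollama pull llamusic/llamusic

# Start a one-shot chat with the pulled model
ollama run llamusic/llamusic "Who are you?"
```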