Alexander27 committed 0efdbb5 (verified; parent: d72579f): Update README.md — 1 file changed, README.md +87 −2
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

  THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Contributions are welcome! Please fork the repository, make your changes, and submit a pull request. All contributions must follow the code style of the project.

---
license: mit
---

# Nano-Butterfly Model

Welcome to the `Alexander27/Nano-Butterfly` model card! This is a Causal Language Model trained using Hugging Face AutoTrain.
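Because this is a causal language model, generation is autoregressive: the model repeatedly predicts the next token given everything produced so far. The toy sketch below illustrates that loop only — the hard-coded lookup table stands in for the real model, which scores the entire vocabulary at every step:

```python
# Toy autoregressive generation loop. The "model" here is a hard-coded
# lookup table, purely for illustration; a real causal LM computes a
# probability over the whole vocabulary at each step.
toy_model = {
    ("The",): "future",
    ("The", "future"): "is",
    ("The", "future", "is"): "bright",
}

def generate_toy(prompt_tokens, max_new_tokens=5):
    """Append one predicted token at a time until no continuation is known."""
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        next_token = toy_model.get(tuple(tokens))
        if next_token is None:  # no known continuation: stop early
            break
        tokens.append(next_token)
    return tokens

print(generate_toy(["The"]))  # → ['The', 'future', 'is', 'bright']
```

The `max_new_tokens` cap plays the same role as the length limit passed to `generate` in the examples below: it bounds how many tokens are added after the prompt.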

## 🚀 How to Use

You can easily run this model using the `transformers` library.

### 1. Installation

First, make sure you have the required libraries installed.

```bash
pip install transformers torch
```
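If you want to verify the installation before running anything, here is a small standard-library sketch (the package names simply mirror the `pip install` line above):

```python
import importlib.util

def check_installed(packages):
    """Map each package name to whether it can be imported."""
    return {p: importlib.util.find_spec(p) is not None for p in packages}

# Check the two dependencies installed above
status = check_installed(["transformers", "torch"])
for pkg, ok in status.items():
    print(f"{pkg}: {'installed' if ok else 'MISSING'}")
```

`find_spec` only looks the module up; it does not import it, so this check is fast even for heavy packages like `torch`.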

### 2. Run the Model in Python

Save the following code as a Python file (e.g., `app.py`) and run it.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# The name of your model on the Hugging Face Hub
model_name = "Alexander27/Nano-Butterfly"

# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Define the prompt
prompt = "The future of artificial intelligence is "

# Prepare the input for the model
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Generate text
output_sequences = model.generate(
    input_ids=input_ids,
    max_length=100,
    num_return_sequences=1,
)

# Decode the output and print it
generated_text = tokenizer.decode(output_sequences[0], skip_special_tokens=True)

print(generated_text)
```
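The `generate` call above uses the default decoding settings. If you later experiment with sampling (`do_sample=True` together with `top_p` in `generate`), this toy sketch shows what nucleus (top-p) filtering does to a next-token distribution — the token names and probabilities here are invented for illustration:

```python
# Toy illustration of nucleus (top-p) filtering, the idea behind the
# top_p parameter of transformers' generate(do_sample=True, top_p=...).
# The candidate tokens and their probabilities below are made up.

def top_p_filter(probs, top_p=0.9):
    """Keep the smallest set of highest-probability tokens whose
    cumulative probability reaches top_p; the tail is discarded."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, total = [], 0.0
    for token, p in ranked:
        kept.append(token)
        total += p
        if total >= top_p:
            break
    return kept

probs = {"bright": 0.5, "uncertain": 0.3, "here": 0.15, "banana": 0.05}
print(top_p_filter(probs, top_p=0.9))  # → ['bright', 'uncertain', 'here']
```

Lowering `top_p` shrinks the kept set (at `top_p=0.4` only `"bright"` survives), which makes sampled output more conservative; raising it lets lower-probability tokens through.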

Alternatively, here is a more verbose version of the same script, with progress messages and numbered steps:

```python
# File: app.py

# 1. Install necessary libraries
# In your terminal, run: pip install transformers torch

from transformers import AutoTokenizer, AutoModelForCausalLM

# The name of your model on the Hugging Face Hub
model_name = "Alexander27/Nano-Butterfly"

# 2. Load the tokenizer and model
print(f"Loading model: {model_name}")
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
print("Model loaded successfully!")

# 3. Define the prompt (the input text for the model)
prompt = "The future of artificial intelligence is "

# 4. Prepare the input for the model
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# 5. Generate text
# max_length controls how long the output will be
output_sequences = model.generate(
    input_ids=input_ids,
    max_length=100,
    num_return_sequences=1,
)

# 6. Decode the output and print it
generated_text = tokenizer.decode(output_sequences[0], skip_special_tokens=True)

print("\n--- Model Output ---")
print(generated_text)
```