Commit 633e5a8 (verified) by shafire · Parent: 84b0060

Update README.md

Files changed (1): README.md (+41 −19)
---
tags:
- autotrain
- text-generation
- text-generation-inference
- peft
- llama-3
- finance
- crypto
- agents
- workflow-automation
- soul-ai
library_name: transformers
base_model: meta-llama/Llama-3.1-8B
license: other
widget:
- text: "Ask me something about AI agents or crypto."
- text: "What kind of automation can LLMs perform?"
---

# 🧠 $SOUL AI — Llama 3.1 Fine-Tuned for Finance & Autonomous Agents

**$SOUL AI** is a purpose-tuned LLM based on Meta's Llama 3.1 8B, trained on domain-specific data focused on **financial logic**, **LLM agent workflows**, and **automated task generation**. Designed to power on-chain AI agents, it's part of the broader $SOUL ecosystem for monetized intelligence.

---

## 📂 Dataset Summary

This model was fine-tuned on more than 10,000 instruction-style samples simulating:
 
- Financial queries and tokenomics reasoning
- LLM-agent interaction patterns
- Crypto automation logic
- DeFi, trading signals, and news interpretation
- Smart contract and API-triggered tasks
- Natural language prompts for dynamic workflow creation

The format follows a custom instruction-based structure optimized for reasoning tasks and agentic workflows, not just casual conversation.

---

## 💻 Usage (via Transformers)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "YOUR_HF_USERNAME/YOUR_MODEL_NAME"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    torch_dtype="auto"
).eval()

messages = [{"role": "user", "content": "How do autonomous LLM agents work?"}]

input_ids = tokenizer.apply_chat_template(
    conversation=messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt"
)

# model.device works whether device_map placed the model on GPU or CPU.
output_ids = model.generate(input_ids.to(model.device), max_new_tokens=256)

# Decode only the newly generated tokens, skipping the echoed prompt.
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)

print(response)
```
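
One detail worth noting in the snippet above: `generate` returns the prompt tokens followed by the completion, which is why the decode call slices from `input_ids.shape[1]` onward. The same indexing, shown with plain Python lists and toy token ids:

```python
# generate() output = prompt tokens + newly generated tokens, so decoding
# the whole sequence would repeat the prompt. Slicing from the prompt
# length onward keeps only the model's answer.
prompt_ids = [101, 2023, 2003]            # toy prompt token ids
full_output = prompt_ids + [4067, 999]    # toy generate() result
completion = full_output[len(prompt_ids):]
print(completion)  # [4067, 999]
```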