algorythmtechnologies committed
Commit 6770a93 · verified · 1 parent: e02c575

Update README.md

Files changed (1): README.md (+13 −44)

README.md CHANGED
@@ -1,63 +1,32 @@
 ---
 library_name: peft
-model_name: zenith-lora
+model_name: Zenith Copilot V1
 tags:
 - base_model:adapter:DeepSeek-Coder-V2-Lite-Instruct
 - lora
 - sft
 - transformers
 - trl
-licence: license
+license: apache-2.0
 base_model: DeepSeek-Coder-V2-Lite-Instruct
 pipeline_tag: text-generation
-license: apache-2.0
 ---
 
-# Model Card for zenith-lora
+# Zenith-LoRA: The Autonomous AI Development Partner
 
-This model is a fine-tuned version of [deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct).
-It has been trained using [TRL](https://github.com/huggingface/trl).
+**Zenith** is the world's first fully autonomous AI development copilot, built on top of the powerful [DeepSeek-Coder-V2-Lite-Instruct](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct).
+This LoRA-adapted model brings **Zenith's project orchestration, optimization, and real-time system mastery** straight to your Python environment.
+
+Whether you are a founder, developer, or CEO, Zenith empowers you to **build, optimize, and deploy complete software solutions** autonomously, with cutting-edge memory and performance profiling.
 
-## Quick start
+## Quick Start
 
 ```python
 from transformers import pipeline
 
-question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
-generator = pipeline("text-generation", model="None", device="cuda")
+# Initialize the text-generation pipeline with Zenith LoRA
+generator = pipeline("text-generation", model="AlgoRythmTechnologies/zenith_coder_v1.1", device="cuda")
+
+question = "Design a secure, scalable microservices architecture for a fintech startup."
 output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
+
 print(output["generated_text"])
 ```
-
-## Training procedure
-
-This model was trained with SFT.
-
-### Framework versions
-
-- PEFT 0.17.1
-- TRL: 0.24.0
-- Transformers: 4.56.1
-- Pytorch: 2.8.0+cu128
-- Datasets: 4.3.0
-- Tokenizers: 0.22.1
-
-## Citations
-
-Cite TRL as:
-
-```bibtex
-@misc{vonwerra2022trl,
-  title = {{TRL: Transformer Reinforcement Learning}},
-  author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
-  year = 2020,
-  journal = {GitHub repository},
-  publisher = {GitHub},
-  howpublished = {\url{https://github.com/huggingface/trl}}
-}
-```
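
Editor's note: the updated card leans on the `lora` tag without saying what a LoRA adapter is. As background, LoRA represents the fine-tuning update to a frozen weight `W` as a low-rank product `ΔW = (alpha / r) · B @ A`, so only `B` and `A` are trained and stored. A toy NumPy sketch of that arithmetic (all dimensions, the rank `r`, and `alpha` here are illustrative, not this adapter's real values):

```python
import numpy as np

# Toy illustration of the low-rank update behind the "lora" tag:
# the adapter stores B (d x r) and A (r x k) with rank r << min(d, k),
# and the effective weight at inference is W + (alpha / r) * (B @ A).
rng = np.random.default_rng(0)
d, k, r = 8, 8, 2        # illustrative dimensions and adapter rank
alpha = 4                # illustrative LoRA scaling hyperparameter

W = rng.standard_normal((d, k))   # frozen base weight
A = rng.standard_normal((r, k))   # trained down-projection
B = np.zeros((d, r))              # trained up-projection (zero-init)

W_eff = W + (alpha / r) * (B @ A)

# With B initialised to zero, the adapter starts as an exact no-op:
assert np.allclose(W_eff, W)

# Storage: the adapter holds r*(d+k) numbers instead of d*k.
print(r * (d + k), "adapter params vs", d * k, "full params")
```

This is why the card's Quick Start stays lightweight: only the small `B @ A` factors ship in the adapter repository, while the full DeepSeek-Coder-V2-Lite-Instruct weights come from the base model.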