---
language:
- en
pipeline_tag: text-generation
tags:
- pytorch
- causal-lm
- custom
- general
---

# Sky610TX

## Model Details
- **Architecture:** GPT-2-style (custom "Ascendant" config)
- **Parameters:** ~389 million
- **Training tokens:** 1.3 billion
- **Context window:** 1,024 tokens
- **Training iterations:** 50k

## The Future

**Work has started on a new 1.2B-parameter model, trained on over 10B tokens. It is expected to be much stronger at coding, reasoning, factual recall, and conversation. It is currently in development, with a release planned soon.**


## How to Use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("8BitStudio/Sky610TX")
tokenizer = AutoTokenizer.from_pretrained("8BitStudio/Sky610TX")

# Prompt using the chat-style "User: ... / Assistant:" format shown above
input_text = "User: Hello\nAssistant:"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```