---
license: mit
datasets:
- HuggingFaceFW/fineweb-edu
- open-web-math/open-web-math
language:
- en
pipeline_tag: text-generation
---
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** xTimeCrystal
- **Model type:** RWKV-7 **(note: the decay is computed with `-F.softplus` instead of `-0.606 * torch.sigmoid`, all LoRAs use `tanh`, and LoRA weights are stored in `nn.Linear` layout; see the sketch after this list)**
- **Language(s) (NLP):** English
- **License:** MIT
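
The note above is terse, so here is a minimal sketch of the two modifications, assuming standard PyTorch; the module name `TanhLoRA` and the exact wiring into the RWKV-7 time-mixing block are illustrative assumptions, not this repo's code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TanhLoRA(nn.Module):
    """Low-rank adapter with a tanh nonlinearity; weights are stored in
    nn.Linear layout, i.e. shape (out_features, in_features)."""
    def __init__(self, dim: int, rank: int):
        super().__init__()
        self.down = nn.Linear(dim, rank, bias=False)  # weight: (rank, dim)
        self.up = nn.Linear(rank, dim, bias=False)    # weight: (dim, rank)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.up(torch.tanh(self.down(x)))

def log_decay(d: torch.Tensor) -> torch.Tensor:
    # This model: softplus maps the log-decay into (-inf, 0),
    # so the per-channel decay exp(log_decay) lies in (0, 1).
    return -F.softplus(d)

def log_decay_reference(d: torch.Tensor) -> torch.Tensor:
    # Common RWKV-7 variant being replaced: log-decay bounded in
    # (-0.606, 0), i.e. a floor on how fast state can decay.
    return -0.606 * torch.sigmoid(d)
```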

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

A fast autocomplete model. 

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

Don't use it for anything serious; it lacks any form of intelligence. 

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

Training was limited to roughly a couple of exaFLOPs of compute, so don't expect coherent output beyond a couple of sentences. 

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]
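
The loading code is still marked as needed above; in the meantime, here is a minimal sketch of RWKV-style recurrent generation. The `model(byte_id, state) -> (logits, new_state)` step interface and the byte-level vocabulary are assumptions, not this repo's documented API.

```python
import torch

@torch.no_grad()
def generate(model, prompt: bytes, n_bytes: int = 256, temperature: float = 1.0) -> bytes:
    # Hypothetical interface: model(byte_id, state) -> (logits, new_state),
    # as is typical for recurrent RWKV inference; logits of shape (vocab,).
    assert prompt, "need a non-empty prompt"
    state, logits = None, None
    for b in prompt:                 # prompt processing: one recurrent step per byte
        logits, state = model(b, state)
    out = bytearray(prompt)
    for _ in range(n_bytes):         # generation: sample a byte, feed it back
        probs = torch.softmax(logits / temperature, dim=-1)
        b = int(torch.multinomial(probs, num_samples=1).item())
        out.append(b)
        logits, state = model(b, state)
    return bytes(out)
```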

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

50 billion bytes (~50 GB) of a custom FineWeb-Edu and OpenWebMath mixture. 
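
A sketch of how the two source datasets could be streamed and mixed with the `datasets` library; the 50/50 interleaving ratio and the byte-level preprocessing are assumptions, as the card does not document the actual recipe.

```python
from datasets import load_dataset, interleave_datasets

# Dataset IDs come from this card's metadata; the mixing ratio is a guess.
fineweb = load_dataset("HuggingFaceFW/fineweb-edu", split="train", streaming=True)
owm = load_dataset("open-web-math/open-web-math", split="train", streaming=True)

mixture = interleave_datasets([fineweb, owm], probabilities=[0.5, 0.5], seed=0)

for example in mixture.take(3):
    data = example["text"].encode("utf-8")  # byte-level modeling assumption
    print(len(data), data[:40])
```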

#### Training Hyperparameters

- **Training regime:** bf16 non-mixed precision; trained with a custom variant of the Muon optimizer, with the learning rate decayed from 5e-3 to 1e-3 (a sketch of the schedule follows). <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
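
The card only gives the endpoints of the learning-rate schedule; a minimal sketch assuming a linear decay (the actual shape, linear vs. cosine, is not documented):

```python
def lr_at(step: int, total_steps: int, lr_max: float = 5e-3, lr_min: float = 1e-3) -> float:
    # Linear decay from lr_max to lr_min over training; the schedule shape is
    # an assumption, only the 5e-3 -> 1e-3 endpoints come from the card.
    t = min(step / total_steps, 1.0)
    return lr_max + (lr_min - lr_max) * t
```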

#### Speeds, Sizes, Times

Throughput: ~350 characters/second with unoptimized inference code. Prompt processing is essentially instantaneous, so generation is likely bottlenecked by memory bandwidth and per-step overhead. 

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Results

- **Bits-per-byte:** ~1 (see the note below)
- **HellaSwag accuracy:** 33.4% (WikiHow entries removed)
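
For context on the first metric: the byte-denominated training size suggests a byte-level vocabulary (an assumption), in which case bits-per-byte is just the mean cross-entropy per byte converted from nats to bits.

```python
import math

def bits_per_byte(mean_ce_nats: float) -> float:
    # Cross-entropy per byte in nats -> bits per byte.
    return mean_ce_nats / math.log(2)

# bpb ~= 1 corresponds to a per-byte cross-entropy of ~0.693 nats.
```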

#### Summary

## Technical Specifications

### Model Architecture and Objective

Modified RWKV-7 (see the note under Model Description above).

### Compute Infrastructure

1× RTX 4080 for 1 week.