mrdmnd committed e4b8175 (verified · parent: bd78f29)

Update README.md

Files changed (1): README.md (+5 −11)
README.md CHANGED
@@ -7,31 +7,25 @@ pipeline_tag: text-generation
 
 Baby's first adventure with the diffusion language model. Had to quantize this so it would fit on a 3080TI - all I've got!
 
-Used modal to do so:
-
-```
-"""
-This script uses Modal to quantize the LLaDA family of models.
+Used modal to do so. If you want to replicate what I did, try this:
 
 First, install modal and log into the CLI.
 
-```
 uv add modal
 uv run modal login
-```
 
 Then, add an environment and a volume to the project.
 
-```
+
 uv run modal volume create quantized-model-output
-```
+
 
 Then, run the quantization script:
 
-```
 uv run modal run scripts/quantize_llada.py
+
 ```
-"""
+# scripts/quantize_llada.py
 
 import modal