Update README.md

README.md CHANGED

@@ -23,6 +23,12 @@ Check official [hqq](https://github.com/mobiusml/hqq)
 私はソースからTransformersとhqqをインストールしないと動かす事ができませんでした。
 I couldn't get it to work without installing Transformers and hqq from source.
 
+A100や30x0シリーズのような新しいGPUが必要です。
+残念ながらColabの無料版(T4)では動きません
+
+Newer GPUs such as the A100 and 30x0 series are required.
+Unfortunately, this does not work with the free version of Colab (T4).
+
 ```
 # transformers 4.41.1
 pip install transformers==4.41.1
@@ -103,11 +109,6 @@ Sleeping mats: 8,500 pieces<eos>
 
 ### Sample code for High-speed inference (For NVIDIA Ampere or later, A100 or RTX 3090, etc.)
 
-A100や30x0シリーズのような新しいGPUではバックエンドを指定する事で高速に実行できます。
-残念ながらColabの無料版では動きません
-
-Newer GPUs such as the A100 and 30x0 series can run faster by specifying a backend.
-Unfortunately, this does not work with the free version of Colab.
 
 ```
 import torch, os
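The note above says the sample only worked after installing Transformers and hqq from source. The README itself does not give the commands; a sketch using pip's standard `git+` install syntax against the two projects' GitHub repositories would look like:

```shell
# Hedged sketch (not commands from the README): install the development
# versions of Transformers and hqq straight from their GitHub repositories.
pip install git+https://github.com/huggingface/transformers
pip install git+https://github.com/mobiusml/hqq
```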
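The High-speed inference section requires an Ampere-or-newer GPU (A100, RTX 3090, etc.) and is said not to run on free-Colab's T4. A minimal sketch of that requirement as a check, assuming the usual CUDA compute-capability numbers (Ampere is 8.x, the T4 is 7.5) rather than anything stated in the README:

```python
# Hedged sketch, not from the README: Ampere-class GPUs (A100, RTX 3090)
# report CUDA compute capability (8, 0) or higher, while the T4 used by
# free Colab reports (7, 5), which is why the sample is said not to run there.

def is_ampere_or_newer(capability):
    """True if a (major, minor) CUDA compute capability is Ampere or newer."""
    return tuple(capability) >= (8, 0)

# With PyTorch and a CUDA GPU available, the running device can be checked via:
#   import torch
#   torch.cuda.is_available() and is_ampere_or_newer(torch.cuda.get_device_capability())

print(is_ampere_or_newer((8, 0)))  # A100 -> True
print(is_ampere_or_newer((8, 6)))  # RTX 3090 -> True
print(is_ampere_or_newer((7, 5)))  # T4 (free Colab) -> False
```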