kawasumi committed (verified)
Commit f580e09 · Parent: acf1ba9

Update README.md

Files changed (1):
  1. README.md (+4 −1)
README.md CHANGED
@@ -16,6 +16,7 @@ tags:
 - uncensored
 - non-censored
 - unfiltered
+- GGUF
 ---
 
 
@@ -32,7 +33,7 @@ Tema_Q-R3.1 is an improved Large Language Model (LLM) tailored for Japanese, Eng
 It is designed to generate more flexible and useful responses, even for prompts that the standard Gemma 2 might find challenging to answer. It is ideal for users who wish to maximize the potential of AI in all fields, including creative writing, complex programming tasks, and deep knowledge exploration.
 
 The GGUF file can be obtained from the following:
-https://huggingface.co/kawasumi/Tema_Q-R3.1/blob/main/Tema_Q-R3.1-Q4_K_M.gguf
+https://huggingface.co/kawasumi/Tema_Q-R3.1-GGUF
 
 | Item | Details |
 | :--- | :--- |
@@ -60,6 +61,8 @@ https://huggingface.co/kawasumi/Tema_Q-R3.1/blob/main/Tema_Q-R3.1-Q4_K_M.gguf
 
 ※ **Recommended environment:** Google Colab's **T4 GPU**, or an environment with more VRAM
 
+※ Inference via the GGUF build is faster and more stable; please use the GGUF model distribution page instead.
+
 ```python
 # Install the required libraries
 !pip install -qU transformers accelerate bitsandbytes
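
The diff above installs `transformers`, `accelerate`, and `bitsandbytes` but the loading code itself is cut off. A minimal sketch of what typically follows that install line, assuming the repo id `kawasumi/Tema_Q-R3.1` and standard 4-bit NF4 quantization settings — the helper name and every parameter here are illustrative assumptions, not the model card's actual code:

```python
def load_tema_4bit(model_id: str = "kawasumi/Tema_Q-R3.1"):
    """Hypothetical sketch: load the model 4-bit quantized so it fits a T4's 16 GB VRAM.

    The repo id is assumed from the README's URLs; for the faster path the
    README recommends, use the GGUF build with llama.cpp instead.
    """
    # Imports are local so the sketch can be defined without a GPU environment.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

    # 4-bit NF4 quantization via bitsandbytes (requires a CUDA GPU, e.g. Colab T4)
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.float16,
    )

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        quantization_config=bnb_config,
        device_map="auto",  # placement across GPU/CPU handled by accelerate
    )
    return tokenizer, model
```

Calling `load_tema_4bit()` in Colab would download the weights and return a `(tokenizer, model)` pair ready for `model.generate`; the 4-bit path trades a little quality for roughly a 4× memory reduction versus fp16, which is why the README points GGUF users elsewhere for speed.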