Update README.md
base_model:
- lianghsun/gemma-3-tw-270m-it
- lianghsun/gemma-3-tw-270m
library_name: transformers
tags:
- Taiwan
- PTT
- keyboard-warrior
- DevFest
---

# Model Card for keyboard-warrior

<!-- Provide a quick summary of what the model is/does. -->



This model is the hands-on outcome of [Google DevFest Taipei 2025](https://devfest-taipei.gdg.tw/2025/sessions/?id=1047544). We built it from scratch around the language culture of local Chinese-speaking communities (for example PTT) and the "keyboard-warrior" register, training a lightweight on-premises model that can imitate that snarky tone. The goal is to give researchers and developers a tool for exploring language styles, diversifying dialogue behavior, and testing how models respond, and how stable they remain, when faced with non-constructive, emotionally charged language.
## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

Although powerful large models (such as "Gemini 3") are already available, their design and language style tend to be neutral, polite, or filtered, which does not necessarily match the "keyboard-warrior" speech habits common in some local Chinese-speaking communities (such as Taiwan's BBS boards and anonymous forums). To close that gap, we built our own on-premises model from scratch: by collecting conversation data from local Taiwanese communities (such as PTT), the model learns to mimic that sarcastic, heated, provocative, emotionally charged reply style. We also kept the model lightweight (270M parameters) so it can run locally in private environments instead of depending on the cloud or external APIs. Just for fun! 🍻

- **Developed by:** [Liang Hsun Huang](https://www.linkedin.com/in/lianghsunhuang/?locale=en_US)
- **Funded by:** [APMIC](http://apmic.ai/)
## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

This model is intended to generate replies that imitate the "keyboard-warrior" style of Taiwanese online communities, including sarcasm, provocation, dismissiveness, and emotionally charged criticism. It is not designed to encourage abusive speech; rather, it provides a corpus source for research and experimental settings such as toxic-language detection, adversarial red-teaming, and safety control experiments.

The model's output style reflects the linguistic traits of "keyboard-warrior" comments in common anonymous Taiwanese communities (such as PTT) and may contain negative, emotional, or provocative content.

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

The model is suited to research scenarios in controlled environments, including:

* **Toxic-language detection benchmarking**: as a synthetic-corpus generator for building hard examples to train or evaluate toxicity detectors, content-moderation systems, and safety filters.
* **Alignment and red-teaming**: as an adversarial persona for stress-testing conversational AI, evaluating a model's response strategies and stability when facing provocative or uncooperative interlocutors.
* **Sociolinguistic research**: supporting analysis of the language styles, rhetorical strategies, and cultural traits of Taiwanese online communities, including sarcastic language and confrontational interaction patterns.
* **Style-contrast and style-transfer research**: comparing a "neutral" register against the "keyboard-warrior" register to develop counter-speech strategies or toxicity-reduction algorithms.

The model is not recommended for direct use in public services or end-user products without safety mechanisms in place. Any use beyond research should be paired with content filtering, moderation tooling, or human review.

### Downstream Use

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

This model can serve as a base model which, after fine-tuning or integration into a larger system, supports research, testing, and system development related to the "keyboard-warrior" register. Downstream applications should remain centered on safety research and adversarial testing rather than serving end users directly.
### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

This model is not designed as a general-purpose conversational assistant, nor is it suitable for simulating or taking part in conversations in real communities. Its purpose is to support research, testing, and safety work; uses beyond that scope may cause unintended risk or harm.

The following scenarios fall outside the model's intended scope and are discouraged or explicitly prohibited:

1. Misuse / malicious use
   * **Attacking or demeaning specific individuals, groups, or communities**
   * **Serving as a tool for cyberbullying or spreading hate speech**
   * **Manipulating emotions, provoking conflict, or spreading misinformation**
   * **Amplifying toxic language in any way that affects real communities**

2. Uncontrolled deployment
   * **Acting as an automated responder in public chat rooms, forums, or social platforms**
   * **Acting as an AI assistant or chatbot without a review mechanism**
   * **Integration into larger systems that lack content filtering and safety safeguards**

3. High-risk contexts
   * **Settings that require responsible replies, such as mental-health support, education, customer service, or counseling**
   * **Political or social-issue messaging, or activities tied to social movements**
   * **Influencing public opinion, group decision-making, or discussion of sensitive topics**

4. Uses beyond the model's capabilities
   The model imitates a "keyboard-warrior" register from a specific cultural context and is not suitable for:
   * **General natural-language understanding, information retrieval, summarization, and similar tasks**
   * **Scenarios that require accurate, reliable factual information**
   * **Multilingual or cross-cultural generation tests**
   * **Advice in professional domains such as medicine or law**
## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

Because this model imitates the "keyboard-warrior" register common in anonymous Taiwanese online communities, its training data and generated content may carry the linguistic biases and negative expressions of that cultural context. When evaluating the model, users should understand that its language patterns represent neither objective fact nor the behavior of all community members.

### Bias

* **Cultural and contextual bias:** the model reflects the language style of specific communities (such as PTT), including idiom, emotional expression, and implicit social norms, and does not transfer to other cultural contexts.
* **Linguistic bias and stereotypes:** parts of the corpus may contain biased statements about groups, events, or social issues, and generated content may perpetuate or amplify such bias.
* **Anonymity effects:** reflecting the language of anonymous communities, the model's output may lack constructiveness and lean toward emotional venting and confrontational interaction.

### Technical Limitations

* **Unreliable information:** the model has no fact-checking ability; its output should never be treated as factual information, advice, or judgment.
* **Context instability:** "keyboard-warrior" language depends heavily on context, mood, and tone cues; without full context the model may produce irrelevant or overly aggressive utterances.
* **Noise and ambiguity:** because training optimizes for style imitation, the model may generate content that is vague, irrational, or logically incoherent.
* **No multilingual capability:** the model focuses on Traditional Chinese and a specific community context and does not transfer naturally to other languages or cultures.

### Sociotechnical Risks

* **Emotional harm and harassment:** if the output is misused in real interactions, it may cause psychological harm or distress to individuals or groups.
* **Misinformation spread:** the model may produce confident-sounding but incorrect statements that can mislead if left unreviewed.
* **Bias reproduction and amplification:** the model may amplify biases present in the training corpus, keeping social bias alive, or making it more pronounced, in synthetic data.
* **Out-of-context misuse:** "keyboard-warrior" language is highly context-dependent; bringing it into ordinary conversation or public settings can easily cause misunderstanding.

### Usage Limitations

* **The model is only suitable for research and controlled-environment testing; it is not recommended for public platforms, products, or real communities.**
* **Model output is not suitable for advisory, judgment, or content-facing roles.**
* **Without layered safety mechanisms, the model should not be deployed in high-risk settings (such as education, customer service, or counseling).**
* **Users should not treat its output as a neutral viewpoint or community consensus.**

### Mitigation Suggestions

To reduce these risks, developers and researchers should:

* **Add content review or toxicity-filtering mechanisms during use**
* **Define clear usage-scope restrictions and warnings**
* **Manually review and annotate generated corpora**
* **Clearly document bias sources and the linguistic-cultural background in research reports**
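As a minimal illustration of the first suggestion, generations can be gated through a filter before leaving the research environment. The sketch below is a hypothetical placeholder: `BLOCKLIST` and `moderate` are illustrative names not provided by this repository, and a real deployment would use a trained toxicity classifier rather than keyword matching.

```python
# Minimal sketch of a moderation gate for model outputs.
# BLOCKLIST is a hypothetical set of flagged terms; a production
# system would replace this keyword screen with a toxicity classifier.
BLOCKLIST = {"白痴", "廢文", "87"}

def moderate(text: str) -> str:
    """Return the text unchanged only if it passes the keyword screen."""
    if any(term in text for term in BLOCKLIST):
        return "[filtered: potentially toxic output]"
    return text

print(moderate("這篇真的是廢文"))  # caught by the screen
print(moderate("今天天氣不錯"))    # passes through unchanged
```

In practice the same gate would wrap every call to `model.generate`, so raw outputs never reach downstream consumers unreviewed.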
## How to Get Started with the Model

Use the code below to get started with the model.

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("lianghsun/keyboard-warrior")
model = AutoModelForCausalLM.from_pretrained("lianghsun/keyboard-warrior")

messages = [
    {"role": "user", "content": "這次威力彩頭獎上看10億耶,好想中喔!"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[lianghsun/tw-ptt-keyboard-warrior-chat](https://huggingface.co/datasets/lianghsun/tw-ptt-keyboard-warrior-chat)

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Training Hyperparameters

- **Training details (summary):**
  - Fine-tuning type: full
  - Precision: `bf16` (bf16 mixed precision)
  - Max sequence length (`cutoff_len`): 384, non-packing
  - Batch: per-device batch size = 32, gradient_accumulation_steps = 8
  - Epochs: 50
  - Optimizer: `adamw_torch_fused`, weight decay = 0.01, max_grad_norm = 0.5
  - Learning rate: 1e-5, cosine-with-min-lr scheduler (min_lr = 3e-7, warmup_ratio = 0.02)
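For orientation, the batch settings above imply the following effective batch per optimizer step. This is plain arithmetic on the reported values, assuming a single training device, not a logged figure from the run:

```python
# Effective batch implied by the settings above (single-device assumption;
# multiply by the number of devices for multi-GPU runs).
per_device_batch_size = 32
gradient_accumulation_steps = 8
cutoff_len = 384

effective_batch = per_device_batch_size * gradient_accumulation_steps
print(effective_batch)               # sequences per optimizer step
print(effective_batch * cutoff_len)  # upper bound on tokens per step
```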
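The field names in the summary (`cutoff_len`, `adamw_torch_fused`, `warmup_ratio`, cosine-with-min-lr) match common LLaMA-Factory options, so the run could be sketched as a config in that style. This is a reconstruction under that assumption, not a file shipped with this repository:

```yaml
# Hypothetical LLaMA-Factory-style config mirroring the summary above.
model_name_or_path: lianghsun/gemma-3-tw-270m-it
finetuning_type: full
bf16: true
cutoff_len: 384
packing: false
per_device_train_batch_size: 32
gradient_accumulation_steps: 8
num_train_epochs: 50
optim: adamw_torch_fused
weight_decay: 0.01
max_grad_norm: 0.5
learning_rate: 1.0e-5
lr_scheduler_type: cosine_with_min_lr
lr_scheduler_kwargs:
  min_lr: 3.0e-7
warmup_ratio: 0.02
```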
## Citation

[More Information Needed]

## Glossary

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

* **PTT**: PTT (批踢踢實業坊) is one of Taiwan's oldest and largest online discussion platforms, rooted in BBS (bulletin board system) culture. Users post and reply through a text interface, with discussion organized into topic "boards" covering technology, current affairs, lifestyle, entertainment, and more. PTT is highly influential in Taiwan's online communities: many public issues, pop-culture trends, and social debates first ferment and spread there.

* **Keyboard warrior (鍵盤酸民)**: an internet-community term for users who, anonymously or semi-anonymously, post demeaning, sarcastic, aggressive, or otherwise non-constructive comments online. Typical behavior includes emotionally charged criticism, spreading negativity, and personal attacks. The term reflects how online anonymity can breed extreme speech, though not every negative opinion counts as trolling: rational criticism remains an important part of community discussion.

## Model Card Authors