CycleCore-Technologies committed
Commit 43a35f3 · verified · 1 parent: 6d143d9

Upload Maaza-MLM-135M-JSON-v1 - v1.0.0 production release

Files changed (1): README.md (+17 −2)
README.md CHANGED
@@ -1,3 +1,20 @@
+---
+language:
+- en
+license: apache-2.0
+base_model: HuggingFaceTB/SmolLM2-135M
+tags:
+- json
+- structured-output
+- edge-ai
+- iot
+- micro-language-model
+- peft
+- lora
+library_name: transformers
+pipeline_tag: text-generation
+---
+
 # CycleCore Maaza MLM-135M-JSON v1.0.0
 
 Micro Language Model (135M parameters) specialized for JSON extraction on edge devices.
@@ -227,8 +244,6 @@ print(result)
 
 ## Model Comparison
 
-For guidance on choosing between MLM-135M and SLM-360M, see our [Model Comparison Guide](https://github.com/CycleCore/SLMBench/blob/main/docs/MODEL_COMPARISON.md).
-
 **Quick Decision**:
 - **Use MLM-135M** if: Ultra-low latency required, simple schemas (2-4 fields), <500MB deployment size
 - **Use SLM-360M** if: Higher accuracy needed, medium/complex schemas, willing to use ~1GB deployment size
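The block added at the top of README.md is standard Hugging Face model-card frontmatter: YAML between `---` delimiters, followed by the markdown body. A minimal sketch of how such a card can be split and parsed with PyYAML (the field values are copied from this diff; the `split_frontmatter` helper is illustrative, not part of the repo):

```python
import yaml  # PyYAML

# Model card text as committed in this change (abbreviated body).
CARD = """---
language:
- en
license: apache-2.0
base_model: HuggingFaceTB/SmolLM2-135M
tags:
- json
- structured-output
- edge-ai
- iot
- micro-language-model
- peft
- lora
library_name: transformers
pipeline_tag: text-generation
---

# CycleCore Maaza MLM-135M-JSON v1.0.0
"""

def split_frontmatter(text: str):
    """Split a model card into (metadata dict, markdown body).

    Frontmatter is delimited by '---' lines at the very top of the file,
    so splitting on the first two '---' markers separates it from the body.
    """
    _, raw_meta, body = text.split("---", 2)
    return yaml.safe_load(raw_meta), body

meta, body = split_frontmatter(CARD)
print(meta["license"])         # apache-2.0
print(meta["base_model"])      # HuggingFaceTB/SmolLM2-135M
print("lora" in meta["tags"])  # True
```

The Hub reads exactly this metadata to populate the model page: `pipeline_tag` selects the inference widget, `base_model` links the card to `HuggingFaceTB/SmolLM2-135M`, and the `tags` list drives search filters.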