Add model 1.11.0

Files changed:
- README.md +1 -1
- models/tokenize/combined_charlm.pt +2 -2
- models/tokenize/combined_nocharlm.pt +1 -1
README.md CHANGED

@@ -12,4 +12,4 @@ Find more about it in [our website](https://stanfordnlp.github.io/stanza) and ou
 
 This card and repo were automatically prepared with `hugging_stanza.py` in the `stanfordnlp/huggingface-models` repo
 
-Last updated 2026-01-
+Last updated 2026-01-11 08:37:02.745
models/tokenize/combined_charlm.pt CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:59df74adac69cfbec0802ae0adaaa9a2d4d80322c212927814ff4b4035c3c6fd
+size 3597176
models/tokenize/combined_nocharlm.pt CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:eb7dd9bcfa821279c5ff12dca750f32781cc3b7a5ea24671b5d271f2450d0105
 size 1499466
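The `.pt` model files above are stored through Git LFS, so what the repository actually versions is a small pointer file recording the object's `oid sha256:` and `size` fields, as shown in the diffs. A minimal sketch of checking a downloaded object against its pointer; the helper names (`parse_lfs_pointer`, `verify_lfs_object`) are illustrative and not part of `hugging_stanza.py` or the Git LFS tooling:

```python
import hashlib
import os

def parse_lfs_pointer(text):
    """Split a Git LFS pointer file into a dict of its 'key value' lines."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

def verify_lfs_object(pointer_text, object_path):
    """Return True if the file at object_path matches the pointer's oid and size."""
    fields = parse_lfs_pointer(pointer_text)
    algo, _, expected_oid = fields["oid"].partition(":")
    if algo != "sha256":
        raise ValueError("unexpected hash algorithm: %s" % algo)
    digest = hashlib.sha256()
    with open(object_path, "rb") as f:
        # Hash in 1 MiB chunks so large model files are not read into memory at once.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return (digest.hexdigest() == expected_oid
            and os.path.getsize(object_path) == int(fields["size"]))
```

For example, after fetching `models/tokenize/combined_nocharlm.pt`, a pointer built from the new oid above and `size 1499466` should verify against the downloaded file.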