Hugging Face
Chen42 / test_upload
License: gpl-3.0
Model card · Files and versions · Community (1)
test_upload · 19.5 GB · 1 contributor · History: 8 commits

Latest commit: 3f179fd (verified) by Chen42, "Add files using upload-large-folder tool", over 1 year ago
Directories:
  __pycache__/                   Upload folder using huggingface_hub                       over 1 year ago
  bert-base-chinese-tokenizer/   Add files using upload-large-folder tool                  over 1 year ago
  checkpoint-32000/              Add files using upload-large-folder tool                  over 1 year ago
  checkpoint-48000/              Add files using upload-large-folder tool                  over 1 year ago
  checkpoint-64000/              Add files using upload-large-folder tool                  over 1 year ago
  checkpoint-72000/              Add files using upload-large-folder tool                  over 1 year ago
  checkpoint-76000/              Add files using upload-large-folder tool                  over 1 year ago
  checkpoint-80000/              Add files using upload-large-folder tool                  over 1 year ago
  lilt-roberta-en-base/          Add files using upload-large-folder tool                  over 1 year ago
  model.v1/                      Upload model.v1/model_and_train.py with huggingface_hub   over 1 year ago
Files:
  .gitattributes              Safe     1.81 kB     Add files using upload-large-folder tool   over 1 year ago
  README.md                   Safe     201 Bytes   Update README.md                           over 1 year ago
  filt_result_by_bleu.py      Safe     311 Bytes   Upload folder using huggingface_hub        over 1 year ago
  make_comet_hyp_and_ref.py   Safe     1.4 kB      Upload folder using huggingface_hub        over 1 year ago
  make_jsonl.py               Safe     4.38 kB     Upload folder using huggingface_hub        over 1 year ago
  make_text_src_list.py       Safe     1.76 kB     Upload folder using huggingface_hub        over 1 year ago
  model_and_train.py          Safe     10.6 kB     Upload folder using huggingface_hub        over 1 year ago
  old_model_and_train.py      Safe     8.87 kB     Upload folder using huggingface_hub        over 1 year ago
  sample_generate.py          Safe     2.97 kB     Upload folder using huggingface_hub        over 1 year ago
  test.pkl                    pickle   109 MB      Upload folder using huggingface_hub        over 1 year ago
    Detected Pickle imports (21):
      transformers.models.bert.tokenization_bert.BasicTokenizer
      regex._regex.compile
      transformers.models.layoutlmv3.tokenization_layoutlmv3.LayoutLMv3Tokenizer
      transformers.models.bert.tokenization_bert.WordpieceTokenizer
      pandas.core.indexes.base._new_Index
      transformers.models.bert.tokenization_bert.BertTokenizer
      tokenizers.AddedToken
      numpy.dtype
      pandas.core.internals.managers.BlockManager
      torch.utils.data.dataset.Subset
      numpy.ndarray
      pandas.core.indexes.base.Index
      pandas.core.indexes.range.RangeIndex
      transformers.tokenization_utils.Trie
      numpy.core.multiarray._reconstruct
      builtins.slice
      builtins.range
      pandas.core.frame.DataFrame
      collections.OrderedDict
      model_and_train.MyDataset
      pandas._libs.internals._unpickle_block
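The "Detected Pickle imports" warning above comes from the Hub's static pickle scanner: a pickle's opcode stream records every global `module.name` reference it would import at load time, and those references can be listed without ever executing the file. A minimal sketch of such a scan using only the standard-library `pickletools` module is below; it is demonstrated on an in-memory pickle rather than on `test.pkl` itself, and the string-tracking shortcut for `STACK_GLOBAL` is a simplification, not the Hub's actual implementation.

```python
import collections
import pickle
import pickletools


def pickle_imports(data: bytes) -> set:
    """Statically list the module.name globals a pickle references,
    without executing it (similar in spirit to the Hub's scanner)."""
    found = set()
    strings = []  # recent string arguments; STACK_GLOBAL consumes the last two
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name == "GLOBAL":
            # protocols <= 3 encode "module name" in a single argument
            module, name = arg.split(" ", 1)
            found.add(f"{module}.{name}")
        elif opcode.name == "STACK_GLOBAL" and len(strings) >= 2:
            # protocol 4+ pushes module and name as two preceding strings
            # (simplification: memoized strings fetched via GET are not tracked)
            found.add(f"{strings[-2]}.{strings[-1]}")
        if isinstance(arg, str):
            strings.append(arg)
    return found


# Demo on an in-memory pickle, not on test.pkl:
data = pickle.dumps(collections.OrderedDict(a=1))
print(sorted(pickle_imports(data)))  # ['collections.OrderedDict']
```

Note that such a scan only reports what a pickle would import; loading an untrusted pickle still executes arbitrary constructors. If a file like `test.pkl` must be loaded at all, doing so through a `pickle.Unpickler` subclass whose `find_class` allow-lists exactly the expected classes is the usual mitigation.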
  test_bleu.py                Safe     3.68 kB     Upload folder using huggingface_hub        over 1 year ago
  test_bleu_chrf.py           Safe     5.79 kB     Upload folder using huggingface_hub        over 1 year ago
  utils.py                    Safe     1.83 kB     Upload folder using huggingface_hub        over 1 year ago