Dataset metadata — Modalities: Text · Formats: json · Languages: Czech · Libraries: Datasets, pandas
mfajcik committed e3bf509 (verified) · 1 parent: e8c8d94

Upload 3 files
Files changed (3):
  1. convert_ner_court_decisions.py +1 -1
  2. test.jsonl +2 -2
  3. train.jsonl +0 -0
convert_ner_court_decisions.py CHANGED
@@ -33,7 +33,7 @@ def whitespace_tokenize_with_offsets(text):
     return tokens, start_tok_offsets, end_tok_offsets
 
 
-def proc_dataset(dataset, max_text_length=300):
+def proc_dataset(dataset, max_text_length=200):
     r = []
     for doc in dataset:
         text = doc["text"]
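The hunk above changes only the default `max_text_length` of `proc_dataset` (300 → 200); the context lines also show the tail of `whitespace_tokenize_with_offsets`, which returns `(tokens, start_tok_offsets, end_tok_offsets)`. A minimal sketch consistent with that signature — the regex-based body is an assumption for illustration, not the repository's actual implementation:

```python
import re


def whitespace_tokenize_with_offsets(text):
    """Split text on whitespace, recording each token's character span.

    Assumed behavior, matching only the return signature visible in the
    diff context: tokens plus parallel lists of start/end char offsets.
    """
    tokens, start_tok_offsets, end_tok_offsets = [], [], []
    for m in re.finditer(r"\S+", text):  # each maximal run of non-whitespace
        tokens.append(m.group())
        start_tok_offsets.append(m.start())
        end_tok_offsets.append(m.end())
    return tokens, start_tok_offsets, end_tok_offsets


tokens, starts, ends = whitespace_tokenize_with_offsets("Nejvyšší soud rozhodl")
```

Keeping character offsets alongside tokens lets downstream code (e.g. a chunker driven by `max_text_length`) map token-level NER spans back to the original court-decision text.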
test.jsonl CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:fd96862b8f4827e07935276723d3f80d4a749afc3cbdca43c43b4bc2a2c661b0
-size 9412715
+oid sha256:7f197d034f23d8560442620711b0a279938eecdff10be27a514082dcb36d5237
+size 7966402
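Because test.jsonl is stored via Git LFS, the diff above touches only the three-line pointer file: the `oid` (SHA-256 of the content) and `size` change, not the data itself. A small sketch of reading such a pointer — the helper `parse_lfs_pointer` is hypothetical, written against the pointer format shown in the diff:

```python
def parse_lfs_pointer(text):
    """Parse a Git LFS pointer file into a dict of its key/value lines.

    Each pointer line has the form "<key> <value>" (e.g. "size 7966402").
    """
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields


# The new pointer contents from the hunk above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:7f197d034f23d8560442620711b0a279938eecdff10be27a514082dcb36d5237
size 7966402
"""
info = parse_lfs_pointer(pointer)
```

The drop in `size` (9412715 → 7966402 bytes) is consistent with the script change: a smaller `max_text_length` regenerating a smaller test split.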
train.jsonl CHANGED
The diff for this file is too large to render. See raw diff