Malikeh1375 committed
Commit 38a70b5 · verified · 1 Parent(s): 60328ef

Update README.md

Files changed (1)
  1. README.md +2 -137
README.md CHANGED
@@ -778,141 +778,6 @@ configs:
  - split: test
    path: tokenizer_robustness_completion_general_unusual_formatting/test-*
 ---
 
- # Dataset Card for Tokenization Robustness
-
- <!-- Provide a quick summary of the dataset. -->
-
- A comprehensive evaluation dataset for testing robustness of different tokenization strategies.
-
- ## Dataset Details
-
- ### Dataset Description
-
- <!-- Provide a longer summary of what this dataset is. -->
-
- This dataset evaluates how robust language models are to different tokenization strategies and edge cases. It includes text completion questions with multiple choice answers designed to test various aspects of tokenization handling.
-
- - **Curated by:** R3
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** cc
-
- ### Dataset Sources [optional]
-
- <!-- Provide the basic links for the dataset. -->
-
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]
-
- ## Uses
-
- <!-- Address questions around how the dataset is intended to be used. -->
-
- ### Direct Use
-
- <!-- This section describes suitable use cases for the dataset. -->
-
- [More Information Needed]
-
- ### Out-of-Scope Use
-
- <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
-
- [More Information Needed]
-
- ## Dataset Structure
-
- <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
-
- The dataset contains multiple-choice questions with associated metadata about tokenization types and categories.
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- <!-- Motivation for the creation of this dataset. -->
-
- [More Information Needed]
-
- ### Source Data
-
- <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
-
- #### Data Collection and Processing
-
- <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
-
- [More Information Needed]
-
- #### Who are the source data producers?
-
- <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
-
- [More Information Needed]
-
- ### Annotations [optional]
-
- <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
-
- #### Annotation process
-
- <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
-
- [More Information Needed]
-
- #### Who are the annotators?
-
- <!-- This section describes the people or systems who created the annotations. -->
-
- [More Information Needed]
-
- #### Personal and Sensitive Information
-
- <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
-
- [More Information Needed]
-
- ## Bias, Risks, and Limitations
-
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- The dataset focuses primarily on English text and may not generalize to other languages or tokenization schemes not covered in the evaluation.
-
- ### Recommendations
-
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
-
- ## Citation [optional]
-
- <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
-
- **BibTeX:**
-
- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]
-
- ## Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
-
- [More Information Needed]
-
- ## More Information [optional]
-
- [More Information Needed]
-
- ## Dataset Card Authors [optional]
-
- [More Information Needed]
-
- ## Dataset Card Contact
-
- [More Information Needed]
+ ## TokSuite Bonus Benchmarks (General Collection)
+ 
+ This dataset provides a **bonus set of TokSuite benchmarks** designed to probe tokenizer robustness under **language-agnostic, cross-domain surface-form perturbations** that commonly occur in real-world text. The General collection includes canonical questions alongside targeted perturbations such as abbreviations, character deletion, currency symbol usage, diverse date formats, and unusual or non-standard formatting. Unlike language-specific TokSuite subsets, these benchmarks focus on **universal tokenization stressors** that arise across languages, domains, and writing contexts, offering a compact but high-signal evaluation suite for analyzing how tokenizers handle formatting irregularities, symbol-heavy text, and noisy inputs independent of linguistic morphology.
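A minimal usage sketch follows, showing how one of the General-collection configs named in the YAML `configs` section above might be loaded with the Hugging Face `datasets` library. The repository id below is a placeholder (the actual dataset id is not stated in this diff); the config and split names are taken from the README configuration.

```python
# Minimal sketch, assuming access via the Hugging Face `datasets` library.
# The repository id is a placeholder; the config name and split come from the
# YAML `configs` section of this README.
from datasets import load_dataset

repo_id = "<namespace>/<dataset-name>"  # placeholder — substitute the real dataset id

ds = load_dataset(
    repo_id,
    "tokenizer_robustness_completion_general_unusual_formatting",
    split="test",
)

# Each row is a multiple-choice completion item with tokenization metadata;
# printing one record shows the exact field names without assuming a schema.
print(ds[0])
```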