---
license: apache-2.0
---

# FEWS and SemCor Dataset for Word Sense Disambiguation (WSD)

A Word Sense Disambiguation dataset covering corner cases that the GPT-4 Turbo model finds difficult to disambiguate.

This repository contains a formatted and cleaned version of the FEWS and SemCor datasets, arranged specifically for fine-tuning models on Word Sense Disambiguation (WSD) tasks.

## Dataset Description

The FEWS and SemCor data has been preprocessed and formatted so that it is directly usable for training and fine-tuning language models on word sense disambiguation. Each ambiguous word in the context is enclosed in `<WSD>` tags to clearly indicate which word requires disambiguation.

For example:

```
Original: The bank had a strong security system.
Tagged:   The <WSD>bank</WSD> had a strong security system.
```

This tagging system allows models to focus on the specific ambiguous words during training and inference.
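The tagging convention above can be reproduced with a small helper. This is an illustrative sketch, not part of the dataset's tooling; the function name is hypothetical.

```python
import re

def tag_ambiguous_word(sentence: str, target: str) -> str:
    """Wrap the first whole-word occurrence of `target` in <WSD> tags."""
    # \b anchors ensure we match "bank" but not "embankment".
    pattern = re.compile(rf"\b{re.escape(target)}\b")
    return pattern.sub(f"<WSD>{target}</WSD>", sentence, count=1)

print(tag_ambiguous_word("The bank had a strong security system.", "bank"))
# -> The <WSD>bank</WSD> had a strong security system.
```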

## Data Format

The dataset is organized to match the Alpaca prompt format, with three fields:

- Instruction
- Input
- Output
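As a minimal sketch of how the three fields plug into the standard Alpaca prompt template: the template text below is the commonly used Alpaca wording, and the example record (its instruction and output strings) is hypothetical, shown only to illustrate the shape of the data.

```python
# Standard Alpaca prompt template; {instruction}, {input}, and {output}
# correspond to the dataset's three fields.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n{output}"
)

# Hypothetical record illustrating the expected field contents.
example = {
    "instruction": "Identify the sense of the word enclosed in <WSD> tags.",
    "input": "The <WSD>bank</WSD> had a strong security system.",
    "output": "bank: a financial institution that accepts deposits.",
}

print(ALPACA_TEMPLATE.format(**example))
```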

## Usage

This dataset is intended for:

1. Fine-tuning language models for word sense disambiguation tasks
2. Evaluating WSD performance
3. Research on cross-lingual semantic disambiguation

## Citation

If you use this dataset in your research, please cite the original FEWS and SemCor papers.

```bibtex
@inproceedings{blevins2021fews,
  title={FEWS: Large-Scale, Low-Shot Word Sense Disambiguation with the Dictionary},
  author={Terra Blevins and Mandar Joshi and Luke Zettlemoyer},
  booktitle={Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics},
  year={2021},
  url={https://blvns.github.io/papers/eacl2021.pdf}
}
```

```bibtex
@inproceedings{miller1994using,
  title={Using a semantic concordance for sense identification},
  author={Miller, George A and Chodorow, Martin and Landes, Shari and Leacock, Claudia and Thomas, Robert G},
  booktitle={Human Language Technology: Proceedings of a Workshop held at Plainsboro, New Jersey, March 8-11, 1994},
  year={1994}
}
```

## License

This dataset is made available under the Apache License 2.0.