l3cube-pune committed · Commit f099705 · verified · parent: b29fb4a

Update README.md

Files changed (1): README.md (+3 −13)
README.md CHANGED
@@ -15,7 +15,7 @@ task_categories:
 
 # L3Cube-MahaParaphrase Dataset
 
-Paper: [MahaParaphrase: A Marathi Paraphrase Detection Corpus and BERT-based Models](https://huggingface.co/papers/2508.17444)
+Paper: [MahaParaphrase: A Marathi Paraphrase Detection Corpus and BERT-based Models](https://huggingface.co/papers/2508.17444) <br>
 Code: https://github.com/l3cube-pune/MarathiNLP
 
 ## Overview:
@@ -44,22 +44,12 @@ The dataset is ideal for training and evaluating NLP models for:
 ## Model Benchmarks:
 Standard transformer-based models like **BERT** have been evaluated on this dataset, providing a performance baseline for future research.
 
-## Sample Usage
-
-This dataset is part of the `mahaNLP` library. You can install it via pip:
-
-```bash
-pip install mahaNLP
-```
-
-For usage examples, please refer to the [L3Cube-MahaNLP Colab demo](https://colab.research.google.com/drive/1POx3Bi1cML6-s3Z3u8g8VpqzpoYCyv2q).
-
 ## Citation:
 If you use this dataset, please cite the original work as follows:
 ```bibtex
-@article{joshi2025mahaparaphrase,
+@article{jadhav2025mahaparaphrase,
 title={MahaParaphrase: A Marathi Paraphrase Detection Corpus and BERT-based Models},
-author={Joshi, Raviraj},
+author={Jadhav, Suramya and Shanbhag, Abhay and Thakurdesai, Amogh and Sinare, Ridhima and Joshi, Ananya and Joshi, Raviraj},
 journal={arXiv preprint arXiv:2508.17444},
 year={2025}
 }
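Since this commit drops the README's Sample Usage section, a minimal sketch of inspecting the paraphrase pairs may be useful for context. The field names (`sentence1`, `sentence2`, `label`) and the Hub dataset id are assumptions, not confirmed by this commit; loading the real data would go through the Hugging Face `datasets` library.

```python
from collections import Counter

def label_distribution(examples):
    """Count how many sentence pairs carry each paraphrase label.

    Works on any iterable of dicts with a "label" key, e.g. rows of a
    Hugging Face dataset split.
    """
    return Counter(ex["label"] for ex in examples)

if __name__ == "__main__":
    # Loading the real corpus requires network access; the dataset id
    # below is an assumption based on the l3cube-pune org naming:
    # from datasets import load_dataset
    # ds = load_dataset("l3cube-pune/MahaParaphrase", split="train")
    # print(label_distribution(ds))

    # Offline stand-in rows mirroring the assumed schema:
    sample = [
        {"sentence1": "...", "sentence2": "...", "label": "P"},
        {"sentence1": "...", "sentence2": "...", "label": "NP"},
        {"sentence1": "...", "sentence2": "...", "label": "P"},
    ]
    print(label_distribution(sample))  # Counter({'P': 2, 'NP': 1})
```

This only summarizes label balance; for actual train/eval recipes, the repository linked under Code above is the authoritative source.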