---
pipeline_tag: translation
task_categories:
- translation
language:
- en
- fr
tags:
- english
- french
- translation corpus
- en-fr MT
- english to french biomedical corpus
- biomedical machine translation
---

### Biomedical Domain Corpus for EN-FR

This repository contains biomedical-domain data for the English-French language pair, scraped from Wikipedia. In the first development phase, we scraped in-domain data and extracted parallel sentences at three similarity thresholds, i.e. 90, 85, and 80 (the repository folders are named after these thresholds), giving three data files. Because this data still contained many out-of-domain sentences, we applied a second, domain-centric filter focused on retaining biomedical sentences: sentences were kept based on their proximity to in-domain data (Medline titles), again at three thresholds, i.e. 20, 10, and 0. Each data file from the first phase therefore has three bio-filtered versions:
21
+
22
+ Threshold90: biofiltered t20,t10, and t0.
23
+
24
+ Threshold85: biofiltered t20,t10, and t0.
25
+
26
+ Threshold80: biofiltered t20,t10, and t0.
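
The two-phase selection described above can be sketched as follows. This is a minimal illustration only, not the released pipeline: the similarity function here is a toy lexical-overlap score, whereas the actual corpus was built with model-based similarity (see the paper), and the function names are placeholders.

```python
def similarity(a: str, b: str) -> float:
    """Toy lexical-overlap similarity in [0, 100]. A real pipeline would
    score pairs with cross-lingual sentence embeddings; this stand-in only
    illustrates how thresholding works."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return 100.0 * len(ta & tb) / max(len(ta | tb), 1)

def phase1_parallel_filter(pairs, threshold):
    """Phase 1: keep (en, fr) pairs whose similarity clears the
    threshold (90 / 85 / 80 for this corpus)."""
    return [(en, fr) for en, fr in pairs if similarity(en, fr) >= threshold]

def phase2_domain_filter(pairs, domain_refs, threshold):
    """Phase 2: keep pairs whose English side is close enough to any
    in-domain reference sentence (Medline titles; thresholds 20 / 10 / 0).
    At t0 every pair survives, matching the least aggressive setting."""
    return [
        (en, fr)
        for en, fr in pairs
        if max(similarity(en, ref) for ref in domain_refs) >= threshold
    ]
```

Raising the phase-2 threshold trades corpus size for domain purity, which is why each phase-1 file is released in three bio-filtered variants.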

For a more in-depth exploration of our work, please refer to our **[paper](https://aclanthology.org/2023.wmt-1.26.pdf)**.

## Corpus Details
- **Total Sentences:** 6.3 million
  - Threshold-90: 136,854 sentences
  - Threshold-85: 498,776 sentences
  - Threshold-80: 801,268 sentences
- **Domains Covered:** Biomedical domain
- **Test Corpus:** Medline 20 Test Sets

## Usage
These resources are intended to facilitate research and development in biomedical-domain machine translation. They can be used to train new models or improve existing ones, enabling high-quality domain-specific machine translation between English and French.
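
For training, the corpus files can be consumed as plain parallel text. A minimal reader is sketched below; the one-pair-per-line tab-separated layout is an assumption, so adjust the parsing to the actual file format in this repository.

```python
import csv
import io

def read_parallel_tsv(stream):
    """Read tab-separated EN<TAB>FR sentence pairs into two aligned lists.
    NOTE: the TSV layout is assumed for illustration; check the actual
    files in the threshold folders before relying on it."""
    en_side, fr_side = [], []
    for row in csv.reader(stream, delimiter="\t"):
        if len(row) != 2:  # skip malformed or empty lines
            continue
        en_side.append(row[0].strip())
        fr_side.append(row[1].strip())
    return en_side, fr_side

# Usage with an in-memory sample; pass an open file object for real data.
sample = "The cell divides.\tLa cellule se divise.\nmalformed line without a tab\n"
en, fr = read_parallel_tsv(io.StringIO(sample))
```

Keeping the two sides as aligned lists makes it easy to feed the pairs into standard MT toolkits, which typically expect one source and one target sentence per line.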

## Citation

**If you use our corpus, kindly cite our [paper](https://aclanthology.org/2023.wmt-1.26.pdf)**:
```
@inproceedings{firdous-rauf-2023-biomedical,
    title = "Biomedical Parallel Sentence Retrieval Using Large Language Models",
    author = "Firdous, Sheema  and
      Rauf, Sadaf Abdul",
    editor = "Koehn, Philipp  and
      Haddow, Barry  and
      Kocmi, Tom  and
      Monz, Christof",
    booktitle = "Proceedings of the Eighth Conference on Machine Translation",
    month = dec,
    year = "2023",
    address = "Singapore",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.wmt-1.26",
    pages = "263--270",
    abstract = "We have explored the effect of in domain knowledge during parallel sentence filtering from in domain corpora. Models built with sentences mined from in domain corpora without domain knowledge performed poorly, whereas model performance improved by more than 2.3 BLEU points on average with further domain centric filtering. We have used Large Language Models for selecting similar and domain aligned sentences. Our experiments show the importance of inclusion of domain knowledge in sentence selection methodologies even if the initial comparable corpora are in domain.",
}
```