parquet-converter committed 8dc94dd (parent: 2d1c9d2)

Update parquet files
.gitignore DELETED
@@ -1,4 +0,0 @@
- NELL
- nell.tar.gz
- Wiki
- wiki.tar.gz
 
 
 
 
 
README.md DELETED
@@ -1,55 +0,0 @@
- ---
- language:
- - en
- license:
- - other
- multilinguality:
- - monolingual
- size_categories:
- - n<1K
- pretty_name: link_prediction_nell_one
- ---
-
- # Dataset Card for "relbert/link_prediction_nell_one"
- ## Dataset Description
- - **Repository:** [https://github.com/xwhan/One-shot-Relational-Learning](https://github.com/xwhan/One-shot-Relational-Learning)
- - **Paper:** [https://aclanthology.org/D18-1223/](https://aclanthology.org/D18-1223/)
- - **Dataset:** Few-shot link prediction
-
- ### Dataset Summary
- This is the NELL-One dataset for few-shot link prediction.
-
- ## Dataset Structure
- ### Data Instances
- An example of `test` looks as follows.
- ```
- {
-     "relation": "concept:sportsgamesport",
-     "head": "concept:sportsgame:n1937_world_series",
-     "tail": "concept:sport:baseball"
- }
- ```
-
- ### Citation Information
- ```
- @inproceedings{xiong-etal-2018-one,
-     title = "One-Shot Relational Learning for Knowledge Graphs",
-     author = "Xiong, Wenhan and
-       Yu, Mo and
-       Chang, Shiyu and
-       Guo, Xiaoxiao and
-       Wang, William Yang",
-     booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
-     month = oct # "-" # nov,
-     year = "2018",
-     address = "Brussels, Belgium",
-     publisher = "Association for Computational Linguistics",
-     url = "https://aclanthology.org/D18-1223",
-     doi = "10.18653/v1/D18-1223",
-     pages = "1980--1990",
-     abstract = "Knowledge graphs (KG) are the key components of various natural language processing applications. To further expand KGs{'} coverage, previous studies on knowledge graph completion usually require a large number of positive examples for each relation. However, we observe long-tail relations are actually more common in KGs and those newly added relations often do not have many known triples for training. In this work, we aim at predicting new facts under a challenging setting where only one training instance is available. We propose a one-shot relational learning framework, which utilizes the knowledge distilled by embedding models and learns a matching metric by considering both the learned embeddings and one-hop graph structures. Empirically, our model yields considerable performance improvements over existing embedding models, and also eliminates the need of re-training the embedding models when dealing with newly added relations.",
- }
- ```
-
- ### LICENSE
- TBA
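The `test` instance in the deleted card above is one line of a JSONL split: a single JSON object with `relation`, `head`, and `tail` keys. A minimal sketch of parsing such a line with the standard library (the literal string is copied from the card's example, not read from the actual data file):

```python
import json

# One line of a JSONL split, as shown in the dataset card above.
line = ('{"relation": "concept:sportsgamesport", '
        '"head": "concept:sportsgame:n1937_world_series", '
        '"tail": "concept:sport:baseball"}')

# Each line decodes to a dict describing one knowledge-graph triple.
triple = json.loads(line)
print(triple["head"], triple["relation"], triple["tail"])
```

Every line of the `nell.*.jsonl` and `wiki.*.jsonl` files deleted in this commit follows this same three-field shape.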
data/nell.train.jsonl DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:59fcc5f746777fcda4058a0bf8cb6a55e03bde3b3ffdf6716a259a0fe2740374
- size 1071208
 
 
 
 
data/nell.vocab.txt DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:29af9862df4777ce613d72e94c460ade35e60477704fa7a48274cc803ca7ea4f
- size 2114701
 
 
 
 
data/wiki.train.jsonl DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:7f50246ef47702b72d633e1807b79d4915e2e2054bdc6cb4c4c2aa9b9ab1b13f
- size 3726937
 
 
 
 
data/wiki.validation.jsonl DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:c9097f4c1d08f56b050cddddbc15dba0c282412ec519bf994a768930db825316
- size 401621
 
 
 
 
data/wiki.vocab.txt DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:8491e7a8cd96a74c74667adfa3a8403d6040a1aaf265f8b8d2161fb2c684d119
- size 46168647
 
 
 
 
link_prediction_nell_one.py DELETED
@@ -1,85 +0,0 @@
- import json
- import datasets
-
-
- logger = datasets.logging.get_logger(__name__)
- _DESCRIPTION = """NELL-One, a few-shot link prediction dataset."""
- _NAME = "link_prediction"
- _VERSION = "0.0.0"
- _CITATION = """
- @inproceedings{xiong-etal-2018-one,
-     title = "One-Shot Relational Learning for Knowledge Graphs",
-     author = "Xiong, Wenhan and
-       Yu, Mo and
-       Chang, Shiyu and
-       Guo, Xiaoxiao and
-       Wang, William Yang",
-     booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
-     month = oct # "-" # nov,
-     year = "2018",
-     address = "Brussels, Belgium",
-     publisher = "Association for Computational Linguistics",
-     url = "https://aclanthology.org/D18-1223",
-     doi = "10.18653/v1/D18-1223",
-     pages = "1980--1990",
-     abstract = "Knowledge graphs (KG) are the key components of various natural language processing applications. To further expand KGs{'} coverage, previous studies on knowledge graph completion usually require a large number of positive examples for each relation. However, we observe long-tail relations are actually more common in KGs and those newly added relations often do not have many known triples for training. In this work, we aim at predicting new facts under a challenging setting where only one training instance is available. We propose a one-shot relational learning framework, which utilizes the knowledge distilled by embedding models and learns a matching metric by considering both the learned embeddings and one-hop graph structures. Empirically, our model yields considerable performance improvements over existing embedding models, and also eliminates the need of re-training the embedding models when dealing with newly added relations.",
- }
- """
-
- _HOME_PAGE = "https://github.com/asahi417/relbert"
- _URL = f'https://huggingface.co/datasets/relbert/{_NAME}/resolve/main/data'
- _URLS = {
-     str(datasets.Split.TRAIN): [f'{_URL}/train.jsonl'],
-     str(datasets.Split.VALIDATION): [f'{_URL}/valid.jsonl'],
-     str(datasets.Split.TEST): [f'{_URL}/test.jsonl']
- }
-
-
- class LinkPredictionConfig(datasets.BuilderConfig):
-     """BuilderConfig"""
-
-     def __init__(self, **kwargs):
-         """BuilderConfig.
-         Args:
-             **kwargs: keyword arguments forwarded to super.
-         """
-         super(LinkPredictionConfig, self).__init__(**kwargs)
-
-
- class LinkPrediction(datasets.GeneratorBasedBuilder):
-     """Dataset."""
-
-     BUILDER_CONFIGS = [
-         LinkPredictionConfig(name=_NAME, version=datasets.Version(_VERSION), description=_DESCRIPTION)
-     ]
-
-     def _split_generators(self, dl_manager):
-         downloaded_file = dl_manager.download_and_extract(_URLS)
-         return [datasets.SplitGenerator(name=i, gen_kwargs={"filepaths": downloaded_file[str(i)]})
-                 for i in [datasets.Split.TRAIN, datasets.Split.VALIDATION, datasets.Split.TEST]]
-
-     def _generate_examples(self, filepaths):
-         _key = 0
-         for filepath in filepaths:
-             logger.info(f"generating examples from = {filepath}")
-             with open(filepath, encoding="utf-8") as f:
-                 _list = [i for i in f.read().split('\n') if len(i) > 0]
-                 for i in _list:
-                     data = json.loads(i)
-                     yield _key, data
-                     _key += 1
-
-     def _info(self):
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=datasets.Features(
-                 {
-                     "relation": datasets.Value("string"),
-                     "head": datasets.Value("string"),
-                     "tail": datasets.Value("string"),
-                 }
-             ),
-             supervised_keys=None,
-             homepage=_HOME_PAGE,
-             citation=_CITATION,
-         )
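The deleted builder's `_generate_examples` reads newline-delimited JSON and yields `(key, record)` pairs. A self-contained sketch of the same loop, using a hypothetical temporary file in place of the real splits (field values here are made up for illustration):

```python
import json
import tempfile

def generate_examples(filepath):
    """Yield (key, dict) pairs from a JSONL file, mirroring the builder's loop."""
    key = 0
    with open(filepath, encoding="utf-8") as f:
        for line in f:
            if line.strip():  # skip blank lines, as the builder does
                yield key, json.loads(line)
                key += 1

# Hypothetical two-line JSONL file standing in for a real split.
with tempfile.NamedTemporaryFile("w", suffix=".jsonl", delete=False) as f:
    f.write('{"relation": "r1", "head": "h1", "tail": "t1"}\n')
    f.write('{"relation": "r2", "head": "h2", "tail": "t2"}\n')
    path = f.name

examples = list(generate_examples(path))
print(len(examples))  # 2
```

Inside the real builder this generator is wired to the split files by `_split_generators`; after this commit the data ships as parquet, so the script is no longer needed.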
process.py DELETED
@@ -1,63 +0,0 @@
- """
- - Wiki-One https://sites.cs.ucsb.edu/~xwhan/datasets/wiki.tar.gz
- - NELL-One https://sites.cs.ucsb.edu/~xwhan/datasets/nell.tar.gz
-
- wget https://sites.cs.ucsb.edu/~xwhan/datasets/nell.tar.gz
- tar -xzf nell.tar.gz
-
- wget https://sites.cs.ucsb.edu/~xwhan/datasets/wiki.tar.gz
- tar -xzf wiki.tar.gz
- """
- import os
- import json
- from itertools import chain
-
- data_dir_nell = "NELL"
- data_dir_wiki = "Wiki"
- os.makedirs("data", exist_ok=True)
-
- if not os.path.exists(data_dir_nell):
-     raise ValueError("Please download the dataset first\n"
-                      "wget https://sites.cs.ucsb.edu/~xwhan/datasets/nell.tar.gz\n"
-                      "tar -xzf nell.tar.gz")
-
- if not os.path.exists(data_dir_wiki):
-     raise ValueError("Please download the dataset first\n"
-                      "wget https://sites.cs.ucsb.edu/~xwhan/datasets/wiki.tar.gz\n"
-                      "tar -xzf wiki.tar.gz")
-
-
- def read_file(_file):
-     with open(_file, 'r') as f_reader:
-         tmp = json.load(f_reader)
-     flatten = list(chain(*[[{"relation": r, "head": h, "tail": t} for (h, r, t) in v] for v in tmp.values()]))
-     return flatten
-
-
- def read_vocab(_file):
-     with open(_file) as f_reader:
-         ent2ids = json.load(f_reader)
-     return sorted(list(ent2ids.keys()))
-
-
- if __name__ == '__main__':
-     vocab = read_vocab(f"{data_dir_nell}/ent2ids")
-     with open("data/nell.vocab.txt", 'w') as f:
-         f.write("\n".join(vocab))
-
-     vocab = read_vocab(f"{data_dir_wiki}/ent2ids")
-     with open("data/wiki.vocab.txt", 'w') as f:
-         f.write("\n".join(vocab))
-
-     for i, s in zip(['dev_tasks.json', 'test_tasks.json', 'train_tasks.json'], ['validation', 'test', 'train']):
-         d = read_file(f"{data_dir_nell}/{i}")
-         with open(f"data/nell.{s}.jsonl", "w") as f:
-             f.write("\n".join([json.dumps(_d) for _d in d]))
-
-         d = read_file(f"{data_dir_wiki}/{i}")
-         with open(f"data/wiki.{s}.jsonl", "w") as f:
-             f.write("\n".join([json.dumps(_d) for _d in d]))
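The `read_file` helper above flattens a relation-keyed task file (lists of `(head, relation, tail)` tuples) into one flat list of triple dicts via `itertools.chain`. A small sketch of that transformation with hypothetical task data, since the real `*_tasks.json` files are not in this repo:

```python
from itertools import chain

# Hypothetical task-file structure: each key is a relation/task whose value
# is a list of (head, relation, tail) tuples, as in the upstream datasets.
tmp = {
    "taskA": [("h1", "r1", "t1"), ("h2", "r1", "t2")],
    "taskB": [("h3", "r2", "t3")],
}

# Same expression as in read_file: flatten all tasks into one list of dicts.
flatten = list(chain(*[[{"relation": r, "head": h, "tail": t} for (h, r, t) in v]
                       for v in tmp.values()]))
print(len(flatten))  # 3
```

Each dict in `flatten` is then serialized as one line of the per-split JSONL output files.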
data/nell.validation.jsonl → relbert--fewshot_link_prediction/json-test.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:dce6d2c53f2d4d9e7390033fcc787b5b405a06d13141e5affa5b3b43561657f5
- size 116970
+ oid sha256:82cb1d57ba72cbb4042dd70d22c662ff526a3790ecd3bde586880313bc8c0d7f
+ size 226843
data/wiki.test.jsonl → relbert--fewshot_link_prediction/json-train.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:c7791ec3b056017a9c6bf0ca70714c695459840bd8bb86362c0c81ea46b7ab46
- size 938201
+ oid sha256:fa01b1ca8891b41b46b7b5adf1e5ae63aadca5127019a3f55a2f5fc8159d0b20
+ size 943552
data/nell.test.jsonl → relbert--fewshot_link_prediction/json-validation.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:7a94b344bf4b4b721f9ca1b96813b497c1f87eac795e393e321d370c5cb1dd1e
- size 275455
+ oid sha256:7be31e09199efb2c56eaae54ae2ce069bd3202a143595f0970339e236d439159
+ size 89114
wiki.tar.gz.1 DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:6eb426b1dd72890f6d0b1f55cfcf10da386272d6586f7753c7531cac1330dfeb
- size 5275648