Files changed (1)
  1. README.md +89 -1
README.md CHANGED
@@ -5,4 +5,92 @@ language:
  - en
  size_categories:
  - 10K<n<100K
- ---
+ ---
+
+ # Magic: The Gathering dataset
+
+ This dataset contains the card name, type line, and oracle text of every Magic: The Gathering card.
+ Example usage:
+
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset('augustoperes/mtg_text')
+ dataset
+
+ # outputs:
+ # DatasetDict({
+ #     train: Dataset({
+ #         features: ['card_name', 'type_line', 'oracle_text'],
+ #         num_rows: 20063
+ #     })
+ #     validation: Dataset({
+ #         features: ['card_name', 'type_line', 'oracle_text'],
+ #         num_rows: 5016
+ #     })
+ # })
+ ```
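+
+ If you only need one split, `load_dataset` also accepts a `split` argument; a minimal sketch using the standard `datasets` API:
+
+ ```python
+ from datasets import load_dataset
+
+ # Load only the training split as a Dataset rather than a DatasetDict.
+ train_dataset = load_dataset('augustoperes/mtg_text', split='train')
+ ```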
+
+ A single element of the dataset looks like this:
+
+ ```python
+ train_dataset = dataset['train']
+ train_dataset[0]
+
+ # Outputs:
+ # {'card_name': 'Recurring Insight',
+ #  'type_line': 'Sorcery',
+ #  'oracle_text': "Draw cards equal to the number of cards in target opponent's hand.\nRebound (If you cast this spell from your hand, exile it as it resolves. At the beginning of your next upkeep, you may cast this card from exile without paying its mana cost.)"}
+ ```
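+
+ Because every row carries a `type_line`, you can also slice the dataset by card type with the standard `Dataset.filter` API. A small sketch (the substring match on `type_line` is just one way to pick out a type):
+
+ ```python
+ # Keep only creature cards by matching on the type line.
+ creatures = train_dataset.filter(lambda row: 'Creature' in row['type_line'])
+ print(len(creatures))
+ ```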
+
+ # Example usage with PyTorch
+
+ You can tokenize this dataset and pad it into batched PyTorch tensors with:
+
+ ```python
+ import torch
+ from torch.nn.utils.rnn import pad_sequence
+ from transformers import AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
+
+ def tokenize(sample):
+     # Replace each text field with its token ids.
+     sample["card_name"] = tokenizer(sample["card_name"])["input_ids"]
+     sample["type_line"] = tokenizer(sample["type_line"])["input_ids"]
+     sample["oracle_text"] = tokenizer(sample["oracle_text"])["input_ids"]
+     return sample
+
+ tokenized_dataset = train_dataset.map(tokenize)
+
+ def collate_fn(sequences):
+     # Pad each field to the maximum length in the batch.
+     # 0 is the id of bert-base-uncased's [PAD] token.
+     card_names = [torch.tensor(sequence['card_name']) for sequence in sequences]
+     type_lines = [torch.tensor(sequence['type_line']) for sequence in sequences]
+     oracle_texts = [torch.tensor(sequence['oracle_text']) for sequence in sequences]
+
+     padded_card_name = pad_sequence(card_names, batch_first=True, padding_value=0)
+     padded_type_line = pad_sequence(type_lines, batch_first=True, padding_value=0)
+     padded_oracle_text = pad_sequence(oracle_texts, batch_first=True, padding_value=0)
+
+     return {'card_name': padded_card_name,
+             'type_line': padded_type_line,
+             'oracle_text': padded_oracle_text}
+
+ loader = torch.utils.data.DataLoader(tokenized_dataset, collate_fn=collate_fn, batch_size=4)
+
+ for e in loader:
+     print(e)
+     break
+
+ # Will output:
+ # {'card_name': tensor([[  101, 10694, 12369,   102,     0],
+ #         [  101,  3704,  9881,   102,     0],
+ #         [  101, 22639, 20066,  7347,   102],
+ #         [  101, 25697,  1997,  6019,   102]]),
+ #  'type_line': tensor([[  101,  2061, 19170,  2854,   102,     0,     0],
+ #         [  101,  6492,  1517,  4743,   102,     0,     0],
+ #         [  101,  6492,  1517, 22639,   102,     0,     0],
+ #         [  101,  4372, 14856, 21181,  1517, 15240,   102]]),
+ #  'oracle_text': [omitted for readability]}
+ ```
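+
+ Note that this collate function returns only padded token ids. If your model also expects attention masks, one option is to derive them from the padding value; a sketch, assuming you only need the `oracle_text` field (0 is bert-base-uncased's [PAD] id, so it never appears inside a real sequence):
+
+ ```python
+ def collate_fn_with_masks(sequences):
+     oracle_texts = [torch.tensor(s['oracle_text']) for s in sequences]
+     input_ids = pad_sequence(oracle_texts, batch_first=True, padding_value=0)
+     # 1 for real tokens, 0 for padding positions.
+     attention_mask = (input_ids != 0).long()
+     return {'input_ids': input_ids, 'attention_mask': attention_mask}
+ ```
+
+ Alternatively, `transformers.DataCollatorWithPadding` can pad a single tokenized text field and build the attention masks for you.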