augustoperes committed fb5f078 (1 parent: 1c83892): Create README.md

# Magic: The Gathering dataset

This dataset contains the text of all Magic: The Gathering cards.
Example usage:

```python
from datasets import load_dataset

dataset = load_dataset('augustoperes/mtg_text')
dataset

# outputs:
# DatasetDict({
#     train: Dataset({
#         features: ['card_name', 'type_line', 'oracle_text'],
#         num_rows: 20063
#     })
#     validation: Dataset({
#         features: ['card_name', 'type_line', 'oracle_text'],
#         num_rows: 5016
#     })
# })
```
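The row counts reported above mean the two splits form roughly an 80/20 train/validation partition, which a quick check confirms:

```python
num_train = 20063       # rows in the train split (from the output above)
num_validation = 5016   # rows in the validation split

total = num_train + num_validation
print(total)                        # 25079
print(round(num_train / total, 2))  # 0.8
```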

An individual element of the dataset looks like this:

```python
train_dataset = dataset['train']
train_dataset[0]

# Outputs:
# {'card_name': 'Recurring Insight',
#  'type_line': 'Sorcery',
#  'oracle_text': "Draw cards equal to the number of cards in target opponent's hand.\nRebound (If you cast this spell from your hand, exile it as it resolves. At the beginning of your next upkeep, you may cast this card from exile without paying its mana cost.)"}
```
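Since the dataset targets text generation, each record can be flattened into a single training string. The `card_to_text` helper below is a hypothetical sketch of one way to do this, not part of the dataset (the oracle text in the example is shortened for readability):

```python
def card_to_text(card):
    """Join the three fields of a card record into one training string."""
    return f"{card['card_name']}\n{card['type_line']}\n{card['oracle_text']}"

example = {
    "card_name": "Recurring Insight",
    "type_line": "Sorcery",
    "oracle_text": "Draw cards equal to the number of cards in target opponent's hand.",
}
print(card_to_text(example))
```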

# Example usage with PyTorch

You can tokenize and pad this dataset for use with PyTorch as follows:

```python
from transformers import AutoTokenizer

import torch
from torch.nn.utils.rnn import pad_sequence

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(sample):
    sample["card_name"] = tokenizer(sample["card_name"])["input_ids"]
    sample["type_line"] = tokenizer(sample["type_line"])["input_ids"]
    sample["oracle_text"] = tokenizer(sample["oracle_text"])["input_ids"]
    return sample

tokenized_dataset = train_dataset.map(tokenize)

def collate_fn(sequences):
    # Pad the sequences to the maximum length in the batch
    card_names = [torch.tensor(sequence['card_name']) for sequence in sequences]
    type_line = [torch.tensor(sequence['type_line']) for sequence in sequences]
    oracle_text = [torch.tensor(sequence['oracle_text']) for sequence in sequences]

    padded_card_name = pad_sequence(card_names, batch_first=True, padding_value=0)
    padded_type_line = pad_sequence(type_line, batch_first=True, padding_value=0)
    padded_oracle_text = pad_sequence(oracle_text, batch_first=True, padding_value=0)

    return {'card_name': padded_card_name, 'type_line': padded_type_line, 'padded_oracle_text': padded_oracle_text}

loader = torch.utils.data.DataLoader(tokenized_dataset, collate_fn=collate_fn, batch_size=4)

for e in loader:
    print(e)
    break

# Will output:
# {'card_name': tensor([[ 101, 10694, 12369, 102, 0],
# [ 101, 3704, 9881, 102, 0],
# [ 101, 22639, 20066, 7347, 102],
# [ 101, 25697, 1997, 6019, 102]]),
# 'type_line': tensor([[ 101, 2061, 19170, 2854, 102, 0, 0],
# [ 101, 6492, 1517, 4743, 102, 0, 0],
# [ 101, 6492, 1517, 22639, 102, 0, 0],
# [ 101, 4372, 14856, 21181, 1517, 15240, 102]]),
#  'padded_oracle_text': [omitted for readability]}
```
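The `pad_sequence` call above right-pads every sequence in the batch with zeros up to the length of the longest one; a dependency-free sketch of that behaviour on plain Python lists:

```python
def pad_batch(batch, padding_value=0):
    """Right-pad each list in the batch to the longest list's length."""
    max_len = max(len(seq) for seq in batch)
    return [seq + [padding_value] * (max_len - len(seq)) for seq in batch]

# The shorter sequence gains one trailing 0 to match the longer one.
print(pad_batch([[101, 10694, 12369, 102], [101, 3704, 9881, 102, 7]]))
# [[101, 10694, 12369, 102, 0], [101, 3704, 9881, 102, 7]]
```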

Files changed (1):

README.md ADDED (+8 -0):

+ ---
+ task_categories:
+ - text-generation
+ language:
+ - en
+ size_categories:
+ - 10K<n<100K
+ ---