boris committed
Commit 2be4997 · 1 parent: 3d712ee

New model from https://wandb.ai/wandb/huggingtweets/runs/3bd5uy64

README.md CHANGED
@@ -1,6 +1,6 @@
 ---
 language: en
-thumbnail: http://www.huggingtweets.com/discord/1650121123874/predictions.png
+thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
 tags:
 - huggingtweets
 widget:
@@ -10,7 +10,7 @@ widget:
 <div class="inline-flex flex-col" style="line-height: 1.5;">
 <div class="flex">
 <div
-    style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1392864511669854217/dBymBmGq_400x400.jpg&#39;)">
+    style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1587494876320448512/XH7swTWQ_400x400.jpg&#39;)">
 </div>
 <div
 style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)">
@@ -44,18 +44,18 @@ The model was trained on tweets from Discord.
 | --- | --- |
 | Tweets downloaded | 3250 |
 | Retweets | 0 |
-| Short tweets | 339 |
-| Tweets kept | 2911 |
+| Short tweets | 85 |
+| Tweets kept | 3165 |
 
-[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2bdgd0nt/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
+[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/h46ojex3/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
 
 ## Training procedure
 
 The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @discord's tweets.
 
-Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1uiu02xb) for full transparency and reproducibility.
+Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3bd5uy64) for full transparency and reproducibility.
 
-At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1uiu02xb/artifacts) is logged and versioned.
+At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3bd5uy64/artifacts) is logged and versioned.
 
 ## How to use
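The card's "How to use" section is truncated in this diff. For context, huggingtweets models are loaded through the standard `transformers` text-generation pipeline; a minimal sketch, assuming the model is published under the repo id `huggingtweets/discord` (the id is not visible in the diff):

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 from the Hub (repo id assumed above).
generator = pipeline("text-generation", model="huggingtweets/discord")

# Generate tweet-style continuations of a prompt.
for out in generator("My dream is", num_return_sequences=5):
    print(out["generated_text"])
```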
config.json CHANGED
@@ -37,7 +37,7 @@
     }
   },
   "torch_dtype": "float32",
-  "transformers_version": "4.18.0",
+  "transformers_version": "4.24.0",
   "use_cache": true,
   "vocab_size": 50257
 }
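The only change to `config.json` is the `transformers_version` stamp, which `save_pretrained()` writes as metadata recording the library version used for the retrain; it does not affect how the model loads. A quick way to inspect it, assuming the same `huggingtweets/discord` repo id:

```python
from transformers import AutoConfig

# transformers_version is informational metadata on the config object.
config = AutoConfig.from_pretrained("huggingtweets/discord")
print(config.transformers_version)  # "4.24.0" after this commit
```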
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:49dccc1299d2316abca418adcb3ab0a3a7f0d482de9198050dd333aeb4538070
+oid sha256:472109a55d524b3aaefbf6e94bffabffeb43748f5a75cacd29bf86835e1a86e0
 size 510396521
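Only the Git LFS pointer is versioned here: `oid sha256:` is the checksum of the actual weight file and `size` its byte count (unchanged, since the architecture is identical and only the weight values moved). A small sketch for checking a downloaded `pytorch_model.bin` against the new pointer:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so a ~510 MB checkpoint needn't fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Expected oid from the LFS pointer introduced by this commit.
expected = "472109a55d524b3aaefbf6e94bffabffeb43748f5a75cacd29bf86835e1a86e0"
assert sha256_of("pytorch_model.bin") == expected
```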
special_tokens_map.json CHANGED
@@ -1 +1,5 @@
-{"bos_token": "<|endoftext|>", "eos_token": "<|endoftext|>", "unk_token": "<|endoftext|>"}
+{
+  "bos_token": "<|endoftext|>",
+  "eos_token": "<|endoftext|>",
+  "unk_token": "<|endoftext|>"
+}
tokenizer_config.json CHANGED
@@ -1 +1,10 @@
-{"unk_token": "<|endoftext|>", "bos_token": "<|endoftext|>", "eos_token": "<|endoftext|>", "add_prefix_space": false, "model_max_length": 1024, "special_tokens_map_file": null, "name_or_path": "gpt2", "tokenizer_class": "GPT2Tokenizer"}
+{
+  "add_prefix_space": false,
+  "bos_token": "<|endoftext|>",
+  "eos_token": "<|endoftext|>",
+  "model_max_length": 1024,
+  "name_or_path": "gpt2",
+  "special_tokens_map_file": null,
+  "tokenizer_class": "GPT2Tokenizer",
+  "unk_token": "<|endoftext|>"
+}
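Both tokenizer files are only reserialized with indentation (newer `transformers` releases pretty-print these JSON files); every key and value is unchanged, so tokenization behavior is identical. Loading the tokenizer confirms the special tokens, again assuming the `huggingtweets/discord` repo id:

```python
from transformers import GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("huggingtweets/discord")

# GPT-2 reuses a single token for the bos/eos/unk roles.
print(tok.bos_token, tok.eos_token, tok.unk_token)  # <|endoftext|> three times
print(tok.model_max_length)  # 1024
```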
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:984f89a92a0d1d916f5e7a8dc92d032b550567ce1715b1e36b39c8c89c64005a
-size 3055
+oid sha256:71fc0058e52a60752d7809fde7bf20dba2eba025bdbdcd42b1b7c6ee003fbb06
+size 3375
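`training_args.bin` is the pickled `TrainingArguments` object that the `Trainer` saves alongside the model; the size change (3055 → 3375 bytes) is consistent with fields added to that class between transformers 4.18 and 4.24, though the diff itself only shows the pointer. A sketch for inspecting it locally, assuming `transformers` is installed so unpickling can resolve the class:

```python
import torch

# The file is a pickled Python object, not a tensor state dict, so newer
# torch versions need weights_only=False to load it.
args = torch.load("training_args.bin", weights_only=False)
print(args.learning_rate, args.num_train_epochs)
```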