New model from https://wandb.ai/wandb/huggingtweets/runs/35ni1gxu
- README.md +7 -7
- pytorch_model.bin +1 -1
- tokenizer.json +6 -3
- training_args.bin +1 -1
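
The commit header links the W&B run that produced these files. As a hedged sketch (not part of this commit), the public W&B API can pull that run's recorded config and logged artifacts; the run path `wandb/huggingtweets/35ni1gxu` is taken from the URL above:

```python
# Hedged sketch: inspect the linked W&B run with the public API.
# The run path "wandb/huggingtweets/35ni1gxu" comes from the commit header URL.
import wandb

api = wandb.Api()
run = api.run("wandb/huggingtweets/35ni1gxu")
print(run.config)  # hyperparameters recorded for this training run
for artifact in run.logged_artifacts():  # versioned model/data artifacts
    print(artifact.name, artifact.type)
```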
README.md CHANGED
@@ -1,6 +1,6 @@
 ---
 language: en
-thumbnail:
+thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
 tags:
 - huggingtweets
 widget:
@@ -42,20 +42,20 @@ The model was trained on tweets from Geoff Evamy Hill.
 
 | Data | Geoff Evamy Hill |
 | --- | --- |
-| Tweets downloaded |
-| Retweets |
+| Tweets downloaded | 3194 |
+| Retweets | 1490 |
 | Short tweets | 123 |
-| Tweets kept |
+| Tweets kept | 1581 |
 
-[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/
+[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1r5q702i/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
 
 ## Training procedure
 
 The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gceh's tweets.
 
-Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/
+Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/35ni1gxu) for full transparency and reproducibility.
 
-At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/
+At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/35ni1gxu/artifacts) is logged and versioned.
 
 ## How to use
 
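The hunk ends at the `## How to use` heading, so the README's own snippet is not shown in this diff. As a hedged sketch of how such a model is typically consumed, assuming the repo id `huggingtweets/gceh` derived from the @gceh handle above:

```python
# Hedged sketch: typical text generation with a huggingtweets model.
# The repo id "huggingtweets/gceh" is an assumption based on the @gceh handle.
from transformers import pipeline

generator = pipeline("text-generation", model="huggingtweets/gceh")
print(generator("My dream is", num_return_sequences=5))
```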
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:d2aafa8bed69570682442aa7c349621981747fa5349c5c257116f5c8e6c73941
 size 510396521
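The binary files in this commit are stored as Git LFS pointers: each pointer records only the object's SHA-256 and size, so updating the weights changes the `oid` line while the pointer layout stays the same. A minimal sketch for checking a downloaded file against the pointer's hash (the local path is an assumption):

```python
# Hedged sketch: verify a downloaded LFS object against the pointer's oid.
# The path "pytorch_model.bin" is an assumption; point it at the fetched file.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

expected = "d2aafa8bed69570682442aa7c349621981747fa5349c5c257116f5c8e6c73941"
assert sha256_of("pytorch_model.bin") == expected
```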
tokenizer.json CHANGED
@@ -17,17 +17,20 @@
   "pre_tokenizer": {
     "type": "ByteLevel",
     "add_prefix_space": false,
-    "trim_offsets": true
+    "trim_offsets": true,
+    "use_regex": true
   },
   "post_processor": {
     "type": "ByteLevel",
     "add_prefix_space": true,
-    "trim_offsets": false
+    "trim_offsets": false,
+    "use_regex": true
   },
   "decoder": {
     "type": "ByteLevel",
     "add_prefix_space": true,
-    "trim_offsets": true
+    "trim_offsets": true,
+    "use_regex": true
   },
   "model": {
     "type": "BPE",
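The only change here is that each ByteLevel component now serializes an explicit `"use_regex": true`, which appears to match the library's default splitting behavior, so tokenization output should be unchanged. A quick hedged check that the updated file still loads and round-trips:

```python
# Hedged sketch: confirm the updated tokenizer.json loads and round-trips.
from tokenizers import Tokenizer

tok = Tokenizer.from_file("tokenizer.json")
enc = tok.encode("Hello @gceh!")
print(enc.tokens)
assert tok.decode(enc.ids) == "Hello @gceh!"
```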
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:c442f4b244c715d81066333d29848b25f64b33f0550379317885592d79607339
 size 3055
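`training_args.bin` is the pickled `TrainingArguments` object saved by the transformers `Trainer`. A hedged way to peek at the recorded hyperparameters, assuming a compatible transformers install (the `weights_only=False` flag is only needed, and only available, on newer torch versions):

```python
# Hedged sketch: inspect the pickled TrainingArguments behind this pointer.
# torch.load unpickles arbitrary objects; only run this on files you trust.
import torch

args = torch.load("training_args.bin", weights_only=False)
print(args.learning_rate, args.num_train_epochs)
```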