update wording
README.md CHANGED

@@ -53,7 +53,7 @@ The domain-adapted model will attempt to fill the mask with a token relevant to
 Use the code below to get started with the model. Users pass a `text` string detailing a sentence with a `[MASK]` token. The model will provide options
 to fill the mask based on the sentence context and its background of knowledge. Note: the DistilBERT base model was trained on a very large general corpus of text.
 In our training, we have fine-tuned the model on the large IMDB movie review dataset. That is, the model is now accustomed to filling `[MASK]` tokens with words related to
-the domain of movies/TV/films. To see the model's affinity for cinematic lingo, it is best to be considerate in one's prompt engineering.
+the domain of movies/TV/films. To see the model's affinity for cinematic lingo, it is best to be considerate in one's prompt engineering. Specifically, to most likely generate movie-related text,
 one should ideally pass a masked `text` string that could reasonably be found in someone's review of a movie. See the example below:
 
 ``` python
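The full code example is cut off in this diff hunk, but the usage the README describes can be sketched as follows. This is a minimal sketch, assuming the standard `transformers` fill-mask pipeline; the model id is a placeholder assumption, not the actual repository name from this model card:

```python
# Placeholder model id -- substitute the actual fine-tuned IMDB model repository.
MODEL_ID = "distilbert-base-uncased"

# A masked `text` string that could plausibly appear in someone's movie review,
# as the README recommends for drawing out the movie-domain vocabulary.
text = "The lead actor gave a truly [MASK] performance in the final scene."

def fill_mask(sentence: str, model_id: str = MODEL_ID):
    """Return candidate fillers (token + score) for the [MASK] position."""
    # Imported lazily so the module loads even without `transformers` installed.
    from transformers import pipeline
    unmasker = pipeline("fill-mask", model=model_id)
    return unmasker(sentence)

if __name__ == "__main__":
    # Each candidate is a dict with (among others) "token_str" and "score".
    for candidate in fill_mask(text):
        print(candidate["token_str"], round(candidate["score"], 3))
```

With the movie-tuned model, the candidates should skew toward review vocabulary (e.g. words like "great" or "terrible") rather than generic fillers.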