Librarian Bot: Add base_model information to model
This pull request enriches your model's metadata by adding [`bert-base-cased`](https://huggingface.co/bert-base-cased) as a `base_model` field in the YAML block of your model's `README.md`.
How did we find this information? We performed a regular expression match on your `README.md` file to determine the connection.
**Why add this?** Enhancing your model's metadata in this way:
- **Boosts Discoverability** - It becomes straightforward to trace the relationships between various models on the Hugging Face Hub.
- **Highlights Impact** - It showcases the contributions and influences different models have within the community.
For a hands-on example of how such metadata can play a pivotal role in mapping model connections, take a look at [librarian-bots/base_model_explorer](https://huggingface.co/spaces/librarian-bots/base_model_explorer).
This PR comes courtesy of [Librarian Bot](https://huggingface.co/librarian-bot). If you have any feedback, queries, or need assistance, please don't hesitate to reach out to [@davanstrien](https://huggingface.co/davanstrien). Your input is invaluable to us!
```diff
@@ -9,23 +9,24 @@ datasets:
 metrics:
 - accuracy
 - f1
+base_model: bert-base-cased
 model-index:
 - name: glue-mrpc
   results:
   - task:
-      name: Text Classification
       type: text-classification
+      name: Text Classification
     dataset:
       name: GLUE MRPC
       type: glue
       args: mrpc
     metrics:
-    -
-      type: accuracy
+    - type: accuracy
       value: 0.8553921568627451
+      name: Accuracy
-    -
-      type: f1
+    - type: f1
       value: 0.897391304347826
+      name: F1
   - task:
       type: natural-language-inference
       name: Natural Language Inference
@@ -35,29 +36,29 @@ model-index:
       config: mrpc
       split: validation
     metrics:
-    -
-      type: accuracy
+    - type: accuracy
       value: 0.8553921568627451
+      name: Accuracy
       verified: true
-    -
-      type: precision
+    - type: precision
       value: 0.8716216216216216
+      name: Precision
       verified: true
-    -
-      type: recall
+    - type: recall
       value: 0.9247311827956989
+      name: Recall
       verified: true
-    -
-      type: auc
+    - type: auc
       value: 0.90464282737351
+      name: AUC
       verified: true
-    -
-      type: f1
+    - type: f1
       value: 0.897391304347826
+      name: F1
       verified: true
-    -
-      type: loss
+    - type: loss
       value: 0.6564616560935974
+      name: loss
       verified: true
 ---
 
```