# Hub methods
Methods for using the Hugging Face Hub:
## Push to hub [[evaluate.push_to_hub]]
[Source](https://github.com/huggingface/evaluate/blob/main/src/evaluate/hub.py#L14)
Pushes the result of a metric to the metadata of a model repository on the Hub.
Example:
```python
>>> push_to_hub(
... model_id="huggingface/gpt2-wikitext2",
... metric_value=0.5,
... metric_type="bleu",
... metric_name="BLEU",
... dataset_name="WikiText",
... dataset_type="wikitext",
... dataset_split="test",
... task_type="text-generation",
... task_name="Text Generation"
... )
```
**Parameters:**
- model_id (`str`): Model id from https://hf.co/models.
- task_type (`str`): Task id; refer to the [Hub allowed tasks](https://github.com/huggingface/evaluate/blob/main/src/evaluate/config.py#L154) for allowed values.
- dataset_type (`str`): Dataset id from https://hf.co/datasets.
- dataset_name (`str`): Pretty name for the dataset.
- metric_type (`str`): Metric id from https://hf.co/metrics.
- metric_name (`str`): Pretty name for the metric.
- metric_value (`float`): Computed metric value.
- task_name (`str`, *optional*): Pretty name for the task.
- dataset_config (`str`, *optional*): Dataset configuration used in [load_dataset](https://huggingface.co/docs/datasets/main/en/package_reference/loading_methods#datasets.load_dataset).
- dataset_split (`str`, *optional*): Name of the split used for metric computation.
- dataset_revision (`str`, *optional*): Git hash for the specific version of the dataset.
- dataset_args (`dict[str, int]`, *optional*): Additional arguments passed to [load_dataset](https://huggingface.co/docs/datasets/main/en/package_reference/loading_methods#datasets.load_dataset).
- metric_config (`str`, *optional*): Configuration for the metric (e.g. the GLUE metric has a configuration for each subset).
- metric_args (`dict[str, int]`, *optional*): Arguments passed during [compute()](/docs/evaluate/main/en/package_reference/main_classes#evaluate.EvaluationModule.compute).
- overwrite (`bool`, *optional*, defaults to `False`): If set to `True`, an existing metric field can be overwritten; otherwise, attempting to overwrite any existing field raises an error.
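On the Hub, evaluation results live in the `model-index` block of the model card's YAML metadata, which is what `push_to_hub` updates. As a rough sketch (the exact field layout is the Hub's model-index schema, not a verbatim dump), the example call above would produce metadata along these lines:

```yaml
model-index:
- name: gpt2-wikitext2
  results:
  - task:
      type: text-generation   # task_type
      name: Text Generation   # task_name
    dataset:
      type: wikitext          # dataset_type
      name: WikiText          # dataset_name
      split: test             # dataset_split
    metrics:
    - type: bleu              # metric_type
      value: 0.5              # metric_value
      name: BLEU              # metric_name
```

The optional parameters (`dataset_config`, `dataset_revision`, `metric_config`, and the `*_args` dicts) map onto further keys in the same structure when provided.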
