We are testing our new automated report generation tool for training and inference; it's still in development, so feedback is more than welcome.

BoAmps org

The schema is valid.

Why did you add an "items" field in the dataset when you already have "dataQuantity"?

I noticed that in the infrastructure section, you used the "share" field. Was anyone else using the same machine as the one you used during the fine-tuning? If so, how did you get this share value? If you were the only one using this equipment, "share" should be set to 1.
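For illustration, here is a minimal sketch of what an exclusive-use infrastructure entry could look like. Only the "share" field is taken from the discussion above; the surrounding keys and the component name are placeholders, not necessarily the exact BoAmps schema:

```json
{
  "infrastructure": {
    "components": [
      {
        "name": "gpu-node-0",
        "share": 1
      }
    ]
  }
}
```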

Also, you did not add data relating to the calibration phase. Do you not have the data at all, or can you add it?

Otherwise the rest looks very good to me!

BoAmps org

Oh, one more thing.

In each of the reports, you added two datasets, but they have the same size and volume. Is that normal ?

Thanks for the feedback.

I haven't done any calibration yet, but I will do it for future uploads.

Regarding the datasets, there are two because there is a train / validation split, but due to some limitations in the Hugging Face API, I wasn't able to resolve the size of each split individually, so they both reference the total. I will remove the "items" field.

As for the share value, I thought it was meant to represent the percentage of total power consumption attributable to the element; my bad. I will set it to 1 in the future.

Should I re-export the reports and re-submit them after those changes ?

Alright! Sorry for the delay in the reply.

After some internal discussion with the BoAmps working group, we think it's better to merge the two input datasets together, so that there is no ambiguity regarding the number of tokens. So that the datasets are correctly referenced, we will add a "subset" field to allow specifying which subsets of the dataset were used.
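As a rough sketch of the merge described above: the two split entries become one dataset entry, with each split recorded in the proposed "subset" field. The field names (`split`, `dataQuantity`) and the merge helper are illustrative assumptions, not the official BoAmps schema:

```python
# Hypothetical sketch: merge report entries that describe splits of the
# same dataset into a single entry with a "subset" list, so the token
# count is only stated once. Field names are illustrative placeholders.

def merge_dataset_entries(entries):
    """Merge split-level entries of the same dataset into one entry."""
    merged = dict(entries[0])                       # copy the first entry
    merged["subset"] = [e.get("split") for e in entries]
    merged.pop("split", None)                       # split info now lives in "subset"
    return merged

# Both entries reference the dataset's total size, as in the reports above.
train = {"name": "my-dataset", "split": "train", "dataQuantity": 120000}
val = {"name": "my-dataset", "split": "validation", "dataQuantity": 120000}

merged = merge_dataset_entries([train, val])
```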

Don't worry about it; I'm going to make the needed modifications myself (for this time)!

Thank you for taking the time to upload the data :D

Saauan changed pull request status to merged
