Update README.md
README.md CHANGED
@@ -97,15 +97,17 @@ display(results)

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

-
+EricCRX/books-tabular-dataset

This is the training dataset used.
-It consists of 30 original
+It consists of 30 original measurements used for validation along with 300 synthetic pieces of data for training.

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
-This model was trained with an AutoML process with accuracy as the main metrics.
+This model was trained with an AutoML process with accuracy as the main metric.
+This model used a max time_limit of 300 seconds to reduce training time and the "best_quality" preset to improve results.
+


#### Training Hyperparameters
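The dataset entry above gives the Hub ID but no loading example. As a minimal sketch, assuming the Hugging Face `datasets` library and split names `train` (the ~300 synthetic rows) and `original` (the 30 measured rows) inferred from the card's wording, loading it might look like this:

```python
# Hedged sketch: the dataset ID comes from the card; the split names are assumptions.
from datasets import load_dataset

ds = load_dataset("EricCRX/books-tabular-dataset")

train_df = ds["train"].to_pandas()        # assumed split: ~300 synthetic training rows
original_df = ds["original"].to_pandas()  # assumed split: 30 original measurements

print(train_df.shape, original_df.shape)
```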
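The card only says "AutoML", but `time_limit` and `"best_quality"` match AutoGluon's `TabularPredictor` API, so one plausible, unconfirmed reconstruction of the training call is sketched below; the label column name `hardcover` is a placeholder, not taken from the dataset.

```python
# Assumed library (AutoGluon) and a placeholder label column; settings are per the card.
from autogluon.tabular import TabularPredictor

predictor = TabularPredictor(
    label="hardcover",       # placeholder column: 1 = hardcover, 0 = softcover
    eval_metric="accuracy",  # accuracy as the main metric, per the card
).fit(
    train_df,                # the ~300 synthetic training rows loaded above
    time_limit=300,          # max training time of 300 seconds, per the card
    presets="best_quality",  # "best_quality" preset, per the card
)
```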
@@ -129,18 +131,23 @@ The testing data was the 'original' split, the 30 original images in this set.

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

-This dataset is evaluating whether the
+The evaluation checks whether the books are hardcovers ("1") or softcovers ("0").

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

The testing metric used was accuracy to ensure the highest accuracy of the model possible.
+Training time was also considered to ensure final models were not computationally infeasible.


### Results

-After training with the initial dataset, this model reached an accuracy of
+After training with the initial dataset, this model reached an accuracy of 97% in validation.
+It also had an individual prediction time of 0.12 seconds, making it fast as well as accurate.
+
+This validation should not be taken as a measure of robustness: with such a small dataset, performance on outside measurements cannot be confirmed.
+Expanding the dataset could surface issues with, or improvements to, this model.

#### Summary

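To make the reported 97% validation accuracy and 0.12-second prediction time concrete, here is a minimal sketch of how such figures could be measured, continuing the assumed AutoGluon setup above (the library choice, split names, and `hardcover` label column are all assumptions, not confirmed by the card):

```python
import time

# Accuracy on the 30 held-out original measurements (assumed split).
scores = predictor.evaluate(original_df)
print(scores["accuracy"])  # the card reports roughly 0.97

# Single-row prediction latency, in the spirit of the 0.12 s figure.
row = original_df.drop(columns=["hardcover"]).head(1)  # drop the placeholder label
start = time.perf_counter()
predictor.predict(row)
print(f"prediction time: {time.perf_counter() - start:.2f} s")
```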