Latest commit: `28502b8` (verified) — "Delete App.py"

| Size | Last commit message |
| --- | --- |
| 1.52 kB | initial commit |
| 620 Bytes | Update Dockerfile |
| 501 Bytes | readme |
| 1.43 kB | Update app.py |
| 449 Bytes | docker |
| 415 Bytes | init |
| 9.63 kB | Update infer.py |
| 8.68 kB | Update inference.py |
| 238 Bytes | Update requirements.txt |
`stacked_age_model.joblib` — detected pickle imports (13):
- "sklearn.preprocessing._data.StandardScaler",
- "sklearn.pipeline.Pipeline",
- "sklearn.ensemble._forest.RandomForestClassifier",
- "sklearn.linear_model._logistic.LogisticRegression",
- "sklearn.neighbors._classification.KNeighborsClassifier",
- "numpy.ndarray",
- "sklearn.preprocessing._label.LabelEncoder",
- "joblib.numpy_pickle.NumpyArrayWrapper",
- "sklearn.tree._classes.DecisionTreeClassifier",
- "xgboost.sklearn.XGBClassifier",
- "sklearn.ensemble._stacking.StackingClassifier",
- "sklearn.svm._classes.SVC",
- "numpy.dtype"
(171 MB, last commit: "init")

`stacked_gender_model.joblib` — detected pickle imports (13):
- "sklearn.ensemble._forest.RandomForestClassifier",
- "joblib.numpy_pickle.NumpyArrayWrapper",
- "sklearn.neighbors._classification.KNeighborsClassifier",
- "sklearn.preprocessing._label.LabelEncoder",
- "sklearn.ensemble._stacking.StackingClassifier",
- "numpy.ndarray",
- "sklearn.pipeline.Pipeline",
- "sklearn.linear_model._logistic.LogisticRegression",
- "sklearn.preprocessing._data.StandardScaler",
- "sklearn.svm._classes.SVC",
- "sklearn.tree._classes.DecisionTreeClassifier",
- "numpy.dtype",
- "xgboost.sklearn.XGBClassifier"
(81.1 MB, last commit: "init")