Mateusz Paszynski · fix paths
7faaa90 · models · publish website
- 1.52 kB · initial commit
KNNInsuranceModel.joblib · Detected Pickle imports (7):
- "numpy.dtype",
- "_codecs.encode",
- "sklearn.neighbors._regression.KNeighborsRegressor",
- "numpy.core.multiarray._reconstruct",
- "numpy.ndarray",
- "joblib.numpy_pickle.NumpyArrayWrapper",
- "__main__.KNNInsuranceModel"
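The "Detected Pickle imports" lists on this page come from a static scan: a pickle stream stores only the dotted import paths of the classes it needs, so they can be enumerated without ever unpickling (and therefore without executing) the file. A minimal sketch of that idea with the stdlib's `pickletools`, using a hypothetical stand-in for this repo's wrapper class:

```python
import pickle
import pickletools

class KNNInsuranceModel:
    """Hypothetical stand-in for the wrapper class pickled in this repo."""
    pass

# Protocol 2 stores class references with the GLOBAL opcode, whose
# argument is the "module name" pair that scanners report.
blob = pickle.dumps(KNNInsuranceModel(), protocol=2)

# Walk the opcode stream without unpickling, i.e. without running code.
detected = sorted(
    arg.replace(" ", ".")
    for opcode, arg, _pos in pickletools.genops(blob)
    if opcode.name == "GLOBAL"
)
print(detected)
```

Run as a script, `detected` contains `__main__.KNNInsuranceModel`, matching the entries listed above; a scanner for protocol 4+ streams would additionally have to resolve `STACK_GLOBAL` opcodes.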
KNNInsuranceModel.joblib: 75.5 kB · publish website

NuSVRInsuranceModel.joblib · Detected Pickle imports (12):
- "sklearn.pipeline.Pipeline",
- "__main__.NuSVRInsuranceModel",
- "sklearn.compose._column_transformer.ColumnTransformer",
- "__main__.NuSVRInsuranceModel.MultiplyScaler",
- "numpy.dtype",
- "_codecs.encode",
- "numpy.core.multiarray._reconstruct",
- "numpy.float64",
- "numpy.ndarray",
- "joblib.numpy_pickle.NumpyArrayWrapper",
- "sklearn.preprocessing._data.StandardScaler",
- "sklearn.preprocessing._encoders.OneHotEncoder"
NuSVRInsuranceModel.joblib: 90.9 kB · publish website
- 329 Bytes · initial commit

RandomForestInsuranceModel.joblib · Detected Pickle imports (11):
- "sklearn.pipeline.Pipeline",
- "numpy.dtype",
- "sklearn.compose._column_transformer.ColumnTransformer",
- "_codecs.encode",
- "numpy.core.multiarray._reconstruct",
- "__main__.RandomForestInsuranceModel",
- "numpy.float64",
- "numpy.ndarray",
- "joblib.numpy_pickle.NumpyArrayWrapper",
- "sklearn.preprocessing._data.StandardScaler",
- "sklearn.preprocessing._encoders.OneHotEncoder"
RandomForestInsuranceModel.joblib: 225 kB · publish website

XGBoostInsuranceModel.joblib · Detected Pickle imports (7):
- "sklearn.impute._base.SimpleImputer",
- "numpy.dtype",
- "_codecs.encode",
- "numpy.core.multiarray._reconstruct",
- "numpy.ndarray",
- "joblib.numpy_pickle.NumpyArrayWrapper",
- "__main__.XGBoostInsuranceModel"
XGBoostInsuranceModel.joblib: 184 kB · publish website
- 4.5 kB · fix paths
- 1.1 kB · added requirements.txt
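A note on the `__main__.*` entries (`__main__.KNNInsuranceModel`, `__main__.NuSVRInsuranceModel`, and so on): they mean the wrapper classes were defined in the training script itself, so these .joblib files can only be restored in a process where a class of the same name is importable from `__main__`; otherwise `joblib.load`, which is built on pickle, fails with an AttributeError. A minimal round-trip sketch with stdlib pickle and a hypothetical stand-in class:

```python
import pickle

class KNNInsuranceModel:
    """Hypothetical stand-in for the class defined in the training script."""
    def __init__(self, n_neighbors=5):
        self.n_neighbors = n_neighbors

blob = pickle.dumps(KNNInsuranceModel(n_neighbors=3))

# Unpickling resolves the stored "__main__.KNNInsuranceModel" path by
# attribute lookup; in a fresh process that never defined this class it
# would raise AttributeError ("Can't get attribute ...") instead.
restored = pickle.loads(blob)
print(restored.n_neighbors)
```

A common way to avoid the problem is to move such wrapper classes into an importable module (say, a hypothetical `models.py`) before dumping, so the pickle records `models.KNNInsuranceModel` rather than a `__main__` path that only exists inside the original script.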