Query the table and import the results into a data-frame
val sqlStatement = """ SELECT fare_amount, passenger_count, tip_amount, tipped FROM taxi_train WHERE passenger_count > 0 AND passenger_count < 7 AND fare_amount > 0 AND fare_amount < 200 AND payment_type in ('CSH', 'CRD') AND tip_amount > 0 AND tip_amount < 25 """ val sqlResultsDF = sqlContext....
+-----------+---------------+----------+------+ |fare_amount|passenger_count|tip_amount|tipped| +-----------+---------------+----------+------+ | 13.5| 1.0| 2.9| 1.0| | 16.0| 2.0| 3.4| 1.0| | 10.5| 2.0| 1.0| 1.0| +-----------+---------------+---...
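For readers working locally, the same filter can be sketched in pandas; the column names come from the query above, but the sample values here are made up for illustration:

```python
import pandas as pd

# Tiny hypothetical sample with the columns referenced in the SQL query
taxi = pd.DataFrame({
    "fare_amount": [13.5, -1.0, 250.0, 16.0],
    "passenger_count": [1, 0, 2, 2],
    "tip_amount": [2.9, 1.0, 3.0, 3.4],
    "payment_type": ["CSH", "CRD", "CRD", "CRD"],
})

# Mirror the WHERE clause term by term
mask = (
    (taxi["passenger_count"] > 0) & (taxi["passenger_count"] < 7)
    & (taxi["fare_amount"] > 0) & (taxi["fare_amount"] < 200)
    & taxi["payment_type"].isin(["CSH", "CRD"])
    & (taxi["tip_amount"] > 0) & (taxi["tip_amount"] < 25)
)
filtered = taxi[mask]
```

Rows with zero passengers or out-of-range fares are dropped, just as in the Spark SQL version.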
CC-BY-4.0
Misc/Spark/Scala/Exploration-Modeling-and-Scoring-using-Scala.ipynb
jfindlay/Azure-MachineLearning-DataScience
2. DATA EXPLORATION AND VISUALIZATION: Plotting of target variables and features Query a table and import results into data-frame, create a local pandas data-frame, and then visualize using Jupyter's autovisualization feature NOTE: You can use the `%%local` magic to run your code locally on the Jupyter server, whic...
%%sql -q -o sqlResults SELECT fare_amount, passenger_count, tip_amount, tipped FROM taxi_train WHERE passenger_count > 0 AND passenger_count < 7 AND fare_amount > 0 AND fare_amount < 200 AND payment_type in ('CSH', 'CRD') AND tip_amount > 0 AND tip_amount < 25
Visualize using Jupyter autovisualization feature
%%local sqlResults
Once the data-frame is in the local context as a pandas data-frame, you can plot it with Python code
%%local import matplotlib.pyplot as plt %matplotlib inline # TIP BY PAYMENT TYPE AND PASSENGER COUNT ax1 = sqlResults[['tip_amount']].plot(kind='hist', bins=25, facecolor='lightblue') ax1.set_title('Tip amount distribution') ax1.set_xlabel('Tip Amount ($)') ax1.set_ylabel('Counts') plt.suptitle('') plt.show() # TIP B...
3. CREATING FEATURES, TRANSFORMATION OF FEATURES, AND DATA PREP FOR INPUT INTO MODELING FUNCTIONS Create a new feature by binning hours into traffic time buckets
/* CREATE FOUR BUCKETS FOR TRAFFIC TIMES */ val sqlStatement = """ SELECT *, CASE WHEN (pickup_hour <= 6 OR pickup_hour >= 20) THEN "Night" WHEN (pickup_hour >= 7 AND pickup_hour <= 10) THEN "AMRush" WHEN (pickup_hour >= 11 AND pickup_hour <= 15) THEN "Afternoon" WHEN (pickup_hour >= 16 AN...
res35: Long = 126050
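The CASE expression above can be mirrored in plain Python. The last branch is truncated in the text, so mapping hours 16–19 to "PMRush" is an assumption based on the other bucket names:

```python
def traffic_time_bucket(pickup_hour: int) -> str:
    """Assign a pickup hour (0-23) to one of four traffic time buckets,
    mirroring the SQL CASE expression (PMRush bounds assumed)."""
    if pickup_hour <= 6 or pickup_hour >= 20:
        return "Night"
    if 7 <= pickup_hour <= 10:
        return "AMRush"
    if 11 <= pickup_hour <= 15:
        return "Afternoon"
    return "PMRush"  # remaining hours 16-19
```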
Indexing and one-hot encoding of categorical features. Here we transform only four variables, which are character strings, as examples. Other variables, such as weekday, which are represented by numerical values, can also be indexed as categorical variables. For indexing we used StringIndexer, and for one-hot encoding...
// HERE WE CREATE INDEXES, AND ONE-HOT ENCODED VECTORS FOR SEVERAL CATEGORICAL FEATURES val starttime = Calendar.getInstance().getTime() val stringIndexer = new StringIndexer().setInputCol("vendor_id").setOutputCol("vendorIndex").fit(taxi_df_train_with_newFeatures) val indexed = stringIndexer.transform(taxi_df_train_w...
Time taken to run the above cell: 4 seconds.
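The index-then-encode pattern above has a close pandas analogue, sketched here on a hypothetical `vendor_id` column: `factorize` plays the role of StringIndexer and `get_dummies` the role of the one-hot encoder:

```python
import pandas as pd

df = pd.DataFrame({"vendor_id": ["CMT", "VTS", "CMT"]})

# Index step (like StringIndexer): map each distinct string to an integer code
df["vendorIndex"], categories = pd.factorize(df["vendor_id"])

# One-hot step (like OneHotEncoder): expand the category into indicator columns
onehot = pd.get_dummies(df["vendor_id"], prefix="vendor")
```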
Split the data-set into training and test sets. Add a random number (between 0 and 1) to each row (in a "rand" column). The rand column can be used to select cross-validation folds during training
val starttime = Calendar.getInstance().getTime() val samplingFraction = 0.25; val trainingFraction = 0.75; val testingFraction = (1-trainingFraction); val seed = 1234; val encodedFinalSampledTmp = encodedFinal.sample(withReplacement = false, fraction = samplingFraction, seed = seed) val sampledDFcount = encodedFinalS...
Time taken to run the above cell: 2 seconds.
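The sample-then-split logic can be sketched with pandas and NumPy; the fractions and seed mirror the Scala cell, while the data itself is a made-up placeholder:

```python
import numpy as np
import pandas as pd

rng = np.random.RandomState(1234)
df = pd.DataFrame({"x": np.arange(10000)})  # stand-in for encodedFinal

sampling_fraction = 0.25
training_fraction = 0.75

# Down-sample without replacement, then attach a rand column per row
sampled = df.sample(frac=sampling_fraction, random_state=1234)
sampled = sampled.assign(rand=rng.rand(len(sampled)))

# The rand column drives the train/test split (and could drive CV folds later)
train = sampled[sampled["rand"] <= training_fraction]
test = sampled[sampled["rand"] > training_fraction]
```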
Specify the target (dependent) variable and the features to be used for training. Create indexed or one-hot encoded training and testing input LabeledPoint RDDs or Data-Frames.
val starttime = Calendar.getInstance().getTime() // MAP NAMES OF FEATURES AND TARGETS FOR CLASSIFICATION AND REGRESSION PROBLEMS. val featuresIndOneHot = List("paymentVec", "vendorVec", "rateVec", "TrafficTimeBinsVec", "pickup_hour", "weekday", "passenger_count", "trip_time_in_secs", "trip_distance", "fare_amount").ma...
Time taken to run the above cell: 4 seconds.
Automatically categorizing and vectorizing features and target for use as input in tree-based modeling functions in Spark ML. Properly categorize the target and features for use in tree-based modeling functions in Spark ML: 1. The target for binary classification (tipped = 0/1) is binarized based on a threshold of 0.5. 2. Features a...
// CATEGORIZE FEATURES AND BINARIZE TARGET FOR BINARY CLASSIFICATION PROBLEM // //Train data val indexer = new VectorIndexer().setInputCol("features").setOutputCol("featuresCat").setMaxCategories(32) val indexerModel = indexer.fit(indexedTRAINbinaryDF) val indexedTrainwithCatFeat = indexerModel.transform(indexedTRAINb...
indexedTESTwithCatFeat: org.apache.spark.sql.DataFrame = [label: double, features: vector, featuresCat: vector]
4. BINARY CLASSIFICATION MODEL TRAINING: Predicting tip or no tip (target: tipped = 1/0) Create a Logistic regression model using Spark ML's LogisticRegression function, save the model in blob, and predict on test data
// Create Logistic regression model val lr = new LogisticRegression().setLabelCol("tipped").setFeaturesCol("features").setMaxIter(10).setRegParam(0.3).setElasticNetParam(0.8) val lrModel = lr.fit(OneHotTRAIN) // Predict on test data-set val predictions = lrModel.transform(OneHotTEST) // Select BinaryClassificationEv...
Example: Load saved model and score test data-set
val starttime = Calendar.getInstance().getTime() val savedModel = org.apache.spark.ml.classification.LogisticRegressionModel.load(filename) println(s"Coefficients: ${savedModel.coefficients} Intercept: ${savedModel.intercept}") // score the model on test data. val predictions = savedModel.transform(OneHotTEST).select...
ROC on test data = 0.9827381497557599
Example: Use Python on local pandas data-frames to plot ROC curve
%%sql -q -o sqlResults select tipped, probability from testResults %%local %matplotlib inline from sklearn.metrics import roc_curve,auc sqlResults['probFloat'] = sqlResults.apply(lambda row: row['probability'].values()[0][1], axis=1) predictions_pddf = sqlResults[["tipped","probFloat"]] #predictions_pddf = sqlResults...
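A self-contained sketch of the local ROC plotting step, using scikit-learn's `roc_curve`/`auc` on a tiny hypothetical label/score pair rather than the `sqlResults` frame:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this also runs without a display
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc

# Hypothetical true labels and predicted probabilities of the positive class
y_true = [0, 0, 1, 1]
y_score = [0.1, 0.4, 0.35, 0.8]

fpr, tpr, _ = roc_curve(y_true, y_score)
roc_auc = auc(fpr, tpr)

plt.plot(fpr, tpr, label=f"ROC (AUC = {roc_auc:.2f})")
plt.plot([0, 1], [0, 1], "k--")  # chance line
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
```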
Create Random Forest classification model using Spark ML RandomForestClassifier function, and evaluate model on test-data
val starttime = Calendar.getInstance().getTime() // Random Forest Classifier with Spark ML val rf = new RandomForestClassifier().setLabelCol("labelBin").setFeaturesCol("featuresCat").setNumTrees(10).setSeed(1234) // Fit the model val rfModel = rf.fit(indexedTRAINwithCatFeatBinTarget) val predictions = rfModel.transfo...
ROC on test data = 0.9847103571552683
Create Gradient boosting tree classification model using MLlib's GradientBoostedTrees function, and evaluate model on test-data
// Train a GBT Classification model using MLlib and LabeledPoint val starttime = Calendar.getInstance().getTime() val boostingStrategy = BoostingStrategy.defaultParams("Classification") boostingStrategy.numIterations = 20 boostingStrategy.treeStrategy.numClasses = 2 boostingStrategy.treeStrategy.maxDepth = 5 boostingSt...
Area under ROC curve: 0.9846895479241554
5. REGRESSION MODEL TRAINING: Predicting tip amount Create Linear Regression model using Spark ML LinearRegression function, save model and evaluate model on test-data
// Create Regularized Linear Regression model using Spark ML function and data-frame val starttime = Calendar.getInstance().getTime() val lr = new LinearRegression().setLabelCol("tip_amount").setFeaturesCol("features").setMaxIter(10).setRegParam(0.3).setElasticNetParam(0.8) // Fit the model using data-frame val lrMod...
Time taken to run the above cell: 13 seconds.
EXAMPLE: Load a saved LinearRegression model from blob and score test data-set
val starttime = Calendar.getInstance().getTime() val savedModel = org.apache.spark.ml.regression.LinearRegressionModel.load(filename) println(s"Coefficients: ${savedModel.coefficients} Intercept: ${savedModel.intercept}") // score the model on test data. val predictions = savedModel.transform(OneHotTEST).select("tip_...
R-sqr on test data = 0.5960320470835743
Example: Query test results as data-frame and visualize using Jupyter autoviz & Python matplotlib
%%sql -q -o sqlResults select * from testResults %%local sqlResults
Create plots using Python matplotlib
%%local sqlResults %matplotlib inline import numpy as np ax = sqlResults.plot(kind='scatter', figsize = (6,6), x='tip_amount', y='prediction', color='blue', alpha = 0.25, label='Actual vs. predicted'); fit = np.polyfit(sqlResults['tip_amount'], sqlResults['prediction'], deg=1) ax.set_title('Actual vs. Predicted Tip Am...
Create Gradient boosting tree regression model using Spark ML GBTRegressor function, and evaluate model on test-data
val starttime = Calendar.getInstance().getTime() // Train a GBT Regression model. val gbt = new GBTRegressor().setLabelCol("label").setFeaturesCol("featuresCat").setMaxIter(10) val gbtModel = gbt.fit(indexedTRAINwithCatFeat) // Make predictions. val predictions = gbtModel.transform(indexedTESTwithCatFeat) // Compute...
Test R-sqr is: 0.7667229448874853
6. ADVANCED MODELING UTILITIES: In this section, we show ML utilities that are frequently used for model optimization. We show three different ways to optimize ML models using parameter sweeping: 1. Split data into train & validation sets, optimize the model using hyper-parameter sweeping on the training set and evaluation on ...
val starttime = Calendar.getInstance().getTime() // Rename tip_amount as label val OneHotTRAINLabeled = OneHotTRAIN.select("tip_amount","features").withColumnRenamed(existingName="tip_amount",newName="label") val OneHotTESTLabeled = OneHotTEST.select("tip_amount","features").withColumnRenamed(existingName="tip_amount...
Test R-sqr is: 0.6229443508226747
Optimize model using cross-validation and hyper-parameter sweeping, using Spark ML's CrossValidator function (Binary Classification)
val starttime = Calendar.getInstance().getTime() // Create data-frames with properly labeled columns for use with train/test split val indexedTRAINwithCatFeatBinTargetRF = indexedTRAINwithCatFeatBinTarget.select("labelBin","featuresCat").withColumnRenamed(existingName="labelBin",newName="label").withColumnRenamed(exis...
Time taken to run the above cell: 35 seconds.
Optimize the model using custom cross-validation and parameter-sweeping code that can utilize any ML function and parameter set (Linear Regression). Then identify the best model parameters based on highest accuracy, create the final model, and evaluate it on test data. Save the model in blob. Load the model ...
val starttime = Calendar.getInstance().getTime() // Define parameter grid and number of folds val paramGrid = new ParamGridBuilder().addGrid(rf.maxDepth, Array(5,10)).addGrid(rf.numTrees, Array(10,25,50)).build() val nFolds = 3 val numModels = paramGrid.size val numParamsinGrid = 2 // Specify the number of categorie...
Time taken to run the above cell: 61 seconds.
Load best RF model from blob and score test file
val savedRFModel = RandomForestModel.load(sc, filename) val labelAndPreds = indexedTESTreg.map { point => val prediction = savedRFModel.predict(point.features) ( prediction, point.label ) } val test_r...
test_rsqr: Double = 0.7847314211279889
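The R-squared value the Scala cell computes from (prediction, label) pairs can be sketched in a few lines of plain Python; the example pairs below are made up:

```python
def r_squared(pairs):
    """R-squared from (prediction, label) pairs: 1 - SS_res / SS_tot."""
    labels = [y for _, y in pairs]
    mean_y = sum(labels) / len(labels)
    ss_res = sum((y - p) ** 2 for p, y in pairs)   # residual sum of squares
    ss_tot = sum((y - mean_y) ** 2 for y in labels)  # total sum of squares
    return 1.0 - ss_res / ss_tot

# Hypothetical scored test points: (prediction, label)
pairs = [(2.1, 2.0), (3.9, 4.0), (6.2, 6.0)]
r2 = r_squared(pairs)
```

A perfect model (predictions equal to labels) gives R-squared of exactly 1.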
7. AUTOMATICALLY CONSUMING SPARK-BUILT ML MODELS. We have previously published a description and code (pySpark) showing how one can automatically load and score new data-sets with ML models built in Spark and saved in Azure blobs. We do not repeat that description here, but simply point users to the previously publ...
/* GET TIME TO RUN THE NOTEBOOK */ val finalTime = Calendar.getInstance().getTime() val totalTime = ((finalTime.getTime() - beginningTime.getTime())/1000).toString; println("Time taken to run the above cell: " + totalTime + " seconds.");
Time taken to run the above cell: 295 seconds.
Reflect Tables into SQLAlchemy ORM
# Python SQL toolkit and Object Relational Mapper import sqlalchemy from sqlalchemy.ext.automap import automap_base from sqlalchemy.orm import Session from sqlalchemy import create_engine, func from sqlalchemy import Column, Integer, String, Float engine = create_engine("sqlite:///Resources/hawaii.sqlite") conn=engine....
ADSL
Climate_starter.ipynb
lovenalee/sqlalchemy-challenge
Exploratory Climate Analysis
# Design a query to retrieve the last 12 months of precipitation data and plot the results # Calculate the date 1 year ago from the last data point in the database last_date = session.query(Measurement.date).order_by(Measurement.date.desc()).first()[0] last_date = dt.datetime.strptime(last_date, "%Y-%m-%d") last_year_...
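The "date 1 year ago" step above boils down to parsing the last date string and subtracting a `timedelta`; a minimal sketch with a hypothetical date value:

```python
import datetime as dt

# Hypothetical last date string, as it would come back from the query
last_date = dt.datetime.strptime("2017-08-23", "%Y-%m-%d")

# One year back (365 days; this span contains no Feb 29)
one_year_ago = last_date - dt.timedelta(days=365)
```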
Bonus Challenge Assignment
# This function called `calc_temps` will accept start date and end date in the format '%Y-%m-%d' # and return the minimum, average, and maximum temperatures for that range of dates def calc_temps(start_date, end_date): """TMIN, TAVG, and TMAX for a list of dates. Args: start_date (string): A date ...
Generating random surfaces - Perez method. Often in tribology we want to generate random surfaces with particular properties; we can use these as roughness in simulations to investigate how our contact changes with specific roughness parameters. Slippy contains several methods for making randomly rough surfaces. T...
%matplotlib inline import slippy.surface as s # surface generation and manipulation import numpy as np # numerical functions import scipy.stats as stats # statistical distributions import matplotlib.pyplot as plt # plotting np.random.seed(1)
MIT
examples/Generating random surfaces - perez method.ipynb
FrictionTribologyEnigma/SlipPY
RandomPerezSurface. The RandomPerezSurface class implements the method described in the reference below: Francesc Pérez-Ràfols, Andreas Almqvist, "Generating randomly rough surfaces with given height probability distribution and power spectrum," Tribology International, Volume 131, 2019, Pages 591-604, ISSN 0301-679X, https://doi...
beta = 10 # the drop off length of the acf sigma = 1 # the roughness of the surface qx = np.arange(-128,128) qy = np.arange(-128,128) Qx, Qy = np.meshgrid(qx, qy) Cq = sigma**2*beta/(2*np.pi*(beta**2+Qx**2+Qy**2)**0.5) # the PSD of the surface Cq = np.fft.fftshift(Cq) plt.imshow(Cq) plt.colorbar()
Next we will generate a height probability density function:
a = 0.5 height_distribution = stats.lognorm(a) x = np.linspace(height_distribution.ppf(0.01), height_distribution.ppf(0.99), 100) # use the lognorm's own ppf (the original cell used stats.gamma.ppf, which does not match the chosen distribution) plt.plot(x, height_distribution.pdf(x)) _ = plt.gca().set_title("Set height probability density function")
Now we can make a surface realisation:
my_surface = s.RandomPerezSurface(target_psd = Cq, height_distribution=height_distribution, grid_spacing=1, generate=True) _ = my_surface.show(['profile', 'psd', 'histogram'], figsize = (14,4))
We can make another random surface with normally distributed heights as follows:
np.random.seed(1) normal_distribution = stats.norm() my_surface = s.RandomPerezSurface(target_psd = Cq, height_distribution=normal_distribution, grid_spacing=1, generate=True, exact='heights') _ = my_surface.show(['profile', 'psd', 'histogram'], figsiz...
As shown, the surfaces do not perfectly fit the PSD, but the height distribution is well represented. A better fit to the PSD can be achieved by using the PSD estimate from the original paper:
np.random.seed(1) normal_distribution = stats.norm() my_surface = s.RandomPerezSurface(target_psd = Cq, height_distribution=normal_distribution, grid_spacing=1, generate=True, exact='psd') _ = my_surface.show(['profile', 'psd', 'histogram'], figsize = ...
*Note: You are currently reading this using Google Colaboratory which is a cloud-hosted version of Jupyter Notebook. This is a document containing both text cells for documentation and runnable code cells. If you are unfamiliar with Jupyter Notebook, watch this 3-minute introduction before starting this challenge: http...
# import libraries (you may add additional imports but you may not have to) import numpy as np import pandas as pd from scipy.sparse import csr_matrix from sklearn.neighbors import NearestNeighbors import matplotlib.pyplot as plt # get data files !wget https://cdn.freecodecamp.org/project-data/books/book-crossings.zip ...
MIT
book_recommendation_knn.ipynb
PratikChowdhury/Book-Recommendation-Engine
Use the cell below to test your function. The `test_book_recommendation()` function will inform you if you passed the challenge or need to keep trying.
books = get_recommends("Where the Heart Is (Oprah's Book Club (Paperback))") print(books) def test_book_recommendation(): test_pass = True recommends = get_recommends("Where the Heart Is (Oprah's Book Club (Paperback))") if recommends[0] != "Where the Heart Is (Oprah's Book Club (Paperback))": test_pass = Fa...
["Where the Heart Is (Oprah's Book Club (Paperback))", [["I'll Be Seeing You", 0.8016211], ['The Weight of Water', 0.77085835], ['The Surgeon', 0.7699411], ['I Know This Much Is True', 0.7677075], ['The Lovely Bones: A Novel', 0.7234864]]] You passed the challenge! 🎉🎉🎉🎉🎉
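The recommendation step behind `get_recommends` can be sketched with cosine-metric nearest neighbors on a tiny, made-up book-by-user rating matrix (the real notebook builds this matrix from the Book-Crossings data):

```python
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.neighbors import NearestNeighbors

# Hypothetical rating matrix: rows are books, columns are users
ratings = csr_matrix(np.array([
    [5.0, 4.0, 0.0],   # book 0
    [5.0, 5.0, 0.0],   # book 1: rated like book 0
    [0.0, 0.0, 5.0],   # book 2: rated by a different user
]))

model = NearestNeighbors(metric="cosine", algorithm="brute")
model.fit(ratings)

# Query with book 0's row; the nearest neighbor is the book itself (distance ~0)
distances, indices = model.kneighbors(ratings[0], n_neighbors=2)
```

Books rated similarly by the same users end up at small cosine distance, which is what the reported similarity scores reflect.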
import pandas as pd df=pd.read_csv('/content/Data_for_UCI_named.csv') df.head() from sklearn.preprocessing import LabelEncoder encoder=LabelEncoder() df['stabf']=encoder.fit_transform(df['stabf']) df.head() df.info() df.describe() import matplotlib.pyplot as plt plt.figure(figsize=(12,10)) plt.style.use('ggplot') df.p...
MIT
Regression_electrical.ipynb
CrucifierBladex/electrical_regression
Linear Regression In this notebook we make predictions on the "Advertising" data set using linear regression. The goal is to predict sales revenue ("Sales") from advertising spend (in the areas "TV", "Radio", and "Newspaper"). Loading the Advertising data set First we load the d...
import pandas as pd data_raw = pd.read_csv("data/advertising.csv") data_raw.head()
MIT
1_Lineare_Regression.ipynb
spielmann-cloud/Machine-Learning-Course-2021
The `head` function shows only the first 5 data points in the DataFrame. To find out how many data points the DataFrame contains, we look at the `shape` attribute.
rows, cols = data_raw.shape print("Anzahl Zeilen:", rows) print("Anzahl Spalten:", cols)
Anzahl Zeilen: 200 Anzahl Spalten: 5
The first column contains only a running index and is not needed for the prediction, so it is removed.
data = data_raw.drop(columns=['index']) data.head()
Next, we visualize the data points using the `matplotlib` library. For this we create a plot showing the `TV` data on the x-axis and the `sales` data on the y-axis.
import matplotlib.pyplot as plt plt.figure(figsize=(16, 8)) plt.scatter(data['TV'], data['sales']) plt.xlabel("TV Werbebudget (€)") plt.ylabel("Sales (€)") plt.show()
Training the linear regression As a first model we train a linear regression with a single feature; as the feature we choose the `TV` column. Before we begin training, we split the available data into training and test data, with the training data containing 80% of the original data...
train_data = data.sample(frac=0.8, random_state=0) test_data = data.drop(train_data.index) # Daten welche nicht in train_data sind print('Shape der Trainingsdaten:', train_data.shape) print('Shape der Testdaten:', test_data.shape)
Shape der Trainingsdaten: (160, 4) Shape der Testdaten: (40, 4)
Next, we train a linear regression on the training data with the feature `TV` and the label `sales`. For this we create: 1. A DataFrame with the feature `TV`, which we call `X_train`. 2. A Series with the label, which we call `y_train`. To obtain `X_train` as a DataFrame and not as a Series...
X_series = train_data['TV'] # nur TV selektiert print("Datentyp von X_series:", type(X_series)) X_df = train_data[['TV']] # Liste mit TV als einzigem Element print("Datentyp von X_df:", type(X_df)) X_train = X_df # Die Features müssen als DataFrame vorliegen und nicht als Series y_train = train_data['sales'] print("Da...
Datentyp von X_series: <class 'pandas.core.series.Series'> Datentyp von X_df: <class 'pandas.core.frame.DataFrame'> Datentyp von y_train: <class 'pandas.core.series.Series'>
Now comes the actual training of the model:
from sklearn.linear_model import LinearRegression reg = LinearRegression() reg.fit(X_train, y_train)
The linear regression is now trained and the model weights are available in the `reg` variable. We can now print the regression line.
print(f"Regressionsgerade: y = {reg.intercept_} + {reg.coef_[0]}*TV")
Regressionsgerade: y = 6.745792674540394 + 0.04950397743349263*TV
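The printed line can be checked by hand: plugging a TV budget into intercept + coefficient × TV reproduces the prediction shown a few cells below (the budget value 69.2 is the first training point used there):

```python
# Coefficients as printed by the fitted model above
intercept = 6.745792674540394
coef = 0.04950397743349263

tv_budget = 69.2  # TV advertising budget of the first training data point
prediction = intercept + coef * tv_budget  # ~10.17
```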
With the trained model we can now make predictions on individual data points.
dataPoint = X_train.iloc[0] # erster Datenpunkt aus den Trainingsdaten prediction = reg.predict([dataPoint]) # predict-Methode erwartet Liste von Datenpunkten print(f"Bei einem TV-Werbebudget von {dataPoint[0]}€, werden {prediction[0]}€ Umsatz erzielt.")
Bei einem TV-Werbebudget von 69.2€, werden 10.171467912938084€ Umsatz erzielt.
To visualize what the trained regression line looks like, we use the model to make predictions on the training data points.
prediction_train = reg.predict(X_train) # predictions on all training data at once plt.figure(figsize=(16, 8)) plt.scatter(data['TV'], data['sales']) # data points plt.plot(X_train, prediction_train, 'r') # regression line plt.xlabel("TV Werbebudget (€)") plt.ylabel("Sales (€)") plt.show()
Testing the regression model To assess the quality of the trained regression model, we use it to make predictions on the test data and compute the MSE.
from sklearn.metrics import mean_squared_error X_test = test_data[['TV']] # X_test must be a DataFrame y_test = test_data['sales'] # y_test must be a Series prediction_test = reg.predict(X_test) mse_test = mean_squared_error(y_test, prediction_test) print("Mean squared error (MSE) auf Testdaten:", mse_...
Mean squared error (MSE) auf Testdaten: 14.41037265386388
Multivariate linear regression We now extend the linear regression by additionally using the two features `radio` and `newspaper`.
X_train = train_data[["TV", "radio", "newspaper"]] y_train = train_data['sales'] reg_all = LinearRegression() reg_all.fit(X_train, y_train) print(f"Regression: Y = {reg_all.intercept_} + {reg_all.coef_[0]}*TV + {reg_all.coef_[1]}*radio + {reg_all.coef_[2]}*newspaper")
Regression: Y = 2.9008471054251608 + 0.04699763711005833*TV + 0.1822877768933094*radio + -0.0012975074726833402*newspaper
Finally, we use the new model to again make predictions on the test data.
X_test = test_data[["TV", "radio", "newspaper"]] y_test = test_data['sales'] predictions = reg_all.predict(X_test) mse = mean_squared_error(y_test, predictions) print("Mean squared error (MSE) auf Testdaten: %.2f" % mse)
Mean squared error (MSE) auf Testdaten: 3.16
EG01 Prototype Definitions
init_uox_source = {'name' : 'SourceInitUOX', 'config' : {'Source' : {'outcommod' : 'InitUOX', 'outrecipe' : 'UOX_no232', 'inventory_size' : init_total_assem_size } ...
BSD-3-Clause
input/inputs.ipynb
opotowsky/opra_sims
Regions and Institutions 1. Init LWR Fleet (Deploy Inst)
init_lwr_prototypes = ['LWR' for x in range(0, n_init_rxtrs)] n_builds = [1 for x in range(0, n_init_rxtrs)] # staggering build times over first 18 timesteps so that reactors # don't all cycle together build_times = [x + 1 for x in range(0, 17) for y in range(0,6)] del build_times[-3:-1] # Lifetimes borrowed from prev...
2. EG01 FC Facilities: Manager Inst
eg1_fc_prototypes = ['SourceNatU', 'SourceNonIsos', 'SourceAddIsos', 'Enrichment', 'StorageDepU', 'UOXStrNon', 'UOXMixNon', 'UOXStrAdd', 'UOXMixAdd', 'LWR', 'UOXCool', 'UOXStr', 'Waste' ] eg1_fc_inst = {'name' : 'FCInstEG01', 'initialfacil...
3. Growth Regions: Pick Flat or 1% Growth
# 1% growth per year month_grow_rate = 0.01 / 12 exp_str = '100000 ' + str(month_grow_rate) + ' 0' exp_func = {'piece' : [{'start' : 18, 'function' : {'type' : 'exponential', 'params' : exp_str} } ] } growth_region = {'GrowthRegion' : {'gr...
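The monthly rate conversion above can be sanity-checked: dividing the 1%/year rate by 12 and compounding it monthly lands just slightly above 1% per year. A small sketch reproducing the parameter string the growth function consumes:

```python
# 1% growth per year, spread over 12 monthly timesteps (as in the cell above)
yearly_rate = 0.01
month_grow_rate = yearly_rate / 12
exp_str = '100000 ' + str(month_grow_rate) + ' 0'

# Compounding the monthly rate over a year: (1 + r/12)^12, a hair above 1.01
annual_factor = (1 + month_grow_rate) ** 12
```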
Recipes. Here: 1. Depleted U 2. Natural U. 100 ppt Init U232 Additive Recipes (in recipe_100ppt.py): 1. NonAdditive U Isotopes (U234) 2. Additive U Isotopes (U232, U233, U234) 3. Almost UOX NonAdditive Enr Ratio 4. Almost UOX Additive Enr Ratio 5. UOX without Additive 6. UOX with Additive 7. Spent UOX from...
dep_u = {'name' : 'DU', 'basis' : 'mass', 'nuclide' : [{'id' : 'U235', 'comp' : 0.0025}, {'id' : 'U238', 'comp' : 0.9975}] } nat_u = {'name' : 'NU', 'basis' : 'mass', 'nuclide' : [{'id' : 'U235', 'comp' : 0.007110}, {'id' : 'U238'...
Main Input File
control = {'duration' : sim_duration, 'startmonth' : 1, 'startyear' : 2022, #'dt' : 86400, #'explicit_inventory' : True } def run_sim(filebase, sim): in_file = filebase + '.py' sim_file = '../output/' + filebase + '.sqlite' with open(in_file, 'w') as...
EG01
archetypes = {'spec' : [{'lib' : 'cycamore', 'name' : 'Source'}, {'lib' : 'cycamore', 'name' : 'Enrichment'}, {'lib' : 'cycamore', 'name' : 'Mixer'}, {'lib' : 'cycamore', 'name' : 'Reactor'}, {'lib' : 'cycamore', 'name' : 'S...
EG01 --> 23 Prototype Definitions
# remember that mixer facilities have fake ratios as of 3/9/22 # only making fast fuel without additive, because the additive doesn't make sense for MOX # (prototypes exist to have a split, but creating both streams does not work as expected) from eg23_facilities import (eg23_sink, non_lwr_cool, add_lwr_cool, lwr_sep, ...
Regions and Institutions
eg1_23_fc_prototypes = ['SourceNatU', 'SourceNonIsos', 'SourceAddIsos', 'Enrichment', 'StorageDepU', 'UOXStrNon', 'UOXMixNon', 'UOXStrAdd', 'UOXMixAdd', 'LWR', 'UOXCoolNon', 'UOXCoolAdd', 'Waste', 'SFR' ] eg1_23_fc_inst = {'name' : 'FCInstEG01-23...
Ramp-up Approach, EG01 Only (so far): 10% additive fuel in refuel for 3 cycles, 50% for the next 3, and 100% after that.
from eg01_facilities import (store_pct_no232, store_pct_232, mix_50pct232, mix_10pct232) # LWR prototype for partial additive availability/slow utility uptake upon introduction intro_50 = intro_time + 3 * cycle_time intro_100 = intro_50 + 3 * cycle_time lwr_ramp = {'name' : 'LWR', ...
Set of Simulations. List of Simulation Scenarios (24): if this is done with flat power plus 1% growth in power, it doubles to 48 simulations. EG Scenarios: 1. 01, 2. 01-23, 3. 01-29. Init Additive Concentration: 1. 100ppt, 2. ???pp?. Date: 1. long before transition, 2. closer to transition. Rate: 1. full availability, 2. r...
# File Names:
for eg in ['01', '23', '29']:
    for ppx in ['100ppt', '100ppb']:
        for date in ['05yr', '15yr']:
            for rate in ['full', 'ramp']:
                file = eg + '_' + ppx + '_' + date + '_' + rate
                print(file)
_____no_output_____
BSD-3-Clause
input/inputs.ipynb
opotowsky/opra_sims
The ``DateRangeSlider`` widget allows selecting a date range using a slider with two handles.For more information about listening to widget events and laying out widgets refer to the [widgets user guide](../../user_guide/Widgets.ipynb). Alternatively you can learn how to build GUIs by declaring parameters independently...
date_range_slider = pn.widgets.DateRangeSlider(
    name='Date Range Slider',
    start=dt.datetime(2017, 1, 1), end=dt.datetime(2019, 1, 1),
    value=(dt.datetime(2017, 1, 1), dt.datetime(2018, 1, 10))
)

date_range_slider
_____no_output_____
BSD-3-Clause
examples/reference/widgets/DateRangeSlider.ipynb
slamer59/panel
``DateRangeSlider.value`` returns a tuple of datetime values that can be read out and set like other widgets:
date_range_slider.value
_____no_output_____
BSD-3-Clause
examples/reference/widgets/DateRangeSlider.ipynb
slamer59/panel
Controls

The `DateRangeSlider` widget exposes a number of options which can be changed from both Python and JavaScript. Try out the effect of these parameters interactively:
pn.Row(date_range_slider.controls(jslink=True), date_range_slider)
_____no_output_____
BSD-3-Clause
examples/reference/widgets/DateRangeSlider.ipynb
slamer59/panel
3. Sale price prediction

The aim of this part is to predict the sale price of houses as accurately as possible with a multivariate linear regression.

Setup

Import the required libraries and load the fitted and cleaned data from the first notebook.
# load the libraries
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from scipy import stats
from datetime import datetime, date, time, timedelta

from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, ...
_____no_output_____
MIT
03_sale_price_prediction.ipynb
Octodon-D/project_house_price
Split dataframe into train and test sets

We do not know the true sale price until a house has been successfully sold. In order to test the model before applying it to new and unknown data, we have to split the data into a train and a test dataset. While building the model I only work with the train dataset and keep th...
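A minimal sketch of such a split with scikit-learn's `train_test_split` (toy data standing in for the house features; the notebook's actual split ratio and random seed may differ):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# toy stand-ins for the feature matrix and the sale prices
X = np.arange(200).reshape(100, 2)
y = np.arange(100)

# hold out 25% of the rows; the model never sees them during training
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

print(X_train.shape, X_test.shape)  # (75, 2) (25, 2)
```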
# remove columns because they do not provide prognostic information
#df.drop(['sqft_above', 'sqft_basement', 'yr_renovated', 'lat', 'long', 'yr_sale',
#         'mo_sale', 'sqft_price', 'sqft_lot_price'], axis=1, inplace=True)

# define descriptive variables
all_features = ['bedrooms', 'bathrooms', 'sqft_living', 'sqft...
X_train (features for the model to learn from): (16197, 15) y_train (labels for the model to learn from): (16197,) X_test (features to test the model's accuracy against): (5399, 15) y_test (labels to test the model's accuracy with): (5399,)
MIT
03_sale_price_prediction.ipynb
Octodon-D/project_house_price
Correlations with price

To get an idea of which features are most interesting for the model, we take another look at the Pearson correlation coefficients.
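The `corrwith` pattern can be illustrated on synthetic data (the column names here are stand-ins for the notebook's features, not its real data):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({"sqft_living": rng.normal(size=200)})
df["price"] = 3 * df["sqft_living"] + rng.normal(scale=0.5, size=200)
df["bedrooms"] = rng.normal(size=200)  # unrelated noise column

# Pearson correlation of every column with price, strongest first
corr_price = df.corrwith(df["price"]).sort_values(ascending=False)
print(corr_price.round(2))
```

The first entry is always `price` itself (correlation 1.0), which is why the notebook drops it with `corr_price[1:]`.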
# combine X_train and y_train again to use only the training set
X_training = X_train.merge(y_train, left_index=True, right_index=True)
corr_price = X_training.corrwith(X_training.price).sort_values(ascending=False)
corr_price = corr_price[1:]  # exclude price

# plot correlation with price
fig, ax = plt.subplots(figsiz...
_____no_output_____
MIT
03_sale_price_prediction.ipynb
Octodon-D/project_house_price
Multiple Linear Regression

Above we see that some features are more correlated with the price than others. To test different feature combinations, three different models will be built: one with all features (15 features), one with only the features that have a Pearson correlation above 0.5 (4 features), and one with all fe...
# training of the model

# first model with all features
model1 = LinearRegression()
model1.fit(X_train, y_train)

# second model with features with a correlation higher than 0.5
# determine variables to pass into the model
model2 = LinearRegression()
features2 = ['sqft_living', 'grade', 'sqft_living15', 'bathrooms']
mo...
_____no_output_____
MIT
03_sale_price_prediction.ipynb
Octodon-D/project_house_price
Evaluation

The R^2 and the adjusted R^2 indicate the percentage of variance of the target variable (sale price) explained by the model. Adjusted R^2 is a modified version of R^2 that has been adjusted for the number of explanatory variables. It penalises the addition of unnecessary variables and allows compari...
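The adjusted R^2 computation used below can be factored into a small helper (a sketch; `n` is the number of observations and `p` the number of explanatory variables):

```python
def adjusted_r2(r2, n, p):
    """Adjusted R^2: penalizes R^2 for the number of explanatory variables p."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# e.g. with the test-set dimensions reported below (5399 rows, 15 features),
# starting from the rounded R^2 of 0.5987
print(round(adjusted_r2(0.5987, 5399, 15), 4))
```

Note that with a small `p` relative to `n`, the penalty is tiny; it grows as variables are added.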
# evaluation model 1
print('R^2: ', model1.score(X_test, y_test).round(4))
print('adj. R^2: ', (1-(1-model1.score(X_test, y_test))*(X_test.shape[0]-
      1)/(X_test.shape[0]-X_test.shape[1]-1)).round(4))

# evaluation model 2
print('R^2: ', model2.score(X_test[features2], y_test).round(4))
print('adj. R^2: ', (1-(1-model2.sc...
R^2: 0.5987 adj. R^2: 0.5979
MIT
03_sale_price_prediction.ipynb
Octodon-D/project_house_price
Predicting Boston Housing Prices Using XGBoost in SageMaker (Batch Transform)

_Deep Learning Nanodegree Program | Deployment_

---

As an introduction to using SageMaker's High Level Python API we will look at a relatively simple problem. Namely, we will use the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/da...
%matplotlib inline

import os

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

from sklearn.datasets import load_boston
import sklearn.model_selection
_____no_output_____
MIT
Tutorials/Boston Housing - XGBoost (Batch Transform) - High Level.ipynb
giaminhhoang/MLND-sagemaker-deployment
In addition to the modules above, we need to import the various bits of SageMaker that we will be using.
import sagemaker
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
from sagemaker.predictor import csv_serializer

# This is an object that represents the SageMaker session that we are currently operating in. This
# object contains some useful information that we will ...
_____no_output_____
MIT
Tutorials/Boston Housing - XGBoost (Batch Transform) - High Level.ipynb
giaminhhoang/MLND-sagemaker-deployment
Step 1: Downloading the data

Fortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward.
boston = load_boston()
_____no_output_____
MIT
Tutorials/Boston Housing - XGBoost (Batch Transform) - High Level.ipynb
giaminhhoang/MLND-sagemaker-deployment
Step 2: Preparing and splitting the data

Given that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets.
# First we package up the input data and the target variable (the median value) as pandas dataframes. This
# will make saving the data to a file a little easier later on.
X_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names)
Y_bos_pd = pd.DataFrame(boston.target)

# We split the dataset into 2/3 training ...
_____no_output_____
MIT
Tutorials/Boston Housing - XGBoost (Batch Transform) - High Level.ipynb
giaminhhoang/MLND-sagemaker-deployment
Step 3: Uploading the data files to S3

When a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. This means that we need to upload the data we want to use for training to S3. In addition, when we perfor...
# This is our local data directory. We need to make sure that it exists.
data_dir = '../data/boston'
if not os.path.exists(data_dir):
    os.makedirs(data_dir)

# We use pandas to save our test, train and validation data to csv files. Note that we make sure not to include header
# information or an index as this is requ...
_____no_output_____
MIT
Tutorials/Boston Housing - XGBoost (Batch Transform) - High Level.ipynb
giaminhhoang/MLND-sagemaker-deployment
Upload to S3

Since we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data u...
prefix = 'boston-xgboost-HL'

test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
_____no_output_____
MIT
Tutorials/Boston Housing - XGBoost (Batch Transform) - High Level.ipynb
giaminhhoang/MLND-sagemaker-deployment
Step 4: Train the XGBoost model

Now that we have the training and validation data uploaded to S3, we can construct our XGBoost model and train it. We will be making use of the high level SageMaker API to do this which will make the resulting code a little easier to read at the cost of some flexibility.

To construct an e...
# As stated above, we use this utility method to construct the image name for the training container.
container = get_image_uri(session.boto_region_name, 'xgboost')

# Now that we know which container to use, we can construct the estimator object.
xgb = sagemaker.estimator.Estimator(container, # The image name of the t...
'get_image_uri' method will be deprecated in favor of 'ImageURIProvider' class in SageMaker Python SDK v2. There is a more up to date SageMaker XGBoost image. To use the newer image, please set 'repo_version'='1.0-1'. For example: get_image_uri(region, 'xgboost', '1.0-1'). Parameter image_name will be renamed to image...
MIT
Tutorials/Boston Housing - XGBoost (Batch Transform) - High Level.ipynb
giaminhhoang/MLND-sagemaker-deployment
Before asking SageMaker to begin the training job, we should probably set any model-specific hyperparameters. There are quite a few that can be set when using the XGBoost algorithm; below are just a few of them. If you would like to change the hyperparameters below or modify additional ones you can find additional info...
xgb.set_hyperparameters(max_depth=5,
                        eta=0.2,
                        gamma=4,
                        min_child_weight=6,
                        subsample=0.8,
                        objective='reg:linear',
                        early_stopping_rounds=10,
                        num_round=20...
_____no_output_____
MIT
Tutorials/Boston Housing - XGBoost (Batch Transform) - High Level.ipynb
giaminhhoang/MLND-sagemaker-deployment
Now that we have our estimator object completely set up, it is time to train it. To do this we make sure that SageMaker knows our input data is in csv format and then execute the `fit` method.
# This is a wrapper around the location of our train and validation data, to make sure that SageMaker
# knows our data is in csv format.
s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')

xgb.fit({'train': s...
's3_input' class will be renamed to 'TrainingInput' in SageMaker Python SDK v2. 's3_input' class will be renamed to 'TrainingInput' in SageMaker Python SDK v2.
MIT
Tutorials/Boston Housing - XGBoost (Batch Transform) - High Level.ipynb
giaminhhoang/MLND-sagemaker-deployment
Step 5: Test the model

Now that we have fit our model to the training data, using the validation data to avoid overfitting, we can test our model. To do this we will make use of SageMaker's Batch Transform functionality. To start with, we need to build a transformer object from our fit model.
xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
Parameter image will be renamed to image_uri in SageMaker Python SDK v2.
MIT
Tutorials/Boston Housing - XGBoost (Batch Transform) - High Level.ipynb
giaminhhoang/MLND-sagemaker-deployment
Next we ask SageMaker to begin a batch transform job using our trained model and applying it to the test data we previously stored in S3. We need to make sure to provide SageMaker with the type of data that we are providing to our model, in our case `text/csv`, so that it knows how to serialize our data. In addition, w...
xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
xgb_transformer.wait()
........................... .2020-09-10T23:00:13.072:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD Arguments: serve [2020-09-10 23:00:12 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-09-10 23:00:12 +0000] [1] [INFO] Listening at: http://0.0....
MIT
Tutorials/Boston Housing - XGBoost (Batch Transform) - High Level.ipynb
giaminhhoang/MLND-sagemaker-deployment
Now that the batch transform job has finished, the resulting output is stored on S3. Since we wish to analyze the output inside of our notebook we can use a bit of notebook magic to copy the output file from its S3 location and save it locally.
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
Completed 2.3 KiB/2.3 KiB (36.4 KiB/s) with 1 file(s) remaining download: s3://sagemaker-us-east-2-444100773610/xgboost-2020-09-10-22-55-41-172/test.csv.out to ../data/boston/test.csv.out
MIT
Tutorials/Boston Housing - XGBoost (Batch Transform) - High Level.ipynb
giaminhhoang/MLND-sagemaker-deployment
To see how well our model works we can create a simple scatter plot between the predicted and actual values. If the model was completely accurate the resulting scatter plot would look like the line $x=y$. As we can see, our model seems to have done okay but there is room for improvement.
Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)

plt.scatter(Y_test, Y_pred)
plt.xlabel("Median Price")
plt.ylabel("Predicted Price")
plt.title("Median Price vs Predicted Price")
_____no_output_____
MIT
Tutorials/Boston Housing - XGBoost (Batch Transform) - High Level.ipynb
giaminhhoang/MLND-sagemaker-deployment
Optional: Clean up

The default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a ...
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*

# And then we delete the directory itself
!rmdir $data_dir
_____no_output_____
MIT
Tutorials/Boston Housing - XGBoost (Batch Transform) - High Level.ipynb
giaminhhoang/MLND-sagemaker-deployment
Data Preprocessing Notebook

In this notebook, we will show how to use Python to preprocess the data.
# Load packages
import pandas as pd
import numpy as np

from sklearn.impute import SimpleImputer
from sklearn.preprocessing import scale, power_transform
from sklearn.feature_selection import VarianceThreshold

from scipy import stats
from statistics import mean

import matplotlib.pyplot as plt
from matplotlib.pyplot impo...
_____no_output_____
CC0-1.0
Python/DataPreprocessing.ipynb
happyrabbit/IntroDataScience
01 Data Cleaning

After you load the data, the first thing to do is check how many variables there are, their types, their distributions, and any data errors. You can get descriptive statistics of the data using the `describe()` function:
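A tiny illustration of how `describe()` surfaces suspicious values (hypothetical data, not the notebook's dataset):

```python
import pandas as pd

dat = pd.DataFrame({"age": [25, 40, 300, 33],
                    "income": [50000.0, None, 80000.0, 62000.0]})
summary = dat.describe()

# an age of 300 immediately stands out in the max row
print(summary.loc["max", "age"])
# the count for income is 3 out of 4 rows, revealing a missing value
print(summary.loc["count", "income"])
```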
dat.describe()
_____no_output_____
CC0-1.0
Python/DataPreprocessing.ipynb
happyrabbit/IntroDataScience
You can check missing value and column type quickly using `info()`:
dat.info()
<class 'pandas.core.frame.DataFrame'> RangeIndex: 1000 entries, 0 to 999 Data columns (total 19 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 age 1000 non-null int64 1 gender 1000 non-null object 2 income 816 non-null float64...
CC0-1.0
Python/DataPreprocessing.ipynb
happyrabbit/IntroDataScience
Are there any problems? The questionnaire responses Q1-Q10 seem reasonable: the minimum is 1 and the maximum is 5. Recall that the questionnaire score range is 1-5. The numbers of store transactions (`store_trans`) and online transactions (`online_trans`) make sense too. Things to pay attention to are:
1. There are some missing values.
2. ...
# set problematic values as missings
dat.loc[dat.age > 100, 'age'] = np.nan
dat.loc[dat.store_exp < 0, 'store_exp'] = np.nan
dat.loc[dat.income.isnull(), 'income'] = np.nan

# see the results
# some of the values are set as NA
dat[['income','age', 'store_exp']].info()
<class 'pandas.core.frame.DataFrame'> RangeIndex: 1000 entries, 0 to 999 Data columns (total 3 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 income 816 non-null float64 1 age 999 non-null float64 2 store_exp 999 non-null float64 dtypes: float64...
CC0-1.0
Python/DataPreprocessing.ipynb
happyrabbit/IntroDataScience
02-Missing Values

02.1-Impute missing values with `median`, `mode`, `mean`, or `constant`

You can set the imputation strategy using the `strategy` argument.

- If “`mean`”, then replace missing values using the mean along each column. Can only be used with numeric data.
- If “`median`”, then replace missing values using the m...
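The effect of each strategy can be mimicked with plain pandas on a toy series (a sketch of what the imputer computes, not the `SimpleImputer` class itself):

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, 2.0, 9.0, 2.0])

print(s.fillna(s.mean()).tolist())     # strategy="mean": fills with 3.5
print(s.fillna(s.median()).tolist())   # strategy="median": fills with 2.0
print(s.fillna(s.mode()[0]).tolist())  # strategy="most_frequent": fills with 2.0
print(s.fillna(0.0).tolist())          # strategy="constant" with fill_value=0
```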
impdat = dat[['income','age','store_exp']]
imp_mean = SimpleImputer(strategy="mean")
imp_mean.fit(impdat)
impdat = imp_mean.transform(impdat)
impdat = pd.DataFrame(data=impdat, columns=["income", "age", 'store_exp'])
impdat.head()
_____no_output_____
CC0-1.0
Python/DataPreprocessing.ipynb
happyrabbit/IntroDataScience
Let us replace the columns in `dat` with the imputed columns.
# replace the columns in `dat` with the imputed columns
dat2 = dat.drop(columns = ['income','age','store_exp'])
dat_imputed = pd.concat([dat2.reset_index(drop=True), impdat], axis=1)
dat_imputed.info()
<class 'pandas.core.frame.DataFrame'> RangeIndex: 1000 entries, 0 to 999 Data columns (total 19 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 gender 1000 non-null object 1 house 1000 non-null object 2 online_exp 1000 non-null float64...
CC0-1.0
Python/DataPreprocessing.ipynb
happyrabbit/IntroDataScience
02.2-K-nearest neighbors
from sklearn.impute import KNNImputer  # needed below; may already be among the imports above

impdat = dat[['income','age','store_exp']]
imp_knn = KNNImputer(n_neighbors=2, weights="uniform")
impdat = imp_knn.fit_transform(impdat)
impdat = pd.DataFrame(data=impdat, columns=["income", "age", 'store_exp'])
impdat.head()

# replace the columns in `dat` with the imputed columns
dat2 = dat.drop(columns = ['income','ag...
<class 'pandas.core.frame.DataFrame'> RangeIndex: 1000 entries, 0 to 999 Data columns (total 19 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 gender 1000 non-null object 1 house 1000 non-null object 2 online_exp 1000 non-null float64...
CC0-1.0
Python/DataPreprocessing.ipynb
happyrabbit/IntroDataScience
03-Centering and Scaling

Let's standardize the variables `income` and `age` from the imputed data `dat_imputed`.

- `axis`: axis used to compute the means and standard deviations along. If 0, standardize each column; otherwise (if 1), each row.
- `with_mean`: if True, center the data before scaling.
- `with_std`: if True, scale the...
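What `scale` computes column-wise (`axis=0`) can be verified by hand with NumPy:

```python
import numpy as np

x = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])

# subtract each column's mean, divide by each column's standard deviation
z = (x - x.mean(axis=0)) / x.std(axis=0)

print(z.mean(axis=0))  # ~[0. 0.]
print(z.std(axis=0))   # [1. 1.]
```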
dat_s = dat_imputed[['income', 'age']]
dat_sed = scale(dat_s, axis = 0, with_mean = True, with_std = True)
_____no_output_____
CC0-1.0
Python/DataPreprocessing.ipynb
happyrabbit/IntroDataScience
After centering and scaling, the features have mean 0 and standard deviation 1.
dat_sed = pd.DataFrame(data=dat_sed, columns=["income", "age"])
dat_sed.describe()
_____no_output_____
CC0-1.0
Python/DataPreprocessing.ipynb
happyrabbit/IntroDataScience
04-Resolve Skewness

We can use `sklearn.preprocessing.power_transform` to resolve skewness in the data. Currently, `power_transform` supports the Box-Cox transform and the Yeo-Johnson transform. The optimal parameter for stabilizing variance and minimizing skewness is estimated through maximum likelihood. Let's apply B...
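A self-contained sketch with `scipy.stats.boxcox` on synthetic right-skewed data (note that Box-Cox requires strictly positive values; the variable names here are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
skewed = rng.lognormal(mean=0.0, sigma=1.0, size=2000)  # strictly positive, right-skewed

# lambda is chosen by maximum likelihood, as described above
transformed, lmbda = stats.boxcox(skewed)

# skewness drops to roughly zero after the transform
print(round(float(stats.skew(skewed)), 2), round(float(stats.skew(transformed)), 2))
```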
dat_skew = dat_imputed[['income', 'age']]
dat_skew_res = power_transform(dat_skew, method = 'box-cox')
dat_skew_res = pd.DataFrame(data=dat_skew_res, columns=["income", "age"])

fig, axs = plt.subplots(2)
fig.suptitle('Before (top) and after (bottom) transformation')
axs[0].hist(dat_imputed.income)
axs[1].hist(dat_skew_...
_____no_output_____
CC0-1.0
Python/DataPreprocessing.ipynb
happyrabbit/IntroDataScience
05-Resolve Outliers

Box plots, histograms, and some other basic visualizations can be used to initially check whether there are outliers. For example, we can visualize the numerical non-survey variables using a scatter matrix plot:
# select numerical non-survey data
subdat = dat_imputed[["age", "income", "store_exp", "online_exp",
                      "store_trans", "online_trans"]]
subdat = pd.DataFrame(data=subdat, columns=["age", "income", "store_exp", "online_exp",
                                            "store_trans", "online_trans"])
plts = pd.plotting.scatter_matrix(subdat, alpha=0.2, figs...
_____no_output_____
CC0-1.0
Python/DataPreprocessing.ipynb
happyrabbit/IntroDataScience
Let us use MAD (section 5.5 of the book) to detect outliers. The result here is slightly different from the book's because we use the imputed data `dat_imputed`.
# calculate median of the absolute dispersion for income
income = dat_imputed.income
# note: median_absolute_deviation was removed in newer SciPy releases;
# stats.median_abs_deviation(income, scale='normal') is the equivalent call there
ymad = stats.median_absolute_deviation(income)
# calculate z-score
zs = (income - mean(income))/ymad
# count the number of outliers
sum(zs > 3.5)
_____no_output_____
CC0-1.0
Python/DataPreprocessing.ipynb
happyrabbit/IntroDataScience
06-Collinearity

Checking correlations is an important part of the exploratory data analysis process. In Python, you can visualize the correlation structure of a set of predictors using the [seaborn](https://seaborn.pydata.org) library.
# select non-survey numerical variables
import seaborn as sns  # used for the heatmap; may already be imported above

df = dat_imputed[["age", "income", "store_exp", "online_exp",
                  "store_trans", "online_trans"]]
df = pd.DataFrame(df, columns = ["age", "income", "store_exp", "online_exp",
                                 "store_trans", "online_trans"])
cor_plot = sns.heatmap(df.corr(), annot = True)
_____no_output_____
CC0-1.0
Python/DataPreprocessing.ipynb
happyrabbit/IntroDataScience
The closer the correlation is to 0, the lighter the color is. Let us write a `findCorrelation()` function to remove a minimum number of predictors to ensure all pairwise correlations are below a certain threshold.
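The idea can be sketched compactly with pandas: inspect the upper triangle of the absolute correlation matrix and flag one column from each highly correlated pair. This is a simplified greedy variant on synthetic data, not necessarily identical to the notebook's `findCorrelation()`:

```python
import numpy as np
import pandas as pd

def find_correlation_sketch(df, cutoff=0.8):
    # absolute pairwise correlations, upper triangle only (k=1 excludes the diagonal)
    corr = df.corr().abs()
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    # flag any column correlated above the cutoff with an earlier column
    return [col for col in upper.columns if (upper[col] > cutoff).any()]

rng = np.random.default_rng(1)
a = rng.normal(size=200)
df = pd.DataFrame({"a": a,
                   "b": 2 * a + rng.normal(scale=0.01, size=200),  # ~duplicate of a
                   "c": rng.normal(size=200)})
print(find_correlation_sketch(df, cutoff=0.9))  # ['b']
```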
## Drop out highly correlated features in Python
def findCorrelation(df, cutoff = 0.8):
    """
    Given a numeric pd.DataFrame, this will find highly correlated features,
    and return a list of features to remove

    params:
    - df : pd.DataFrame
    - cutoff : correlation threshold, will remove one of pairs of f...
_____no_output_____
CC0-1.0
Python/DataPreprocessing.ipynb
happyrabbit/IntroDataScience
Remove the columns:
removeCols = findCorrelation(df, cutoff = 0.7)
print(removeCols)
df1 = df.drop(columns = removeCols)
# check the new cor matrix
df1.corr()
['store_trans', 'online_trans']
CC0-1.0
Python/DataPreprocessing.ipynb
happyrabbit/IntroDataScience
07-Sparse Variables
# create a data frame with sparse variables
col1 = [0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0]
col2 = range(0, len(col1))
a_dict = {"col1": col1, "col2": col2}
df = pd.DataFrame(a_dict)
df
_____no_output_____
CC0-1.0
Python/DataPreprocessing.ipynb
happyrabbit/IntroDataScience
Define a function to remove columns that have a low variance. An instance of the `VarianceThreshold` class can be created by specifying the `threshold` argument, which defaults to 0.0 so that columns containing a single value are removed.
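The behaviour can be sketched with pandas alone (a simplified stand-in for the function below, keeping only columns whose variance exceeds the threshold):

```python
import pandas as pd

def variance_sketch(df, threshold=0.0):
    # drop columns whose variance does not exceed the threshold
    return df.loc[:, df.var() > threshold]

df = pd.DataFrame({"const": [1, 1, 1, 1, 1, 1],   # zero variance, dropped
                   "sparse": [0, 0, 0, 0, 1, 1],
                   "varied": [0, 1, 2, 3, 4, 5]})
print(list(variance_sketch(df).columns))  # ['sparse', 'varied']
```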
def variance_threshold_selector(df, threshold=0):
    """
    Given a numeric pd.DataFrame, this will remove columns that have a low
    variance and return the resulting dataframe

    params:
    - df : input dataframe from which to compute variances.
    - threshold : Features with a training set variance lower t...
_____no_output_____
CC0-1.0
Python/DataPreprocessing.ipynb
happyrabbit/IntroDataScience
08-Encode Dummy Variables

Let's encode `gender` and `house` from `dat_imputed` into dummy variables. You can use the `get_dummies` function from `pandas`:
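A self-contained version (hypothetical values, since `dat_imputed` is built in earlier cells):

```python
import pandas as pd

df = pd.DataFrame({"gender": ["Male", "Female", "Female"],
                   "house": ["Yes", "No", "Yes"]})

# one indicator column per (column, category) pair
dummies = pd.get_dummies(df)
print(list(dummies.columns))
# ['gender_Female', 'gender_Male', 'house_No', 'house_Yes']
```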
df = dat_imputed[['gender', 'house']]
pd.get_dummies(df).head()
_____no_output_____
CC0-1.0
Python/DataPreprocessing.ipynb
happyrabbit/IntroDataScience