geom_density2d()
import pandas as pd
from lets_plot import *

LetsPlot.setup_html()

df = pd.read_csv('https://raw.githubusercontent.com/JetBrains/lets-plot-docs/master/data/mpg.csv')
ggplot(df, aes('cty', 'hwy')) + geom_density2d(aes(color='..group..'))
_____no_output_____
MIT
source/examples/basics/gog/geom_density2d.ipynb
ASmirnov-HORIS/lets-plot-docs
General Imports
import pandas as pd
import numpy as np
_____no_output_____
MIT
examples/permutation_importance_example.ipynb
barak1412/automl_infrastructure
Data Loading
df = pd.read_csv('adult_salary.data', header=None, usecols=[3, 4, 5, 6, 8, 9, 14],
                 names=['EDUCATION', 'EDUCATION_PERIOD', 'STATUS', 'OCCUPY', 'RACE', 'GENDER', 'RICH'],
                 dtype=str)

label_col = 'RICH'
features_cols = [c for c in df.columns if c != label_col]

df['EDUCATION_PERIOD'] = df['EDUCATION_PERIOD'].astype(int)
df[label_col] = df[label_col].apply(lambda x: 1 if x.strip() == '<=50K' else 0).astype(int)

categorial_features = [c for c in df.columns if df.dtypes[c] != np.int32 and df.dtypes[c] != np.int64]

df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 32561 entries, 0 to 32560
Data columns (total 7 columns):
 #   Column            Non-Null Count  Dtype
---  ------            --------------  -----
 0   EDUCATION         32561 non-null  object
 1   EDUCATION_PERIOD  32561 non-null  int32
 2   STATUS            32561 non-null  object
 3   OCCUPY            32561 non-null  object
 4   RACE              32561 non-null  object
 5   GENDER            32561 non-null  object
 6   RICH              32561 non-null  int32
dtypes: int32(2), object(5)
memory usage: 1.5+ MB
MIT
examples/permutation_importance_example.ipynb
barak1412/automl_infrastructure
Data Preparation
from sklearn.preprocessing import LabelBinarizer

feature_encoder_dict = {}
final_df = df.copy()
for feature in categorial_features:
    feature_encoder_dict[feature] = LabelBinarizer()
    final_df[feature] = pd.Series(list(feature_encoder_dict[feature].fit_transform(df[feature])))
final_df

from sklearn.model_selection import train_test_split

# split to train and test
train_df, test_df = train_test_split(final_df, test_size=0.1, shuffle=True)
_____no_output_____
MIT
examples/permutation_importance_example.ipynb
barak1412/automl_infrastructure
Modeling
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, make_scorer
from automl_infrastructure.classifiers.adapters import SklearnClassifierAdapter

lr_model = SklearnClassifierAdapter(name='lr1', sklearn_model=LogisticRegression())
lr_model.fit(train_df[features_cols], train_df[label_col])
predictions = lr_model.predict(test_df[features_cols])
print(accuracy_score(test_df[label_col], predictions))

rf_model = SklearnClassifierAdapter(name='rf1', sklearn_model=RandomForestClassifier())
rf_model.fit(train_df[features_cols], train_df[label_col])
predictions = rf_model.predict(test_df[features_cols])
print(accuracy_score(test_df[label_col], predictions))
C:\Users\Barak\.conda\envs\DSEnv\lib\site-packages\sklearn\linear_model\_logistic.py:940: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
    https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
    https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
  extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG)
MIT
examples/permutation_importance_example.ipynb
barak1412/automl_infrastructure
Permutation Importance Calculation
from automl_infrastructure.interpretation import PermutationImportance

pi = PermutationImportance(lr_model, scoring='accuracy')
pi.fit(test_df[features_cols], test_df[label_col])
pi.show_weights()
print()

pi = PermutationImportance(rf_model, scoring='accuracy')
pi.fit(test_df[features_cols], test_df[label_col])
pi.show_weights()
            Feature    Weight       Std
0            STATUS  0.076348  0.002884
1            OCCUPY  0.029066  0.001612
2  EDUCATION_PERIOD  0.026302  0.001469
3         EDUCATION  0.005322  0.001511
4              RACE  0.001228  0.001566
5            GENDER  0.000819  0.000145

            Feature    Weight       Std
0            STATUS  0.094259  0.002171
1            OCCUPY  0.036844  0.001958
2            GENDER  0.013714  0.002329
3  EDUCATION_PERIOD  0.009927  0.001013
4         EDUCATION  0.003991  0.003131
5              RACE  0.001740  0.001381
MIT
examples/permutation_importance_example.ipynb
barak1412/automl_infrastructure
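`PermutationImportance` above comes from the `automl_infrastructure` library. As a rough mental model of what such a tool computes (not the library's actual implementation), the idea can be sketched from scratch: permute one feature column at a time and record how much the score drops. Everything below — the `permutation_importance` helper, the toy model, and the data — is hypothetical and for illustration only.

```python
import numpy as np

def permutation_importance(predict, X, y, score, n_repeats=5, seed=0):
    """Mean/std drop in score after randomly permuting each column."""
    rng = np.random.default_rng(seed)
    base = score(y, predict(X))
    drops = np.zeros((X.shape[1], n_repeats))
    for j in range(X.shape[1]):
        for r in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # break the feature-label link
            drops[j, r] = base - score(y, predict(Xp))
    return drops.mean(axis=1), drops.std(axis=1)

# Toy "model" whose prediction depends only on column 0.
X = np.array([[0, 5], [1, 3], [0, 8], [1, 1], [0, 2], [1, 9]], dtype=float)
y = np.array([0, 1, 0, 1, 0, 1])
predict = lambda X: (X[:, 0] > 0.5).astype(int)
accuracy = lambda y_true, y_pred: float((y_true == y_pred).mean())

mean_drop, std_drop = permutation_importance(predict, X, y, accuracy)
# Column 1 never influences the predictions, so its importance is exactly 0.
```

A feature whose permutation does not move the score (like column 1 here) gets zero weight, which is exactly the pattern visible in the weight tables above: features the model relies on (e.g. STATUS) show the largest drops.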
Tutorial 0a: Setting Up Python For Scientific Computing

In this tutorial, we will set up a scientific Python computing environment using the [Anaconda python distribution by Continuum Analytics](https://www.continuum.io/downloads).

Why Python?

- Beginner friendly
- Versatile and flexible
- Most mature package libraries around
- Most popular in the machine learning world

As is true of human languages, there are [hundreds of computer programming languages](https://en.wikipedia.org/wiki/List_of_programming_languages). While each has its own merits, the major languages for scientific computing are C, C++, R, MATLAB, Python, Java, Julia, and Fortran.
[MATLAB](https://www.mathworks.com), [Julia](https://julialang.org/), and [Python](https://www.python.org) are similar in syntax and typically read as if they were written in plain English. This makes them useful tools for teaching, but they are also very powerful languages that are **very** actively used in real-life research. MATLAB is proprietary while Python is open source. A benefit of being open source is that anyone can write and release Python packages. For science, there are many wonderful community-driven packages such as [NumPy](http://www.numpy.org), [SciPy](http://www.scipy.org), [scikit-image](http://scikit-image.org), and [Pandas](http://pandas.pydata.org), just to name a few.

Installing Python 3 with Anaconda

Python 3 vs Python 2

There are two dominant versions of Python used for scientific computing, Python 2 and Python 3, both available through the Anaconda distribution. The Python 3 series is not backwards compatible with Python 2. While there are still some packages written for Python 2.7 that have not been modified for compatibility with Python 3, a large number have transitioned, and Python 2.7 is no longer supported as of January 1, 2020. As Python 3 is the future of scientific computing with Python, we will use it for these tutorials.

Anaconda

There are several scientific Python distributions available for MacOS, Windows, and Linux. The two most popular, [Enthought Canopy](https://www.enthought.com/products/canopy/) and [Anaconda](https://www.continuum.io/why-anaconda), are specifically designed for scientific computing and data science work. For this course, we will use the Anaconda Python 3 distribution. To install the correct version, follow the instructions below.

1. Navigate to [the Anaconda download page](https://www.continuum.io/downloads) and download the Python 3 graphical installer.
2.
Launch the installer and follow the onscreen instructions.

Congratulations! You now have the beginnings of a scientific Python distribution.

Using JupyterLab as a Scientific Development Environment

Packaged with the Anaconda Python distribution is the [Jupyter project](https://jupyter.org/). This environment is incredibly useful for interactive programming and development and is widely used across scientific computing. Jupyter allows for interactive programming in a large array of programming languages including Julia, R, and MATLAB. As you've guessed by this point, we will be focusing on using Python through the Jupyter environment.

The key component of the Jupyter interactive programming environment is the [Jupyter Notebook](https://jupyter.org/). This acts like an interactive script which allows one to interweave code, mathematics, and text to create a complete narrative around your computational project. In fact, you are reading a Jupyter Notebook right now!

While Jupyter Notebooks are fantastic alone, we will be using them throughout the course via the [JupyterLab Integrated Development Environment (IDE)](https://jupyter.org/). JupyterLab allows one to write code in notebooks, navigate around your file system, write isolated Python scripts, and even access a UNIX terminal, all of which we will do throughout this class. Even better, JupyterLab comes prepackaged with your Anaconda Python distribution.

Launching JupyterLab

When you installed Anaconda, you also installed the Anaconda Navigator, an app that allows you to easily launch a JupyterLab instance. When you open up Anaconda Navigator, you should see a screen that looks like this,

![](navigator.png)

where I have boxed in the JupyterLab prompt with a red box. Launch the JupyterLab IDE by clicking the 'launch' button.
This should automatically open a browser window with the JupyterLab interface,

![](jupyterlab.png)

Creating your course directory

During the course, you will be handing in the computational portions of your homeworks as Jupyter Notebooks and, as such, it will be important for the TAs to be able to run your code to grade it. We will often be reading in data from a file on your computer, manipulating it, and then plotting the outcome. **To ensure the TAs can run your code without manipulating it, you MUST use a specific file structure.**

We can set up the file structure pretty easily directly through JupyterLab. Open the sidebar of the JupyterLab interface by clicking the folder icon on the left-hand side of the screen. This will slide open a file browser. Your files will look different than mine (unless you're using my computer!), but it will show the contents of your computer's `home` directory.

Using the sidebar, navigate to wherever you want to make a new folder called `Scientific-Computing` by clicking the "new folder" symbol, ![](newfoldersymbol.png). Double-click the `Scientific-Computing` folder to open it and make two new folders, one named `code` and another `data`.

That's it! You've now made the file structure for the class. All of the Jupyter Notebooks you use in the course will be made and written in the `code` folder. All data you have to load will live in the `data` directory. This structure will make things easier for the TA when it comes to grading your work, but will also help you maintain a tidy homework folder.

Starting A Jupyter Notebook

Let's open a new notebook. Navigate to your `code` folder and click the `+` in the sidebar. This will open a new "Launcher" window where a variety of new filetypes can be opened.
One of them will be a "Python 3 Notebook". Clicking this will open a new Jupyter Notebook named `Untitled.ipynb`. Right-click "Untitled.ipynb" in the sidebar and rename it to something more informative, say `testing_out_python.ipynb`.

The right-hand side of your screen is the actual notebook. You will see a "code cell" (grey rectangle) along with a bunch of other boxes above it. In the [Jupyter Notebook Tutorial](http://rpgroup.caltech.edu/bige105/tutorials/t0b/t0b_jupyter_notebooks) we cover these buttons in detail. For now, we'll just check to make sure you have a working Python distribution.

`Hello, World`

Let's write our first bit of Python code to make sure that everything is working correctly on your system. In Jupyter Notebooks, all code is typed in grey rectangles called "code cells". When a cell is "run", the result of the computation is shown underneath the code cell. Double-click the code cell on the right-hand side of your JupyterLab window and type the following:
# This is a comment and won't be read by Python. All comments start with `#`
print('Hello, World. Long time, no see. This sentence should be printed below by pressing `Shift + Enter` ')
Hello, World. Long time, no see. This sentence should be printed below by pressing `Shift + Enter`
MIT
Chapter00/t0a/t0a_setting_up_python.ipynb
mazhengcn/scientific-computing-with-python
Note that you cannot edit the text *below* the code cell. This is the output of the `print()` function in Python.

Our First Plot

This class will often require you to generate plots of your computations coupled with some comments about your interpretation. Let's try to generate a simple plot here to make sure everything is working with your distribution. Don't worry too much about the syntax for right now. The basics of Python syntax are given in [Tutorial 0c](http://rpgroup.caltech.edu/bige105/tutorials/t0b/t0c_python_syntax_and_plotting).

Add a new code cell beneath the one that contains your `print()` statement. When you execute a cell using `Shift + Enter`, a new cell should appear beneath what you just ran. If it's not there, you can make a new cell by clicking the `+` icon in the notebook menu bar. In the new cell, type the following:
# Import Python packages necessary for this script
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()

# Generate a beautiful sinusoidal curve
x = np.linspace(0, 2*np.pi, 500)
y = np.sin(2 * np.sin(2 * np.sin(2 * x)))
plt.plot(x, y)
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.show()
_____no_output_____
MIT
Chapter00/t0a/t0a_setting_up_python.ipynb
mazhengcn/scientific-computing-with-python
Dummy applications

CPU stress test
# analysis("example-data/cpu")
analysis("example-data-sana/cpu", skiprows=15)
# analysis("data-3bb4e2af-bf0a-4b78-b8b2-f9dbd1df35b3 (copy)/cpu")
==================== CPU Analysis ====================
Total CPU core: 64
Total CPU time (seconds): 6979.350
Parallel CPU time (seconds): 128.090
Makes span (seconds): 128.090
MIT
notebooks/Performance_Ridgeregression_Sana.ipynb
courtois-neuromod/movie_decoding_sa
Disk test
analysis("example-data-sana/disk", skiprows=15)
==================== CPU Analysis ====================
Total CPU core: 64
Total CPU time (seconds): 6979.350
Parallel CPU time (seconds): 128.090
Makes span (seconds): 128.090
MIT
notebooks/Performance_Ridgeregression_Sana.ipynb
courtois-neuromod/movie_decoding_sa
Network test
analysis("example-data-sana/network", skiprows=15)
==================== CPU Analysis ====================
Total CPU core: 64
Total CPU time (seconds): 6979.350
Parallel CPU time (seconds): 128.090
Makes span (seconds): 128.090
MIT
notebooks/Performance_Ridgeregression_Sana.ipynb
courtois-neuromod/movie_decoding_sa
Neuroimaging Applications

BET participant analysis
analysis("example-data-sana/bet_participant", skiprows=15)
==================== CPU Analysis ====================
Total CPU core: 64
Total CPU time (seconds): 6979.350
Parallel CPU time (seconds): 128.090
Makes span (seconds): 128.090
MIT
notebooks/Performance_Ridgeregression_Sana.ipynb
courtois-neuromod/movie_decoding_sa
BET group analysis
analysis("example-data-sana/bet_group", skiprows=15)
==================== CPU Analysis ====================
Total CPU core: 64
Total CPU time (seconds): 6979.350
Parallel CPU time (seconds): 128.090
Makes span (seconds): 128.090
MIT
notebooks/Performance_Ridgeregression_Sana.ipynb
courtois-neuromod/movie_decoding_sa
MRIQC participant analysis
analysis("example-data-sana/mriqc_participant", skiprows=15)
==================== CPU Analysis ====================
Total CPU core: 64
Total CPU time (seconds): 6979.350
Parallel CPU time (seconds): 128.090
Makes span (seconds): 128.090
MIT
notebooks/Performance_Ridgeregression_Sana.ipynb
courtois-neuromod/movie_decoding_sa
MRIQC group analysis
analysis("example-data-sana/mriqc_group", skiprows=15)
==================== CPU Analysis ====================
Total CPU core: 64
Total CPU time (seconds): 6979.350
Parallel CPU time (seconds): 128.090
Makes span (seconds): 128.090
MIT
notebooks/Performance_Ridgeregression_Sana.ipynb
courtois-neuromod/movie_decoding_sa
Bokeh Circle X Glyph
from bokeh.plotting import figure, output_file, show
from bokeh.models import Range1d
from bokeh.io import export_png

fill_color = '#e08214'
line_color = '#fdb863'

output_file("../../figures/circle_x.html")

p = figure(plot_width=400, plot_height=400)
p.circle_x(x=0, y=0, size=100, fill_alpha=1, fill_color=fill_color,
           line_alpha=1, line_color=line_color, line_dash='dashed', line_width=5)
p.circle_x(x=0, y=1, size=100, fill_alpha=0.8, fill_color=fill_color,
           line_alpha=1, line_color=line_color, line_dash='dotdash', line_width=8)
p.circle_x(x=1, y=0, size=100, fill_alpha=0.6, fill_color=fill_color,
           line_alpha=1, line_color=line_color, line_dash='dotted', line_width=13)
p.circle_x(x=1, y=1, size=100, fill_alpha=0.4, fill_color=fill_color,
           line_alpha=1, line_color=line_color, line_dash='solid', line_width=17)
p.x_range = Range1d(-0.5, 1.5, bounds=(-1, 2))
p.y_range = Range1d(-0.5, 1.5, bounds=(-1, 2))

show(p)
export_png(p, filename="../../figures/circle_x.png");
_____no_output_____
MIT
visualizations/bokeh/notebooks/glyphs/circle_x.ipynb
martinpeck/apryor6.github.io
CNN Image Data Preview & Statistics

Welcome! This notebook allows you to preview some of your single-cell image patches to make sure your annotated data are of good quality. You will also get a chance to calculate the statistics for your annotated data, which can be useful for data preprocessing, e.g. a *class imbalance check* prior to CNN training.
import os
import json
import random
import zipfile

import numpy as np
import matplotlib.pyplot as plt
from tqdm import tqdm
from datetime import datetime
from skimage.io import imread
_____no_output_____
BSD-3-Clause
notebooks/B_CNN_Data_Preview_Images.ipynb
nthndy/cnn-annotator
Specify how many patches you'd like to visualise from your batch. By default, the code below will show 10 random patches per class. If there is not enough training data for a label, a noisy image will be visualised instead. The default setting doesn't save the collage out, but you can change that by setting ```save_collage``` to ```True```.
LABELS = ["Interphase", "Prometaphase", "Metaphase", "Anaphase", "Apoptosis"]

patches_to_show = 10
save_collage = False
_____no_output_____
BSD-3-Clause
notebooks/B_CNN_Data_Preview_Images.ipynb
nthndy/cnn-annotator
Load a random 'annotation' zip file to check image patches:
zipfiles = [f for f in os.listdir("./") if f.startswith("annotation") and f.endswith(".zip")]
zip_file_name = zipfiles[0]
_____no_output_____
BSD-3-Clause
notebooks/B_CNN_Data_Preview_Images.ipynb
nthndy/cnn-annotator
Optional: specify which zip file you'd like to visualise:
#zip_file_name = "annotation_02-08-2021--10-33-59.zip"
_____no_output_____
BSD-3-Clause
notebooks/B_CNN_Data_Preview_Images.ipynb
nthndy/cnn-annotator
Process the zip file & extract subfolders with individual images:
# Make sure zip file name is stripped of '.zip' suffix:
if zip_file_name.endswith(".zip"):
    zip_file_name = zip_file_name.split(".zip")[0]

# Check if the zipfile was extracted:
if not zip_file_name in os.listdir("./"):
    print (f"Zip file {zip_file_name}.zip : Exporting...", end="\t")
    with zipfile.ZipFile(f"./{zip_file_name}.zip", 'r') as zip_ref:
        zip_ref.extractall(f"./{zip_file_name}/")
else:
    print (f"Zip file {zip_file_name}.zip : Exported!...", end="\t")

print ("Done!")
Zip file annotation_02-08-2021--10-33-59.zip : Exporting... Done!
BSD-3-Clause
notebooks/B_CNN_Data_Preview_Images.ipynb
nthndy/cnn-annotator
Plot the collage with all 5 labels:
fig, axs = plt.subplots(figsize=(int(len(LABELS)*5), int(patches_to_show*5)),
                        nrows=patches_to_show, ncols=len(LABELS),
                        sharex=True, sharey=True)

for idx in range(len(LABELS)):
    label = LABELS[idx]
    label_dr = f"./{zip_file_name}/{label}/"

    # Check if directory exists:
    if os.path.isdir(label_dr):
        patch_list = os.listdir(label_dr)
        random.shuffle(patch_list)
        print (f"Label: {label} contains {len(patch_list)} single-cell image patches")
    else:
        patch_list = []
        print (f"Label: {label} has not been annotated.")

    # Plot the patches:
    for i in range(patches_to_show):
        # Set titles to individual columns
        if i == 0:
            axs[i][idx].set_title(f"Label: {label}", fontsize=16)

        if i >= len(patch_list):
            patch = np.random.randint(0, 255, size=(64, 64)).astype(np.uint8)
            axs[i][idx].text(x=32, y=32, s="noise", size=50, rotation=30.,
                             ha="center", va="center",
                             bbox=dict(boxstyle="round", ec=(0.0, 0.0, 0.0), fc=(1.0, 1.0, 1.0)))
        else:
            patch = plt.imread(label_dr + patch_list[i])

        axs[i][idx].imshow(patch, cmap="binary_r")
        axs[i][idx].axis('off')

if save_collage is True:
    plt.savefig("../label_image_patches.png", bbox_inches='tight')

plt.show()
plt.close()
Label: Interphase contains 6 single-cell image patches
Label: Prometaphase contains 6 single-cell image patches
Label: Metaphase contains 5 single-cell image patches
Label: Anaphase contains 8 single-cell image patches
Label: Apoptosis contains 6 single-cell image patches
BSD-3-Clause
notebooks/B_CNN_Data_Preview_Images.ipynb
nthndy/cnn-annotator
Calculate some data statistics WITHOUT unzipping the files:
label_count = dict({'Prometaphase': 0, 'Metaphase': 0, 'Interphase': 0,
                    'Anaphase': 0, 'Apoptosis': 0})

for f in tqdm(zipfiles):
    archive = zipfile.ZipFile(f, 'r')
    json_data = archive.read(f.split(".zip")[0] + ".json")
    data = json.loads(json_data)

    # Count instances per label:
    counts = [[x, data['labels'].count(x)] for x in set(data['labels'])]
    print (f"File: {f}\n\t{counts}")

    # Add counts to label counter:
    for lab in counts:
        label_count[lab[0]] += lab[1]
100%|██████████| 1/1 [00:00<00:00, 255.97it/s]
BSD-3-Clause
notebooks/B_CNN_Data_Preview_Images.ipynb
nthndy/cnn-annotator
Plot the statistics:
COLOR_CYCLE = [
    '#1f77b4',  # blue
    '#ff7f0e',  # orange
    '#2ca02c',  # green
    '#d62728',  # red
    '#9467bd',  # purple
]

# Plot the bar graph:
plt.bar(range(len(label_count)), list(label_count.values()),
        align='center', color=COLOR_CYCLE)
plt.xticks(range(len(label_count)), list(label_count.keys()), rotation=30)
plt.title("Single-Cell Patches per Label")
plt.xlabel("Class Label")
plt.ylabel("Patch Count")
plt.grid(axis='y', alpha=0.3)
plt.show()
plt.close()
_____no_output_____
BSD-3-Clause
notebooks/B_CNN_Data_Preview_Images.ipynb
nthndy/cnn-annotator
Getting and Knowing your Data

Import the dataset from https://raw.githubusercontent.com/justmarkham/DAT8/master/data/u.user

Import the necessary libraries
import numpy as np
import pandas as pd
_____no_output_____
MIT
DataTalks_GettingKnowingData I.ipynb
gympohnpimol/Pandas-Data-Talks
Assign it to a variable called users, use 'user_id' as the index, and see the first 25 entries
df = pd.read_csv("https://raw.githubusercontent.com/justmarkham/DAT8/master/data/u.user",
                 sep='|', index_col="user_id")
df.head(25)
_____no_output_____
MIT
DataTalks_GettingKnowingData I.ipynb
gympohnpimol/Pandas-Data-Talks
See the last 10 entries
df.tail(10)
_____no_output_____
MIT
DataTalks_GettingKnowingData I.ipynb
gympohnpimol/Pandas-Data-Talks
The number of observations in the dataset
df.shape[0]
_____no_output_____
MIT
DataTalks_GettingKnowingData I.ipynb
gympohnpimol/Pandas-Data-Talks
The number of columns in the dataset
df.shape[1]
_____no_output_____
MIT
DataTalks_GettingKnowingData I.ipynb
gympohnpimol/Pandas-Data-Talks
Name of all the columns
df.columns
_____no_output_____
MIT
DataTalks_GettingKnowingData I.ipynb
gympohnpimol/Pandas-Data-Talks
Dataset index
df.index
_____no_output_____
MIT
DataTalks_GettingKnowingData I.ipynb
gympohnpimol/Pandas-Data-Talks
Data type of each column
df.dtypes
_____no_output_____
MIT
DataTalks_GettingKnowingData I.ipynb
gympohnpimol/Pandas-Data-Talks
Observe only the occupation column
df["occupation"]
_____no_output_____
MIT
DataTalks_GettingKnowingData I.ipynb
gympohnpimol/Pandas-Data-Talks
The number of occupations in this dataset
df.occupation.nunique()
_____no_output_____
MIT
DataTalks_GettingKnowingData I.ipynb
gympohnpimol/Pandas-Data-Talks
The most frequent occupation in dataset
df.occupation.value_counts()
df.occupation.value_counts().head()
df.occupation.value_counts().head().index[0]
_____no_output_____
MIT
DataTalks_GettingKnowingData I.ipynb
gympohnpimol/Pandas-Data-Talks
Summarize the DataFrame
df.describe()
_____no_output_____
MIT
DataTalks_GettingKnowingData I.ipynb
gympohnpimol/Pandas-Data-Talks
Summarize all the columns
df.describe(include = "all")
_____no_output_____
MIT
DataTalks_GettingKnowingData I.ipynb
gympohnpimol/Pandas-Data-Talks
Summarize only the gender column
df.gender.describe()
_____no_output_____
MIT
DataTalks_GettingKnowingData I.ipynb
gympohnpimol/Pandas-Data-Talks
The mean age of the dataframe
round(df.age.mean())
_____no_output_____
MIT
DataTalks_GettingKnowingData I.ipynb
gympohnpimol/Pandas-Data-Talks
The occupation with the least occurrences
df.occupation.value_counts().tail()
_____no_output_____
MIT
DataTalks_GettingKnowingData I.ipynb
gympohnpimol/Pandas-Data-Talks
Encoding of categorical variables

In this notebook, we will present typical ways of dealing with **categorical variables** by encoding them, namely **ordinal encoding** and **one-hot encoding**.

Let's first load the entire adult dataset containing both numerical and categorical data.
import pandas as pd

adult_census = pd.read_csv("../datasets/adult-census.csv")

# drop the duplicated column `"education-num"` as stated in the first notebook
adult_census = adult_census.drop(columns="education-num")

target_name = "class"
target = adult_census[target_name]

data = adult_census.drop(columns=[target_name])
_____no_output_____
CC-BY-4.0
notebooks/03_categorical_pipeline.ipynb
parmentelat/scikit-learn-mooc
Identify categorical variables

As we saw in the previous section, a numerical variable is a quantity represented by a real or integer number. These variables can be naturally handled by machine learning algorithms that are typically composed of a sequence of arithmetic instructions such as additions and multiplications.

In contrast, categorical variables have discrete values, typically represented by string labels (but not only) taken from a finite list of possible choices. For instance, the variable `native-country` in our dataset is a categorical variable because it encodes the data using a finite list of possible countries (along with the `?` symbol when this information is missing):
data["native-country"].value_counts().sort_index()
_____no_output_____
CC-BY-4.0
notebooks/03_categorical_pipeline.ipynb
parmentelat/scikit-learn-mooc
How can we easily recognize categorical columns among the dataset? Part of the answer lies in the columns' data type:
data.dtypes
_____no_output_____
CC-BY-4.0
notebooks/03_categorical_pipeline.ipynb
parmentelat/scikit-learn-mooc
If we look at the `"native-country"` column, we observe its data type is `object`, meaning it contains string values.

Select features based on their data type

In the previous notebook, we manually defined the numerical columns. We could take a similar approach here. Instead, we will use the scikit-learn helper function `make_column_selector`, which allows us to select columns based on their data type. We will illustrate how to use this helper.
from sklearn.compose import make_column_selector as selector categorical_columns_selector = selector(dtype_include=object) categorical_columns = categorical_columns_selector(data) categorical_columns
_____no_output_____
CC-BY-4.0
notebooks/03_categorical_pipeline.ipynb
parmentelat/scikit-learn-mooc
Here, we created the selector by passing the data type to include; we then passed the input dataset to the selector object, which returned a list of column names that have the requested data type. We can now filter out the unwanted columns:
data_categorical = data[categorical_columns] data_categorical.head() print(f"The dataset is composed of {data_categorical.shape[1]} features")
_____no_output_____
CC-BY-4.0
notebooks/03_categorical_pipeline.ipynb
parmentelat/scikit-learn-mooc
In the remainder of this section, we will present different strategies to encode categorical data into numerical data which can be used by a machine-learning algorithm.

Strategies to encode categories

Encoding ordinal categories

The most intuitive strategy is to encode each category with a different number. The `OrdinalEncoder` will transform the data in such a manner. We will start by encoding a single column to understand how the encoding works.
from sklearn.preprocessing import OrdinalEncoder

education_column = data_categorical[["education"]]

encoder = OrdinalEncoder()
education_encoded = encoder.fit_transform(education_column)
education_encoded
_____no_output_____
CC-BY-4.0
notebooks/03_categorical_pipeline.ipynb
parmentelat/scikit-learn-mooc
We see that each category in `"education"` has been replaced by a numeric value. We could check the mapping between the categories and the numerical values by checking the fitted attribute `categories_`.
encoder.categories_
_____no_output_____
CC-BY-4.0
notebooks/03_categorical_pipeline.ipynb
parmentelat/scikit-learn-mooc
Now, we can check the encoding applied on all categorical features.
data_encoded = encoder.fit_transform(data_categorical)
data_encoded[:5]

print(f"The dataset encoded contains {data_encoded.shape[1]} features")
_____no_output_____
CC-BY-4.0
notebooks/03_categorical_pipeline.ipynb
parmentelat/scikit-learn-mooc
We see that the categories have been encoded for each feature (column) independently. We also note that the number of features before and after the encoding is the same.

However, be careful when applying this encoding strategy: using this integer representation leads downstream predictive models to assume that the values are ordered (0 < 1 < 2 < 3... for instance).

By default, `OrdinalEncoder` uses a lexicographical strategy to map string category labels to integers. This strategy is arbitrary and often meaningless. For instance, suppose the dataset has a categorical variable named `"size"` with categories such as "S", "M", "L", "XL". We would like the integer representation to respect the meaning of the sizes by mapping them to increasing integers such as `0, 1, 2, 3`. However, the lexicographical strategy used by default would map the labels "S", "M", "L", "XL" to 2, 1, 0, 3, by following the alphabetical order.

The `OrdinalEncoder` class accepts a `categories` constructor argument to pass categories in the expected ordering explicitly. You can find more information in the [scikit-learn documentation](https://scikit-learn.org/stable/modules/preprocessing.html#encoding-categorical-features) if needed.

If a categorical variable does not carry any meaningful order information then this encoding might be misleading to downstream statistical models and you might consider using one-hot encoding instead (see below).

Encoding nominal categories (without assuming any order)

`OneHotEncoder` is an alternative encoder that prevents the downstream models from making a false assumption about the ordering of categories. For a given feature, it will create as many new columns as there are possible categories. For a given sample, the value of the column corresponding to the category will be set to `1` while all the columns of the other categories will be set to `0`.

We will start by encoding a single feature (e.g. `"education"`) to illustrate how the encoding works.
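To make the lexicographic pitfall concrete, here is a small pure-Python sketch of the two mappings. It does not use scikit-learn; the names `order` and `sizes` are illustrative only, and the explicit dictionary plays the role of `OrdinalEncoder`'s `categories` argument.

```python
# Explicit ordering, analogous to passing `categories` to OrdinalEncoder:
order = ["S", "M", "L", "XL"]
explicit = {cat: i for i, cat in enumerate(order)}
# {'S': 0, 'M': 1, 'L': 2, 'XL': 3} -- respects the meaning of the sizes

# A lexicographic strategy sorts the labels alphabetically instead:
lexicographic = {cat: i for i, cat in enumerate(sorted(order))}
# {'L': 0, 'M': 1, 'S': 2, 'XL': 3} -- "S" no longer maps to the smallest code

sizes = ["M", "S", "XL", "L"]
encoded = [explicit[s] for s in sizes]  # [1, 0, 3, 2]
```

With the explicit ordering, a downstream linear model sees `S < M < L < XL` as intended; with the alphabetical one, it would learn from the meaningless ordering `L < M < S < XL`.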
from sklearn.preprocessing import OneHotEncoder

encoder = OneHotEncoder(sparse=False)
education_encoded = encoder.fit_transform(education_column)
education_encoded
_____no_output_____
CC-BY-4.0
notebooks/03_categorical_pipeline.ipynb
parmentelat/scikit-learn-mooc
Note: `sparse=False` is used in the `OneHotEncoder` for didactic purposes, namely easier visualization of the data. Sparse matrices are efficient data structures when most of your matrix elements are zero. They won't be covered in detail in this course. If you want more details about them, you can look at this.

We see that encoding a single feature will give a NumPy array full of zeros and ones. We can get a better understanding using the associated feature names resulting from the transformation.
feature_names = encoder.get_feature_names_out(input_features=["education"])
education_encoded = pd.DataFrame(education_encoded, columns=feature_names)
education_encoded
_____no_output_____
CC-BY-4.0
notebooks/03_categorical_pipeline.ipynb
parmentelat/scikit-learn-mooc
As we can see, each category (unique value) became a column; the encoding returned, for each sample, a 1 to specify which category it belongs to.

Let's apply this encoding on the full dataset.
print(f"The dataset is composed of {data_categorical.shape[1]} features")
data_categorical.head()

data_encoded = encoder.fit_transform(data_categorical)
data_encoded[:5]

print(f"The encoded dataset contains {data_encoded.shape[1]} features")
_____no_output_____
CC-BY-4.0
notebooks/03_categorical_pipeline.ipynb
parmentelat/scikit-learn-mooc
Let's wrap this NumPy array in a dataframe with informative column names as provided by the encoder object:
columns_encoded = encoder.get_feature_names_out(data_categorical.columns)
pd.DataFrame(data_encoded, columns=columns_encoded).head()
_____no_output_____
CC-BY-4.0
notebooks/03_categorical_pipeline.ipynb
parmentelat/scikit-learn-mooc
Look at how the `"workclass"` variable of the 3 first records has been encoded and compare this to the original string representation.

The number of features after the encoding is more than 10 times larger than in the original data because some variables such as `occupation` and `native-country` have many possible categories.

Choosing an encoding strategy

Choosing an encoding strategy will depend on the underlying models and the type of categories (i.e. ordinal vs. nominal).

Note: in general `OneHotEncoder` is the encoding strategy used when the downstream models are linear models, while `OrdinalEncoder` is often a good strategy with tree-based models.

Using an `OrdinalEncoder` will output ordinal categories. This means that there is an order in the resulting categories (e.g. `0 < 1 < 2`). The impact of violating this ordering assumption is really dependent on the downstream models. Linear models will be impacted by misordered categories while tree-based models will not.

You can still use an `OrdinalEncoder` with linear models but you need to be sure that:

- the original categories (before encoding) have an ordering;
- the encoded categories follow the same ordering as the original categories.

The **next exercise** highlights the issue of misusing `OrdinalEncoder` with a linear model.

One-hot encoding categorical variables with high cardinality can cause computational inefficiency in tree-based models. Because of this, it is not recommended to use `OneHotEncoder` in such cases even if the original categories do not have a given order. We will show this in the **final exercise** of this sequence.
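To make the column-count difference between the two encoders concrete, here is a small sketch on made-up data (the values below are toy examples, not the census features):

```python
import pandas as pd
from sklearn.preprocessing import OneHotEncoder, OrdinalEncoder

toy = pd.DataFrame({"country": ["US", "FR", "NL", "JP", "US", "BR"],
                    "workclass": ["Private", "State-gov", "Private",
                                  "Private", "Self-emp", "State-gov"]})

# One-hot: one column per category (5 countries + 3 workclasses = 8 columns)
print(OneHotEncoder().fit_transform(toy).toarray().shape)   # (6, 8)

# Ordinal: one column per original feature
print(OrdinalEncoder().fit_transform(toy).shape)            # (6, 2)
```

With high-cardinality variables, the one-hot representation grows with the number of categories while the ordinal one does not.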
Evaluate our predictive pipeline

We can now integrate this encoder inside a machine learning pipeline like we did with numerical data: let's train a linear classifier on the encoded data and check the generalization performance of this machine learning pipeline using cross-validation.

Before we create the pipeline, we have to linger on the `native-country` column. Let's recall some statistics regarding this column.
data["native-country"].value_counts()
_____no_output_____
CC-BY-4.0
notebooks/03_categorical_pipeline.ipynb
parmentelat/scikit-learn-mooc
We see that the `Holand-Netherlands` category occurs rarely. This will be a problem during cross-validation: if the sample ends up in the test set during splitting then the classifier would not have seen the category during training and would not be able to encode it.

In scikit-learn, there are two solutions to bypass this issue:

* list all the possible categories and provide them to the encoder via the keyword argument `categories`;
* use the parameter `handle_unknown`.

Here, we will use the latter solution for simplicity.

Tip: be aware that `OrdinalEncoder` exposes a `handle_unknown` parameter as well. It can be set to `use_encoded_value`, together with `unknown_value`, to handle rare categories. You are going to use these parameters in the next exercise.

We can now create our machine learning pipeline.
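Before that, a minimal sketch of both `handle_unknown` mechanisms on made-up data (not the census column):

```python
import pandas as pd
from sklearn.preprocessing import OneHotEncoder, OrdinalEncoder

train = pd.DataFrame({"country": ["US", "FR", "US"]})
test = pd.DataFrame({"country": ["NL"]})   # category never seen during fit

# OneHotEncoder: the unseen category becomes an all-zero row
onehot = OneHotEncoder(handle_unknown="ignore").fit(train)
print(onehot.transform(test).toarray())    # [[0. 0.]]

# OrdinalEncoder: the unseen category maps to a chosen sentinel value
ordinal = OrdinalEncoder(handle_unknown="use_encoded_value",
                         unknown_value=-1).fit(train)
print(ordinal.transform(test).ravel())     # [-1.]
```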
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression

model = make_pipeline(
    OneHotEncoder(handle_unknown="ignore"), LogisticRegression(max_iter=500)
)
_____no_output_____
CC-BY-4.0
notebooks/03_categorical_pipeline.ipynb
parmentelat/scikit-learn-mooc
Note: here, we need to increase the maximum number of iterations to obtain a fully converged `LogisticRegression` and silence a `ConvergenceWarning`. Contrary to the numerical features, the one-hot encoded categorical features are all on the same scale (values are 0 or 1), so they would not benefit from scaling. In this case, increasing `max_iter` is the right thing to do.

Finally, we can check the model's generalization performance only using the categorical columns.
from sklearn.model_selection import cross_validate

cv_results = cross_validate(model, data_categorical, target)
cv_results

scores = cv_results["test_score"]
print(f"The accuracy is: {scores.mean():.3f} +/- {scores.std():.3f}")
_____no_output_____
CC-BY-4.0
notebooks/03_categorical_pipeline.ipynb
parmentelat/scikit-learn-mooc
Box Plots

The following illustrates some options for the boxplot in statsmodels. These include `violinplot` and `beanplot`.
%matplotlib inline

import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm
_____no_output_____
BSD-3-Clause
examples/notebooks/plots_boxplots.ipynb
chengevo/statsmodels
Bean Plots

The following example is taken from the docstring of `beanplot`. We use the American National Election Survey 1996 dataset, which has Party Identification of respondents as the independent variable and (among other data) age as the dependent variable.
data = sm.datasets.anes96.load_pandas()
party_ID = np.arange(7)
labels = ["Strong Democrat", "Weak Democrat", "Independent-Democrat",
          "Independent-Independent", "Independent-Republican",
          "Weak Republican", "Strong Republican"]
_____no_output_____
BSD-3-Clause
examples/notebooks/plots_boxplots.ipynb
chengevo/statsmodels
Group age by party ID, and create a bean plot with it:
plt.rcParams['figure.subplot.bottom'] = 0.23  # keep labels visible
plt.rcParams['figure.figsize'] = (10.0, 8.0)  # make plot larger in notebook

age = [data.exog['age'][data.endog == id] for id in party_ID]

fig = plt.figure()
ax = fig.add_subplot(111)
plot_opts = {'cutoff_val': 5, 'cutoff_type': 'abs',
             'label_fontsize': 'small', 'label_rotation': 30}
sm.graphics.beanplot(age, ax=ax, labels=labels, plot_opts=plot_opts)
ax.set_xlabel("Party identification of respondent.")
ax.set_ylabel("Age")
#plt.show()

def beanplot(data, plot_opts={}, jitter=False):
    """helper function to try out different plot options
    """
    fig = plt.figure()
    ax = fig.add_subplot(111)
    plot_opts_ = {'cutoff_val': 5, 'cutoff_type': 'abs',
                  'label_fontsize': 'small', 'label_rotation': 30}
    plot_opts_.update(plot_opts)
    sm.graphics.beanplot(data, ax=ax, labels=labels, jitter=jitter,
                         plot_opts=plot_opts_)
    ax.set_xlabel("Party identification of respondent.")
    ax.set_ylabel("Age")
    return fig  # return the figure so the `fig = beanplot(...)` calls below are meaningful

fig = beanplot(age, jitter=True)
fig = beanplot(age, plot_opts={'violin_width': 0.5, 'violin_fc': '#66c2a5'})
fig = beanplot(age, plot_opts={'violin_fc': '#66c2a5'})
fig = beanplot(age, plot_opts={'bean_size': 0.2, 'violin_width': 0.75,
                               'violin_fc': '#66c2a5'})
fig = beanplot(age, jitter=True, plot_opts={'violin_fc': '#66c2a5'})
fig = beanplot(age, jitter=True, plot_opts={'violin_width': 0.5,
                                            'violin_fc': '#66c2a5'})
_____no_output_____
BSD-3-Clause
examples/notebooks/plots_boxplots.ipynb
chengevo/statsmodels
Advanced Box Plots

Based on the example script `example_enhanced_boxplots.py` (by Ralf Gommers).
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm

# Necessary to make horizontal axis labels fit
plt.rcParams['figure.subplot.bottom'] = 0.23

data = sm.datasets.anes96.load_pandas()
party_ID = np.arange(7)
labels = ["Strong Democrat", "Weak Democrat", "Independent-Democrat",
          "Independent-Independent", "Independent-Republican",
          "Weak Republican", "Strong Republican"]

# Group age by party ID.
age = [data.exog['age'][data.endog == id] for id in party_ID]

# Create a violin plot.
fig = plt.figure()
ax = fig.add_subplot(111)
sm.graphics.violinplot(age, ax=ax, labels=labels,
                       plot_opts={'cutoff_val': 5, 'cutoff_type': 'abs',
                                  'label_fontsize': 'small',
                                  'label_rotation': 30})
ax.set_xlabel("Party identification of respondent.")
ax.set_ylabel("Age")
ax.set_title("US national election '96 - Age & Party Identification")

# Create a bean plot.
fig2 = plt.figure()
ax = fig2.add_subplot(111)
sm.graphics.beanplot(age, ax=ax, labels=labels,
                     plot_opts={'cutoff_val': 5, 'cutoff_type': 'abs',
                                'label_fontsize': 'small',
                                'label_rotation': 30})
ax.set_xlabel("Party identification of respondent.")
ax.set_ylabel("Age")
ax.set_title("US national election '96 - Age & Party Identification")

# Create a jitter plot.
fig3 = plt.figure()
ax = fig3.add_subplot(111)
plot_opts = {'cutoff_val': 5, 'cutoff_type': 'abs', 'label_fontsize': 'small',
             'label_rotation': 30, 'violin_fc': (0.8, 0.8, 0.8),
             'jitter_marker': '.', 'jitter_marker_size': 3,
             'bean_color': '#FF6F00', 'bean_mean_color': '#009D91'}
sm.graphics.beanplot(age, ax=ax, labels=labels, jitter=True,
                     plot_opts=plot_opts)
ax.set_xlabel("Party identification of respondent.")
ax.set_ylabel("Age")
ax.set_title("US national election '96 - Age & Party Identification")

# Create an asymmetrical jitter plot.
ix = data.exog['income'] < 16  # incomes < $30k
age = data.exog['age'][ix]
endog = data.endog[ix]
age_lower_income = [age[endog == id] for id in party_ID]

ix = data.exog['income'] >= 20  # incomes > $50k
age = data.exog['age'][ix]
endog = data.endog[ix]
age_higher_income = [age[endog == id] for id in party_ID]

fig = plt.figure()
ax = fig.add_subplot(111)
plot_opts['violin_fc'] = (0.5, 0.5, 0.5)
plot_opts['bean_show_mean'] = False
plot_opts['bean_show_median'] = False
plot_opts['bean_legend_text'] = r'Income < \$30k'
plot_opts['cutoff_val'] = 10
sm.graphics.beanplot(age_lower_income, ax=ax, labels=labels, side='left',
                     jitter=True, plot_opts=plot_opts)
plot_opts['violin_fc'] = (0.7, 0.7, 0.7)
plot_opts['bean_color'] = '#009D91'
plot_opts['bean_legend_text'] = r'Income > \$50k'
sm.graphics.beanplot(age_higher_income, ax=ax, labels=labels, side='right',
                     jitter=True, plot_opts=plot_opts)
ax.set_xlabel("Party identification of respondent.")
ax.set_ylabel("Age")
ax.set_title("US national election '96 - Age & Party Identification")

# Show all plots.
#plt.show()
_____no_output_____
BSD-3-Clause
examples/notebooks/plots_boxplots.ipynb
chengevo/statsmodels
TEST for matrix_facto_10_embeddings_100_epochs

Deep recommender on top of Amazon's Clean Clothing, Shoes and Jewelry explicit rating dataset.

Frame the recommendation system as a rating prediction machine learning problem and create a hybrid architecture that mixes the collaborative and content-based filtering approaches:

- Collaborative part: predict item ratings in order to recommend to the user items that he is likely to rate high.
- Content-based part: use metadata inputs (such as price and title) about items to find similar items to recommend.

Create 2 explicit recommendation engine models based on 2 machine learning architectures using Keras:

1. a matrix factorization model
2. a deep neural network model.

Compare the results of the different models and configurations to find the "best" predicting model. Use the best model for recommending items to users.
### name of model
modname = 'matrix_facto_10_embeddings_100_epochs'
### number of epochs
num_epochs = 100
### size of embedding
embedding_size = 10

# import sys
# !{sys.executable} -m pip install --upgrade pip
# !{sys.executable} -m pip install sagemaker-experiments
# !{sys.executable} -m pip install pandas
# !{sys.executable} -m pip install numpy
# !{sys.executable} -m pip install matplotlib
# !{sys.executable} -m pip install boto3
# !{sys.executable} -m pip install sagemaker
# !{sys.executable} -m pip install pyspark
# !{sys.executable} -m pip install ipython-autotime
# !{sys.executable} -m pip install surprise
# !{sys.executable} -m pip install smart_open
# !{sys.executable} -m pip install pyarrow
# !{sys.executable} -m pip install fastparquet

# Check Java version
# !sudo yum -y update
# # Need to use Java 1.8.0
# !sudo yum remove jre-1.7.0-openjdk -y
!java -version
# !sudo update-alternatives --config java
# !pip install pyarrow fastparquet
# !pip install ipython-autotime
# !pip install tqdm pydot pydotplus pydot_ng

#### To measure all running time
# https://github.com/cpcloud/ipython-autotime
%load_ext autotime

%pylab inline
import warnings
warnings.filterwarnings("ignore")
%matplotlib inline

import re
import seaborn as sbn
import nltk
import tqdm as tqdm
import sqlite3
import pandas as pd
import numpy as np
from pandas import DataFrame
import string
import pydot
import pydotplus
import pydot_ng
import pickle
import time
import gzip
import os
os.getcwd()

import matplotlib.pyplot as plt
from math import floor, ceil

#from nltk.corpus import stopwords
#stop = stopwords.words("english")
from nltk.stem.porter import PorterStemmer
english_stemmer = nltk.stem.SnowballStemmer('english')
from nltk.tokenize import word_tokenize

from sklearn.metrics import accuracy_score, confusion_matrix, roc_curve, auc, classification_report, mean_squared_error, mean_absolute_error
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.model_selection import train_test_split
from sklearn import metrics
from sklearn.svm import LinearSVC
from sklearn.neighbors import NearestNeighbors
from sklearn.linear_model import LogisticRegression
from sklearn import neighbors
from scipy.spatial.distance import cosine
from sklearn.feature_selection import SelectKBest
from IPython.display import SVG

# Tensorflow
import tensorflow as tf

# Keras
from keras.models import Sequential, Model, load_model, save_model
from keras.callbacks import ModelCheckpoint
from keras.layers import Dense, Activation, Dropout, Input, Masking, TimeDistributed, LSTM, Conv1D, Embedding
from keras.layers import GRU, Bidirectional, BatchNormalization, Reshape
from keras.optimizers import Adam
from keras.layers.core import Reshape, Dropout, Dense
from keras.layers.merge import Multiply, Dot, Concatenate
from keras.layers.embeddings import Embedding
from keras import optimizers
from keras.callbacks import ModelCheckpoint
from keras.utils.vis_utils import model_to_dot
Populating the interactive namespace from numpy and matplotlib WARNING:tensorflow:From /home/ec2-user/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/tensorflow_core/__init__.py:1467: The name tf.estimator.inputs is deprecated. Please use tf.compat.v1.estimator.inputs instead. time: 3.5 s
Apache-2.0
Keras-DeepRecommender-Clothing-Shoes-Jewelry/2_Modeling/matrix_facto_10_embeddings_100_epochs.ipynb
zirubak/dse260-CapStone-Amazon
Set and Check GPUs
# Session
from keras import backend as K

def set_check_gpu():
    cfg = K.tf.ConfigProto()
    cfg.gpu_options.per_process_gpu_memory_fraction = 1  # allow all of the GPU memory to be allocated
    # for 8 GPUs
    # cfg.gpu_options.visible_device_list = "0,1,2,3,4,5,6,7"  # "0,1"
    # for 1 GPU
    cfg.gpu_options.visible_device_list = "0"
    # cfg.gpu_options.allow_growth = True  # Don't pre-allocate memory; dynamically allocate the memory used on the GPU as-needed
    # cfg.log_device_placement = True  # to log device placement (on which device the operation ran)

    sess = K.tf.Session(config=cfg)
    K.set_session(sess)  # set this TensorFlow session as the default session for Keras

    print("* TF version: ", [tf.__version__, tf.test.is_gpu_available()])
    print("* List of GPU(s): ", tf.config.experimental.list_physical_devices())
    print("* Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))

    os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
    # set for 8 GPUs
    # os.environ["CUDA_VISIBLE_DEVICES"] = "0,1,2,3,4,5,6,7"
    # set for 1 GPU
    os.environ["CUDA_VISIBLE_DEVICES"] = "0"

    # Tf debugging option
    tf.debugging.set_log_device_placement(True)

    gpus = tf.config.experimental.list_physical_devices('GPU')
    if gpus:
        try:
            # Currently, memory growth needs to be the same across GPUs
            for gpu in gpus:
                tf.config.experimental.set_memory_growth(gpu, True)
            logical_gpus = tf.config.experimental.list_logical_devices('GPU')
            print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
        except RuntimeError as e:
            # Memory growth must be set before GPUs have been initialized
            print(e)

    # print(tf.config.list_logical_devices('GPU'))
    print(tf.config.experimental.list_physical_devices('GPU'))
    print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))

set_check_gpu()

# reset GPU memory & Keras session
def reset_keras():
    try:
        del classifier
        del model
    except:
        pass

    K.clear_session()
    K.get_session().close()
    # sess = K.get_session()

    cfg = K.tf.ConfigProto()
    cfg.gpu_options.per_process_gpu_memory_fraction = 1
    # cfg.gpu_options.visible_device_list = "0,1,2,3,4,5,6,7"  # "0,1"
    cfg.gpu_options.visible_device_list = "0"  # "0,1"
    cfg.gpu_options.allow_growth = True  # dynamically grow the memory used on the GPU

    sess = K.tf.Session(config=cfg)
    K.set_session(sess)  # set this TensorFlow session as the default session for Keras
time: 2.51 ms
Apache-2.0
Keras-DeepRecommender-Clothing-Shoes-Jewelry/2_Modeling/matrix_facto_10_embeddings_100_epochs.ipynb
zirubak/dse260-CapStone-Amazon
Load dataset and analysis using Spark

Download and prepare data:

1. Read the data: read the data from the Amazon reviews dataset. Use the dataset in which all users and items have at least 5 reviews. Location of dataset: https://nijianmo.github.io/amazon/index.html
import pandas as pd
import boto3
import sagemaker
from sagemaker import get_execution_role
from sagemaker.session import Session
from sagemaker.analytics import ExperimentAnalytics
import gzip
import json

from pyspark.ml import Pipeline
from pyspark.sql.types import StructField, StructType, StringType, DoubleType
from pyspark.ml.feature import StringIndexer, VectorIndexer, OneHotEncoder, VectorAssembler
from pyspark.sql.functions import *

# spark imports
from pyspark.sql import SparkSession
from pyspark.sql.functions import UserDefinedFunction, explode, desc
from pyspark.sql.types import StringType, ArrayType
from pyspark.ml.evaluation import RegressionEvaluator

import os
import pandas as pd
import pyarrow
import fastparquet
# from pandas_profiling import ProfileReport

# !aws s3 cp s3://dse-cohort5-group1/2-Keras-DeepRecommender/dataset/Clean_Clothing_Shoes_and_Jewelry_5_clean.parquet ./data/
!ls -alh ./data
total 3.3G drwxrwxr-x 5 ec2-user ec2-user 4.0K May 26 16:08 . drwxrwxr-x 8 ec2-user ec2-user 4.0K May 26 19:47 .. -rw-rw-r-- 1 ec2-user ec2-user 308M May 26 15:35 Clean_Clothing_Shoes_and_Jewelry_5_clean.parquet drwxrwxr-x 2 ec2-user ec2-user 4.0K May 26 15:46 Cleaned_meta_Clothing_Shoes_and_Jewelry.parquet -rw-rw-r-- 1 ec2-user ec2-user 1.2G Nov 21 2019 Clothing_Shoes_and_Jewelry_5.json.gz drwxrwxr-x 2 ec2-user ec2-user 4.0K May 26 15:34 Clothing_Shoes_and_Jewelry_5.parquet -rw-rw-r-- 1 ec2-user ec2-user 31 May 26 15:34 for_dataset.txt drwxrwxr-x 2 ec2-user ec2-user 4.0K May 26 15:34 .ipynb_checkpoints -rw-rw-r-- 1 ec2-user ec2-user 1.5G Oct 15 2019 meta_Clothing_Shoes_and_Jewelry.json.gz -rw-rw-r-- 1 ec2-user ec2-user 71M May 26 16:08 ratings_test.parquet -rw-rw-r-- 1 ec2-user ec2-user 282M May 26 16:08 ratings_train.parquet time: 131 ms
Apache-2.0
Keras-DeepRecommender-Clothing-Shoes-Jewelry/2_Modeling/matrix_facto_10_embeddings_100_epochs.ipynb
zirubak/dse260-CapStone-Amazon
Read cleaned dataset from parquet files
review_data = pd.read_parquet("./data/Clean_Clothing_Shoes_and_Jewelry_5_clean.parquet")
review_data[:3]
review_data.shape
_____no_output_____
Apache-2.0
Keras-DeepRecommender-Clothing-Shoes-Jewelry/2_Modeling/matrix_facto_10_embeddings_100_epochs.ipynb
zirubak/dse260-CapStone-Amazon
2. Arrange and clean the data

Rearrange the columns by relevance and rename column names.
review_data.columns

review_data = review_data[['asin', 'image', 'summary', 'reviewText', 'overall',
                           'reviewerID', 'reviewerName', 'reviewTime']]
review_data.rename(columns={'overall': 'score', 'reviewerID': 'user_id',
                            'reviewerName': 'user_name'}, inplace=True)

# the variable names after rename in the modified data frame
list(review_data)
_____no_output_____
Apache-2.0
Keras-DeepRecommender-Clothing-Shoes-Jewelry/2_Modeling/matrix_facto_10_embeddings_100_epochs.ipynb
zirubak/dse260-CapStone-Amazon
Add Metadata

Metadata includes descriptions, price, sales-rank, brand info, and co-purchasing links:

- asin - ID of the product, e.g. 0000031852
- title - name of the product
- price - price in US dollars (at time of crawl)
- imUrl - url of the product image
- related - related products (also bought, also viewed, bought together, buy after viewing)
- salesRank - sales rank information
- brand - brand name
- categories - list of categories the product belongs to
# !aws s3 cp s3://dse-cohort5-group1/2-Keras-DeepRecommender/dataset/Cleaned_meta_Clothing_Shoes_and_Jewelry.parquet ./data/

all_info = pd.read_parquet("./data/Cleaned_meta_Clothing_Shoes_and_Jewelry.parquet")
all_info.head(n=5)
_____no_output_____
Apache-2.0
Keras-DeepRecommender-Clothing-Shoes-Jewelry/2_Modeling/matrix_facto_10_embeddings_100_epochs.ipynb
zirubak/dse260-CapStone-Amazon
Arrange and clean the data

- Cleaning, handling missing data, normalization, etc.
- For the algorithm in Keras to work, remap all item_ids and user_ids to an integer between 0 and the total number of users or the total number of items.
all_info.columns

items = all_info.asin.unique()
item_map = {i: val for i, val in enumerate(items)}
inverse_item_map = {val: i for i, val in enumerate(items)}
all_info["old_item_id"] = all_info["asin"]  # copying for join with metadata
all_info["item_id"] = all_info["asin"].map(inverse_item_map)
items = all_info.item_id.unique()
print("We have %d unique items in metadata " % items.shape[0])

all_info['description'] = all_info['description'].fillna(all_info['title'].fillna('no_data'))
all_info['title'] = all_info['title'].fillna(all_info['description'].fillna('no_data').apply(str).str[:20])
all_info['image'] = all_info['image'].fillna('no_data')
all_info['price'] = pd.to_numeric(all_info['price'], errors="coerce")
all_info['price'] = all_info['price'].fillna(all_info['price'].median())

users = review_data.user_id.unique()
user_map = {i: val for i, val in enumerate(users)}
inverse_user_map = {val: i for i, val in enumerate(users)}
review_data["old_user_id"] = review_data["user_id"]
review_data["user_id"] = review_data["user_id"].map(inverse_user_map)

items_reviewed = review_data.asin.unique()
review_data["old_item_id"] = review_data["asin"]  # copying for join with metadata
review_data["item_id"] = review_data["asin"].map(inverse_item_map)
items_reviewed = review_data.item_id.unique()

users = review_data.user_id.unique()
print("We have %d unique users" % users.shape[0])
print("We have %d unique items reviewed" % items_reviewed.shape[0])
# We have 192403 unique users in the "small" dataset
# We have 63001 unique items reviewed in the "small" dataset

review_data.head(3)
_____no_output_____
Apache-2.0
Keras-DeepRecommender-Clothing-Shoes-Jewelry/2_Modeling/matrix_facto_10_embeddings_100_epochs.ipynb
zirubak/dse260-CapStone-Amazon
Adding the review count and average to the metadata
#items_nb = review_data['old_item_id'].value_counts().reset_index()
items_avg = review_data.drop(['summary', 'reviewText', 'user_id', 'asin', 'user_name',
                              'reviewTime', 'old_user_id', 'item_id'], axis=1) \
                       .groupby('old_item_id').agg(['count', 'mean']).reset_index()
items_avg.columns = ['old_item_id', 'num_ratings', 'avg_rating']
#items_avg.head(5)
items_avg['num_ratings'].describe()

all_info = pd.merge(all_info, items_avg, how='left', left_on='asin', right_on='old_item_id')
pd.set_option('display.max_colwidth', 100)
all_info.head(2)
_____no_output_____
Apache-2.0
Keras-DeepRecommender-Clothing-Shoes-Jewelry/2_Modeling/matrix_facto_10_embeddings_100_epochs.ipynb
zirubak/dse260-CapStone-Amazon
Explicit feedback (Reviewed Dataset) Recommender System

Explicit feedback is when users voluntarily give rating information on what they like and dislike.

- In this case, I have explicit item ratings ranging from one to five.
- Framed the recommendation system as a rating prediction machine learning problem:
  - Predict an item's ratings in order to be able to recommend to a user an item that he is likely to rate high if he buys it.

To evaluate the model, I randomly separate the data into a training and test set.
ratings_train, ratings_test = train_test_split(review_data, test_size=0.1, random_state=0)
ratings_train.shape
ratings_test.shape
_____no_output_____
Apache-2.0
Keras-DeepRecommender-Clothing-Shoes-Jewelry/2_Modeling/matrix_facto_10_embeddings_100_epochs.ipynb
zirubak/dse260-CapStone-Amazon
Adding Metadata to the train set

Create an architecture that mixes the collaborative and content based filtering approaches:
```
- Collaborative part: Predict items ratings to recommend to the user items which he is likely to rate high according to learnt item & user embeddings (learn similarity from interactions).
- Content based part: Use metadata inputs (such as price and title) about items to recommend to the user contents similar to those he rated high (learn similarity of item attributes).
```

Adding the title and price

- Add the metadata of the items in the training and test datasets.
# # creating metadata mappings
# titles = all_info['title'].unique()
# titles_map = {i:val for i,val in enumerate(titles)}
# inverse_titles_map = {val:i for i,val in enumerate(titles)}

# price = all_info['price'].unique()
# price_map = {i:val for i,val in enumerate(price)}
# inverse_price_map = {val:i for i,val in enumerate(price)}

# print ("We have %d prices" %price.shape)
# print ("We have %d titles" %titles.shape)

# all_info['price_id'] = all_info['price'].map(inverse_price_map)
# all_info['title_id'] = all_info['title'].map(inverse_titles_map)

# # creating dict from
# item2prices = {}
# for val in all_info[['item_id','price_id']].dropna().drop_duplicates().iterrows():
#     item2prices[val[1]["item_id"]] = val[1]["price_id"]

# item2titles = {}
# for val in all_info[['item_id','title_id']].dropna().drop_duplicates().iterrows():
#     item2titles[val[1]["item_id"]] = val[1]["title_id"]

# # populating the rating dataset with item metadata info
# ratings_train["price_id"] = ratings_train["item_id"].map(lambda x : item2prices[x])
# ratings_train["title_id"] = ratings_train["item_id"].map(lambda x : item2titles[x])

# # populating the test dataset with item metadata info
# ratings_test["price_id"] = ratings_test["item_id"].map(lambda x : item2prices[x])
# ratings_test["title_id"] = ratings_test["item_id"].map(lambda x : item2titles[x])
_____no_output_____
Apache-2.0
Keras-DeepRecommender-Clothing-Shoes-Jewelry/2_Modeling/matrix_facto_10_embeddings_100_epochs.ipynb
zirubak/dse260-CapStone-Amazon
create rating train/test dataset and upload into S3
# !aws s3 cp s3://dse-cohort5-group1/2-Keras-DeepRecommender/dataset/ratings_test.parquet ./data/
# !aws s3 cp s3://dse-cohort5-group1/2-Keras-DeepRecommender/dataset/ratings_train.parquet ./data/

ratings_test = pd.read_parquet('./data/ratings_test.parquet')
ratings_train = pd.read_parquet('./data/ratings_train.parquet')
ratings_train[:3]
ratings_train.shape
_____no_output_____
Apache-2.0
Keras-DeepRecommender-Clothing-Shoes-Jewelry/2_Modeling/matrix_facto_10_embeddings_100_epochs.ipynb
zirubak/dse260-CapStone-Amazon
**Define embeddings**

The $\underline{embeddings}$ are low-dimensional hidden representations of users and items, i.e. for each item I can find its properties and for each user I can encode how much they like those properties, so I can determine attitudes or preferences of users by a small number of hidden factors.

Throughout the training, I learn two new low-dimensional dense representations: one embedding for the users and another one for the items.
price = all_info['price'].unique()
titles = all_info['title'].unique()
_____no_output_____
Apache-2.0
Keras-DeepRecommender-Clothing-Shoes-Jewelry/2_Modeling/matrix_facto_10_embeddings_100_epochs.ipynb
zirubak/dse260-CapStone-Amazon
1. Matrix factorization approach
# declare input embeddings to the model
# User input
user_id_input = Input(shape=[1], name='user')
# Item Input
item_id_input = Input(shape=[1], name='item')
price_id_input = Input(shape=[1], name='price')
title_id_input = Input(shape=[1], name='title')

# define the size of embeddings as a parameter
# Check 5, 10, 15, 20, 50
user_embedding_size = embedding_size
item_embedding_size = embedding_size
price_embedding_size = embedding_size
title_embedding_size = embedding_size

# apply an embedding layer to all inputs
user_embedding = Embedding(output_dim=user_embedding_size, input_dim=users.shape[0],
                           input_length=1, name='user_embedding')(user_id_input)
item_embedding = Embedding(output_dim=item_embedding_size, input_dim=items_reviewed.shape[0],
                           input_length=1, name='item_embedding')(item_id_input)
price_embedding = Embedding(output_dim=price_embedding_size, input_dim=price.shape[0],
                            input_length=1, name='price_embedding')(price_id_input)
title_embedding = Embedding(output_dim=title_embedding_size, input_dim=titles.shape[0],
                            input_length=1, name='title_embedding')(title_id_input)

# reshape from shape (batch_size, input_length, embedding_size) to (batch_size, embedding_size)
user_vecs = Reshape([user_embedding_size])(user_embedding)
item_vecs = Reshape([item_embedding_size])(item_embedding)
price_vecs = Reshape([price_embedding_size])(price_embedding)
title_vecs = Reshape([title_embedding_size])(title_embedding)
_____no_output_____
Apache-2.0
Keras-DeepRecommender-Clothing-Shoes-Jewelry/2_Modeling/matrix_facto_10_embeddings_100_epochs.ipynb
zirubak/dse260-CapStone-Amazon
Matrix Factorisation works on the principle that we can learn the user and the item embeddings, and then predict the rating for each user-item pair by performing a dot (or scalar) product between the respective user and item embedding.
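Before wiring this up in Keras, the same principle can be sketched in plain NumPy with toy, untrained vectors (illustration only — in the notebook the learnt Keras weights play this role, and `k` mirrors `embedding_size`):

```python
import numpy as np

# Toy embedding tables: one k-dim vector per user and per item (random, not learnt)
rng = np.random.default_rng(0)
n_users, n_items, k = 4, 3, 10
user_emb = rng.normal(size=(n_users, k))
item_emb = rng.normal(size=(n_items, k))

def predict_rating(u, i):
    # predicted rating = dot product of the user and item vectors
    return float(user_emb[u] @ item_emb[i])

print(predict_rating(0, 1))
```

Training then nudges the two tables so that these dot products approach the observed ratings.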
# Applying matrix factorization: declare the output as being the dot product
# between the two embeddings: items and users
y = Dot(1, normalize=False)([user_vecs, item_vecs])

!mkdir -p ./models

# create model
model = Model(inputs=[user_id_input, item_id_input], outputs=y)

# compile model
model.compile(loss='mse', optimizer="adam")

# set save location for model
save_path = "./models"
thename = save_path + '/' + modname + '.h5'
mcheck = ModelCheckpoint(thename, monitor='val_loss', save_best_only=True)

# fit model
history = model.fit([ratings_train["user_id"], ratings_train["item_id"]],
                    ratings_train["score"],
                    batch_size=64,
                    epochs=num_epochs,
                    validation_split=0.2,
                    callbacks=[mcheck],
                    shuffle=True)

# Save the fitted model history to a file
with open('./histories/' + modname + '.pkl', 'wb') as file_pi:
    pickle.dump(history.history, file_pi)
print("Save history in ", './histories/' + modname + '.pkl')

def disp_model(path, file, suffix):
    model = load_model(path + file + suffix)
    ## Summarise the model
    model.summary()
    # Extract the learnt user and item embeddings, i.e., tables with one row per
    # item or user, where the number of columns is the dimension of the trained embedding.
    # In our case, the embeddings correspond exactly to the weights of the model:
    weights = model.get_weights()
    print("embeddings / weights shapes", [w.shape for w in weights])
    return model

model_path = "./models/"

def plt_pickle(path, file, suffix):
    with open(path + file + suffix, 'rb') as file_pi:
        thepickle = pickle.load(file_pi)
    plot(thepickle["loss"], label='Train Error ' + file, linestyle="--")
    plot(thepickle["val_loss"], label='Validation Error ' + file)
    plt.legend()
    plt.xlabel("Epoch")
    plt.ylabel("Error")
    ##plt.ylim(0, 0.1)
    return pd.DataFrame(thepickle, columns=['loss', 'val_loss'])

hist_path = "./histories/"

model = disp_model(model_path, modname, '.h5')

# Display the model using keras
SVG(model_to_dot(model).create(prog='dot', format='svg'))

x = plt_pickle(hist_path, modname, '.pkl')
x.head(20).transpose()
_____no_output_____
Apache-2.0
Keras-DeepRecommender-Clothing-Shoes-Jewelry/2_Modeling/matrix_facto_10_embeddings_100_epochs.ipynb
zirubak/dse260-CapStone-Amazon
#let us import the pandas library import pandas as pd
_____no_output_____
MIT
Moringa_Data_Science_Prep_W4_Independent_Project_2021_07_Cindy_Gachuhi_Python_IP.ipynb
CindyMG/Week4IP
From the following data sources, we will acquire our datasets for analysis:

- http://bit.ly/autolib_dataset
- https://drive.google.com/a/moringaschool.com/file/d/13DXF2CFWQLeYxxHFekng8HJnH_jtbfpN/view?usp=sharing
# let us create a dataframe from the following url: # http://bit.ly/autolib_dataset df_url = "http://bit.ly/autolib_dataset" Autolib_dataset = pd.read_csv(df_url) Autolib_dataset # let us identify the columns with null values and drop them # Autolib_dataset.isnull() Autolib_dataset.dropna(axis=1,how='all',inplace=True) Autolib_dataset # Dropping unnecessary columns D_autolib= Autolib_dataset.drop(Autolib_dataset.columns[[8,9,10,15,17,18,19]], axis = 1) D_autolib # let us access the hour column from our dataframe D_autolib['hour'] # Now, we want to identify the most popular hour in which the Blue cars are picked up # To do this, we are going to use the mode() function # D_autolib['hour'].mode()
_____no_output_____
MIT
Moringa_Data_Science_Prep_W4_Independent_Project_2021_07_Cindy_Gachuhi_Python_IP.ipynb
CindyMG/Week4IP
Assignment 2 Batch 7

Ans.1. **List properties:** ordered, iterable, mutable, can contain multiple data types

List default functions are:
- append() - adds an item at the end of the list
- index() - returns the index of the first occurrence of a value
- count() - returns the number of occurrences of a value
list1 = ['Abhilasha','Anamika','Dhanya',1,2,3,4] list1.append('Matu') list1 list1.remove('Matu') list1 list1.append(1) list1 list1.count(1) list1.pop(-1) list1.clear() #Remove all items from list. list1
_____no_output_____
Apache-2.0
Python Essentials B7 Assignment Day2.ipynb
dhanyasingh/LetsUpgrade-Python-B7
Ans.2. **Dictionary properties:** unordered, iterable, mutable, can contain multiple data types
- Made of key-value pairs
- Keys must be unique, and can be strings, numbers, or tuples
- Values can be any type

Dictionary default functions:
- get() - retrieves the value for a key (optionally returning a default)
- items() - returns the key-value pairs
- keys() - returns the keys
- pop() - removes a key and returns its value
# create an empty dictionary (two ways) empty_dict = {} empty_dict = dict() # create a dictionary (two ways) family = {'dad':'Sachin', 'mom':'Geeta', 'size':6} family = dict(dad='Sachin', mom='Geeta', size=6) family # pass a key to return its value family['dad'] # return the number of key-value pairs len(family) 'Geeta' in family.values() # add a new entry family['cat'] = 'snowball' family
_____no_output_____
Apache-2.0
Python Essentials B7 Assignment Day2.ipynb
dhanyasingh/LetsUpgrade-Python-B7
Ans.3. **Set properties:** unordered, iterable, mutable, can contain multiple data types
- Made of unique elements (strings, numbers, or tuples)
- Like dictionaries, but with keys only (no values)
- Useful for set operations such as union, intersection, difference, and disjointness tests
st = {1,2,4,5,'Dhanya', 7777} st1 = {'Matu',7777, 6,7,8,9} st.intersection(st1) st.issubset(st1) st1.issubset(st) st.union(st1) st.difference(st1) st.isdisjoint(st1)
_____no_output_____
Apache-2.0
Python Essentials B7 Assignment Day2.ipynb
dhanyasingh/LetsUpgrade-Python-B7
Ans.4. **Tuple properties:** ordered, iterable, immutable, can contain multiple data types

Like lists, but they can't change size or contents after creation.
# create a tuple directly digits = (0, 1, 'two') # create a tuple from a list digits = tuple([0, 1, 'two']) digits[2] len(digits) digits.count(0) #count no.of instances digits.index(1)
_____no_output_____
Apache-2.0
Python Essentials B7 Assignment Day2.ipynb
dhanyasingh/LetsUpgrade-Python-B7
Ans.5. **String properties:** an ordered, immutable sequence of characters; supports concatenation (+) and repetition (*), along with many built-in methods.
name1 = 'Kumari Himanshi' name2 = 'Piyushi Srivastava' name1 + " " + name2 type(name1) name1*2 #repetition of string name2*5
_____no_output_____
Apache-2.0
Python Essentials B7 Assignment Day2.ipynb
dhanyasingh/LetsUpgrade-Python-B7
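The string cell above only demonstrates concatenation and repetition; a few more of Python's built-in string methods, for illustration (the sample string is made up):

```python
s = "Lets Upgrade Python"

# A few commonly used built-in string methods (illustrative, not exhaustive).
print(s.upper())                  # 'LETS UPGRADE PYTHON'
print(s.lower())                  # 'lets upgrade python'
print(s.split())                  # ['Lets', 'Upgrade', 'Python']
print(s.replace("Python", "B7"))  # 'Lets Upgrade B7'
print(s.find("Python"))           # 13 - index where the substring starts
print(len(s))                     # 19 - number of characters
```

All of these return new values; because strings are immutable, `s` itself is never changed.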
Description

This notebook is used to request the computation of an average time series of a WaPOR data layer over an area, using the WaPOR API. You will need a WaPOR API token to use this notebook.

Step 1: Read API Token

Get your API token from https://wapor.apps.fao.org/profile. Enter your API token when running the cell below.
import requests import pandas as pd path_query=r'https://io.apps.fao.org/gismgr/api/v1/query/' path_sign_in=r'https://io.apps.fao.org/gismgr/api/v1/iam/sign-in/' APIToken=input('Your API token: ')
Your API token: Enter your API token
CC0-1.0
notebooks/Module1_unit5/4_AreaStatsTimeSeries.ipynb
LaurenZ-IHE/WAPOROCW
Step 2: Get Authorization AccessToken

Use the input API token to get an AccessToken for authorization.
resp_signin=requests.post(path_sign_in,headers={'X-GISMGR-API-KEY':APIToken}) resp_signin = resp_signin.json() AccessToken=resp_signin['response']['accessToken'] AccessToken
_____no_output_____
CC0-1.0
notebooks/Module1_unit5/4_AreaStatsTimeSeries.ipynb
LaurenZ-IHE/WAPOROCW
Step 3: Write Query Payload

For more examples of AreaStatsTimeSeries query payloads, visit https://io.apps.fao.org/gismgr/api/v1/swagger-ui/examples/AreaStatsTimeSeries.txt
crs="EPSG:4326" #coordinate reference system cube_code="L1_PCP_E" workspace='WAPOR_2' start_date="2009-01-01" end_date="2019-01-01" #get datacube measure cube_url=f'https://io.apps.fao.org/gismgr/api/v1/catalog/workspaces/{workspace}/cubes/{cube_code}/measures' resp=requests.get(cube_url).json() measure=resp['response']['items'][0]['code'] print('MEASURE: ',measure) #get datacube time dimension cube_url=f'https://io.apps.fao.org/gismgr/api/v1/catalog/workspaces/{workspace}/cubes/{cube_code}/dimensions' resp=requests.get(cube_url).json() items=pd.DataFrame.from_dict(resp['response']['items']) dimension=items[items.type=='TIME']['code'].values[0] print('DIMENSION: ',dimension)
MEASURE: WATER_MM DIMENSION: DAY
CC0-1.0
notebooks/Module1_unit5/4_AreaStatsTimeSeries.ipynb
LaurenZ-IHE/WAPOROCW
Define area by coordinate extent
bbox= [37.95883206252312, 7.89534, 43.32093, 12.3873979377346] #latlon xmin,ymin,xmax,ymax=bbox[0],bbox[1],bbox[2],bbox[3] Polygon=[ [xmin,ymin], [xmin,ymax], [xmax,ymax], [xmax,ymin], [xmin,ymin] ] query_areatimeseries={ "type": "AreaStatsTimeSeries", "params": { "cube": { "code": cube_code, #cube_code "workspaceCode": workspace, #workspace code: use WAPOR for v1.0 and WAPOR_2 for v2.1 "language": "en" }, "dimensions": [ { "code": dimension, #use DAY DEKAD MONTH or YEAR "range": f"[{start_date},{end_date})" #start date and endate } ], "measures": [ measure ], "shape": { "type": "Polygon", "properties": { "name": crs #coordinate reference system }, "coordinates": [ Polygon ] } } } query_areatimeseries
_____no_output_____
CC0-1.0
notebooks/Module1_unit5/4_AreaStatsTimeSeries.ipynb
LaurenZ-IHE/WAPOROCW
OR define area by reading GeoJSON
from osgeo import ogr  # modern GDAL installs expose ogr under the osgeo package

shp_fh = r".\data\Awash_shapefile.shp"
shpfile = ogr.Open(shp_fh)
layer = shpfile.GetLayer()
epsg_code = layer.GetSpatialRef().GetAuthorityCode(None)
shape = layer.GetFeature(0).ExportToJson(as_object=True)['geometry']  # get geometry of shapefile as a JSON object
shape["properties"] = {"name": "EPSG:{0}".format(epsg_code)}  # lat/lon projection

query_areatimeseries = {
  "type": "AreaStatsTimeSeries",
  "params": {
    "cube": {
      "code": cube_code,
      "workspaceCode": workspace,
      "language": "en"
    },
    "dimensions": [
      {
        "code": dimension,
        "range": f"[{start_date},{end_date})"
      }
    ],
    "measures": [
      measure
    ],
    "shape": shape
  }
}
query_areatimeseries
_____no_output_____
CC0-1.0
notebooks/Module1_unit5/4_AreaStatsTimeSeries.ipynb
LaurenZ-IHE/WAPOROCW
Step 4: Post the Query Payload with the AccessToken in the Header

The response contains a URL for querying the job status.
resp_query=requests.post(path_query,headers={'Authorization':'Bearer {0}'.format(AccessToken)}, json=query_areatimeseries) resp_query = resp_query.json() job_url=resp_query['response']['links'][0]['href'] job_url
_____no_output_____
CC0-1.0
notebooks/Module1_unit5/4_AreaStatsTimeSeries.ipynb
LaurenZ-IHE/WAPOROCW
Step 5: Get Job Results

It will take some time for the job to finish. When it does, its status changes from 'RUNNING' to 'COMPLETED' or 'COMPLETED WITH ERRORS'. If it is COMPLETED, the area time series results can be retrieved from the response 'output'.
import time

print('RUNNING', end=" ")
while True:
    resp = requests.get(job_url).json()
    status = resp['response']['status']
    if status == 'RUNNING':
        print('.', end=" ")
        time.sleep(5)  # wait between polls instead of hammering the API
    elif status == 'COMPLETED':
        results = resp['response']['output']
        df = pd.DataFrame(results['items'], columns=results['header'])
        break
    elif status == 'COMPLETED WITH ERRORS':
        print(resp['response']['log'])
        break

df

df.index = pd.to_datetime(df.day, format='%Y-%m-%d')
df.plot()
_____no_output_____
CC0-1.0
notebooks/Module1_unit5/4_AreaStatsTimeSeries.ipynb
LaurenZ-IHE/WAPOROCW
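Once the daily values are in a DataFrame, pandas can aggregate them further (e.g. to monthly totals). A self-contained sketch with made-up numbers mimicking the response layout — the 'day' and 'WATER_MM' column names come from the dimension and measure printed earlier, while the values here are purely illustrative:

```python
import pandas as pd

# Fake API items: [day, WATER_MM] rows, like results['items'] above.
items = [["2018-01-01", 1.2], ["2018-01-02", 0.0], ["2018-02-01", 3.4]]
df = pd.DataFrame(items, columns=["day", "WATER_MM"])
df.index = pd.to_datetime(df["day"], format="%Y-%m-%d")

# Aggregate daily precipitation to monthly totals (month-start labels).
monthly = df["WATER_MM"].resample("MS").sum()
print(monthly)
```

The same `resample` call applied to the real response would give monthly precipitation sums over the area.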
Keyword / sentence topic loading
keyword_embed = model_distilBert.embed(["volunteering"]) res = cosine_similarity(keyword_embed, model_distilBert.topic_vectors) scores = pd.DataFrame(res, index=["Cosine similiarity"]).T scores["Topic"] = list(range(0,len(scores))) scores["Top words"] = scores["Topic"].apply(lambda x: list(topics_top2Vec.iloc[x,0:3])) scores.sort_values(by="Cosine similiarity", ascending=False, inplace=True) scores.head(10) fig = px.bar(scores.iloc[0:10,:], x='Topic', y='Cosine similiarity', text="Top words", title='10 highest topic loadings') fig.update_layout(xaxis=dict(type='category'), xaxis_title="Topic number") fig.show()
_____no_output_____
Apache-2.0
notebook/REIT-Industrial.ipynb
piinghel/TopicModelling
Most similar documents for a topic
documents, document_scores, document_ids = model_distilBert.search_documents_by_topic(topic_num=5, num_docs=2) for doc, score, doc_id in zip(documents, document_scores, document_ids): print(f"Document: {doc_id}, Filename (Company and year): {df.iloc[doc_id,:].filename}, Score: {score}") print("-----------") print(doc) print("-----------") print() unique_labels = set(model_distilBert.clustering.labels_) model_distilBert._create_topic_vectors() df["topic"] = model_distilBert.clustering.labels_ out = pd.DataFrame(df.groupby(["filename","topic"]).count().iloc[:,0]) out_sorted = (out.iloc[out.index.get_level_values(0) == out.index.get_level_values(0)[0],:]. sort_values(out.columns[0], ascending=False)) out_sorted["topic"] = out_sorted.index.get_level_values(1) out_sorted["top words"] = out_sorted["topic"].apply(lambda x: list(topics_top2Vec.iloc[x, 0:3]) if x >= 0 else list(["Noise topic"])) out_sorted fig = px.bar(out_sorted.head(10), x='topic', y=out.columns[0], text="top words", title='10 highest topic counts') fig.update_layout(xaxis=dict(type='category'), xaxis_title="Topic number", yaxis_title="Count") fig.show() model_distilBert._deduplicate_topics() model_distilBert.topic_vectors.shape model_distilBert.get_num_topics()
_____no_output_____
Apache-2.0
notebook/REIT-Industrial.ipynb
piinghel/TopicModelling
Update model
model_distilBert.n_components = 5 model_distilBert.ngram_range = (1,4) model_distilBert._update_steps(documents=paragraphs, step=1) topic_words, word_scores, topic_nums = model_distilBert.get_topics() topic_sizes, topic_nums = model_distilBert.get_topic_sizes() topics_top2Vec = pd.DataFrame(topic_words).iloc[:,0:10] topics_top2Vec["size"] = topic_sizes topics_top2Vec
_____no_output_____
Apache-2.0
notebook/REIT-Industrial.ipynb
piinghel/TopicModelling
Reflect Tables into SQLAlchemy ORM
# Python SQL toolkit and Object Relational Mapper import sqlalchemy from sqlalchemy.ext.automap import automap_base from sqlalchemy.orm import Session from sqlalchemy import create_engine, func, inspect engine = create_engine("sqlite:///Resources/hawaii.sqlite") # reflect an existing database into a new model Base = automap_base() # reflect the tables Base.prepare(engine, reflect=True) # We can view all of the classes that automap found Base.classes.keys() # Save references to each table Measurement = Base.classes.measurement Station = Base.classes.station # Create our session (link) from Python to the DB session = Session(engine) inspector = inspect(engine)
_____no_output_____
ADSL
climate_starter.ipynb
ishanku/sqlalchemy-challenge
Exploratory Climate Analysis
columns = inspector.get_columns('Measurement')
for column in columns:
    print(column["name"], column["type"])

columns = inspector.get_columns('Station')
for column in columns:
    print(column["name"], column["type"])

# Design a query to retrieve the last 12 months of precipitation data and plot the results
# Calculate the date 1 year ago from the last data point in the database
LatestDate = np.ravel(session.query(Measurement.date).order_by(Measurement.date.desc()).first())
LatestDate = str(LatestDate).replace("-", "").replace("'", "").replace("[", "").replace("]", "")
LatestDate

# Date calculation using regex
import re

# Split year, month and day to form a datetime
CYear = int(re.sub(r'(\d{4})(\d{2})(\d{2})', r'\1', LatestDate))
CMonth = int(re.sub(r'(\d{4})(\d{2})(\d{2})', r'\2', LatestDate))
CDay = int(re.sub(r'(\d{4})(\d{2})(\d{2})', r'\3', LatestDate))
LatestDateFormat = dt.datetime(CYear, CMonth, CDay)

# Subtract a year
from dateutil.relativedelta import relativedelta
OneYearAgoDate = LatestDateFormat + relativedelta(years=-1)

# Convert back to a queryable pattern
Latest = re.sub(r'(\d{4})(\d{2})(\d{2})', r'\1-\2-\3', LatestDate)
OYear = str(OneYearAgoDate.year)
OMonth = str(OneYearAgoDate.month)
ODay = str(OneYearAgoDate.day)
if len(OMonth) == 1:
    OMonth = "0" + OMonth
if len(ODay) == 1:
    ODay = "0" + ODay
OneYearAgo = OYear + "-" + OMonth + "-" + ODay
Latest, OneYearAgo

# Perform a query to retrieve the data and precipitation scores
LastYearPreciptitationData = session.query(Measurement.date, Measurement.prcp).filter(Measurement.date >= OneYearAgo).order_by(Measurement.date.desc()).all()
session.query(Measurement.date, Measurement.prcp).filter(Measurement.date >= OneYearAgo).order_by(Measurement.date.desc()).count()

# Save the query results as a Pandas DataFrame and set the index to the date column
LPData = pd.DataFrame()
for L in LastYearPreciptitationData:
    df = pd.DataFrame({'Date': [L[0]], "Prcp": [L[1]]})
    LPData = LPData.append(df)

# Sort the dataframe by date
LPData = LPData.set_index('Date').sort_values(by="Date", ascending=False)
LPData.head(10)
_____no_output_____
ADSL
climate_starter.ipynb
ishanku/sqlalchemy-challenge
![precipitation](Images/precipitation.png)
# Use Pandas plotting with Matplotlib to plot the data
LPData.plot(rot=90)
plt.ylim(0, 7)
plt.xlabel("Date")
plt.ylabel("Rain (Inches)")
plt.title("Precipitation Analysis")
plt.legend(["Precipitation"])
plt.savefig("./Output/Figure1.png")
plt.show()

# Use Pandas to calculate the summary statistics for the precipitation data
LPData.describe()
_____no_output_____
ADSL
climate_starter.ipynb
ishanku/sqlalchemy-challenge
![describe](Images/describe.png)
# Design a query to show how many stations are available in this dataset
# ---- From the Measurement data
session.query(Measurement.station).group_by(Measurement.station).count()

# ---- From the Station data
session.query(Station).count()

# -- Method 1 -- Using a DataFrame
# What are the most active stations (i.e. which stations have the most rows)?
# List the stations and the counts in descending order.
Stations = session.query(Measurement.station, Measurement.tobs).all()
station_df = pd.DataFrame()
for s in Stations:
    df = pd.DataFrame({"Station": [s.station], "Tobs": [s.tobs]})
    station_df = station_df.append(df)
ActiveStation = station_df.Station.value_counts()
ActiveStation

# -- Method 2 -- Using a direct query
ActiveStationList = session.query(Measurement.station, func.count(Measurement.tobs)).group_by(Measurement.station).order_by(func.count(Measurement.tobs).desc()).all()
ActiveStationList

# Using the station id from the previous query, calculate the lowest temperature recorded,
# highest temperature recorded, and average temperature of the most active station
station_df[station_df.Station == 'USC00519281'].Tobs.min(), station_df[station_df.Station == 'USC00519281'].Tobs.max(), station_df[station_df.Station == 'USC00519281'].Tobs.mean()

# Choose the station with the highest number of temperature observations.
print(f"The station with the highest number of temperature observations is {ActiveStationList[0][0]} and the number of observations is {ActiveStationList[0][1]}")

# Query the last 12 months of temperature observation data for this station and plot the results as a histogram
Last12TempO = session.query(Measurement.tobs).filter(Measurement.date > OneYearAgo).filter(Measurement.station == ActiveStationList[0][0]).all()
df = pd.DataFrame(Last12TempO)
plt.hist(df['tobs'], 12, color='purple', hatch="/", edgecolor="yellow")
plt.xlabel("Temperature", fontsize=14)
plt.ylabel("Frequency", fontsize=14)
plt.title("One Year Temperature (For Station USC00519281)", fontsize=14)
labels = ["Temperature observation"]
plt.legend(labels)
plt.savefig("./Output/Figure2.png")
plt.show()
_____no_output_____
ADSL
climate_starter.ipynb
ishanku/sqlalchemy-challenge
![precipitation](Images/station-histogram.png)
# This function called `calc_temps` will accept a start date and end date in the format '%Y-%m-%d'
# and return the minimum, average, and maximum temperatures for that range of dates
def calc_temps(start_date, end_date):
    """TMIN, TAVG, and TMAX for a list of dates.

    Args:
        start_date (string): A date string in the format %Y-%m-%d
        end_date (string): A date string in the format %Y-%m-%d

    Returns:
        TMIN, TAVG, and TMAX
    """
    return session.query(func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)).\
        filter(Measurement.date >= start_date).filter(Measurement.date <= end_date).all()

# function usage example
print(calc_temps('2012-02-28', '2012-03-05'))

# ---- First sample
# Use your previous function `calc_temps` to calculate the tmin, tavg, and tmax
# for your trip using the previous year's data for those same dates.
TemperatureAverageLast12Months = calc_temps(OneYearAgo, Latest)
print(TemperatureAverageLast12Months)

# ---- Second sample
calc_temps('2015-08-21', '2016-08-21')

# Plot the results from your previous query as a bar chart.
# Use "Trip Avg Temp" as your title
# Use the average temperature for the y value
# Use the peak-to-peak (tmax-tmin) value as the y error bar (yerr)
Error = TemperatureAverageLast12Months[0][2] - TemperatureAverageLast12Months[0][0]
AverageTemp = TemperatureAverageLast12Months[0][1]
MinTemp = TemperatureAverageLast12Months[0][0]
MaxTemp = TemperatureAverageLast12Months[0][2]

fig, ax = plt.subplots(figsize=(5, 6))
bar_chart = ax.bar(1, AverageTemp, color='salmon', tick_label='', yerr=Error, alpha=0.6)
ax.set_xlabel("Trip")
ax.set_ylabel("Temp (F)")
ax.set_title("Trip Avg Temp")

def autolabels(rects):
    # label each bar with its height
    for rect in rects:
        h = rect.get_height()
        ax.text(rect.get_x() + rect.get_width() / 2, h, f"{h:.1f}", ha='center', va='bottom')

# label the bars
autolabels(bar_chart)

plt.ylim(0, 100)
plt.xlim(0, 2)
ax.xaxis.grid()
fig.tight_layout()
plt.savefig("./Output/temperature.png")
plt.show()

# Calculate the total amount of rainfall per weather station for your trip dates using the previous year's matching dates.
# Sort this in descending order by precipitation amount and list the station, name, latitude, longitude, and elevation
TripStartTime = '2016-08-21'
TripEndTime = '2016-08-30'

FirstStep = [Station.station, Station.name, Station.latitude, Station.longitude, Station.elevation, func.sum(Measurement.prcp)]

PlaceForTrip = session.query(*FirstStep).\
    filter(Measurement.station == Station.station).\
    filter(Measurement.date >= TripStartTime).\
    filter(Measurement.date <= TripEndTime).\
    group_by(Station.name).order_by(func.sum(Measurement.prcp).desc()).all()

print(PlaceForTrip)
[('USC00516128', 'MANOA LYON ARBO 785.2, HI US', 21.3331, -157.8025, 152.4, 0.31), ('USC00519281', 'WAIHEE 837.5, HI US', 21.45167, -157.84888999999998, 32.9, 0.25), ('USC00518838', 'UPPER WAHIAWA 874.3, HI US', 21.4992, -158.0111, 306.6, 0.1), ('USC00513117', 'KANEOHE 838.1, HI US', 21.4234, -157.8015, 14.6, 0.060000000000000005), ('USC00511918', 'HONOLULU OBSERVATORY 702.2, HI US', 21.3152, -157.9992, 0.9, 0.0), ('USC00514830', 'KUALOA RANCH HEADQUARTERS 886.9, HI US', 21.5213, -157.8374, 7.0, 0.0), ('USC00517948', 'PEARL CITY, HI US', 21.3934, -157.9751, 11.9, 0.0), ('USC00519397', 'WAIKIKI 717.2, HI US', 21.2716, -157.8168, 3.0, 0.0), ('USC00519523', 'WAIMANALO EXPERIMENTAL FARM, HI US', 21.33556, -157.71139, 19.5, 0.0)]
ADSL
climate_starter.ipynb
ishanku/sqlalchemy-challenge
Optional Challenge Assignment
# Create a query that will calculate the daily normals
# (i.e. the averages for tmin, tmax, and tavg for all historic data matching a specific month and day)
def daily_normals(date):
    """Daily Normals.

    Args:
        date (str): A date string in the format '%m-%d'

    Returns:
        A list of tuples containing the daily normals, tmin, tavg, and tmax
    """
    sel = [func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)]
    return session.query(*sel).filter(func.strftime("%m-%d", Measurement.date) == date).all()

daily_normals("01-01")

# Calculate the daily normals for your trip
# and push each tuple of calculations into a list called `normals`
normals = []

# Set the start and end date of the trip
TripStartTime = '2016-08-21'
TripEndTime = '2016-08-30'

# Strip off the year and save a list of %m-%d strings
TripStartTime = TripStartTime.replace("-", "")
StartDate = int(re.sub(r'(\d{4})(\d{2})(\d{2})', r'\3', TripStartTime))
TripEndTime = TripEndTime.replace("-", "")
EndDate = int(re.sub(r'(\d{4})(\d{2})(\d{2})', r'\3', TripEndTime))
TripMonth = re.sub(r'(\d{4})(\d{2})(\d{2})', r'\2', TripEndTime)
if len(TripMonth) == 1:
    TripMonth = "0" + TripMonth

# Use the start and end date to create a range of dates
Dates = [f"{TripMonth}-{num}" for num in range(StartDate, EndDate)]

# Loop through the list of %m-%d strings and calculate the normals for each date
for d in Dates:
    Normal = daily_normals(d)
    normals.extend(Normal)
normals

# Load the previous query results into a Pandas DataFrame and add the `trip_dates` range as the `date` index
TempMin = [x[0] for x in normals]
TempAvg = [x[1] for x in normals]
TempMax = [x[2] for x in normals]

SYear = int(re.sub(r'(\d{4})(\d{2})(\d{2})', r'\1', TripStartTime))
TripDatesYear = [f"{SYear}-{d}" for d in Dates]
TripDatesYear

trip_normals = pd.DataFrame({"TempMin": TempMin, "TempAvg": TempAvg, "TempMax": TempMax, "date": TripDatesYear}).set_index("date")
trip_normals.head()

# Plot the daily normals as an area plot with `stacked=False`
trip_normals.plot(kind="area", stacked=False)
plt.legend(loc="right")
plt.ylabel("Temperature (F)")
plt.xticks(range(len(trip_normals.index)), trip_normals.index, rotation="60")
plt.savefig("./Output/daily-normals.png")
plt.show()
_____no_output_____
ADSL
climate_starter.ipynb
ishanku/sqlalchemy-challenge
Using a Random Forest to Impute Missing Values
dataset = load_boston()
dataset.data.shape
# 506 * 13 = 6578 values in total

X_full, y_full = dataset.data, dataset.target
n_samples = X_full.shape[0]
n_features = X_full.shape[1]
_____no_output_____
MIT
code/randomForest/bostonRegression.ipynb
Knowledge-Precipitation-Tribe/Machine-Learning
Adding missing values
# First decide the proportion of missing data we want to introduce; here we assume 50%,
# so 3289 values in total will be missing
rng = np.random.RandomState(0)
missing_rate = 0.5
n_missing_samples = int(np.floor(n_samples * n_features * missing_rate))
# np.floor rounds down and returns a float ending in .0

# The missing values should be scattered randomly across the rows and columns of the dataset,
# and each missing value needs a row index and a column index.
# If we create 3289 row indices in the range 0-506 and 3289 column indices in the range 0-13,
# we can use them to set 3289 arbitrary positions in the data to NaN.
# We will then fill those missing values with 0, the mean, and a random forest, and compare the regression results.
missing_features = rng.randint(0, n_features, n_missing_samples)
missing_samples = rng.randint(0, n_samples, n_missing_samples)

# missing_samples = rng.choice(dataset.data.shape[0], n_missing_samples, replace=False)
# We are sampling 3289 positions, far more than our 506 samples, so we use randint, which samples
# with replacement. If we needed fewer positions than the 506 samples, we could use np.random.choice,
# which draws non-repeating random numbers; that spreads the data out and keeps it from
# concentrating in a few rows.

X_missing = X_full.copy()
y_missing = y_full.copy()
X_missing[missing_samples, missing_features] = np.nan
X_missing = pd.DataFrame(X_missing)
# Converting to a DataFrame makes later operations easier: numpy is extremely fast for matrix
# computation, but pandas is more convenient for indexing and similar functionality.
_____no_output_____
MIT
code/randomForest/bostonRegression.ipynb
Knowledge-Precipitation-Tribe/Machine-Learning
Filling with zero and the mean
# Fill with the mean
from sklearn.impute import SimpleImputer
imp_mean = SimpleImputer(missing_values=np.nan, strategy='mean')
X_missing_mean = imp_mean.fit_transform(X_missing)

# Fill with zero
imp_0 = SimpleImputer(missing_values=np.nan, strategy="constant", fill_value=0)
X_missing_0 = imp_0.fit_transform(X_missing)
_____no_output_____
MIT
code/randomForest/bostonRegression.ipynb
Knowledge-Precipitation-Tribe/Machine-Learning
Filling missing values with a random forest
""" 使用随机森林回归填补缺失值 任何回归都是从特征矩阵中学习,然后求解连续型标签y的过程,之所以能够实现这个过程,是因为回归算法认为,特征 矩阵和标签之前存在着某种联系。实际上,标签和特征是可以相互转换的,比如说,在一个“用地区,环境,附近学校数 量”预测“房价”的问题中,我们既可以用“地区”,“环境”,“附近学校数量”的数据来预测“房价”,也可以反过来, 用“环境”,“附近学校数量”和“房价”来预测“地区”。而回归填补缺失值,正是利用了这种思想。 对于一个有n个特征的数据来说,其中特征T有缺失值,我们就把特征T当作标签,其他的n-1个特征和原本的标签组成新 的特征矩阵。那对于T来说,它没有缺失的部分,就是我们的Y_test,这部分数据既有标签也有特征,而它缺失的部分,只有特征没有标签,就是我们需要预测的部分。 特征T不缺失的值对应的其他n-1个特征 + 本来的标签:X_train 特征T不缺失的值:Y_train 特征T缺失的值对应的其他n-1个特征 + 本来的标签:X_test 特征T缺失的值:未知,我们需要预测的Y_test 这种做法,对于某一个特征大量缺失,其他特征却很完整的情况,非常适用。 那如果数据中除了特征T之外,其他特征也有缺失值怎么办? 答案是遍历所有的特征,从缺失最少的开始进行填补(因为填补缺失最少的特征所需要的准确信息最少)。 填补一个特征时,先将其他特征的缺失值用0代替,每完成一次回归预测,就将预测值放到原本的特征矩阵中,再继续填 补下一个特征。每一次填补完毕,有缺失值的特征会减少一个,所以每次循环后,需要用0来填补的特征就越来越少。当 进行到最后一个特征时(这个特征应该是所有特征中缺失值最多的),已经没有任何的其他特征需要用0来进行填补了, 而我们已经使用回归为其他特征填补了大量有效信息,可以用来填补缺失最多的特征。 遍历所有的特征后,数据就完整,不再有缺失值了。 """ X_missing_reg = X_missing.copy() # 找出数据集中缺失值最多的从小到大的排序 sortindex = np.argsort(X_missing_reg.isnull().sum(axis=0)).values for i in sortindex: #构建我们的新特征矩阵和新标签 df = X_missing_reg fillc = df.iloc[:,i] df = pd.concat([df.iloc[:,df.columns != i],pd.DataFrame(y_full)],axis=1) #在新特征矩阵中,对含有缺失值的列,进行0的填补 df_0 =SimpleImputer(missing_values=np.nan,strategy='constant',fill_value=0).fit_transform(df) #找出我们的训练集和测试集 Ytrain = fillc[fillc.notnull()] Ytest = fillc[fillc.isnull()] Xtrain = df_0[Ytrain.index,:] Xtest = df_0[Ytest.index,:] #用随机森林回归来填补缺失值 rfc = RandomForestRegressor(n_estimators=100) rfc = rfc.fit(Xtrain, Ytrain) Ypredict = rfc.predict(Xtest) #将填补好的特征返回到我们的原始的特征矩阵中 X_missing_reg.loc[X_missing_reg.iloc[:,i].isnull(),i] = Ypredict
_____no_output_____
MIT
code/randomForest/bostonRegression.ipynb
Knowledge-Precipitation-Tribe/Machine-Learning
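A natural follow-up, not part of the original notebook, is to compare imputation strategies by cross-validated regression error. A self-contained sketch on synthetic data (the dataset and numbers here are illustrative, not the Boston results); the same scoring could be applied to X_missing_mean, X_missing_0, and X_missing_reg above:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.impute import SimpleImputer
from sklearn.model_selection import cross_val_score

rng = np.random.RandomState(0)
X = rng.rand(200, 5)
y = X @ np.array([1.0, 2.0, 0.5, -1.0, 0.0]) + 0.1 * rng.randn(200)

# Knock out roughly 20% of the entries at random.
X_missing = X.copy()
mask = rng.rand(*X.shape) < 0.2
X_missing[mask] = np.nan

# Score each imputed matrix with cross-validated negative MSE (higher is better).
scores = {}
for name, strategy in [("mean", "mean"), ("zero", "constant")]:
    imp = SimpleImputer(strategy=strategy, fill_value=0)
    X_imp = imp.fit_transform(X_missing)
    est = RandomForestRegressor(n_estimators=50, random_state=0)
    scores[name] = cross_val_score(est, X_imp, y, cv=3,
                                   scoring="neg_mean_squared_error").mean()

print(scores)
```

On the Boston data, such a comparison is what motivates the random-forest imputation: it tends to preserve more of the feature-label relationship than constant fills.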