Under the hood, a Gaussian mixture model is very similar to k-means: it uses an expectation–maximization approach, which qualitatively does the following:

1. Choose starting guesses for the location and shape
2. Repeat until converged:
   - *E-step*: for each point, find weights encoding the probability of membership in each cluster
   - *M-step*: for each cluster, update its location and shape based on all the data points, making use of the weights
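The E-step/M-step loop above can be sketched in one dimension with plain NumPy. This is a toy illustration of the algorithm, not the notebook's implementation (which relies on scikit-learn); the function name and the two-component restriction are my own.

```python
import numpy as np

def em_gmm_1d(x, n_iter=50):
    """Illustrative EM fit of a two-component 1-D Gaussian mixture."""
    # Starting guesses for location and shape
    mu = np.array([x.min(), x.max()], dtype=float)
    sigma = np.array([x.std(), x.std()])
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: weights encoding each point's probability of membership
        dens = (np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2)
                / (sigma * np.sqrt(2 * np.pi)))
        resp = w * dens
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: update each component's location, shape, and weight
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        w = nk / len(x)
    return mu, sigma, w
```

On well-separated data this recovers the two component means; scikit-learn's version generalizes the same loop to many components and full covariances.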
from matplotlib.patches import Ellipse

def draw_ellipse(position, covariance, ax=None, **kwargs):
    """Draw an ellipse with a given position and covariance"""
    ax = ax or plt.gca()
    # Convert covariance to principal axes
    if covariance.shape == (2, 2):
        U, s, Vt = np.linalg.svd(covariance)
        angle = np.degrees(np.arctan2(U[1, 0], U[0, 0]))
        width, height = 2 * np.sqrt(s)
    else:
        angle = 0
        width, height = 2 * np.sqrt(covariance)
    # Draw the ellipse at 1, 2, and 3 standard deviations
    for nsig in range(1, 4):
        ax.add_patch(Ellipse(position, nsig * width, nsig * height,
                             angle=angle, **kwargs))
Source: present/mcc2/PythonDataScienceHandbook/05.12-Gaussian-Mixtures.ipynb (repo: csaladenes/csaladenes.github.io, license: MIT)
With this in place, we can take a look at what the four-component GMM gives us for our initial data:
from sklearn.mixture import GaussianMixture as GMM  # GMM is an alias used throughout

gmm = GMM(n_components=4, random_state=42)
plot_gmm(gmm, X)
Similarly, we can use the GMM approach to fit our stretched dataset; allowing for a full covariance the model will fit even very oblong, stretched-out clusters:
gmm = GMM(n_components=4, covariance_type='full', random_state=42)
plot_gmm(gmm, X_stretched)
This makes clear that GMM addresses the two main practical issues with k-means encountered before.

Choosing the covariance type

If you look at the details of the preceding fits, you will see that the covariance_type option was set differently within each. This hyperparameter controls the degrees of freedom in the shape...
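The degrees of freedom implied by each covariance_type can be made concrete by counting free covariance parameters. This is a sketch following scikit-learn's conventions for the four options; the helper function is my own.

```python
def gmm_cov_params(n_components, n_features, covariance_type):
    """Free covariance parameters implied by each covariance_type."""
    tri = n_features * (n_features + 1) // 2  # entries of one symmetric matrix
    if covariance_type == 'full':       # each component has its own full covariance
        return n_components * tri
    if covariance_type == 'tied':       # one full covariance shared by all components
        return tri
    if covariance_type == 'diag':       # each component has a diagonal covariance
        return n_components * n_features
    if covariance_type == 'spherical':  # each component has a single variance
        return n_components
    raise ValueError(covariance_type)

# For 4 components in 2 dimensions: full=12, tied=3, diag=8, spherical=4
print([gmm_cov_params(4, 2, t) for t in ('full', 'tied', 'diag', 'spherical')])
```

More parameters mean more flexible cluster shapes but slower fits and more risk of overfitting, which is the trade-off the text describes.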
from sklearn.datasets import make_moons

Xmoon, ymoon = make_moons(200, noise=.05, random_state=0)
plt.scatter(Xmoon[:, 0], Xmoon[:, 1]);
If we try to fit this with a two-component GMM viewed as a clustering model, the results are not particularly useful:
gmm2 = GMM(n_components=2, covariance_type='full', random_state=0)
plot_gmm(gmm2, Xmoon)
But if we instead use many more components and ignore the cluster labels, we find a fit that is much closer to the input data:
gmm16 = GMM(n_components=16, covariance_type='full', random_state=0)
plot_gmm(gmm16, Xmoon, label=False)
Here the mixture of 16 Gaussians serves not to find separated clusters of data, but rather to model the overall distribution of the input data. This is a generative model of the distribution, meaning that the GMM gives us a recipe to generate new random data distributed similarly to our input. For example, here are 400 new points drawn from this 16-component GMM fit to our original data:
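The generative recipe itself is simple: pick a component according to the mixture weights, then draw from that component's Gaussian. A minimal NumPy sketch, using hand-specified parameters standing in for a fitted model's weights_, means_, and covariances_ attributes (the values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical mixture parameters (stand-ins for a fitted GMM's attributes)
weights = np.array([0.3, 0.7])
means = np.array([[0.0, 0.0], [4.0, 4.0]])
covs = np.array([np.eye(2), 0.5 * np.eye(2)])

# Sampling: choose a component per point by weight, then draw from its Gaussian
n = 400
comps = rng.choice(len(weights), size=n, p=weights)
Xnew = np.array([rng.multivariate_normal(means[c], covs[c]) for c in comps])
print(Xnew.shape)
```

This is exactly what sample() does internally: component choice first, Gaussian draw second.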
# In older scikit-learn versions: Xnew = gmm16.sample(400, random_state=42)
Xnew, _ = gmm16.sample(400)
plt.scatter(Xnew[:, 0], Xnew[:, 1]);
GMM is convenient as a flexible means of modeling an arbitrary multi-dimensional distribution of data.

How many components?

The fact that GMM is a generative model gives us a natural means of determining the optimal number of components for a given dataset. A generative model is inherently a probability distribution fo...
n_components = np.arange(1, 21)
models = [GMM(n, covariance_type='full', random_state=0).fit(Xmoon)
          for n in n_components]

plt.plot(n_components, [m.bic(Xmoon) for m in models], label='BIC')
plt.plot(n_components, [m.aic(Xmoon) for m in models], label='AIC')
plt.legend(loc='best')
plt.xlabel('n_components');
The optimal number of clusters is the value that minimizes the AIC or BIC, depending on which approximation we wish to use. The AIC tells us that our choice of 16 components above was probably too many: around 8-12 components would have been a better choice. As is typical with this sort of problem, the BIC recommends a...
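The two criteria being plotted reduce to simple formulas in the number of free parameters k, the sample size n, and the maximized log-likelihood ln L. A sketch (function names are my own):

```python
import numpy as np

def aic(log_likelihood, n_params):
    # AIC = 2k - 2 ln L
    return 2 * n_params - 2 * log_likelihood

def bic(log_likelihood, n_params, n_samples):
    # BIC = k ln(n) - 2 ln L
    return n_params * np.log(n_samples) - 2 * log_likelihood

# Same fit quality, same parameter count: BIC penalizes harder once ln(n) > 2
print(aic(-100.0, 10))
print(bic(-100.0, 10, 200))
```

Because the BIC penalty grows with ln(n) while the AIC penalty is a constant 2 per parameter, the BIC tends to recommend simpler models, consistent with the comparison in the text.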
from sklearn.datasets import load_digits

digits = load_digits()
digits.data.shape
Next let's plot the first 100 of these to recall exactly what we're looking at:
def plot_digits(data):
    fig, ax = plt.subplots(10, 10, figsize=(8, 8),
                           subplot_kw=dict(xticks=[], yticks=[]))
    fig.subplots_adjust(hspace=0.05, wspace=0.05)
    for i, axi in enumerate(ax.flat):
        im = axi.imshow(data[i].reshape(8, 8), cmap='binary')
        im.set_clim(0, 16)

plot_digits(digits.data)
We have nearly 1,800 digits in 64 dimensions, and we can build a GMM on top of these to generate more. GMMs can have difficulty converging in such a high dimensional space, so we will start with an invertible dimensionality reduction algorithm on the data. Here we will use a straightforward PCA, asking it to preserve 9...
from sklearn.decomposition import PCA

pca = PCA(0.99, whiten=True)
data = pca.fit_transform(digits.data)
data.shape
The result is 41 dimensions, a reduction of nearly 1/3 with almost no information loss. Given this projected data, let's use the AIC to get a gauge for the number of GMM components we should use:
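The "preserve 99% of the variance" criterion amounts to keeping the smallest number of principal components whose cumulative explained-variance ratio reaches 0.99. A sketch of that computation with plain NumPy SVD, on synthetic data standing in for the digits (the data and column scales here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data with a few dominant directions (not the digits data)
X = rng.normal(size=(300, 10)) * np.array([8, 5, 3, 2, 1, .5, .3, .2, .1, .05])

Xc = X - X.mean(axis=0)                     # center, as PCA does
_, s, _ = np.linalg.svd(Xc, full_matrices=False)
ratio = s**2 / (s**2).sum()                 # explained-variance ratios
n_keep = int(np.searchsorted(np.cumsum(ratio), 0.99)) + 1
print(n_keep)
```

PCA(0.99) performs this same thresholding internally and returns only the first n_keep components.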
n_components = np.arange(50, 210, 10)
models = [GMM(n, covariance_type='full', random_state=0)
          for n in n_components]
aics = [model.fit(data).aic(data) for model in models]
plt.plot(n_components, aics);
It appears that around 110 components minimizes the AIC; we will use this model. Let's quickly fit this to the data and confirm that it has converged:
gmm = GMM(110, covariance_type='full', random_state=0)
gmm.fit(data)
print(gmm.converged_)
Now we can draw samples of 100 new points within this 41-dimensional projected space, using the GMM as a generative model:
data_new, _ = gmm.sample(100)
data_new.shape
Finally, we can use the inverse transform of the PCA object to construct the new digits:
digits_new = pca.inverse_transform(data_new)
plot_digits(digits_new)
Let's get the first page, from which we will be able to extract some interesting content!
# Request the first page on IS Academia. To see it, just type this URL into
# your browser's address bar:
# http://isa.epfl.ch/imoniteur_ISAP/!GEDPUBLICREPORTS.filter?ww_i_reportModel=133685247
r = requests.get('http://isa.epfl.ch/imoniteur_ISAP/!GEDPUBLICREPORTS.filter?ww_i_reportModel=133685247')
htmlContent = BeautifulSoup(r.content, 'html.parser')  # parser choice assumed
Source: Homework/02 - Data from the Web/Question 2.ipynb (repo: Merinorus/adaisawesome, license: GPL-3.0)
Now we need to make further requests to IS Academia, specifying every parameter: computer science students, all the years, and all the bachelor semesters (each a pair of values: pedagogic period and semester type). So let's gather all the parameters we need to make the next request:
# We first get the "Computer science" value
computerScienceField = htmlContent.find('option', text='Informatique')
computerScienceValue = computerScienceField.get('value')

# Then, we're going to need all the academic years values.
academicYearsField = htmlContent.find('select...
Homework/02 - Data from the Web/Question 2.ipynb
Merinorus/adaisawesome
gpl-3.0
Now we have all the information needed to get all the master students! Let's make all the requests we need to build our data. We will make requests such as:

- Get students from master semester 1 of 2007-2008
- ...
- Get students from master semester 4 of 2007-2008
- Get students from mineur semester 1 of 2007-2008
- Ge...
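Enumerating every (year, period) pair like this is a Cartesian product, which the standard library handles directly. A sketch with itertools.product; the parameter values and dictionary keys below are hypothetical placeholders, not the real IS Academia form fields:

```python
from itertools import product

# Hypothetical values; the real ones come from the scraped form fields above
years = ['2007-2008', '2008-2009', '2009-2010']
periods = ['Master semestre 1', 'Master semestre 2', 'Mineur semestre 1']

# One request-parameter dict per (year, period) combination
requestsToBuild = [{'year': y, 'period': p} for y, p in product(years, periods)]
print(len(requestsToBuild))
```

Each dict can then be merged with the fixed parameters (field of study, report model) before being sent.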
# Let's put the semester types aside, because we're going to need them
autumn_semester_value = semesterType_df.loc[
    semesterType_df['Semester_type'] == 'Semestre d\'automne', 'Value']
autumn_semester_value = autumn_semester_value.iloc[0]
spring_semester_value = semesterType_df.loc[
    semesterType_df['Semester_type'] == 'S...
The requests are now ready to be sent to IS Academia. Let's try it out! TIME OUT: we stopped right here for our homework. What follows should look like the beginning of a loop that gets student lists from IS Academia. It's not finished at all :(
# WARNING: the next line is commented out for debugging the first request only.
# Uncomment it and indent the code accordingly to make all the requests.
#for request in requestsToISAcademia:  # line to uncomment to send all requests
request = requestsToISAcademia[0]  # line to comment out to send all requests
print(request)
# Send th...
DON'T RUN THE NEXT CELL OR IT WILL CRASH! :x
# Getting the information about the student we're "looping on"
currentStudent = []
tr = th.findNext('tr')
children = tr.children
for child in children:
    currentStudent.append(child.text)

# Add the student to the array
studentsTable.append(currentStudent)
a = tr.findNext('tr')
a
while tr.findNext('tr') is ...
Retail Product Stockouts Prediction using AutoML Tables <table align="left"> <td> <a href="https://colab.sandbox.google.com/github/GoogleCloudPlatform/ai-platform-samples/blob/main/notebooks/samples/tables/retail_product_stockout_prediction/retail_product_stockout_prediction.ipynb"> <img src="https://cloud....
! pip install --upgrade --quiet --user google-cloud-automl ! pip install matplotlib
Source: notebooks/samples/tables/retail_product_stockout_prediction/retail_product_stockout_prediction.ipynb (repo: GoogleCloudPlatform/ai-platform-samples, license: Apache-2.0)
Note: Try installing using sudo if the above command throws any permission errors. Restart the kernel to allow automl_v1beta1 to be imported for Jupyter Notebooks.
from IPython.core.display import HTML HTML("<script>Jupyter.notebook.kernel.restart()</script>")
Set up your GCP Project Id Enter your Project Id in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook.
PROJECT_ID = "[your-project-id]" #@param {type:"string"}
COMPUTE_REGION = "us-central1" # Currently the only supported region.
Authenticate your GCP account If you are using AI Platform Notebooks, your environment is already authenticated. Skip this step. Otherwise, follow these steps: In the GCP Console, go to the Create service account key page. From the Service account drop-down list, select New service account. In the Service acco...
# Upload the downloaded JSON file that contains your key.
import sys

if 'google.colab' in sys.modules:
    from google.colab import files
    keyfile_upload = files.upload()
    keyfile = list(keyfile_upload.keys())[0]
    %env GOOGLE_APPLICATION_CREDENTIALS $keyfile
    ! gcloud auth activate-service-account --key-file $keyfile
If you are running the notebook locally, enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
%env GOOGLE_APPLICATION_CREDENTIALS /path/to/service/account
! gcloud auth activate-service-account --key-file '/path/to/service/account'
Create a Cloud Storage bucket The following steps are required, regardless of your notebook environment. When you submit a training job using the Cloud SDK, you upload a Python package containing your training code to a Cloud Storage bucket. AI Platform runs the code from this package. In this tutorial, AI Platform als...
BUCKET_NAME = "[your-bucket-name]" #@param {type:"string"}
Only if your bucket doesn't exist: run the following cell to create your Cloud Storage bucket. Make sure the Storage > Storage Admin role is enabled.
! gsutil mb -p $PROJECT_ID -l $COMPUTE_REGION gs://$BUCKET_NAME
Import libraries and define constants

Import the relevant packages.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

# AutoML library.
from google.cloud import automl_v1beta1 as automl
import google.cloud.automl_v1beta1.proto.data_types_pb2 as data_types

import matplotlib.pyplot as plt
Populate the following cell with the necessary constants and run it to initialize constants.
#@title Constants { vertical-output: true }

# A name for the AutoML tables Dataset to create.
DATASET_DISPLAY_NAME = 'stockout_data' #@param {type: 'string'}
# The BigQuery Dataset URI to import data from.
BQ_INPUT_URI = 'bq://product-stockout.product_stockout.stockout' #@param {type: 'string'}
# A name for the AutoML...
Initialize the client for AutoML and AutoML Tables.
# Initialize the clients.
automl_client = automl.AutoMlClient()
tables_client = automl.TablesClient(project=PROJECT_ID, region=COMPUTE_REGION)
Test the setup

To test whether your project setup and authentication steps were successful, run the following cell to list the datasets in this project. If no dataset has previously been imported into AutoML Tables, you should see an empty result.
# List the datasets.
list_datasets = tables_client.list_datasets()
datasets = {dataset.display_name: dataset.name for dataset in list_datasets}
datasets
You can also print the list of your models by running the following cell. If no model has previously been trained using AutoML Tables, you should see an empty result.
# List the models.
list_models = tables_client.list_models()
models = {model.display_name: model.name for model in list_models}
models
Import training data

Create dataset

Select a dataset display name and pass your table source information to create a new dataset.
# Create dataset.
dataset = tables_client.create_dataset(DATASET_DISPLAY_NAME)
dataset_name = dataset.name
dataset
Import data

You can import your data to AutoML Tables from GCS or BigQuery. For this solution, you will import data from a BigQuery Table. The URI for your table is in the format bq://PROJECT_ID.DATASET_ID.TABLE_ID. The BigQuery Table used for demonstration purposes can be accessed as bq://product-stockout.product_st...
# Import data.
import_data_response = tables_client.import_data(
    dataset=dataset,
    bigquery_input_uri=BQ_INPUT_URI,
)
print('Dataset import operation: {}'.format(import_data_response.operation))

# Synchronous check of operation status. Wait until import is done.
print('Dataset import response: {}'.format(import...
Importing this stockout dataset takes about 10 minutes. If you re-visit this notebook, uncomment the following cell and run the command to retrieve your dataset. Replace YOUR_DATASET_NAME with its actual value obtained in the preceding cells. YOUR_DATASET_NAME is a string in the format 'projects/<project_id>/...
# dataset_name = '<YOUR_DATASET_NAME>' #@param {type: 'string'}
# dataset = tables_client.get_dataset(dataset_name=dataset_name)
Review the specs

Run the following command to see table specs such as row count.
# List table specs.
list_table_specs_response = tables_client.list_table_specs(dataset=dataset)
table_specs = [s for s in list_table_specs_response]

# List column specs.
list_column_specs_response = tables_client.list_column_specs(dataset=dataset)
column_specs = {s.display_name: s for s in list_column_specs_response}
...
In the pie chart above, you see this dataset contains three variable types: FLOAT64 (treated as Numeric), CATEGORY (treated as Categorical) and STRING (treated as Text).

Update dataset: assign a label column and enable nullable columns

Get column specs

AutoML Tables automatically detects your data column type. There a...
# Print column data types.
for column in column_specs:
    print(column, '-', column_specs[column].data_type)
Update columns: make categorical

From the column data types, you noticed Item_Number, Category, Vendor_Number, Store_Number, Zip_Code and County_Number have been autodetected as FLOAT64 (Numerical) instead of CATEGORY (Categorical). In this solution, the columns Item_Number, Category, Vendor_Number and Store_Number are...
type_code = 'CATEGORY' #@param {type:'string'}

# Update dataset.
categorical_column_names = ['Item_Number', 'Category', 'Vendor_Number',
                            'Store_Number', 'Zip_Code', 'County_Number']
is_nullable = [False, False, False, False, True, True]

for i in range(len(categorical_...
Update dataset: assign a label

Select the target column and update the dataset.
#@title Update dataset { vertical-output: true }

target_column_name = 'Stockout' #@param {type: 'string'}
update_dataset_response = tables_client.set_target_column(
    dataset=dataset,
    column_spec_display_name=target_column_name,
)
update_dataset_response
Creating a model

Train a model

Training the model may take one hour or more. To obtain results with less training time or budget, you can set train_budget_milli_node_hours, which is the training budget for creating this model, expressed in milli node hours, i.e., a value of 1,000 in this field means 1 node hour. For demonstra...
# The number of hours to train the model.
model_train_hours = 1 #@param {type:'integer'}

# Set optimization objective to train a model.
model_optimization_objective = 'MAXIMIZE_AU_PRC' #@param {type:'string'}

create_model_response = tables_client.create_model(
    MODEL_DISPLAY_NAME,
    dataset=dataset,
    train_bud...
If your Colab times out, use tables_client.list_models() to check whether your model has been created. Then uncomment the following cell and run the command to retrieve your model. Replace YOUR_MODEL_NAME with its actual value obtained in the preceding cell. YOUR_MODEL_NAME is a string in the format 'projects/<pr...
# model_name = '<YOUR_MODEL_NAME>' #@param {type: 'string'}
# model = tables_client.get_model(model_name=model_name)
Batch prediction

Initialize prediction

Your data source for batch prediction can be GCS or BigQuery. For this solution, you will use a BigQuery Table as the input source. The URI for your table is in the format bq://PROJECT_ID.DATASET_ID.TABLE_ID. To write out the predictions, you need to specify a GCS bucket gs://B...
#@title Start batch prediction { vertical-output: true, output-height: 200 }

batch_predict_bq_input_uri = 'bq://product-stockout.product_stockout.batch_prediction_inputs' #@param {type:'string'}
batch_predict_gcs_output_uri_prefix = 'gs://{}'.format(BUCKET_NAME) #@param {type:'string'}

batch_predict_response = tables...
Cleaning up

To clean up all GCP resources used in this project, you can delete the GCP project you used for the tutorial.
# Delete model resource.
tables_client.delete_model(model_name=model_name)

# Delete dataset resource.
tables_client.delete_dataset(dataset_name=dataset_name)

# Delete Cloud Storage objects that were created.
! gsutil -m rm -r gs://$BUCKET_NAME

# If the training model is still running, cancel it.
automl_client.transport....
Why NumPy?
%%time
total = 0
for i in range(100000):
    total += i

%%time
total = np.arange(100000).sum()

%%time
l = list(range(0, 1000000))
ltimes5 = [x * 5 for x in l]

%%time
l = np.arange(1000000)
ltimes5 = l * 5
Source: notebooks/intro-numpy.ipynb (repo: AlJohri/DAT-DC-12, license: MIT)
Introduction

The numpy package (module) is used in almost all numerical computation using Python. It is a package that provides high-performance vector, matrix and higher-dimensional data structures for Python. It is implemented in C and Fortran, so when calculations are vectorized (formulated with vectors and matrices),...
import numpy as np
In the numpy package, the terminology used for vectors, matrices and higher-dimensional data sets is array.

Creating numpy arrays

There are a number of ways to initialize new numpy arrays, for example from a Python list or tuple, or using functions that are dedicated to generating numpy arrays, such as arange, linspace, ...
# a vector: the argument to the array function is a Python list
v = np.array([1,2,3,4])
v

# a matrix: the argument to the array function is a nested Python list
M = np.array([[1, 2], [3, 4]])
M
The v and M objects are both of the type ndarray that the numpy module provides.
type(v), type(M)
The difference between the v and M arrays is only their shapes. We can get information about the shape of an array by using the ndarray.shape property.
v.shape
M.shape
So far the numpy.ndarray looks awfully much like a Python list (or nested list). Why not simply use Python lists for computations instead of creating a new array type? There are several reasons:

- Python lists are very general. They can contain any kind of object.
- They are dynamically typed.
- They do not support mathematical operations...
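The difference in semantics is easy to see side by side. A small sketch contrasting the list and array behavior of the same operator:

```python
import numpy as np

l = [1, 2, 3]
a = np.array([1, 2, 3])

# + concatenates Python lists, but adds numpy arrays elementwise
print(l + l)   # list concatenation
print(a + a)   # elementwise addition

# a list may hold mixed types; an ndarray has one static dtype
mixed = [1, 'two', 3.0]
print(a.dtype)
```

The single static dtype is what lets numpy dispatch the whole operation to fast compiled code instead of looping over Python objects.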
M.dtype
We get an error if we try to assign a value of the wrong type to an element in a numpy array:
import traceback

try:
    M[0,0] = "hello"
except ValueError as e:
    print(traceback.format_exc())
If we want, we can explicitly define the type of the array data when we create it, using the dtype keyword argument:
M = np.array([[1, 2], [3, 4]], dtype=complex)
M
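The bit-size suffixes in dtype names translate directly into storage per element, which is easy to verify:

```python
import numpy as np

# itemsize reports bytes per element; the suffix is the size in bits
print(np.dtype(np.int16).itemsize)       # 2 bytes = 16 bits
print(np.dtype(np.int64).itemsize)       # 8 bytes = 64 bits
print(np.dtype(np.complex128).itemsize)  # 16 bytes: two 64-bit floats

x = np.arange(5, dtype=np.float64)
print(x.nbytes)  # total storage: 5 elements of 8 bytes each
```

(float128 is intentionally left out here: its availability and actual precision vary by platform.)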
Common data types that can be used with dtype are: int, float, complex, bool, object, etc. We can also explicitly define the bit size of the data types, for example: int64, int16, float128, complex128.

Using array-generating functions

For larger arrays it is impractical to initialize the data manually, using explicit p...
# create a range
x = np.arange(0, 10, 1) # arguments: start, stop, step
x

x = np.arange(-1, 1, 0.1)
x
linspace and logspace
# using linspace, both end points ARE included
np.linspace(0, 10, 25)

np.logspace(0, 10, 10, base=np.e)
mgrid
x, y = np.mgrid[0:5, 0:5] # similar to meshgrid in MATLAB
x
y
diag
# a diagonal matrix
np.diag([1,2,3])

# diagonal with offset from the main diagonal
np.diag([1,2,3], k=1)
File I/O

Comma-separated values (CSV)

A very common file format for data files is comma-separated values (CSV), or related formats such as TSV (tab-separated values). To read data from such files into Numpy arrays we can use the numpy.genfromtxt function. For example:
!head ../data/stockholm_td_adj.dat

data = np.genfromtxt('../data/stockholm_td_adj.dat')
data.shape

fig, ax = plt.subplots(figsize=(14,4))
ax.plot(data[:,0]+data[:,1]/12.0+data[:,2]/365, data[:,5])
ax.axis('tight')
ax.set_title('temperatures in Stockholm')
ax.set_xlabel('year')
ax.set_ylabel('temperature (C)');
Using numpy.savetxt we can store a Numpy array to a file in CSV format:
M = np.random.rand(3,3)
M

np.savetxt("../data/random-matrix.csv", M)
!cat ../data/random-matrix.csv

np.savetxt("../data/random-matrix.csv", M, fmt='%.5f') # fmt specifies the format
!cat ../data/random-matrix.csv
Numpy's native file format

This format is useful when storing and reading back numpy array data. Use the functions numpy.save and numpy.load:
np.save("../data/random-matrix.npy", M)
!file ../data/random-matrix.npy
np.load("../data/random-matrix.npy")
More properties of the numpy arrays
M.itemsize # bytes per element
M.nbytes   # number of bytes
M.ndim     # number of dimensions
Manipulating arrays

Indexing

We can index elements in an array using square brackets and indices:
# v is a vector, and has only one dimension, taking one index
v[0]

# M is a matrix, or a 2-dimensional array, taking two indices
M[1,1]
If we omit an index of a multidimensional array, it returns the whole row (or, in general, an N−1 dimensional array):
M
M[1]
The same thing can be achieved by using : instead of an index:
M[1,:] # row 1
M[:,1] # column 1
We can assign new values to elements in an array using indexing:
M[0,0] = 1
M

# also works for rows and columns
M[1,:] = 0
M[:,2] = -1
M
We can omit any of the three parameters in M[lower:upper:step]:
A = np.array([1,2,3,4,5])  # A was defined earlier in the original notebook
A[::]  # lower, upper, step all take the default values
A[::2] # step is 2, lower and upper default to the beginning and end of the array
A[:3]  # first three elements
A[3:]  # elements from index 3
Negative indices count from the end of the array (positive indices from the beginning):
A = np.array([1,2,3,4,5])
A[-1]  # the last element in the array
A[-3:] # the last three elements
Index slicing works exactly the same way for multidimensional arrays:
A = np.array([[n+m*10 for n in range(5)] for m in range(5)])
A

# a block from the original array
A[1:4, 1:4]

# strides
A[::2, ::2]
Fancy indexing

Fancy indexing is the name for when an array or list is used in place of an index:
row_indices = [1, 2, 3]
A[row_indices]

col_indices = [1, 2, -1] # remember, index -1 means the last element
A[row_indices, col_indices]
We can also use index masks: if the index mask is a NumPy array of data type bool, then an element is selected (True) or not (False) depending on the value of the index mask at the position of each element:
B = np.array([n for n in range(5)])
B

row_mask = np.array([True, False, True, False, False])
B[row_mask]

# same thing
row_mask = np.array([1,0,1,0,0], dtype=bool)
B[row_mask]
This feature is very useful to conditionally select elements from an array, using for example comparison operators:
x = np.arange(0, 10, 0.5)
x

mask = (5 < x) * (x < 7.5)
mask
x[mask]
Functions for extracting data from arrays and creating arrays

where

The index mask can be converted to a position index using the where function:
indices = np.where(mask)
indices
x[indices] # this indexing is equivalent to the fancy indexing x[mask]
diag

With the diag function we can also extract the diagonal and subdiagonals of an array:
np.diag(A)
np.diag(A, -1)
take

The take function is similar to fancy indexing described above:
v2 = np.arange(-3,3)
v2

row_indices = [1, 3, 5]
v2[row_indices] # fancy indexing

v2.take(row_indices)
But take also works on lists and other objects:
np.take([-3, -2, -1, 0, 1, 2], row_indices)
choose

Constructs an array by picking elements from several arrays:
which = [1, 0, 1, 0]
choices = [[-2,-2,-2,-2], [5,5,5,5]]
np.choose(which, choices)
Linear algebra

Vectorizing code is the key to writing efficient numerical calculations with Python/Numpy. That means that as much as possible of a program should be formulated in terms of matrix and vector operations, like matrix-matrix multiplication.

Scalar-array operations

We can use the usual arithmetic operators to...
v1 = np.arange(0, 5)
v1 * 2
v1 + 2
A * 2, A + 2
Matrix algebra

What about matrix multiplication? There are two ways. We can either use the dot function, which applies matrix-matrix, matrix-vector, or inner vector multiplication to its two arguments:
np.dot(A, A)
Python 3 has a new operator for using infix notation with matrix multiplication.
A @ A

np.dot(A, v1)

np.dot(v1, v1)
notebooks/intro-numpy.ipynb
AlJohri/DAT-DC-12
mit
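As a quick sketch of the @ operator (the arrays here are illustrative), it covers matrix-vector and vector-vector products just like dot:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
v = np.array([1, 1])

A @ v   # matrix-vector product, same as np.dot(A, v)
v @ v   # inner product of two vectors, same as np.dot(v, v)
```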
Alternatively, we can cast the array objects to the type matrix. This changes the behavior of the standard arithmetic operators +, -, * to use matrix algebra.
M = np.matrix(A)
v = np.matrix(v1).T # make it a column vector
v

M * M

M * v

v.T * v # inner product

v + M*v # with matrix objects, standard matrix algebra applies
notebooks/intro-numpy.ipynb
AlJohri/DAT-DC-12
mit
If we try to add, subtract or multiply objects with incompatible shapes we get an error:
v = np.matrix([1, 2, 3, 4, 5, 6]).T

M.shape, v.shape

import traceback

try:
    M * v
except ValueError as e:
    print(traceback.format_exc())
notebooks/intro-numpy.ipynb
AlJohri/DAT-DC-12
mit
See also the related functions: inner, outer, cross, kron, tensordot. Try for example help(np.kron).

Array/Matrix transformations

Above we have used .T to transpose the matrix object v. We could also have used the transpose function to accomplish the same thing. Other mathematical functions that transform matrix o...
C = np.matrix([[1j, 2j], [3j, 4j]])
C

np.conjugate(C)
notebooks/intro-numpy.ipynb
AlJohri/DAT-DC-12
mit
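As a quick sketch of some of the related functions mentioned above (the arrays are illustrative):

```python
import numpy as np

a = np.array([1, 2])
b = np.array([3, 4])

np.inner(a, b)   # 1*3 + 2*4 = 11
np.outer(a, b)   # 2x2 matrix of all pairwise products a_i * b_j
np.kron(a, b)    # Kronecker product: [1*b, 2*b] flattened
```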
Hermitian conjugate: transpose + conjugate
C.H
notebooks/intro-numpy.ipynb
AlJohri/DAT-DC-12
mit
We can extract the real and imaginary parts of complex-valued arrays using real and imag:
np.real(C) # same as: C.real

np.imag(C) # same as: C.imag
notebooks/intro-numpy.ipynb
AlJohri/DAT-DC-12
mit
Or the complex argument and absolute value
np.angle(C+1) # heads up MATLAB users, angle is used instead of arg

abs(C)
notebooks/intro-numpy.ipynb
AlJohri/DAT-DC-12
mit
Matrix computations

Inverse
np.linalg.inv(C) # equivalent to C.I

C.I * C
notebooks/intro-numpy.ipynb
AlJohri/DAT-DC-12
mit
Determinant
np.linalg.det(C)

np.linalg.det(C.I)
notebooks/intro-numpy.ipynb
AlJohri/DAT-DC-12
mit
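In practice, inverses are mostly needed to solve linear systems, and np.linalg.solve does that directly without forming the inverse. A minimal sketch with an illustrative system:

```python
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([9.0, 8.0])

x = np.linalg.solve(A, b)   # solves A x = b
np.allclose(A @ x, b)
```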
Data processing

Often it is useful to store datasets in Numpy arrays. Numpy provides a number of functions to calculate statistics of datasets in arrays. For example, let's calculate some properties from the Stockholm temperature dataset used above.
# reminder, the temperature dataset is stored in the data variable:
np.shape(data)
notebooks/intro-numpy.ipynb
AlJohri/DAT-DC-12
mit
mean
# the temperature data is in column 3
np.mean(data[:,3])
notebooks/intro-numpy.ipynb
AlJohri/DAT-DC-12
mit
The daily mean temperature in Stockholm over the last 200 years has been about 6.2 C.

standard deviation and variance
np.std(data[:,3]), np.var(data[:,3])
notebooks/intro-numpy.ipynb
AlJohri/DAT-DC-12
mit
min and max
# lowest daily average temperature
data[:,3].min()

# highest daily average temperature
data[:,3].max()
notebooks/intro-numpy.ipynb
AlJohri/DAT-DC-12
mit
sum, prod, and trace
d = np.arange(0, 10)
d

# sum up all elements
np.sum(d)

# product of all elements
np.prod(d+1)

# cumulative sum
np.cumsum(d)

# cumulative product
np.cumprod(d+1)

# same as: diag(A).sum()
np.trace(A)
notebooks/intro-numpy.ipynb
AlJohri/DAT-DC-12
mit
Computations on subsets of arrays

We can compute with subsets of the data in an array using indexing, fancy indexing, and the other methods of extracting data from an array (described above). For example, let's go back to the temperature dataset:
!head -n 3 ../data/stockholm_td_adj.dat
notebooks/intro-numpy.ipynb
AlJohri/DAT-DC-12
mit
The data format is: year, month, day, daily average temperature, low, high, location. If we are interested in the average temperature only in a particular month, say February, then we can create an index mask and use it to select only the data for that month:
np.unique(data[:,1]) # the month column takes values from 1 to 12

mask_feb = data[:,1] == 2

# the temperature data is in column 3
np.mean(data[mask_feb,3])
notebooks/intro-numpy.ipynb
AlJohri/DAT-DC-12
mit
With these tools we have very powerful data processing capabilities at our disposal. For example, extracting the monthly average temperatures for each month of the year only takes a few lines of code:
months = np.arange(1, 13)
monthly_mean = [np.mean(data[data[:,1] == month, 3]) for month in months]

fig, ax = plt.subplots()
ax.bar(months, monthly_mean)
ax.set_xlabel("Month")
ax.set_ylabel("Monthly avg. temp.");
notebooks/intro-numpy.ipynb
AlJohri/DAT-DC-12
mit
Calculations with higher-dimensional data

When functions such as min, max, etc. are applied to multidimensional arrays, it is sometimes useful to apply the calculation to the entire array, and sometimes only on a row or column basis. Using the axis argument we can specify how these functions should behave:
m = np.random.rand(3,3)
m

# global max
m.max()

# max in each column
m.max(axis=0)

# max in each row
m.max(axis=1)
notebooks/intro-numpy.ipynb
AlJohri/DAT-DC-12
mit
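The same axis argument works for sum, mean, and the other reductions. A small deterministic sketch (the array is illustrative):

```python
import numpy as np

m = np.array([[1, 2, 3],
              [4, 5, 6]])

m.sum()          # over all elements
m.sum(axis=0)    # column sums
m.mean(axis=1)   # row means
```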
Many other functions and methods in the array and matrix classes accept the same (optional) axis keyword argument.

Reshaping, resizing and stacking arrays

The shape of a Numpy array can be modified without copying the underlying data, which makes it a fast operation even for large arrays.
A

n, m = A.shape

B = A.reshape((1, n*m))
B

B[0,0:5] = 5 # modify the array
B

A # and the original variable is also changed. B is only a different view of the same data
notebooks/intro-numpy.ipynb
AlJohri/DAT-DC-12
mit
We can also use the function flatten to make a higher-dimensional array into a vector. But this function creates a copy of the data.
B = A.flatten()
B

B[0:5] = 10
B

A # now A has not changed, because B's data is a copy of A's, not referring to the same data
notebooks/intro-numpy.ipynb
AlJohri/DAT-DC-12
mit
Adding a new dimension: newaxis

With newaxis, we can insert new dimensions in an array, for example converting a vector to a column or row matrix:
v = np.array([1,2,3])
v.shape

# make a column matrix of the vector v
v[:, np.newaxis]

# column matrix
v[:, np.newaxis].shape

# row matrix
v[np.newaxis, :].shape
notebooks/intro-numpy.ipynb
AlJohri/DAT-DC-12
mit
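newaxis is especially handy together with broadcasting: a column vector plus a row vector expands to a full table. A small sketch with an illustrative vector:

```python
import numpy as np

v = np.array([1, 2, 3])

# (3,1) + (1,3) broadcasts to a (3,3) addition table
table = v[:, np.newaxis] + v[np.newaxis, :]
table.shape
```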
Copy and "deep copy"

To achieve high performance, assignments in Python usually do not copy the underlying objects. This is important for example when objects are passed between functions, to avoid an excessive amount of memory copying when it is not necessary (technical term: pass by reference).
A = np.array([[1, 2], [3, 4]])
A

# now B is referring to the same array data as A
B = A

# changing B affects A
B[0,0] = 10
B

A
notebooks/intro-numpy.ipynb
AlJohri/DAT-DC-12
mit
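If we instead want a truly independent copy rather than a reference to the same data, the array's copy method makes a "deep copy". A minimal sketch:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = A.copy()    # new array with its own data

B[0, 0] = 10    # modifying B no longer affects A
A[0, 0], B[0, 0]
```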
Iterating over array elements

Generally, we want to avoid iterating over the elements of arrays whenever we can (at all costs). The reason is that in an interpreted language like Python (or MATLAB/R), iterations are really slow compared to vectorized operations. However, sometimes iterations are unavoidable. For such c...
v = np.array([1,2,3,4])

for element in v:
    print(element)

M = np.array([[1,2], [3,4]])

for row in M:
    print("row", row)

    for element in row:
        print(element)
notebooks/intro-numpy.ipynb
AlJohri/DAT-DC-12
mit
When we need to iterate over the elements of an array and modify them, it is convenient to use the enumerate function to obtain both the element and its index in the for loop:
for row_idx, row in enumerate(M):
    print("row_idx", row_idx, "row", row)

    for col_idx, element in enumerate(row):
        print("col_idx", col_idx, "element", element)

        # update the matrix M: square each element
        M[row_idx, col_idx] = element ** 2

# each element in M is now squared
M
notebooks/intro-numpy.ipynb
AlJohri/DAT-DC-12
mit