Setup (specifying ignore_items). Since the earlier example made it obvious that POSTAGE (shipping fee) is the top-ranked correlation, the example below runs setup with POSTAGE ignored.
exp_arul101 = setup(data=dataset, transaction_id='InvoiceNo', item_id='Description', ignore_items=['POSTAGE'],session_id=42) arule_model2 = create_model() print(arule_model2) arule_model2.head()
_____no_output_____
MIT
06.Association-Rule-Mining-Sample/06.Association-Rule-Mining-Sample.ipynb
Kazuhito00/PyCaret-Learn
Plot the model
plot_model(arule_model2) plot_model(arule_model2, plot='3d')
_____no_output_____
MIT
06.Association-Rule-Mining-Sample/06.Association-Rule-Mining-Sample.ipynb
Kazuhito00/PyCaret-Learn
Heatmap of the top 100 cities in Vermont. Kewang Chen, 25/07/2018
import gmaps import numpy as np import pandas as pd import matplotlib.pyplot as plt import csv gmaps.configure(api_key="API") # personal info: supply your own Google API key results=[] # put locations_top_100_cites.csv in the same directory with open("locations_top_100_cites.csv") as csvfile: # location data for the top 100 cities ...
_____no_output_____
Apache-2.0
Heatmap_of _top_100_cites_in_Vermont.ipynb
kewangchen/Heatmap_of_cites_in_VT
Programming Assignment 5: Building a Movie Recommendation System Team Details:When submitting, fill your team details in this cell. Note that this is a markdown cell.Student 1 Full Name: Student 1 Student ID: Student 1 Email Address: Student 2 Full Name: Student 2 Student ID: Student 2 Email Address: Studen...
########### Do not change anything below %matplotlib inline #Array processing import numpy as np #Data analysis, wrangling and common exploratory operations import pandas as pd from pandas import Series, DataFrame from IPython.display import display #For visualization. Matplotlib for basic viz and seaborn for more...
_____no_output_____
Apache-2.0
assignments/PA5/soln_PA5_RecSys.ipynb
saravanan-thirumuruganathan/cse5334Spring2015
Part 1: Exploratory Analysis/Sparsity **Dataset:** We will be using the MovieLens 100K dataset. It is a fairly small dataset with 100K ratings from 1000 users on 1700 movies. You can download the dataset from http://grouplens.org/datasets/movielens/ . Some basic details about the dataset can be found in the README tex...
#####Do not change anything below #Load the user data users_df = pd.read_csv('ml-100k/u.user', sep='|', names=['UserId', 'Age', 'Gender', 'Occupation', 'ZipCode']) #Load the movies data: we will only use movie id and title for this assignment movies_df = pd.read_csv('ml-100k/u.item', sep='|', names=['MovieId', 'Title'...
_____no_output_____
Apache-2.0
assignments/PA5/soln_PA5_RecSys.ipynb
saravanan-thirumuruganathan/cse5334Spring2015
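The sparsity mentioned above can be computed directly from the ratings table. A minimal sketch with a hypothetical toy table (not the actual MovieLens files):

```python
import pandas as pd

# Hypothetical toy ratings in the same (UserId, MovieId, Rating) shape as u.data
ratings = pd.DataFrame({
    "UserId":  [1, 1, 2, 3, 3, 3],
    "MovieId": [10, 20, 10, 10, 20, 30],
    "Rating":  [4, 3, 5, 2, 4, 1],
})

n_users = ratings["UserId"].nunique()
n_movies = ratings["MovieId"].nunique()
# Fraction of the full user-movie matrix that actually holds a rating
density = len(ratings) / (n_users * n_movies)
sparsity = 1.0 - density
print(f"density={density:.3f}, sparsity={sparsity:.3f}")
```

For MovieLens 100K (roughly 1000 users, 1700 movies, 100K ratings) the same arithmetic gives a density of about 6%, which is why sparse data structures pay off later.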
**Common Support**: The concept of common support is a key idea in recommender systems. Given two items (movies in our case), the common support is the number of reviewers who rated both items. It is useful in both K-nearest-neighbor and collaborative-filtering-based recommenders. Specifically, if the common support is...
#Task t1g: Let us now analyze the common support for the movielens dataset # Here is the high-level idea # We are going to create an array and populate it with the common support for all pairs of movies # We will then plot a histogram and see its distribution. #This task might take quite some time - 1-2 hours for t...
_____no_output_____
Apache-2.0
assignments/PA5/soln_PA5_RecSys.ipynb
saravanan-thirumuruganathan/cse5334Spring2015
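The common support of a movie pair can be sketched as follows, again with a hypothetical toy table in place of the MovieLens data:

```python
import pandas as pd

# Hypothetical toy ratings; the real notebook uses the MovieLens tables
ratings = pd.DataFrame({
    "UserId":  [1, 1, 2, 2, 3, 3],
    "MovieId": ["A", "B", "A", "C", "A", "B"],
})

def common_support(df, movie1, movie2):
    # Number of users who rated both movies
    raters1 = set(df.loc[df["MovieId"] == movie1, "UserId"])
    raters2 = set(df.loc[df["MovieId"] == movie2, "UserId"])
    return len(raters1 & raters2)

print(common_support(ratings, "A", "B"))  # users 1 and 3 rated both -> 2
```

Computing this for all pairs is quadratic in the number of movies, which is why the task above warns it may take hours on the full dataset.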
Part 2: Nearest Neighbor based Recommender System Let us now build a simple global recommendation system based on the nearest neighbor idea.
#Task t2a: # Create a dictionary where key is Movie Name and value is id # You can either use the movies_df or read and parse the u.item file yourself movie_name_to_id_dictionary = {} #Write code to populate the movie names to this array all_movie_names = [] f = open("ml-100k/u.item", "r") lines = f.readlines(...
Movies similar to GoldenEye [25] [(4.2426406871192848, 'Program, The (1993)'), (4.6904157598234297, 'Up Close and Personal (1996)'), (4.7958315233127191, 'Quick and the Dead, The (1995)'), (4.8989794855663558, 'Murder at 1600 (1997)'), (5.0990195135927845, 'Down Periscope (1996)'), (5.0990195135927845, "My Best Frien...
Apache-2.0
assignments/PA5/soln_PA5_RecSys.ipynb
saravanan-thirumuruganathan/cse5334Spring2015
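The neighbor lists shown above pair each candidate movie with its euclidean distance to the target, sorted ascending. A minimal sketch of that idea, with a hypothetical movie-to-ratings dictionary standing in for the real data:

```python
import numpy as np

# Hypothetical movie -> {user: rating} data standing in for the MovieLens tables
ratings = {
    "GoldenEye":      {1: 4, 2: 3, 3: 5},
    "Up Close":       {1: 4, 2: 2, 4: 3},
    "Down Periscope": {2: 3, 3: 5},
}

def euclidean_distance(m1, m2):
    # Distance over the users the two movies have in common
    common = set(ratings[m1]) & set(ratings[m2])
    if not common:
        return float("inf")  # no common raters: treat as maximally dissimilar
    return float(np.sqrt(sum((ratings[m1][u] - ratings[m2][u]) ** 2 for u in common)))

# Rank the other movies by distance to the target (smaller = more similar)
target = "GoldenEye"
neighbors = sorted((euclidean_distance(target, m), m) for m in ratings if m != target)
print(neighbors)  # [(0.0, 'Down Periscope'), (1.0, 'Up Close')]
```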
Task 3: Item-based Collaborative Filtering. In this task, let us try to perform item-based collaborative filtering. In spirit, it is very similar to what you already did for Task 2. With some minor changes, you can easily build an item-based collaborative filtering recommender.
#Do not change below # By default euclidean distance can take arbitrary values # Let us "normalize" it by limiting its value between 0 and 1 and slightly changing the interpretation # 0 means that the preferences are very different # 1 means that preferences are identical # For tasks 3 and 4, remember to use this functio...
1 [(4.1925848292261962, 'Fly Away Home (1996)'), (4.0983740953356511, "Ulee's Gold (1997)"), (4.0853921793090748, 'Seven Years in Tibet (1997)'), (3.9897313336931295, 'Maltese Falcon, The (1941)'), (3.9879158866803466, 'Manchurian Candidate, The (1962)'), (3.9876557080885249, 'Cop Land (1997)'), (3.9782574914172879, '...
Apache-2.0
assignments/PA5/soln_PA5_RecSys.ipynb
saravanan-thirumuruganathan/cse5334Spring2015
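The normalization described in the comment above is commonly implemented as `1 / (1 + distance)`; a sketch under that assumption (the notebook's exact formula may differ):

```python
def normalized_similarity(distance):
    # Map a euclidean distance in [0, inf) to a similarity in (0, 1]:
    # 0 distance (identical preferences) -> 1, large distance -> toward 0
    return 1.0 / (1.0 + distance)

print(normalized_similarity(0.0))  # 1.0: identical preferences
print(normalized_similarity(3.0))  # 0.25: quite different preferences
```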
Task 4: User-based Collaborative Filtering. In this task, let us try to perform user-based collaborative filtering.
#In order to simplify the coding, let us create a nested hash structure to store the user-rating data # It will look as follows: #{ # u1: {movie_name_1:rating1, movie_name_2:rating2, ....}, # .... # un: {movie_name_1:rating1, movie_name_2:rating2, ....}, #} #Of course, we will only store the movies that the...
[(5.0000000000000009, 'Saint of Fort Washington, The (1993)'), (5.0, 'They Made Me a Criminal (1939)'), (5.0, "Someone Else's America (1995)"), (5.0, 'Santa with Muscles (1996)'), (5.0, 'Prefontaine (1997)'), (5.0, 'Marlene Dietrich: Shadow and Light (1996) '), (5.0, 'Little City (1998)'), (5.0, 'Great Day in Harlem, A...
Apache-2.0
assignments/PA5/soln_PA5_RecSys.ipynb
saravanan-thirumuruganathan/cse5334Spring2015
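The nested hash structure described in the comment can be built with `dict.setdefault`; a sketch with made-up tuples in place of the parsed MovieLens data:

```python
# Made-up (user, movie, rating) tuples; the notebook parses these from u.data
raw = [(1, "Fargo", 5), (1, "Brazil", 4), (2, "Fargo", 3), (2, "Alien", 4)]

# Build {user: {movie: rating, ...}, ...}, storing only the movies each user rated
user_ratings = {}
for user, movie, rating in raw:
    user_ratings.setdefault(user, {})[movie] = rating

print(user_ratings[1])  # {'Fargo': 5, 'Brazil': 4}
```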
Task 5: Latent Factor Models. In this task, let us try to find the simplest SVD-based latent factor model.
number_of_users = 943 number_of_movies = 1682 ratings_matrix = sp.sparse.lil_matrix((number_of_users, number_of_movies)) #Task t5a: This task requires a different data structure and hence different from prior tasks. # Here is the high level idea: # - Create a sparse matrix of type lil_matrix # - populate it with ...
_____no_output_____
Apache-2.0
assignments/PA5/soln_PA5_RecSys.ipynb
saravanan-thirumuruganathan/cse5334Spring2015
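The SVD-based latent factor idea can be sketched with `scipy.sparse.linalg.svds` on a tiny stand-in matrix (the notebook builds a 943x1682 `lil_matrix` instead):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import svds

# Tiny stand-in for the 943x1682 user-movie ratings matrix built above
ratings_matrix = sp.lil_matrix((4, 5))
for user, movie, rating in [(0, 0, 5), (0, 1, 3), (1, 0, 4), (2, 3, 2), (3, 4, 1)]:
    ratings_matrix[user, movie] = rating

# Truncated SVD with k latent factors; the product approximates the ratings,
# filling in predictions for the unrated (user, movie) cells
U, sigma, Vt = svds(ratings_matrix.tocsr().astype(float), k=2)
predicted = U @ np.diag(sigma) @ Vt
print(predicted.shape)  # (4, 5)
```

`svds` requires a floating-point matrix and `k` strictly smaller than both dimensions, which is why the sketch converts and uses `k=2`.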
Objective. In this notebook we analyze the epidemiological data for Almendralejo published by the regional government of Extremadura, taken from [fuente](https://saludextremadura.ses.es/web/casospositivos). Following the standard process, we will download the data from GitHub, analyze the fields, and prepare a series of...
import pandas as pd import matplotlib.pyplot as plt from matplotlib.ticker import FuncFormatter from matplotlib import cm import matplotlib.dates as mdates import matplotlib.ticker as ticker from matplotlib.dates import (YEARLY, MONTHLY, DateFormatter, MonthLocator,DayLocator, rrulewrapp...
_____no_output_____
MIT
graficos_almendralejo.ipynb
mharias/covid_almendralejo
Download the data. First we update some settings and prepare the variables we will need throughout the exercise.
pd.options.display.max_rows = 999 # context option to allow printing the data on screen pd.set_option('display.max_columns', None) # data source url path_fuente_datos='datos/almendralejo.xlsx' path_fuente_datos_github='https://github.com/mharias/covid_almendralejo/blob/main/datos/almendrale...
_____no_output_____
MIT
graficos_almendralejo.ipynb
mharias/covid_almendralejo
Read the data into a `pandas` DataFrame
datos = pd.read_csv(path_fuente_datos_github,sep=',') datos #datos = pd.read_excel(path_fuente_datos,skiprows=2)
_____no_output_____
MIT
graficos_almendralejo.ipynb
mharias/covid_almendralejo
Let's take a quick look at the data: a sample of values and of some columns of interest:
datos.tail() datos['date']=pd.to_datetime(datos['date'],format='%Y-%m-%d') datos['media_7']=datos['Casos positivos'].rolling(window=7).mean().round() datos['ia14'] = datos['Casos positivos'].rolling(window=14).sum()/poblacion_almendralejo*100000 datos['ia_ratio'] = datos['ia14'].pct_change(periods=7).add(1) datos.tail(...
_____no_output_____
MIT
graficos_almendralejo.ipynb
mharias/covid_almendralejo
Charts. Next we create a multi-panel chart to visualize every column with numeric data. Once we have reviewed each column we can choose the ones we want to present. Let's prepare a [Facetgrid](https://seaborn.pydata.org/gene...
sns.set(style="white",rc={"axes.facecolor": (0, 0, 0, 0)}) # Prepare the data. It is important to fill NaN fields with a zero. clave_avg='daily_cases_PCR_avg7' clave_ratio_avg = 'ratio_daily_cases_PCR_avg7' color_ratio = 'red' color_fill = 'royalblue' color_titulos = 'navy' color_linea='darkred' clave_rat...
No handles with labels found to put in legend.
MIT
graficos_almendralejo.ipynb
mharias/covid_almendralejo
from bs4 import BeautifulSoup import html5lib import requests def population(): r = requests.get("https://www.worldometers.info/world-population/india-population/") soup = BeautifulSoup(r.content, 'html.parser') target = soup.find( "div", class_="col-md-8 country-pop-description").find_all_next("s...
_____no_output_____
Apache-2.0
Covid_Project1.ipynb
saikale/Deck
Fit data to a mixture of Gaussian distributions
import numpy as np import matplotlib.pyplot as plt import tensorflow as tf from mog_class import MoG_class,MoG_indep_class from util import nzr %matplotlib inline print ("Packages loaded.")
Packages loaded.
MIT
src/demo_fit_MoG.ipynb
sjchoi86/density_network
GMM with 'MoG_indep_class': let each output dimension have its own mixture probabilities
# Instantiate MoG class tf.reset_default_graph() # Reset graph sess = tf.Session() M = MoG_indep_class(_x_dim=2,_k=5,_sess=sess) print ("MoG class instantiated.") # Training data mu1,mu2,mu3,mu4 = np.array([10,0]),np.array([10,10]),np.array([0,10]),np.array([0,0]) var1,var2,var3,var4 = 1/4,1,4,1/16 n1,n2,n3,n4 = 500,50...
_____no_output_____
MIT
src/demo_fit_MoG.ipynb
sjchoi86/density_network
Train GMM with 'MoG_class': let each mixture component model a multivariate Gaussian
# Instantiate MoG class tf.reset_default_graph() # Reset graph sess = tf.Session() M = MoG_class(_x_dim=2,_k=5,_sess=sess) print ("MoG class instantiated.") # Training data mu1,mu2,mu3,mu4 = np.array([10,0]),np.array([10,10]),np.array([0,10]),np.array([0,0]) var1,var2,var3,var4 = 1/4,1,4,1/16 n1,n2,n3,n4 = 500,500,500,...
_____no_output_____
MIT
src/demo_fit_MoG.ipynb
sjchoi86/density_network
Parse all notebooks and update the library files
from nbdev.export import * notebook2script()
Converted charts.ipynb. Converted index.ipynb. Converted logging.ipynb. Converted markups.ipynb. Converted misc.ipynb. Converted paths.ipynb. Converted registry.ipynb. Converted report.ipynb. Converted show.ipynb. Converted sklegos.ipynb.
Apache-2.0
scripts.ipynb
sizhky/torch_snippets
Parse all notebooks and update the docs
from nbdev.export2html import nbdev_build_docs nbdev_build_docs(n_workers=0)
converting: /mnt/sda1/code/torch_snippets/nbs/charts.ipynb converting: /mnt/sda1/code/torch_snippets/nbs/logging.ipynb converting: /mnt/sda1/code/torch_snippets/nbs/markups.ipynb converting: /mnt/sda1/code/torch_snippets/nbs/paths.ipynb converting: /mnt/sda1/code/torch_snippets/nbs/registry.ipynb converting: /mnt/sda1/...
Apache-2.0
scripts.ipynb
sizhky/torch_snippets
Update the notebooks if there have been any ad-hoc changes (bug fixes) made directly in the library files
from nbdev.sync import nbdev_update_lib nbdev_update_lib() !make release
rm -rf dist python setup.py sdist bdist_wheel running sdist running egg_info writing torch_snippets.egg-info/PKG-INFO writing dependency_links to torch_snippets.egg-info/dependency_links.txt writing entry points to torch_snippets.egg-info/entry_points.txt writing requirements to torch_snippets.egg-info/requires.txt wri...
Apache-2.0
scripts.ipynb
sizhky/torch_snippets
Create a softlink
''' %cd nbs !ln -s ../<libname> . %cd .. ''';
_____no_output_____
Apache-2.0
scripts.ipynb
sizhky/torch_snippets
Load the necessary packages.
import numpy as np import pandas as pd import matplotlib.pyplot as plt import tensorflow as tf import seaborn as sns
_____no_output_____
MIT
chapter2-6/chapter3_kmeans.py.ipynb
jinheeson1008/first-steps-with-tensorflow
Generate 1,000 data points at random. Roughly half have x values with mean 0.5 and standard deviation 0.6 and y values with mean 0.3 and standard deviation 0.9; the other half have x values with mean 2.5 and standard deviation 0.4 and y values with mean 0.8 and standard deviation 0.5.
num_vectors = 1000 num_clusters = 4 num_steps = 100 vector_values = [] for i in range(num_vectors): if np.random.random() > 0.5: vector_values.append([np.random.normal(0.5, 0.6), np.random.normal(0.3, 0.9)]) else: vector_values.append([np.random.normal(2.5, 0.4), ...
_____no_output_____
MIT
chapter2-6/chapter3_kmeans.py.ipynb
jinheeson1008/first-steps-with-tensorflow
Assign the values of the 2-D vector_values array to columns of a DataFrame, then plot them with seaborn.
df = pd.DataFrame({"x": [v[0] for v in vector_values], "y": [v[1] for v in vector_values]}) sns.lmplot(x="x", y="y", data=df, fit_reg=False, height=7) plt.show()
_____no_output_____
MIT
chapter2-6/chapter3_kmeans.py.ipynb
jinheeson1008/first-steps-with-tensorflow
Create a constant from vector_values and randomly choose four initial centroids. Then add an extra dimension to each of the vectors and centroids tensors.
vectors = tf.constant(vector_values) centroids = tf.Variable(tf.slice(tf.random_shuffle(vectors), [0,0], [num_clusters,-1])) expanded_vectors = tf.expand_dims(vectors, 0) expanded_centroids = tf.expand_dims(centroids, 1) print(expanded_vectors.get_shape()) print(expanded_centroids.get_shape())
WARNING:tensorflow:From /home/haesun/anaconda3/envs/first-steps-with-tensorflow/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version. Instructions for updating: Colocations handled automa...
MIT
chapter2-6/chapter3_kmeans.py.ipynb
jinheeson1008/first-steps-with-tensorflow
For each data point, compute the index of the nearest centroid.
distances = tf.reduce_sum(tf.square(tf.subtract(expanded_vectors, expanded_centroids)), 2) assignments = tf.argmin(distances, 0)
_____no_output_____
MIT
chapter2-6/chapter3_kmeans.py.ipynb
jinheeson1008/first-steps-with-tensorflow
Compute the mean of each cluster to obtain the new centroids.
means = tf.concat([ tf.reduce_mean( tf.gather(vectors, tf.reshape( tf.where( tf.equal(assignments, c) ),[1,-1]) ),reduction_indices=[1]) for c in range(num_clusters)], 0) update_centroids = tf.assign(centroids, means)
_____no_output_____
MIT
chapter2-6/chapter3_kmeans.py.ipynb
jinheeson1008/first-steps-with-tensorflow
Initialize the variables and start the session.
init_op = tf.global_variables_initializer() sess = tf.Session() sess.run(init_op)
_____no_output_____
MIT
chapter2-6/chapter3_kmeans.py.ipynb
jinheeson1008/first-steps-with-tensorflow
Iterate 100 times, recomputing the centroids, and print the result.
for step in range(num_steps): _, centroid_values, assignment_values = sess.run([update_centroids, centroids, assignments]) print("centroids") print(centroid_values)
centroids [[ 0.38575467 0.2242536 ] [ 0.5428277 1.3617442 ] [ 2.4923067 0.82333124] [ 0.6738804 -0.9226212 ]]
MIT
chapter2-6/chapter3_kmeans.py.ipynb
jinheeson1008/first-steps-with-tensorflow
Draw a scatter plot of the vector_values data, colored by cluster.
data = {"x": [], "y": [], "cluster": []} for i in range(len(assignment_values)): data["x"].append(vector_values[i][0]) data["y"].append(vector_values[i][1]) data["cluster"].append(assignment_values[i]) df = pd.DataFrame(data) sns.lmplot(x="x", y="y", data=df, fit_reg=False, height=7, hue...
_____no_output_____
MIT
chapter2-6/chapter3_kmeans.py.ipynb
jinheeson1008/first-steps-with-tensorflow
[Pytorch Document](http://pytorch.org/docs/master/optim.html)
# hyper parameters input_size = 2 output_size = 2 num_epochs = 60 learning_rate = 0.001 # toy dataset x_train = np.array([[3.3], [4.4], [5.5], [6.71], [6.93], [4.168], [9.779], [6.182], [7.59], [2.167], [7.042], [10.791], [5.313], [7.997], [3.1]], dtype=np.float32) y_train = ...
_____no_output_____
MIT
pytorch/02. linear regression.ipynb
zzsza/TIL
[super](https://docs.python.org/3/library/functions.html#super)
model = LinearRegression(input_size, output_size) model for i in model.parameters(): print(i) # loss and optimizer criterion = nn.MSELoss() optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate) for epoch in range(num_epochs): # convert numpy array to torch variable inputs = Variable(torch.from_n...
CPU times: user 599 µs, sys: 788 µs, total: 1.39 ms Wall time: 767 µs
MIT
pytorch/02. linear regression.ipynb
zzsza/TIL
Other Linear Regression
vis = Visdom() # make dataset num_data = 1000 noise = init.normal(torch.FloatTensor(num_data, 1), std=0.2) x = init.uniform(torch.Tensor(num_data, 1), -10, 10) y = 2*x + 3 y_noise = 2*(x+noise)+3 x y model = nn.Linear(1,1) model output = model(Variable(x)) loss_func = nn.L1Loss() optimizer = optim.SGD(model.parameters...
-0.6497 [torch.FloatTensor of size 1x1] 2.9504 [torch.FloatTensor of size 1]
MIT
pytorch/02. linear regression.ipynb
zzsza/TIL
1. First, vary the number of significant nodes and the sample size (fix non-significant nodes = 15, delta = 0.25, p = 0.5). 2. Second, vary the number of non-significant nodes and the sample size (fix significant nodes = 5, delta = 0.25, p = 0.5).
# Experiment 1 spacing = 20 ms = np.linspace(0, 50, spacing + 1)[1:].astype(int) num_sig_nodes = np.linspace(0, 100, spacing + 1)[1:].astype(int) num_non_sig_nodes = 15 delta = 0.25 reps = 100 res = Parallel(n_jobs=-1, verbose=1)( delayed(experiment)( m=m, num_sig_nodes=n, num_non_sig_nodes...
_____no_output_____
BSD-3-Clause
experiments/experiment_3/community_assignment.ipynb
neurodata/dos_and_donts
1-D Fourier Transform. 1. Use scipy.fftpack to compute the Fourier transform of the real-valued signal stored in `data.pkl`. 2. Display the magnitude spectrum and the phase spectrum. 3. Determine the most relevant frequencies by studying the magnitude spectrum and extract their associated amplitudes and angles...
import pickle t, s, Fs = pickle.load(open("data.pkl", "rb")) fig, ax = plt.subplots(figsize=(6, 3), tight_layout=True) ax.plot(t, s, '.'); ax.set_xlabel('Tiempo [s]') ax.set_ylabel('Señal');
_____no_output_____
MIT
actividades/01_match_filter/actividad.ipynb
phuijse/UACH-INFO185
Matched filter. A *matched filter* is a convolutional filter whose goal is to detect the presence of a template signal inside another signal. In this experiment we will use grayscale images. The image named `template` is the model; the image named `data` corresponds to the image to...
template = plt.imread("template.png") def get_data(s_noise=0.1): escena = plt.imread("mario1.png") return escena + s_noise*np.random.randn(*escena.shape) data = get_data(s_noise=0.1) fig, ax = plt.subplots(1, 2, figsize=(10, 4)) ax[0].imshow(template, cmap=plt.cm.Greys_r); ax[1].imshow(data, cmap=plt.cm.Grey...
_____no_output_____
MIT
actividades/01_match_filter/actividad.ipynb
phuijse/UACH-INFO185
![Banner.png](data:image/png;base64,...)
#importing matplotlib import matplotlib.pyplot as plt
_____no_output_____
MIT
Data_Visualization.ipynb
Bethinadileep/Backend-Development
**Plotting a simple plot**
x=list(range(0,10,1)) #default value is 1 y=list(range(0,10,1)) print("x = ",x) print("y = ",y) plt.figure(figsize=(5,5)) plt.xlabel('X Axis ') plt.ylabel('Y Axis ') plt.title('Simple plot') #plt.rcParams["font_size"]=15.0 plt.bar(x,y) plt.show() # x-axis values x = [5, 2, 9, 4, 7] #x.sort() # Y-axis values y...
_____no_output_____
MIT
Data_Visualization.ipynb
Bethinadileep/Backend-Development
Fine tuning your plot. Matplotlib allows us to fine-tune our plots in great detail. Here is an example:
x = np.arange(-3.14, 3.14, 0.01) y1 = np.sin(x) y2 = np.cos(x) plt.figure(figsize =(5 , 5)) plt.plot(x, y1, label='sin(x)') plt.plot(x, y2, label='cos(x)') plt.legend() plt.grid() plt.xlabel('x') plt.title('This is the title of the graph')
_____no_output_____
MIT
Data_Visualization.ipynb
Bethinadileep/Backend-Development
Showing some other useful commands: - `figure(figsize=(5,5))` sets the figure size to 5 inches by 5 inches. - `plot(x, y1, label='sin(x)')`: the `label` keyword defines the name of this line; the line label will be shown in the legend if the `legend()` command is used later. - Note that calling the `plot` command repeatedly, ...
t = np.arange(0, 2 * np.pi, 0.01) plt.subplot(2, 1, 1) plt.plot(t, np.sin(t)) plt.xlabel('t') plt.ylabel('sin(t)') plt.subplot(2, 1, 2) plt.plot(t, np.cos(t)) plt.xlabel('t') plt.ylabel('cos(t)')
_____no_output_____
MIT
Data_Visualization.ipynb
Bethinadileep/Backend-Development
Two (or more) figure windows
plt.figure(1) plt.plot(range(10),'o') plt.figure(2) plt.plot(range(100),'x')
_____no_output_____
MIT
Data_Visualization.ipynb
Bethinadileep/Backend-Development
---
model = Sequential() model.add(Dense(units=50, activation='relu', kernel_regularizer=L1L2(1e-5, 1e-5))) model.add(Dense(units=200, activation='relu', kernel_regularizer=L1L2(1e-5, 1e-5))) model.add(Dense(units=500, activation='relu', kernel_regularizer=L1L2(1e-5, 1e-5))) # model.add(Dropout(0.2)) model.add(Dense(units=...
F1 = 0.8506587435005278 Precision = 0.74014111430911
MIT
iot23_ML_training.ipynb
LRAbbade/IoT_anomaly_detection
--- Try undersampling
def shuffle_df(df): return df.sample(frac=1).reset_index(drop=True) def get_undersampled_df(file): X_train, X_test, y_train, y_test = get_train_test_dfs(file) train = pd.concat([X_train, y_train], axis=1) negative = train[train['label'] == 0] positive = train[train['label'] == 1] us_positive = s...
F1 = 0.23101133034482205 Precision = 0.999989002690675 Recall = 0.13058967876655703
MIT
iot23_ML_training.ipynb
LRAbbade/IoT_anomaly_detection
--- Try oversampling
X_train, X_test, y_train, y_test = get_train_test_dfs('dataset/shuffled/part_6') len(X_train), len(y_train), len(X_test), len(y_test) check_distrib(y_train), check_distrib(y_test) train = pd.concat([X_train, y_train], axis=1) negative = train[train['label'] == 0] positive = train[train['label'] == 1] len(negative), len...
F1 = 0.2314173085682645 Precision = 0.9999926829714453 Recall = 0.13084914335993245
MIT
iot23_ML_training.ipynb
LRAbbade/IoT_anomaly_detection
Trying both again with 60% of positives: Undersampling
X_train, X_test, y_train, y_test = get_train_test_dfs('dataset/shuffled/part_6') len(X_train), len(y_train), len(X_test), len(y_test) check_distrib(y_train), check_distrib(y_test) train = pd.concat([X_train, y_train], axis=1) negative = train[train['label'] == 0] positive = train[train['label'] == 1] len(negative), len...
F1 = 0.8505211735867056 Precision = 0.7399189563504854 Recall = 1.0
MIT
iot23_ML_training.ipynb
LRAbbade/IoT_anomaly_detection
Oversampling
X_train, X_test, y_train, y_test = get_train_test_dfs('dataset/shuffled/part_6') len(X_train), len(y_train), len(X_test), len(y_test) check_distrib(y_train), check_distrib(y_test) train = pd.concat([X_train, y_train], axis=1) negative = train[train['label'] == 0] positive = train[train['label'] == 1] len(negative), len...
F1 = 0.8505211735867056 Precision = 0.7399189563504854 Recall = 1.0
MIT
iot23_ML_training.ipynb
LRAbbade/IoT_anomaly_detection
--- Try Grid search on undersampled df
X_train, X_test, y_train, y_test = get_undersampled_df('dataset/shuffled/part_6') len(X_train), len(y_train), len(X_test), len(y_test) check_distrib(y_train), check_distrib(y_test) X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.3, stratify=y_train) len(X_train), len(y_train), len(X_val)...
F1 = 0.2303335905517547 Precision = 0.9988466101726045 Recall = 0.13017606731143458
MIT
iot23_ML_training.ipynb
LRAbbade/IoT_anomaly_detection
golden_gate = cv2.imread("goldengate.jfif") golden_gate_vis = cv2.cvtColor(golden_gate,cv2.COLOR_BGR2RGB) plt.figure(),plt.imshow(golden_gate_vis),plt.title("Orijinal") print(golden_gate.shape) mask = np.zeros(golden_gate.shape[:2],np.uint8) plt.figure(),plt.imshow(mask,cmap="gray"),plt.title("MASK") mask[150:200,0...
_____no_output_____
MIT
1-) Image Processing with Cv2/9- Image Histograms with CV2.ipynb
AkifCanSonmez/ImageProccessingCourse
Histogram Equalization: Increasing Contrast
img = cv2.imread("histogram.jpg",0) plt.figure(),plt.imshow(img,cmap="gray") img_hist = cv2.calcHist([img],channels=[0],mask=None,histSize=[256],ranges=[0,256]) plt.figure(), plt.plot(img_hist) eq_hist = cv2.equalizeHist(img) plt.figure(),plt.imshow(eq_hist,cmap="gray")
_____no_output_____
MIT
1-) Image Processing with Cv2/9- Image Histograms with CV2.ipynb
AkifCanSonmez/ImageProccessingCourse
Light tones were pushed toward 255 and dark tones toward 0.
eq_img_hist = cv2.calcHist([eq_hist],channels=[0],mask=None,histSize=[256],ranges=[0,256]) plt.figure(),plt.plot(eq_img_hist) plt.figure(),plt.plot(img_hist)
_____no_output_____
MIT
1-) Image Processing with Cv2/9- Image Histograms with CV2.ipynb
AkifCanSonmez/ImageProccessingCourse
Control Flow. To be able to do anything useful we need to control the flow of data by applying logic to the program. This section will look at the common control-flow directives available in Python. As you read through this, try to compare them to existing languages and tools. Conditional Statements: if, elif and else. We c...
# Setup the scenario value = 6 # Note: the execution will only take one of the paths here if value < 5: print("Value is less than 5") elif 6 < value <= 10: # this is short-hand for if value > 6 and value <= 10 print("Value is between 5 and 10") else: print("Value is greater than 10")
_____no_output_____
MIT
03_control_flow.ipynb
glow-mdsol/phuse_eu_connect_python
This sample code has a bug - can you spot it and fix it? A comment on indentation: * Python uses indentation to designate code blocks; this is usually managed by the tools you use, so it is much less of an issue than it once was. * Indentation controls scope; variables introduced and used within an indented block do **not*...
# Setup the scenario tonk = 1 # change this value blue = 1 if blue == 1: new_tonk = tonk + 2 tonk += 2 print("new_tonk:", new_tonk, "tonk:", tonk) else: tonk += 3 print(tonk) print(blue) print(new_tonk) # Scope in Jupyter is a little more global than in a program del(new_tonk)
4 2
MIT
03_control_flow.ipynb
glow-mdsol/phuse_eu_connect_python
Loops. Loops in Python operate in the same way as in other languages; the keywords are `for` and `while`. Both keywords operate on `iterable` types.
# Lists are iterable for x in [1, 2, 3]: print("x is", x) # Tuples are iterable for y in (1, 2, 3): print("y is", y) d = dict(a=1, b=2, c=3) # Dictionaries are also iterable for p in d: print("p is ", p) # Setup scenario c = 0 for i in range(25): c += i print(c) else: # else is a statement th...
0 1 3 6 10 15 21 28 36 45 55 66 78 91 105 120 136 153 171 190 210 231 253 276 300 Total for 24! is 300
MIT
03_control_flow.ipynb
glow-mdsol/phuse_eu_connect_python
Check out the `help` for the `range` function. `while` loops execute until the test defined on the loop is satisfied.
# Setup the test # Carry over the totals total = 0 # use a counter for the factorial counter = 0 # define our threshold value threshold = 200 while total < threshold: total += counter counter += 1 print("The last factorial less than",threshold,"is",counter)
The last factorial less than 200 is 21
MIT
03_control_flow.ipynb
glow-mdsol/phuse_eu_connect_python
You can use the `continue` keyword to skip onto the next iteration, and `break` to leave the loop
import random samples = [random.randint(-100, 100) for x in range(100)] for smp in samples: if smp < 0: continue elif smp > 50: print("Exiting as over 50") break else: print(smp)
19 5 33 Exiting
MIT
03_control_flow.ipynb
glow-mdsol/phuse_eu_connect_python
Uses of Control Flow and Operators. Searching for a value in a Tuple or List. Both lists and tuples are iterable elements, which means you can iterate over the set of values. Let's use this to check and see if a value is in a list (for the purposes of this exercise we'll consider tuples and lists interchangeable) usin...
# define our list to search l = [1, 3, 4, 7, 12, 19, 25] # initialise our variable found = False search_value = 12 # now, iterate over the values using a for loop for value in l: if value == search_value: found = True # found our value, mark the search as a success # use a conditional statement to trigg...
_____no_output_____
MIT
03_control_flow.ipynb
glow-mdsol/phuse_eu_connect_python
We can short circuit the search somewhat!
# define our list to search l = [1, 3, 4, 7, 12, 19, 25] # initialise our variable found = False search_value = 12 # now, iterate over the values using a for loop for value in l: if value == search_value: found = True # found our value, mark the search as a success break # break stops th...
_____no_output_____
MIT
03_control_flow.ipynb
glow-mdsol/phuse_eu_connect_python
And for the simplest
# define our list to search l = [1, 3, 4, 7, 12, 19, 25] # initialise our variable search_value = 12 # now, iterate over the values using a for loop for value in l: if value == search_value: print("Found value", search_value, "in", l) break # break stops the iteration else: # else run...
_____no_output_____
MIT
03_control_flow.ipynb
glow-mdsol/phuse_eu_connect_python
Say we wanted to know where the value we searched for is; we can use the `enumerate` function
# define our list to search l = [1, 3, 4, 7, 12, 19, 25] # initialise our variable search_value = 12 # the enumerate function wraps the iteration, and returns a tuple; the index of the current value and the value for i, value in enumerate(l): if value == search_value: print("Found value", search_value, "...
Found value 12 at position 4
MIT
03_control_flow.ipynb
glow-mdsol/phuse_eu_connect_python
`enumerate` takes a `start` argument, which tells the interpreter what value to start on; by default it is 0. Those of you who have read ahead will know an easier way...
# define our list to search l = [1, 3, 4, 7, 12, 19, 25] # the in operator implements a search for a value if 12 in l: # the index accessor on an iterable returns the first location the value is found print("Found value", search_value, "at position", l.index(12)) else: print("Didn't find value", search_v...
Found value 12 at 4
MIT
03_control_flow.ipynb
glow-mdsol/phuse_eu_connect_python
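The `start` argument can be sketched as follows; with `start=1` the reported positions are counted from 1 instead of 0:

```python
l = [1, 3, 4, 7, 12, 19, 25]
search_value = 12

# start=1 shifts the index reported by enumerate, so positions are 1-based
for i, value in enumerate(l, start=1):
    if value == search_value:
        print("Found value", search_value, "at position", i)  # position 5
```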
Now, an exercise for you! Using what we've discussed prior, create the code to work out the mean for the following list:
c = [23, -57, -87, -17, 29, -5, 22, 66, -52, -9, 63, -47, 64, -83, 55, -15, 91, 39, -66, -28, 34, -65, 42, -94, 62, 1, 71, -79, -29, -32, 45, -50, -51, 5, -39, 45, -29, -38, -70, -58, -57, 35, -18, -72, -43, -34, -63, 74, -36, 70]
_____no_output_____
MIT
03_control_flow.ipynb
glow-mdsol/phuse_eu_connect_python
List comprehensions. List comprehensions are a bit of **syntactic sugar**: they allow you to create a list according to a function. As an example, if we wanted to get all the positive values of `c` we could use the following list comprehension
positive_c = [x for x in c if x >= 0]
_____no_output_____
MIT
03_control_flow.ipynb
glow-mdsol/phuse_eu_connect_python
Or, get the absolute values for all the elements (note, this doesn't change the original list)
abs_val = [abs(x) for x in c] # abs is a python builtin to take the absolute value
Loops with dictionaries are a little different
# items returns a tuple of key and value pairs
for category, values in t.items():
    print(category, "->", values)
# keys returns the list of keys
for category in t.keys():
    print(category)
# values returns a list of the values
for values in t.values():
    print(values)
Fruit -> ['Tomato', 'Pear', 'Apple'] Vegetable -> ['Carrot', 'Parsnip'] Pet -> ['Dog', 'Cat', 'Budgie', 'Dog', 'Cat', 'Budgie'] Computer -> ['Mac', 'PC', 'Commodore64'] Fruit Vegetable Pet Computer ['Tomato', 'Pear', 'Apple'] ['Carrot', 'Parsnip'] ['Dog', 'Cat', 'Budgie', 'Dog', 'Cat', 'Budgie'] ['Mac', 'PC', 'Commodor...
When used on a dictionary, the `in` operator checks the keys by default
print("Computer" in t) print("Astronaut" in t) print("Parsnip" in t)
True False False
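Note that `"Parsnip"` is a *value*, not a key, which is why the last check is `False`. To search the values instead, combine `in` with `any()`; a sketch using a small dictionary shaped like `t` above:

```python
t = {"Vegetable": ["Carrot", "Parsnip"], "Pet": ["Dog", "Cat"]}
print("Parsnip" in t)                                  # False - `in` checks the keys
print(any("Parsnip" in vals for vals in t.values()))   # True - search the values instead
```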
TensorFlow Neural Network Lab In this lab, you'll use all the tools you learned from *Introduction to TensorFlow* to label images of English letters! The data you are using, notMNIST, consists of images of a letter from A to J in different fonts.The above images are a few examples of the data you'll be training on. Aft...
import hashlib
import os
import pickle
from urllib.request import urlretrieve

import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
from sklearn.utils import resample
from tqdm import tqdm
from zipfile import ZipFile

print('All modules imported.')
All modules imported.
MIT
lectures/intro_to_tensorflow/intro_to_tensorflow.ipynb
mohnkhan/deep-learning-nano-foundation
The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J).
def download(url, file): """ Download file from <url> :param url: URL to file :param file: Local file path """ if not os.path.isfile(file): print('Downloading ' + file + '...') urlretrieve(url, file) print('Download Finished') # Download the training and test dataset. do...
100%|██████████| 210001/210001 [00:43<00:00, 4773.45files/s] 100%|██████████| 10001/10001 [00:02<00:00, 4577.99files/s]
Problem 1
The first problem involves normalizing the features for your training and test data. Implement Min-Max scaling in the `normalize_grayscale()` function to a range of `a=0.1` and `b=0.9`. After scaling, the values of the pixels in the input data should range from 0.1 to 0.9. Since the raw notMNIST image data is i...
# Problem 1 - Implement Min-Max scaling for grayscale image data def normalize_grayscale(image_data): """ Normalize the image data with Min-Max scaling to a range of [0.1, 0.9] :param image_data: The image data to be normalized :return: Normalized image data """ a = 0.1 b = 0.9 X_min = 0...
Saving data to pickle file... Data cached in pickle file.
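The scaling itself is just a linear map from the raw 8-bit range onto `[a, b]`; a minimal sketch (shown on a plain list for clarity — the lab applies the same formula to a NumPy array):

```python
def min_max_scale(pixels, a=0.1, b=0.9):
    # Min-Max scaling for 8-bit grayscale: maps [0, 255] linearly onto [a, b]
    x_min, x_max = 0, 255
    return [a + (p - x_min) * (b - a) / (x_max - x_min) for p in pixels]

scaled = min_max_scale([0, 128, 255])
print(scaled)
```

The endpoints 0 and 255 land on 0.1 and 0.9, and everything else falls linearly in between.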
Checkpoint
All your progress is now saved to the pickle file. If you need to leave and come back to this lab, you no longer have to start from the beginning. Just run the code block below and it will load all the data and modules required to proceed.
%matplotlib inline # Load the modules import pickle import math import numpy as np import tensorflow as tf from tqdm import tqdm import matplotlib.pyplot as plt # Reload the data pickle_file = 'notMNIST.pickle' with open(pickle_file, 'rb') as f: pickle_data = pickle.load(f) train_features = pickle_data['train_da...
Data and modules loaded.
Problem 2
Now it's time to build a simple neural network using TensorFlow. Here, your network will be just an input layer and an output layer. For the input here the images have been flattened into a vector of $28 \times 28 = 784$ features. Then, we're trying to predict the image digit so there are 10 output units, one ...
# All the pixels in the image (28 * 28 = 784) features_count = 784 # All the labels labels_count = 10 # Set the features and labels tensors features = tf.placeholder(tf.float32) labels = tf.placeholder(tf.float32) # Set the weights and biases tensors weights = tf.Variable(tf.truncated_normal((features_count, labels_c...
Accuracy function created.
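The tensor shapes of this single-layer network can be checked with a quick NumPy sketch (the batch size of 32 below is an arbitrary choice for illustration):

```python
import numpy as np

# the single-layer network computes logits = features @ weights + biases
# shapes: (batch, 784) @ (784, 10) + (10,) -> (batch, 10)
features = np.zeros((32, 784))
weights = np.zeros((784, 10))
biases = np.zeros(10)
logits = features @ weights + biases
print(logits.shape)  # (32, 10)
```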
Problem 3
Below are 2 parameter configurations for training the neural network. In each configuration, one of the parameters has multiple options. For each configuration, choose the option that gives the best accuracy.

Parameter configurations:

Configuration 1
* **Epochs:** 1
* **Learning Rate:**
  * 0.8
  * 0.5
  * 0.1
  * 0...
# Change if you have memory restrictions batch_size = 128 # epochs = 1 # learning_rate = 0.1 epochs = 5 learning_rate = 0.2 # Gradient Descent optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss) # The accuracy measured against the validation set validation_accuracy = 0.0 # Measurements u...
Epoch 1/5: 100%|██████████| 1114/1114 [00:15<00:00, 71.51batches/s] Epoch 2/5: 100%|██████████| 1114/1114 [00:17<00:00, 64.98batches/s] Epoch 3/5: 100%|██████████| 1114/1114 [00:15<00:00, 70.32batches/s] Epoch 4/5: 100%|██████████| 1114/1114 [00:16<00:00, 69.16batches/s] Epoch 5/5: 100%|██████████| 1114/1114 [00:1...
Test
You're going to test your model against your hold out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. You should have a test accuracy of at least 80%.
# The accuracy measured against the test set test_accuracy = 0.0 with tf.Session() as session: session.run(init) batch_count = int(math.ceil(len(train_features)/batch_size)) for epoch_i in range(epochs): # Progress bar batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}...
Epoch 1/5: 100%|██████████| 1114/1114 [00:01<00:00, 857.59batches/s] Epoch 2/5: 100%|██████████| 1114/1114 [00:01<00:00, 859.69batches/s] Epoch 3/5: 100%|██████████| 1114/1114 [00:01<00:00, 861.03batches/s] Epoch 4/5: 100%|██████████| 1114/1114 [00:01<00:00, 860.91batches/s] Epoch 5/5: 100%|██████████| 1114/1114 [...
Table of Contents
1. Introduction
2. Get bands 3, 4, 5 fullsize (green, red, near-ir)
3. This cell reads in your affine transform, metadata and profile
4. This cell gets the right reflection function for your satellite
5. Read only the window pixels from the band 3, 4 files
6. ...
import rasterio import a301 import numpy as np from matplotlib import pyplot as plt from matplotlib.colors import Normalize from a301.landsat.landsat_metadata import landsat_metadata import cartopy from rasterio import windows from pyproj import transform as proj_transform from pyproj import Proj from a301.landsat.toa_...
MIT
assignments/ndvi_subscene_assignment-PA_long_way.ipynb
Pearl-Ayem/ATSC_Course_Work
Get bands 3, 4, 5 fullsize (green, red, near-ir)
At the end of this cell you should have the following path objects for your spring scene: meta_bigfile, band3_bigfile, band4_bigfile, band5_bigfile, which point to your landsat TIF and mtl.txt files.
filenames=["LC08_L1TP_190031_20170528_20170615_01_T1_B3.TIF", "LC08_L1TP_190031_20170528_20170615_01_T1_B4.TIF", "LC08_L1TP_190031_20170528_20170615_01_T1_B5.TIF", "LC08_L1TP_190031_20170528_20170615_01_T1_MTL.txt"] dest_folder=a301.data_dir / Path("landsat8/italy") band3_bigfile=list(dest_folder.glob("*_B...
This cell reads in your affine transform, metadata and profile
Using band4_bigfile (arbitrary)
metadata = landsat_metadata(meta_bigfile)
with rasterio.open(str(band4_bigfile)) as raster:
    big_transform = raster.affine
    big_profile = raster.profile
zone = metadata.UTM_ZONE
crs = cartopy.crs.UTM(zone, southern_hemisphere=False)
p_utm = Proj(crs.proj4_init)
p_lonlat = Proj(proj='latlong', datum='WGS84')
Scene LC81900312017148LGN00 center time is 2017-05-28 09:46:46
This cell gets the right reflection function for your satellite
refl_dict = {'LANDSAT_7': calc_refl_457, 'LANDSAT_8': calc_reflc_8}
satellite = metadata.SPACECRAFT_ID
refl_fun = refl_dict[satellite]
Define a subscene window and a transform
In the cell below, get the upper left col,row (ul_col, ul_row) and the upper left and lower right x,y (ul_x, ul_y, lr_x, lr_y) coordinates of your subscene as in the image_zoom notebook. Use ul_col, ul_row, ul_x, ul_y plus your subscene width and height to make a ras...
italy_lon = 13.66477 italy_lat = 41.75983 italy_x, italy_y =proj_transform(p_lonlat,p_utm,italy_lon, italy_lat) full_ul_xy=np.array(big_transform*(0,0)) print(f"orig ul corner x,y (km)={full_ul_xy*1.e-3}") ul_col, ul_row = ~big_transform*(italy_x,italy_y) ul_col, ul_row = int(ul_col), int(ul_row) l_col_offset= -13...
orig ul corner x,y (km)=[ 271.185 4742.715]
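As a sanity check, a north-up affine transform is just a scale plus an offset; the hypothetical helpers below mirror what `big_transform` and `~big_transform` do, using the 30 m Landsat pixel size and the upper-left corner printed above:

```python
# hypothetical helpers mirroring the forward/inverse affine transform for a
# north-up raster: 30 m pixels, upper-left corner at x=271185, y=4742715
def pixel_to_map(col, row, ul_x=271185.0, ul_y=4742715.0, size=30.0):
    # pixel (col, row) -> map (x, y); y decreases as row increases
    return ul_x + col * size, ul_y - row * size

def map_to_pixel(x, y, ul_x=271185.0, ul_y=4742715.0, size=30.0):
    # map (x, y) -> pixel (col, row); inverse of pixel_to_map
    return (x - ul_x) / size, (ul_y - y) / size

print(pixel_to_map(0, 0))                  # (271185.0, 4742715.0) - the ul corner
print(map_to_pixel(271485.0, 4742415.0))   # (10.0, 10.0)
```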
Read only the window pixels from the band 3, 4, 5 files
from a301.landsat.toa_reflectance import toa_reflectance_8

refl_vals = toa_reflectance_8([3, 4, 5], meta_bigfile)
refl_dict = dict()
for bandnum, filepath in zip([3, 4, 5], [band3_bigfile, band4_bigfile, band5_bigfile]):
    with rasterio.open(str(filepath)) as src:
        # keep just this band's reflectance, not the whole dictionary
        refl_dict[bandnum] = refl_vals[bandnum]
Scene LC81900312017148LGN00 center time is 2017-05-28 09:46:46
In the next cell calculate your ndvi. Save it in a variable called ndvi
# YOUR CODE HERE
ndvi = (refl_vals[5] - refl_vals[4])/(refl_vals[5] + refl_vals[4])
plt.hist(ndvi[~np.isnan(ndvi)].flat);
plt.title('spring ndvi')
plt.savefig('spring_ndvi.png')
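The same NDVI formula on a tiny synthetic pair of reflectance arrays (made-up values, just to show the arithmetic):

```python
import numpy as np

# NDVI = (NIR - red) / (NIR + red); vegetation pushes NDVI toward 1
nir = np.array([0.5, 0.4, 0.3])
red = np.array([0.1, 0.4, 0.1])
ndvi = (nir - red) / (nir + red)
print(ndvi)
```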
In the next cell plot a mapped ndvi image with a red dot in your ul corner and a white dot in your lr corner
Adjust this plot to fit your image. Just delete the bottom line and work with the provided commands
vmin=0.0 vmax=0.8 the_norm=Normalize(vmin=vmin,vmax=vmax,clip=False) palette='viridis' pal = plt.get_cmap(palette) pal.set_bad('0.75') #75% grey for out-of-map cells pal.set_over('w') #color cells > vmax red pal.set_under('k') #color cells < vmin black fig, ax = plt.subplots(1, 1,figsize=[10,15], ...
Representing Qubit States You now know something about bits, and about how our familiar digital computers work. All the complex variables, objects and data structures used in modern software are basically all just big piles of bits. Those of us who work on quantum computing call these *classical variables.* The comput...
from qiskit import QuantumCircuit, execute, Aer
from qiskit.visualization import plot_histogram, plot_bloch_vector
from math import sqrt, pi
Apache-2.0
content/ch-states/representing-qubit-states.ipynb
achieveordie/qiskit-textbook
In Qiskit, we use the `QuantumCircuit` object to store our circuits; it is essentially a list of the quantum gates in our circuit and the qubits they are applied to.
qc = QuantumCircuit(1) # Create a quantum circuit with one qubit
In our quantum circuits, our qubits always start out in the state $|0\rangle$. We can use the `initialize()` method to transform this into any state. We give `initialize()` the vector we want in the form of a list, and tell it which qubit(s) we want to initialise in this state:
qc = QuantumCircuit(1)  # Create a quantum circuit with one qubit
initial_state = [0,1]   # Define initial_state as |1>
qc.initialize(initial_state, 0)  # Apply initialisation operation to the 0th qubit
qc.draw()  # Let's view our circuit
We can then use one of Qiskit’s simulators to view the resulting state of our qubit. To begin with we will use the statevector simulator, but we will explain the different simulators and their uses later.
backend = Aer.get_backend('statevector_simulator') # Tell Qiskit how to simulate our circuit
To get the results from our circuit, we use `execute` to run our circuit, giving the circuit and the backend as arguments. We then use `.result()` to get the result of this:
qc = QuantumCircuit(1)  # Create a quantum circuit with one qubit
initial_state = [0,1]   # Define initial_state as |1>
qc.initialize(initial_state, 0)  # Apply initialisation operation to the 0th qubit
result = execute(qc,backend).result()  # Do the simulation, returning the result
From `result`, we can then get the final statevector using `.get_statevector()`:
qc = QuantumCircuit(1)  # Create a quantum circuit with one qubit
initial_state = [0,1]   # Define initial_state as |1>
qc.initialize(initial_state, 0)  # Apply initialisation operation to the 0th qubit
result = execute(qc,backend).result()  # Do the simulation, returning the result
out_state = result.get_statevector()
print(out_state)
[0.+0.j 1.+0.j]
**Note:** Python uses `j` to represent $i$ in complex numbers. We see a vector with two complex elements: `0.+0.j` = 0, and `1.+0.j` = 1.Let’s now measure our qubit as we would in a real quantum computer and see the result:
qc.measure_all()
qc.draw()
This time, instead of the statevector we will get the counts for the `0` and `1` results using `.get_counts()`:
result = execute(qc,backend).result()
counts = result.get_counts()
plot_histogram(counts)
We can see that we (unsurprisingly) have a 100% chance of measuring $|1\rangle$. This time, let’s instead put our qubit into a superposition and see what happens. We will use the state $|q_0\rangle$ from earlier in this section:

$$ |q_0\rangle = \tfrac{1}{\sqrt{2}}|0\rangle + \tfrac{i}{\sqrt{2}}|1\rangle $$

We need to ad...
initial_state = [1/sqrt(2), 1j/sqrt(2)] # Define state |q>
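A valid statevector must be normalised: the squared magnitudes of its amplitudes must sum to 1. A quick plain-Python check of the state above:

```python
from math import sqrt

initial_state = [1/sqrt(2), 1j/sqrt(2)]
# |a|^2 + |b|^2 must equal 1 for a valid statevector
norm = sum(abs(amplitude)**2 for amplitude in initial_state)
print(norm)
```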
And we then repeat the steps for initialising the qubit as before:
qc = QuantumCircuit(1)  # Must redefine qc
qc.initialize(initial_state, 0)  # Initialise the 0th qubit in the state `initial_state`
state = execute(qc,backend).result().get_statevector()  # Execute the circuit
print(state)  # Print the result
results = execute(qc,backend).result().get_counts()
plot_histogram(results)
We can see we have equal probability of measuring either $|0\rangle$ or $|1\rangle$. To explain this, we need to talk about measurement.

2. The Rules of Measurement

2.1 A Very Important Rule

There is a simple rule for measurement. To find the probability of measuring a state $|\psi \rangle$ in the state $|x\rangle$ we...
vector = [1,1]
qc.initialize(vector, 0)
Quick Exercise
1. Create a state vector that will give a $1/3$ probability of measuring $|0\rangle$.
2. Create a different state vector that will give the same measurement probabilities.
3. Verify that the probability of measuring $|1\rangle$ for these two states is $2/3$. You can check your answer in the widget below (y...
# Run the code in this cell to interact with the widget
from qiskit_textbook.widgets import state_vector_exercise
state_vector_exercise(target=1/3)
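For reference (try the widget first!), one state satisfying part 1, checked with the measurement rule in plain Python:

```python
from math import sqrt

# Born rule: p(|x>) is the squared magnitude of the amplitude of |x>
psi = [sqrt(1/3), sqrt(2/3)]   # one possible state; amplitudes for |0> and |1>
p0 = abs(psi[0])**2            # probability of measuring |0>
p1 = abs(psi[1])**2            # probability of measuring |1>
print(p0, p1)
```

Any state with amplitude magnitudes $\sqrt{1/3}$ and $\sqrt{2/3}$ (phases are free) gives the same probabilities, which answers part 2.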
2.2 Alternative measurement
The measurement rule gives us the probability $p(|x\rangle)$ that a state $|\psi\rangle$ is measured as $|x\rangle$. Nowhere does it tell us that $|x\rangle$ can only be either $|0\rangle$ or $|1\rangle$. The measurements we have considered so far are in fact only one of an infinite number of p...
qc = QuantumCircuit(1)  # Redefine qc
initial_state = [0.+1.j/sqrt(2), 1/sqrt(2)+0.j]
qc.initialize(initial_state, 0)
qc.draw()
This should initialise our qubit in the state:

$$ |q\rangle = \tfrac{i}{\sqrt{2}}|0\rangle + \tfrac{1}{\sqrt{2}}|1\rangle $$

We can verify this using the simulator:
state = execute(qc, backend).result().get_statevector()
print("Qubit State = " + str(state))
Qubit State = [0. +0.70710678j 0.70710678+0.j ]
We can see here the qubit is initialised in the state `[0.+0.70710678j 0.70710678+0.j]`, which is the state we expected.Let’s now measure this qubit:
qc.measure_all()
qc.draw()
When we simulate this entire circuit, we can see that one of the amplitudes is _always_ 0:
state = execute(qc, backend).result().get_statevector()
print("State of Measured Qubit = " + str(state))
State of Measured Qubit = [0.+0.j 1.+0.j]
You can re-run this cell a few times to reinitialise the qubit and measure it again. You will notice that either outcome is equally probable, but that the state of the qubit is never a superposition of $|0\rangle$ and $|1\rangle$. Somewhat interestingly, the global phase on the state $|0\rangle$ survives, but since thi...
from qiskit_textbook.widgets import plot_bloch_vector_spherical
coords = [pi/2, 0, 1]  # [Theta, Phi, Radius]
plot_bloch_vector_spherical(coords)  # Bloch Vector with spherical coordinates
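The conversion behind the `[Theta, Phi, Radius]` coordinates can be sketched in plain Python, assuming the usual physics convention (theta measured from the +z axis, phi from the +x axis):

```python
from math import sin, cos, pi

def spherical_to_cartesian(theta, phi, r=1.0):
    # assumed convention: theta from +z, phi from +x in the x-y plane
    return (r*sin(theta)*cos(phi), r*sin(theta)*sin(phi), r*cos(theta))

# theta=pi/2, phi=0 points along +x, the Bloch vector plotted above
x, y, z = spherical_to_cartesian(pi/2, 0)
print(x, y, z)
```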
Warning!
When first learning about qubit states, it's easy to confuse a qubit's _statevector_ with its _Bloch vector_. Remember the statevector is the vector discussed in [1.1](notation), that holds the amplitudes for the two states our qubit can be in. The Bloch vector is a visualisation tool that maps the 2D, comple...
from qiskit_textbook.widgets import bloch_calc
bloch_calc()

import qiskit
qiskit.__qiskit_version__