This is also a good way to check if our procedure was correct. If you look back into the Mathematical Computation section, you will see that the values match! Next, apply dimensionality reduction to the data to obtain the scores.
# Apply dimensionality reduction (finding the scores)
scores_sk = pca.transform(dataset_scale.values)  # .as_matrix() was removed in pandas 1.0
print(scores_sk)
[[ 1.28881571 -2.58539236]
 [ 1.23529003 -2.63038672]
 [ 1.29608702 -3.35400166]
 ...,
 [-3.39489928 -4.92057914]
 [-3.45961704 -4.93657062]
 [-3.35244805 -4.98731342]]
MIT
2016/tutorial_final/79/Tutorial.ipynb
zeromtmu/practicaldatascience.github.io
Last, get the biplot with the values obtained in this section.
plt.figure(figsize=(10, 8))
for i in scores_sk:
    plt.scatter(i[0], i[1], color='b')

# Assigning the PCs
pc1_sk = pca.components_[0]
pc2_sk = pca.components_[1]

plt.title('Biplot', fontsize=16, fontweight='bold')
plt.xlabel('PC1 (40.3%)', fontsize=14, fontweight='bold')
plt.ylabel('PC2 (35.1%)', fontsize=14, ...
_____no_output_____
MIT
2016/tutorial_final/79/Tutorial.ipynb
zeromtmu/practicaldatascience.github.io
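The check against the "Mathematical Computation" section can be sketched with plain NumPy on synthetic data (not the tutorial's dataset): PCA scores are just the centered data projected onto the eigenvectors of the covariance matrix, which is what `pca.transform` computes.

```python
import numpy as np

# Synthetic stand-in for the scaled dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))

Xc = X - X.mean(axis=0)                  # center each column
cov = np.cov(Xc, rowvar=False)           # covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
order = np.argsort(eigvals)[::-1]        # sort descending
components = eigvecs[:, order].T         # rows = principal axes

scores = Xc @ components.T               # the "scores" matrix
```

A quick sanity check on the result: the covariance of the scores should be diagonal (the components are uncorrelated), with variances in decreasing order.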
Fitting the data from a Ramsey experiment. In this notebook we analyse data from a Ramsey experiment, using the method and data from: Watson, T. F., Philips, S. G. J., Kawakami, E., Ward, D. R., Scarlino, P., Veldhorst, M., … Vandersypen, L. M. K. (2018). A programmable two-qubit quantum processor in silicon. Nature, 55...
import numpy as np
import matplotlib.pyplot as plt

from qtt.algorithms.functions import gauss_ramsey, fit_gauss_ramsey
_____no_output_____
MIT
docs/notebooks/analysis/example_fit_ramsey.ipynb
dpfranke/qtt
Test data, based on the data acquired by Watson et al.
y_data = np.array([0.6019, 0.5242, 0.3619, 0.1888, 0.1969, 0.3461, 0.5276, 0.5361, 0.4261, 0.28 , 0.2323, 0.2992, 0.4373, 0.4803, 0.4438, 0.3392, 0.3061, 0.3161, 0.3976, 0.4246, 0.398 , 0.3757, 0.3615, 0.3723, 0.3803, 0.3873, 0.3873, 0.3561, 0.37 , 0.3819, 0.3834, 0.3838, 0.37 , 0.383 , 0...
_____no_output_____
MIT
docs/notebooks/analysis/example_fit_ramsey.ipynb
dpfranke/qtt
Plotting the data:
plt.figure()
plt.plot(x_data * 1e6, y_data, '--o')
plt.xlabel(r'time ($\mu$s)')
plt.ylabel('Q1 spin-up probability')
plt.show()
_____no_output_____
MIT
docs/notebooks/analysis/example_fit_ramsey.ipynb
dpfranke/qtt
Applying the `fit_gauss_ramsey` function to fit the data:
par_fit_test, _ = fit_gauss_ramsey(x_data, y_data)
freq_fit = abs(par_fit_test[2] * 1e-6)
t2star_fit = par_fit_test[1] * 1e6
_____no_output_____
MIT
docs/notebooks/analysis/example_fit_ramsey.ipynb
dpfranke/qtt
Plotting the data and the fit:
test_x = np.linspace(0, total_wait_time, 200)

plt.figure()
plt.plot(x_data * 1e6, y_data, 'o', label='Data')
plt.plot(test_x * 1e6, gauss_ramsey(test_x, par_fit_test), label='Fit')
plt.title(r'Frequency detuning: %.1f MHz / $T_2^*$: %.1f $\mu$s' % (freq_fit, t2star_fit))
plt.xlabel(r'time ($\mu$s)')
plt.ylabel('Spin-up p...
_____no_output_____
MIT
docs/notebooks/analysis/example_fit_ramsey.ipynb
dpfranke/qtt
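The kind of fit `fit_gauss_ramsey` performs can be sketched with `scipy.optimize.curve_fit` on synthetic data (the model form below is an assumption, not qtt's exact implementation): a Gaussian-damped oscillation with amplitude A, decay time T2*, frequency f, phase and offset.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss_ramsey_model(t, A, t2star, freq, phase, offset):
    # Gaussian-damped oscillation, an assumed stand-in for qtt's gauss_ramsey.
    return A * np.exp(-(t / t2star) ** 2) * np.sin(2 * np.pi * freq * t + phase) + offset

# Synthetic, noiseless "measurement" with known parameters.
true_params = (0.3, 3e-6, 0.5e6, 0.2, 0.5)   # A, T2*, f, phi, B
t = np.linspace(0, 6e-6, 200)
y = gauss_ramsey_model(t, *true_params)

# Fit, starting from a rough initial guess near the truth.
p0 = (0.25, 2.5e-6, 0.45e6, 0.0, 0.45)
popt, _ = curve_fit(gauss_ramsey_model, t, y, p0=p0)
```

On noiseless data the fitted T2* and frequency should recover the true values closely; real data would need a sensible initial guess, which is what dedicated fitters like `fit_gauss_ramsey` provide.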
Plotting geospatial data on a map. In this first activity for geoplotlib, you'll combine the methodologies learned in the previous exercise with theoretical knowledge from previous lessons. Besides wrangling the data, you need to find the area with the given attributes. Before we can start, however, we need to import ou...
# importing the necessary dependencies
import numpy as np
import pandas as pd
import geoplotlib

# loading the dataset (make sure to have the dataset downloaded)
_____no_output_____
MIT
Lesson05/Activity27/activity27.ipynb
webobite/Data-Visualization-with-Python
**Note:** If we import our dataset without defining the dtype of column *Region* as String, we will get a warning telling us that it has mixed datatypes. We can get rid of this warning by explicitly defining the type of the values in this column with the `dtype` parameter: `dtype={'Region': str}`
# looking at the data types of each column
_____no_output_____
MIT
Lesson05/Activity27/activity27.ipynb
webobite/Data-Visualization-with-Python
**Note:** Here we can see the dtype of each column. Since String is not a primitive datatype, it's displayed as `object` here.
# showing the first 5 entries of the dataset
_____no_output_____
MIT
Lesson05/Activity27/activity27.ipynb
webobite/Data-Visualization-with-Python
--- Mapping `Latitude` and `Longitude` to `lat` and `lon`. Most datasets won't be in the format you want. Some of them might have their latitude and longitude values hidden in a differently named column. This is where the data wrangling skills of lesson 1 are needed. For the given dataset, the transformations ...
# mapping Latitude to lat and Longitude to lon
_____no_output_____
MIT
Lesson05/Activity27/activity27.ipynb
webobite/Data-Visualization-with-Python
**Note:** Geoplotlib's methods expect the dataset columns `lat` and `lon` for plotting. This means your dataframe has to be transformed to resemble this structure. --- Understanding our data. It's your first day at work; your boss hands you this dataset and wants you to dig into it and find the areas with the most adja...
# plotting the whole dataset with dots
_____no_output_____
MIT
Lesson05/Activity27/activity27.ipynb
webobite/Data-Visualization-with-Python
Other than seeing the density of our datapoints, we also need to get some information about how the data is distributed.
# amount of countries and cities

# amount of cities per country (first 20 entries)

# average num of cities per country
_____no_output_____
MIT
Lesson05/Activity27/activity27.ipynb
webobite/Data-Visualization-with-Python
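The three counts the comments above ask for can be sketched with pandas on a hypothetical mini-dataset (the real activity data has many more rows and columns):

```python
import pandas as pd

# Hypothetical stand-in for the activity's city dataset.
dataset = pd.DataFrame({
    'Country': ['de', 'de', 'de', 'gb', 'gb', 'us'],
    'City':    ['berlin', 'munich', 'hamburg', 'london', 'leeds', 'boston'],
})

n_countries = dataset['Country'].nunique()              # amount of countries
n_cities = len(dataset)                                 # amount of cities
cities_per_country = dataset.groupby('Country').size()  # cities per country
avg_cities = cities_per_country.mean()                  # average cities per country
```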
Since we are only interested in areas with densely placed cities and high population, we can filter out cities without a population. Reducing our data: our dataset has more than 3 million cities listed. Many of them are really small and can be ignored, given our objective for this activity. We only want to look at those ...
# filter for countries with a population entry (Population > 0)

# displaying the first 5 items from dataset_with_pop

# showing all cities with a defined population with a dot density plot
_____no_output_____
MIT
Lesson05/Activity27/activity27.ipynb
webobite/Data-Visualization-with-Python
**Note:** Not only has the execution time of the visualization decreased, but we can already see where the areas with more cities are. Following the request from our boss, we shall only consider areas that have a high density of adjacent cities with a population of more than 100k.
# dataset with cities with population of >= 100k

# displaying all cities >= 100k population with a fixed bounding box (WORLD) in a dot density plot
from geoplotlib.utils import BoundingBox
_____no_output_____
MIT
Lesson05/Activity27/activity27.ipynb
webobite/Data-Visualization-with-Python
**Note:** In order to get the same view of our map every time, we can set the bounding box to a constant viewport declared in the geoplotlib library. We can also instantiate the BoundingBox class with values for north, west, south, and east. --- Finding the best area. After reducing our data, we can now use more ...
# using filled voronoi to find dense areas
_____no_output_____
MIT
Lesson05/Activity27/activity27.ipynb
webobite/Data-Visualization-with-Python
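The idea behind a bounding box can be sketched in plain Python (geoplotlib's `BoundingBox` takes north, west, south, east in the same spirit): keep only the points whose coordinates fall inside the box. The coordinates below are made-up illustration data.

```python
# Rough, hypothetical bounding box around Germany (north, west, south, east).
GERMANY = {"north": 55.1, "west": 5.9, "south": 47.3, "east": 15.0}

cities = [
    {"name": "Berlin", "lat": 52.52, "lon": 13.40},
    {"name": "London", "lat": 51.51, "lon": -0.13},
    {"name": "Munich", "lat": 48.14, "lon": 11.58},
]

def in_bbox(point, bbox):
    # A point is inside if both coordinates fall within the box edges.
    return (bbox["south"] <= point["lat"] <= bbox["north"]
            and bbox["west"] <= point["lon"] <= bbox["east"])

inside = [c["name"] for c in cities if in_bbox(c, GERMANY)]
```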
In the voronoi plot we can see tendencies. Germany, Great Britain, Nigeria, India, Japan, Java, the East Coast of the USA, and Brazil stick out. We can now filter our data again and only look at those countries to find the best suited area. --- Final call. After meeting with your boss, he tells you that we want to st...
# filter 100k dataset for cities in Germany and GB

# using Delaunay triangulation to find the most dense area
_____no_output_____
MIT
Lesson05/Activity27/activity27.ipynb
webobite/Data-Visualization-with-Python
Q10: Produce a list of facilities with a total revenue less than 1000. Output the facility name and total revenue, sorted by revenue.
query = """
SELECT sub2.name AS facilityname,
       sub2.totalrevenue AS totalrevenue
FROM (
    SELECT sub1.facilityname AS name,
           SUM(sub1.revenue) AS totalrevenue
    FROM (
        SELECT b.bookid,
               f.name AS facilityname,
               CASE WHEN b.memid = 0 THEN (b.slots ...
_____no_output_____
MIT
SQL Case Study - Country Club/Unit-8.3_SQL-Project.ipynb
shalin4788/Springboard-Do-not-refer-
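The same revenue pattern (guests pay `guestcost`, members pay `membercost`, then filter the aggregate) can be written more compactly with `GROUP BY` and `HAVING`. A self-contained sketch with Python's built-in `sqlite3` and made-up tables and prices:

```python
import sqlite3

con = sqlite3.connect(':memory:')
cur = con.cursor()
# Tiny, made-up stand-ins for the country club tables.
cur.executescript("""
CREATE TABLE Facilities (facid INT, name TEXT, membercost REAL, guestcost REAL);
CREATE TABLE Bookings (facid INT, memid INT, slots INT);
INSERT INTO Facilities VALUES (0, 'Tennis Court', 5, 25), (1, 'Pool Table', 0, 5);
INSERT INTO Bookings VALUES (0, 0, 2), (0, 1, 3), (1, 0, 1), (1, 1, 2);
""")

rows = cur.execute("""
SELECT f.name,
       SUM(CASE WHEN b.memid = 0 THEN b.slots * f.guestcost
                ELSE b.slots * f.membercost END) AS totalrevenue
FROM Bookings AS b
JOIN Facilities AS f ON f.facid = b.facid
GROUP BY f.name
HAVING totalrevenue < 1000
ORDER BY totalrevenue;
""").fetchall()
```

`HAVING` filters after aggregation, which is what the nested-subquery version achieves by wrapping the `SUM` in an outer query.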
Q11: Produce a report of members and who recommended them, in alphabetical surname, firstname order.
query = """
SELECT sub2.memberName AS membername,
       sub2.recommenderfirstname || ', ' || sub2.recommendersurname AS recommendername
FROM (
    SELECT sub1.memberName AS memberName,
           sub1.recommenderId AS memberId,
           m.firstname AS recommenderfirstname,
           m.surname AS recommendersurname ...
_____no_output_____
MIT
SQL Case Study - Country Club/Unit-8.3_SQL-Project.ipynb
shalin4788/Springboard-Do-not-refer-
Q12: Find the facilities with their usage by members, but not guests.
query = """
SELECT f.name AS facilityname,
       SUM(b.slots) AS slot_usage
FROM Bookings AS b
LEFT JOIN Facilities AS f ON f.facid = b.facid
LEFT JOIN Members AS m ON m.memid = b.memid
WHERE b.memid <> 0
GROUP BY facilityname
ORDER BY slot_usage DESC;
"""
pd.read_sql_query(query, engine)
_____no_output_____
MIT
SQL Case Study - Country Club/Unit-8.3_SQL-Project.ipynb
shalin4788/Springboard-Do-not-refer-
Q13: Find the facilities' usage by month, but not guests.
query = """
SELECT sub.MONTH AS MONTH,
       sub.facilityname AS facility,
       SUM(sub.slotNumber) AS slotusage
FROM (
    SELECT strftime('%m', starttime) AS MONTH,
           f.name AS facilityname,
           b.slots AS slotNumber
    FROM Bookings AS b
    LEFT JOIN Facilities AS f ON f.facid = b.facid...
_____no_output_____
MIT
SQL Case Study - Country Club/Unit-8.3_SQL-Project.ipynb
shalin4788/Springboard-Do-not-refer-
scikit-learn-svm. Credits: Forked from [PyCon 2015 Scikit-learn Tutorial](https://github.com/jakevdp/sklearn_pycon2015) by Jake VanderPlas. * Support Vector Machine Classifier * Support Vector Machine with Kernels Classifier
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import seaborn
from sklearn.linear_model import LinearRegression
from scipy import stats
import pylab as pl

seaborn.set()
_____no_output_____
Apache-2.0
scikit-learn/scikit-learn-svm.ipynb
AadityaGupta/Artificial-Intelligence-Deep-Learning-Machine-Learning-Tutorials
Support Vector Machine Classifier Support Vector Machines (SVMs) are a powerful supervised learning algorithm used for **classification** or for **regression**. SVMs draw a boundary between clusters of data. SVMs attempt to maximize the margin between sets of points. Many lines can be drawn to separate the points ab...
from sklearn.datasets import make_blobs  # samples_generator was removed in recent scikit-learn

X, y = make_blobs(n_samples=50, centers=2, random_state=0, cluster_std=0.60)
xfit = np.linspace(-1, 3.5)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='spring')

# Draw three lines that could separate the data
for m, b, d in [(1, 0.65, 0.33),...
_____no_output_____
Apache-2.0
scikit-learn/scikit-learn-svm.ipynb
AadityaGupta/Artificial-Intelligence-Deep-Learning-Machine-Learning-Tutorials
Fit the model:
from sklearn.svm import SVC

clf = SVC(kernel='linear')
clf.fit(X, y)
_____no_output_____
Apache-2.0
scikit-learn/scikit-learn-svm.ipynb
AadityaGupta/Artificial-Intelligence-Deep-Learning-Machine-Learning-Tutorials
Plot the boundary:
def plot_svc_decision_function(clf, ax=None):
    """Plot the decision function for a 2D SVC"""
    if ax is None:
        ax = plt.gca()
    x = np.linspace(plt.xlim()[0], plt.xlim()[1], 30)
    y = np.linspace(plt.ylim()[0], plt.ylim()[1], 30)
    Y, X = np.meshgrid(y, x)
    P = np.zeros_like(X)
    for i, xi in enu...
_____no_output_____
Apache-2.0
scikit-learn/scikit-learn-svm.ipynb
AadityaGupta/Artificial-Intelligence-Deep-Learning-Machine-Learning-Tutorials
In the following plot the dashed lines touch a couple of the points known as *support vectors*, which are stored in the ``support_vectors_`` attribute of the classifier:
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='spring')
plot_svc_decision_function(clf)
plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1],
            s=200, facecolors='none');
_____no_output_____
Apache-2.0
scikit-learn/scikit-learn-svm.ipynb
AadityaGupta/Artificial-Intelligence-Deep-Learning-Machine-Learning-Tutorials
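What makes the support vectors special can be checked numerically. For a (nearly) hard-margin linear SVC, the decision function f(x) = w·x + b evaluates to roughly ±1 exactly at the support vectors and to |f(x)| ≥ 1 everywhere else. A sketch on the same kind of blob data (large C approximates the hard margin; this is an illustration, not part of the original tutorial):

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# Linearly separable blobs, as in the tutorial.
X, y = make_blobs(n_samples=50, centers=2, random_state=0, cluster_std=0.60)

# Large C ~ hard margin: support vectors sit exactly on the margin.
clf = SVC(kernel='linear', C=1e6).fit(X, y)

f_sv = clf.decision_function(clf.support_vectors_)   # should be close to +/-1
f_all = clf.decision_function(X)
```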
Use IPython's ``interact`` functionality to explore how the distribution of points affects the support vectors and the discriminative fit:
from ipywidgets import interact  # IPython.html.widgets is the old, removed location

def plot_svm(N=100):
    X, y = make_blobs(n_samples=200, centers=2, random_state=0, cluster_std=0.60)
    X = X[:N]
    y = y[:N]
    clf = SVC(kernel='linear')
    clf.fit(X, y)
    plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='spring')
    plt.xlim(-1...
_____no_output_____
Apache-2.0
scikit-learn/scikit-learn-svm.ipynb
AadityaGupta/Artificial-Intelligence-Deep-Learning-Machine-Learning-Tutorials
Support Vector Machine with Kernels Classifier. Kernels are useful when the decision boundary is not linear. A kernel is some functional transformation of the input data. SVMs have clever tricks to ensure kernel calculations are efficient. In the example below, a linear boundary is not useful in separating the groups...
from sklearn.datasets import make_circles  # samples_generator was removed in recent scikit-learn

X, y = make_circles(100, factor=.1, noise=.1)
clf = SVC(kernel='linear').fit(X, y)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='spring')
plot_svc_decision_function(clf);
_____no_output_____
Apache-2.0
scikit-learn/scikit-learn-svm.ipynb
AadityaGupta/Artificial-Intelligence-Deep-Learning-Machine-Learning-Tutorials
A simple model that could be useful is a **radial basis function**:
r = np.exp(-(X[:, 0] ** 2 + X[:, 1] ** 2))

from mpl_toolkits import mplot3d

def plot_3D(elev=30, azim=30):
    ax = plt.subplot(projection='3d')
    ax.scatter3D(X[:, 0], X[:, 1], r, c=y, s=50, cmap='spring')
    ax.view_init(elev=elev, azim=azim)
    ax.set_xlabel('x')
    ax.set_ylabel('y')
    ax.set_zlabel('r')
    ...
_____no_output_____
Apache-2.0
scikit-learn/scikit-learn-svm.ipynb
AadityaGupta/Artificial-Intelligence-Deep-Learning-Machine-Learning-Tutorials
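Why the radial basis function separates the circles can be verified without any plotting, on synthetic circle-like data (made up here, not the `make_circles` output): r = exp(-‖x‖²) maps the inner cluster near r ≈ 1 and the outer ring near r ≈ exp(-1) ≈ 0.37, so a single threshold on r splits classes that no straight line in the (x, y) plane could.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100

# Inner blob at the origin and a noisy unit ring around it.
inner = rng.normal(scale=0.05, size=(n, 2))
theta = rng.uniform(0, 2 * np.pi, n)
outer = np.c_[np.cos(theta), np.sin(theta)] + rng.normal(scale=0.05, size=(n, 2))

# The radial basis feature from the notebook.
r_inner = np.exp(-np.sum(inner ** 2, axis=1))   # near 1
r_outer = np.exp(-np.sum(outer ** 2, axis=1))   # near exp(-1)
```

Because `r_inner.min()` exceeds `r_outer.max()`, any threshold between the two values classifies the data perfectly in this new coordinate.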
In three dimensions, there is a clear separation between the data. Run the SVM with the rbf kernel:
clf = SVC(kernel='rbf')
clf.fit(X, y)

plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='spring')
plot_svc_decision_function(clf)
plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1],
            s=200, facecolors='none');
_____no_output_____
Apache-2.0
scikit-learn/scikit-learn-svm.ipynb
AadityaGupta/Artificial-Intelligence-Deep-Learning-Machine-Learning-Tutorials
Exercise: Applying PCA (Principal Component Analysis). In this notebook we will look at a simple example of using PCA. For it, we will use a dataset with data on different individuals and an indicator of whether each one is living in a home they bought or one they rent. It will be a ...
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.neighbors import K...
_____no_output_____
MIT
Bloque 3 - Machine Learning/02_No supervisado/PCA/01_Ejemplo_PCA_teoria.ipynb
franciscocanon-thebridge/bootcamp_thebridge_PTSep20
Loading the input data. The data on the individuals, with a target indicating whether each one is in a bought or a rented home, are the following:
dataframe = pd.read_csv(r"comprar_alquilar.csv")
dataframe
_____no_output_____
MIT
Bloque 3 - Machine Learning/02_No supervisado/PCA/01_Ejemplo_PCA_teoria.ipynb
franciscocanon-thebridge/bootcamp_thebridge_PTSep20
As we can see, the data are numeric, so we won't have to do any conversion of categorical variables. Visualizing the dimensions: one of the main steps we always say is worth taking is analyzing the data. To do so, we will analyze the distributions of the data bas...
pca = PCA(len(X_cols))
pca.fit(X_train_scaled)

X_train_scaled_pca = pca.transform(X_train_scaled)
X_test_scaled_pca = pca.transform(X_test_scaled)

print(X_train_scaled.shape)
print(X_train_scaled_pca.shape)
_____no_output_____
MIT
Bloque 3 - Machine Learning/02_No supervisado/PCA/01_Ejemplo_PCA_teoria.ipynb
franciscocanon-thebridge/bootcamp_thebridge_PTSep20
Explained variance. Thanks to the PCA object, certain parameters are computed automatically:
# Explained variance (as a fraction of 1):
pca.explained_variance_ratio_

# Singular values / eigenvalues: related to the explained variance
pca.singular_values_

# Eigenvectors:
pca.components_
_____no_output_____
MIT
Bloque 3 - Machine Learning/02_No supervisado/PCA/01_Ejemplo_PCA_teoria.ipynb
franciscocanon-thebridge/bootcamp_thebridge_PTSep20
Let's now plot this measure. To do so, we will use a structure we saw some time ago:
# From the eigenvalues, compute the explained variance
var_exp = pca.explained_variance_ratio_ * 100
cum_var_exp = np.cumsum(pca.explained_variance_ratio_ * 100)

# Plot a bar chart of the variance explained by each eigenvalue, plus the cumulative variance
plt.figure(figsize=(6, 4))
plt.bar(range(len(pca.expla...
_____no_output_____
MIT
Bloque 3 - Machine Learning/02_No supervisado/PCA/01_Ejemplo_PCA_teoria.ipynb
franciscocanon-thebridge/bootcamp_thebridge_PTSep20
EXERCISE: Now that we have the principal components, compute the correlation between the new variables. Does the result make sense? Prediction based on PCA: now that we have computed the new variables, we will proceed to use the algorithm we had in mind. The only thing we will change are the var...
# 1. Select the first n variables from what the PCA returns:
X_ejercicio_train = X_train_scaled_pca[:, :n_var_90]
X_ejercicio_test = X_test_scaled_pca[:, :n_var_90]

# 2. Invoke PCA with the number n of variables we want
pca_b = PCA(n_var_90)
X_ejercicio_train_b = pca_b.fit_transform(X_train_scaled...
_____no_output_____
MIT
Bloque 3 - Machine Learning/02_No supervisado/PCA/01_Ejemplo_PCA_teoria.ipynb
franciscocanon-thebridge/bootcamp_thebridge_PTSep20
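The selection step above relies on a value `n_var_90` (the number of components needed to reach 90% cumulative explained variance); the notebook truncates before showing how it is obtained, but it is presumably computed along these lines, sketched here on synthetic correlated data:

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic correlated features standing in for the scaled training data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6)) @ rng.normal(size=(6, 6))

pca = PCA().fit(X)
cum = np.cumsum(pca.explained_variance_ratio_)

# First index where the cumulative variance reaches 90%, plus one
# because searchsorted returns a 0-based insertion point.
n_var_90 = int(np.searchsorted(cum, 0.90) + 1)
```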
what I want to do manually:

amn_groups = {
    'AMN_group_"pets friendly"': [
        'AMN_cat(s)',
        'AMN_dog(s)',
        'AMN_"other pet(s)"',
        'AMN_"pets allowed"',
        'AMN_"pets live on this property"'],
    'AMN_group_"safety measures"': [
        'AMN_"lock on bedroom door"',
        'AMN_"safety card"'],
    'AMN_group_"winter friendly"': ...
from sklearn.decomposition import PCA
from sklearn.preprocessing import Normalizer

pca = PCA(n_components=3)
nml = Normalizer()

amn_pca = pca.fit_transform(nml.fit_transform(amn_df))
amn_pca_df = pd.DataFrame(amn_pca)
print(amn_pca_df.shape)
amn_pca_df.head()
amn_pca_df.to_csv('datasets/Asheville/amn_pca.csv', i...
_____no_output_____
OML
A failed attempt in Data Cleaning for the Asheville Dataset/Clustering - Asheville.ipynb
shilpiBose29/CIS519-Project
PCA
from sklearn.decomposition import PCA
from sklearn.preprocessing import scale

amns = amn_df.values  # .as_matrix() was removed in pandas 1.0

print("Scaling the values...")
amns_scaled = scale(amns)

print("Fit PCA...")
pca = PCA(n_components='mle')
pca.fit(amns_scaled)

print("Cumulative Variance explains...")
var1 = np.cumsum(pca.explained_variance_ratio...
Scaling the values...
Fit PCA...
Cumulative Variance explains...
Plotting...
OML
A failed attempt in Data Cleaning for the Asheville Dataset/Clustering - Asheville.ipynb
shilpiBose29/CIS519-Project
Lunar Rock Classification, using image augmentation techniques.
!pip install -U tensorflow-gpu

from google.colab import drive
drive.mount('/content/drive', force_remount=True)

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Conv2D, Flatten, Dropout, MaxPooling2D
from tensorflow.keras.prepr...
_____no_output_____
MIT
hackathons/lunar_image_classification/02_image_augmentation.ipynb
amitbcp/machine_learning_with_Scikit_Learn_and_TensorFlow
Evaluation: prediction to determine the threshold. The submission_binary function is written in different cells below.
filenames = test_generator.filenames
results = pd.DataFrame({"Image_File": filenames,
                        "Class": predict_class})
results['Image_File'] = results['Image_File'].apply(lambda x: x[12:])
# results['Class'] = results[results.Score == True]
results['Class'] = results['Class'].map({True: 'Small', False: "Lar...
Distribution :
Large    3772
Small    3762
Name: Class, dtype: int64
MIT
hackathons/lunar_image_classification/02_image_augmentation.ipynb
amitbcp/machine_learning_with_Scikit_Learn_and_TensorFlow
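The boolean-to-label step above can be sketched in isolation with hypothetical predictions (`predict_class` here is made-up data standing in for the model output): `Series.map` replaces each boolean with its class name.

```python
import pandas as pd

# Hypothetical boolean model outputs and file names.
predict_class = [True, False, True]
results = pd.DataFrame({"Image_File": ["a.png", "b.png", "c.png"],
                        "Class": predict_class})

# Map True -> 'Small' and False -> 'Large', as in the notebook.
results["Class"] = results["Class"].map({True: "Small", False: "Large"})
```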
Db2 Connection Document. This notebook contains the connect statement that will be used for connecting to Db2. The typical way of connecting to Db2 within a notebook is to run the db2 notebook (`db2.ipynb`) and then issue the `%sql connect` statement: ```sql %run db2.ipynb %sql connect to sample user ...``` Rather than ha...
%sql CONNECT TO SAMPLE USER DB2INST1 USING db2inst1 HOST 10.0.0.2 PORT 50000
_____no_output_____
Apache-2.0
connection.ipynb
Db2-DTE-POC/Db2-Click-To-Containerize-Lab
Copyright 2020 The TensorFlow Authors.
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under...
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
Hello, many worlds. This tutorial shows how a classical neural network can learn to correct qubit calibration errors. It introduces Cirq, a Python framework to create, edit, and invoke Noisy Intermediate Scal...
!pip install tensorflow==2.4.1
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
Install TensorFlow Quantum:
!pip install tensorflow-quantum

# Update package resources to account for version changes.
import importlib, pkg_resources
importlib.reload(pkg_resources)
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
Now import TensorFlow and the module dependencies:
import tensorflow as tf
import tensorflow_quantum as tfq

import cirq
import sympy
import numpy as np

# visualization tools
%matplotlib inline
import matplotlib.pyplot as plt
from cirq.contrib.svg import SVGCircuit
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
1. The Basics 1.1 Cirq and parameterized quantum circuitsBefore exploring TensorFlow Quantum (TFQ), let's look at some Cirq basics. Cirq is a Python library for quantum computing from Google. You use it to define circuits, including static and parameterized gates.Cirq uses SymPy symbols to represent free parameters.
a, b = sympy.symbols('a b')
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
The following code creates a two-qubit circuit using your parameters:
# Create two qubits
q0, q1 = cirq.GridQubit.rect(1, 2)

# Create a circuit on these qubits using the parameters you created above.
circuit = cirq.Circuit(
    cirq.rx(a).on(q0),
    cirq.ry(b).on(q1),
    cirq.CNOT(control=q0, target=q1))

SVGCircuit(circuit)
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
To evaluate circuits, you can use the `cirq.Simulator` interface. You replace free parameters in a circuit with specific numbers by passing in a `cirq.ParamResolver` object. The following code calculates the raw state vector output of your parameterized circuit:
# Calculate a state vector with a=0.5 and b=-0.5.
resolver = cirq.ParamResolver({a: 0.5, b: -0.5})
output_state_vector = cirq.Simulator().simulate(circuit, resolver).final_state_vector
output_state_vector
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
State vectors are not directly accessible outside of simulation (notice the complex numbers in the output above). To be physically realistic, you must specify a measurement, which converts a state vector into a real number that classical computers can understand. Cirq specifies measurements using combinations of the Pa...
z0 = cirq.Z(q0)
qubit_map = {q0: 0, q1: 1}

z0.expectation_from_state_vector(output_state_vector, qubit_map).real

z0x1 = 0.5 * z0 + cirq.X(q1)
z0x1.expectation_from_state_vector(output_state_vector, qubit_map).real
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
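What `expectation_from_state_vector` computes can be sketched with plain NumPy: the expectation of Z on qubit 0 of a 2-qubit state is ⟨ψ| Z⊗I |ψ⟩, with qubit 0 as the most significant bit (matching the `qubit_map` above). The basis states below are illustration inputs, not the circuit's actual output.

```python
import numpy as np

# Z on qubit 0 of a 2-qubit register, as an explicit 4x4 matrix.
Z = np.diag([1.0, -1.0])
Z0 = np.kron(Z, np.eye(2))

psi00 = np.array([1, 0, 0, 0], dtype=complex)   # |00>
psi10 = np.array([0, 0, 1, 0], dtype=complex)   # |10>

# <psi| Z0 |psi>: +1 when qubit 0 is |0>, -1 when it is |1>.
exp00 = float(np.real(psi00.conj() @ Z0 @ psi00))
exp10 = float(np.real(psi10.conj() @ Z0 @ psi10))
```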
1.2 Quantum circuits as tensorsTensorFlow Quantum (TFQ) provides `tfq.convert_to_tensor`, a function that converts Cirq objects into tensors. This allows you to send Cirq objects to our quantum layers and quantum ops. The function can be called on lists or arrays of Cirq Circuits and Cirq Paulis:
# Rank 1 tensor containing 1 circuit.
circuit_tensor = tfq.convert_to_tensor([circuit])
print(circuit_tensor.shape)
print(circuit_tensor.dtype)
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
This encodes the Cirq objects as `tf.string` tensors that `tfq` operations decode as needed.
# Rank 1 tensor containing 2 Pauli operators.
pauli_tensor = tfq.convert_to_tensor([z0, z0x1])
pauli_tensor.shape
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
1.3 Batching circuit simulationTFQ provides methods for computing expectation values, samples, and state vectors. For now, let's focus on *expectation values*.The highest-level interface for calculating expectation values is the `tfq.layers.Expectation` layer, which is a `tf.keras.Layer`. In its simplest form, this la...
batch_vals = np.array(np.random.uniform(0, 2 * np.pi, (5, 2)), dtype=np.float32)
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
Batching circuit execution over parameter values in Cirq requires a loop:
cirq_results = []
cirq_simulator = cirq.Simulator()

for vals in batch_vals:
    resolver = cirq.ParamResolver({a: vals[0], b: vals[1]})
    final_state_vector = cirq_simulator.simulate(circuit, resolver).final_state_vector
    cirq_results.append(
        [z0.expectation_from_state_vector(final_state_vector, { ...
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
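The loop above can be reproduced with plain NumPy matrices, assuming the standard gate conventions rx(t) = exp(-i t X/2) and ry(t) = exp(-i t Y/2) with qubit 0 as the most significant bit. For this particular circuit, ⟨Z₀⟩ reduces to cos(a) because Z on the control qubit commutes with CNOT.

```python
import numpy as np

def rx(t):
    c, s = np.cos(t / 2), np.sin(t / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

def ry(t):
    c, s = np.cos(t / 2), np.sin(t / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
Z0 = np.kron(np.diag([1.0, -1.0]), np.eye(2))

rng = np.random.default_rng(0)
batch_vals = rng.uniform(0, 2 * np.pi, (5, 2))

results = []
for a_val, b_val in batch_vals:
    # rx(a) on q0, ry(b) on q1, then CNOT, starting from |00>.
    psi = CNOT @ np.kron(rx(a_val), ry(b_val)) @ np.array([1, 0, 0, 0], dtype=complex)
    results.append(float(np.real(psi.conj() @ Z0 @ psi)))
```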
The same operation is simplified in TFQ:
tfq.layers.Expectation()(circuit,
                         symbol_names=[a, b],
                         symbol_values=batch_vals,
                         operators=z0)
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
2. Hybrid quantum-classical optimizationNow that you've seen the basics, let's use TensorFlow Quantum to construct a *hybrid quantum-classical neural net*. You will train a classical neural net to control a single qubit. The control will be optimized to correctly prepare the qubit in the `0` or `1` state, overcoming a...
# Parameters that the classical NN will feed values into.
control_params = sympy.symbols('theta_1 theta_2 theta_3')

# Create the parameterized circuit.
qubit = cirq.GridQubit(0, 0)
model_circuit = cirq.Circuit(
    cirq.rz(control_params[0])(qubit),
    cirq.ry(control_params[1])(qubit),
    cirq.rx(control_params[2])...
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
2.2 The controller. Now define the controller network:
# The classical neural network layers.
controller = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation='elu'),
    tf.keras.layers.Dense(3)
])
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
Given a batch of commands, the controller outputs a batch of control signals for the controlled circuit. The controller is randomly initialized so these outputs are not useful, yet.
controller(tf.constant([[0.0],[1.0]])).numpy()
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
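The controller's shape behavior can be sketched without TensorFlow: a Dense(10, elu) layer followed by Dense(3) maps a batch of scalar commands to a batch of three control angles. The random weights below are hypothetical stand-ins for Keras's initialization.

```python
import numpy as np

rng = np.random.default_rng(0)

def elu(x):
    # Exponential linear unit, as used by the Keras controller.
    return np.where(x > 0, x, np.exp(x) - 1)

# Randomly initialized weights, mirroring Dense(10) then Dense(3).
W1, b1 = rng.normal(size=(1, 10)), np.zeros(10)
W2, b2 = rng.normal(size=(10, 3)), np.zeros(3)

def controller(commands):
    # commands: (batch, 1) -> control signals: (batch, 3)
    h = elu(commands @ W1 + b1)
    return h @ W2 + b2

out = controller(np.array([[0.0], [1.0]]))
```

As with the untrained Keras controller, the outputs are shaped correctly but carry no useful information until training.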
2.3 Connect the controller to the circuit Use `tfq` to connect the controller to the controlled circuit, as a single `keras.Model`. See the [Keras Functional API guide](https://www.tensorflow.org/guide/keras/functional) for more about this style of model definition.First define the inputs to the model:
# This input is the simulated miscalibration that the model will learn to correct.
circuits_input = tf.keras.Input(shape=(),
                               # The circuit-tensor has dtype `tf.string`
                               dtype=tf.string,
                               name='circuits_input')

# Commands wil...
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
Next, apply operations to those inputs to define the computation.
dense_2 = controller(commands_input)

# TFQ layer for classically controlled circuits.
expectation_layer = tfq.layers.ControlledPQC(model_circuit,
                                            # Observe Z
                                            operators=cirq.Z(qubit))
expectation = expectation_layer([circuits_in...
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
Now package this computation as a `tf.keras.Model`:
# The full Keras model is built from our layers.
model = tf.keras.Model(inputs=[circuits_input, commands_input],
                       outputs=expectation)
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
The network architecture is shown by the plot of the model below. Compare this model plot to the architecture diagram to verify correctness. Note: may require a system install of the `graphviz` package.
tf.keras.utils.plot_model(model, show_shapes=True, dpi=70)
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
This model takes two inputs: The commands for the controller, and the input-circuit whose output the controller is attempting to correct. 2.4 The dataset. The model attempts to output the correct measurement value of $\hat{Z}$ for each command. The commands and correct values are defined below.
# The command input values to the classical NN.
commands = np.array([[0], [1]], dtype=np.float32)

# The desired Z expectation value at output of quantum circuit.
expected_outputs = np.array([[1], [-1]], dtype=np.float32)
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
This is not the entire training dataset for this task. Each datapoint in the dataset also needs an input circuit. 2.4 Input circuit definitionThe input-circuit below defines the random miscalibration the model will learn to correct.
random_rotations = np.random.uniform(0, 2 * np.pi, 3)
noisy_preparation = cirq.Circuit(
    cirq.rx(random_rotations[0])(qubit),
    cirq.ry(random_rotations[1])(qubit),
    cirq.rz(random_rotations[2])(qubit)
)
datapoint_circuits = tfq.convert_to_tensor([
    noisy_preparation
] * 2)  # Make two copies of this circuit
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
There are two copies of the circuit, one for each datapoint.
datapoint_circuits.shape
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
2.5 Training With the inputs defined you can test-run the `tfq` model.
model([datapoint_circuits, commands]).numpy()
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
Now run a standard training process to adjust these values towards the `expected_outputs`.
optimizer = tf.keras.optimizers.Adam(learning_rate=0.05)
loss = tf.keras.losses.MeanSquaredError()
model.compile(optimizer=optimizer, loss=loss)

history = model.fit(x=[datapoint_circuits, commands],
                    y=expected_outputs,
                    epochs=30,
                    verbose=0)

plt.plot(history.hi...
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
From this plot you can see that the neural network has learned to overcome the systematic miscalibration. 2.6 Verify outputs. Now use the trained model to correct the qubit calibration errors. With Cirq:
def check_error(command_values, desired_values): """Based on the value in `command_value` see how well you could prepare the full circuit to have `desired_value` when taking expectation w.r.t. Z.""" params_to_prepare_output = controller(command_values).numpy() full_circuit = noisy_preparation + model_circuit ...
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
The value of the loss function during training provides a rough idea of how well the model is learning. The lower the loss, the closer the expectation values in the above cell are to `desired_values`. If you aren't as concerned with the parameter values, you can always check the outputs from above using `tfq`:
model([datapoint_circuits, commands])
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
3 Learning to prepare eigenstates of different operators The choice of the $\pm \hat{Z}$ eigenstates corresponding to 1 and 0 was arbitrary. You could have just as easily wanted 1 to correspond to the $+ \hat{Z}$ eigenstate and 0 to correspond to the $-\hat{X}$ eigenstate. One way to accomplish this is by specifying a ...
# Define inputs. commands_input = tf.keras.layers.Input(shape=(1), dtype=tf.dtypes.float32, name='commands_input') circuits_input = tf.keras.Input(shape=(), # The circuit-tensor has dtype `tf.string` ...
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
Here is the controller network:
# Define classical NN. controller = tf.keras.Sequential([ tf.keras.layers.Dense(10, activation='elu'), tf.keras.layers.Dense(3) ])
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
Combine the circuit and the controller into a single `keras.Model` using `tfq`:
dense_2 = controller(commands_input) # Since you aren't using a PQC or ControlledPQC you must append # your model circuit onto the datapoint circuit tensor manually. full_circuit = tfq.layers.AddCircuit()(circuits_input, append=model_circuit) expectation_output = tfq.layers.Expectation()(full_circuit, ...
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
3.2 The dataset Now you will also include the operators you wish to measure for each datapoint you supply for `model_circuit`:
# The operators to measure, for each command. operator_data = tfq.convert_to_tensor([[cirq.X(qubit)], [cirq.Z(qubit)]]) # The command input values to the classical NN. commands = np.array([[0], [1]], dtype=np.float32) # The desired expectation value at output of quantum circuit. expected_outputs = np.array([[1], [-1]...
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
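As a sanity check on the desired expectation values above, the sketch below uses plain NumPy (rather than cirq/tfq, purely for illustration) to compute $\langle\psi|\hat{Z}|\psi\rangle$ and $\langle\psi|\hat{X}|\psi\rangle$ for single-qubit eigenstates:

```python
import numpy as np

# Pauli operators as 2x2 matrices.
Z = np.array([[1, 0], [0, -1]], dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)

def expectation(state, op):
    """Return <psi|op|psi> for a normalized single-qubit state vector."""
    return np.real(np.vdot(state, op @ state))

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)  # +X eigenstate
one = np.array([0, 1], dtype=complex)                # -Z eigenstate

print(expectation(plus, X))  # 1.0, matching the first desired output
print(expectation(one, Z))   # -1.0, matching the second desired output
```

This mirrors what `tfq.layers.Expectation` computes for each (circuit, operator) pair, just without the circuit machinery.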
3.3 Training Now that you have your new inputs and outputs, you can train once again using Keras.
optimizer = tf.keras.optimizers.Adam(learning_rate=0.05) loss = tf.keras.losses.MeanSquaredError() two_axis_control_model.compile(optimizer=optimizer, loss=loss) history = two_axis_control_model.fit( x=[datapoint_circuits, commands, operator_data], y=expected_outputs, epochs=30, verbose=1) plt.plot(hi...
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
The loss function has dropped to zero. The `controller` is available as a stand-alone model. Call the controller, and check its response to each command signal. It would take some work to correctly compare these outputs to the contents of `random_rotations`.
controller.predict(np.array([0,1]))
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
A Simple Neural Network from Scratch with PyTorch and Google Colab In this tutorial we will implement a simple neural network from scratch using PyTorch. The idea of the tutorial is to teach you the basics of PyTorch and how it can be used to implement a neural network from scratch. I will go over some of the basic f...
!pip3 install torch torchvision
Collecting torch [?25l Downloading https://files.pythonhosted.org/packages/7e/60/66415660aa46b23b5e1b72bc762e816736ce8d7260213e22365af51e8f9c/torch-1.0.0-cp36-cp36m-manylinux1_x86_64.whl (591.8MB)  100% |████████████████████████████████| 591.8MB 26kB/s tcmalloc: large alloc 1073750016 bytes == 0x61f82000 @ 0x...
MIT
nn.ipynb
LaudateCorpus1/pytorch_notebooks
The `torch` module provides all the necessary **tensor** operators you will need to implement your first neural network from scratch in PyTorch. That's right! In PyTorch everything is a Tensor, so this is the first thing you will need to get used to.
import torch import torch.nn as nn
_____no_output_____
MIT
nn.ipynb
LaudateCorpus1/pytorch_notebooks
Data Let's start by creating some sample data using the `torch.tensor` command. In NumPy, this could be done with `np.array`. Both functions serve the same purpose, but in PyTorch everything is a Tensor as opposed to a vector or matrix. We define types in PyTorch using the `dtype=torch.xxx` command. In the data below, ...
X = torch.tensor(([2, 9], [1, 5], [3, 6]), dtype=torch.float) # 3 X 2 tensor y = torch.tensor(([92], [100], [89]), dtype=torch.float) # 3 X 1 tensor xPredicted = torch.tensor(([4, 8]), dtype=torch.float) # 1 X 2 tensor
_____no_output_____
MIT
nn.ipynb
LaudateCorpus1/pytorch_notebooks
You can check the size of the tensors we have just created with the `size` command. This is equivalent to the `shape` command used in tools such as Numpy and Tensorflow.
print(X.size()) print(y.size())
torch.Size([3, 2]) torch.Size([3, 1])
MIT
nn.ipynb
LaudateCorpus1/pytorch_notebooks
Scaling Below we are performing some scaling on the sample data. Notice that the `max` function returns both a tensor and the corresponding indices. So we use `_` to capture the indices, which we won't use here because we are only interested in the max values to conduct the scaling. Perfect! Our data is now in a very ni...
# scale units X_max, _ = torch.max(X, 0) xPredicted_max, _ = torch.max(xPredicted, 0) X = torch.div(X, X_max) xPredicted = torch.div(xPredicted, xPredicted_max) y = y / 100 # max test score is 100 print(xPredicted)
tensor([0.5000, 1.0000])
MIT
nn.ipynb
LaudateCorpus1/pytorch_notebooks
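The same column-wise max scaling can be sketched with NumPy as a stand-in for the torch calls (an illustration, not part of the original notebook): `np.max` with `axis=0` plays the role of `torch.max(X, 0)`, minus the returned argmax indices that the tutorial discards with `_`.

```python
import numpy as np

X = np.array([[2, 9], [1, 5], [3, 6]], dtype=float)

# Column-wise maxima, like torch.max(X, 0).values
X_max = X.max(axis=0)   # array([3., 9.])

# Element-wise division, like torch.div(X, X_max)
X_scaled = X / X_max

print(X_scaled)
```

Each column now lies in [0, 1], with the maximum of each column mapped to exactly 1.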
Notice that there are two functions `max` and `div` that I didn't discuss above. They do exactly what they imply: `max` finds the maximum value in a vector... I mean tensor; and `div` is basically a nice little function to divide two tensors. Model (Computation Graph) Once the data has been processed and it is in the ...
class Neural_Network(nn.Module): def __init__(self, ): super(Neural_Network, self).__init__() # parameters # TODO: parameters can be parameterized instead of declaring them here self.inputSize = 2 self.outputSize = 1 self.hiddenSize = 3 # weights ...
_____no_output_____
MIT
nn.ipynb
LaudateCorpus1/pytorch_notebooks
For the purpose of this tutorial, we are not going to be talking math stuff; that's for another day. I just want you to get a gist of what it takes to build a neural network from scratch using PyTorch. Let's break down the model which was declared via the class above. Class Header First, we defined our model via a clas...
NN = Neural_Network() for i in range(1000): # trains the NN 1,000 times print ("#" + str(i) + " Loss: " + str(torch.mean((y - NN(X))**2).detach().item())) # mean sum squared loss NN.train(X, y) NN.saveWeights(NN) NN.predict()
#0 Loss: 0.28770461678504944 #1 Loss: 0.19437099993228912 #2 Loss: 0.129642054438591 #3 Loss: 0.08898762613534927 #4 Loss: 0.0638350322842598 #5 Loss: 0.04783045873045921 #6 Loss: 0.037219222635030746 #7 Loss: 0.029889358207583427 #8 Loss: 0.024637090042233467 #9 Loss: 0.020752854645252228 #10 Loss: 0.01780204102396965...
MIT
nn.ipynb
LaudateCorpus1/pytorch_notebooks
Policy Gradients on HIV Simulator An example of using WhyNot for reinforcement learning. WhyNot presents a unified interface with the [OpenAI gym](https://github.com/openai/gym), which makes it easy to run sequential decision making experiments on simulators in WhyNot. In this notebook we compare four different policies...
%load_ext autoreload %autoreload 2 import whynot.gym as gym import numpy as np import matplotlib.pyplot as plt import torch from scripts import utils %matplotlib inline
/Users/miller_john/anaconda3/envs/whynot/lib/python3.7/site-packages/statsmodels/tools/_testing.py:19: FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead. import pandas.util.testing as tm
MIT
examples/reinforcement_learning/hiv_simulator.ipynb
yoshavit/whynot
HIV Simulator The HIV simulator is a differential equation simulator based on Adams, Brian Michael, et al. Dynamic multidrug therapies for HIV: Optimal and STI control approaches. North Carolina State University, Center for Research in Scientific Computation, 2004. This HIV model has a set of 6 state and 20 simulati...
# Make the HIV environment and set random seed. env = gym.make('HIV-v0') np.random.seed(1) env.seed(1) torch.manual_seed(1)
_____no_output_____
MIT
examples/reinforcement_learning/hiv_simulator.ipynb
yoshavit/whynot
Compared Policies Base Policy Class We define a base `Policy` class. Every policy has a `sample_action` function that takes an observation and returns an action. NNPolicy A 1-layer feed-forward neural network with state dimension as input dimension, one hidden layer of 8 neurons (the state dim is 6), and action dimensio...
class NoTreatmentPolicy(utils.Policy): """The policy of always no treatment.""" def __init__(self): super(NoTreatmentPolicy, self).__init__(env) def sample_action(self, obs): return 0 class MaxTreatmentPolicy(utils.Policy): """The policy of always applying both RT inhibitor and...
_____no_output_____
MIT
examples/reinforcement_learning/hiv_simulator.ipynb
yoshavit/whynot
Policy Gradient Implementation Details For a given state $s$, a policy can be written as a probability distribution $\pi_\theta(s, a)$ over actions $a$, where $\theta$ is the parameter of the policy. The reinforcement learning objective is to learn a $\theta^*$ that maximizes the objective function $\;\;\;\; J(\theta)...
learned_policy = utils.run_training_loop( env=env, n_iter=300, max_episode_length=100, batch_size=1000, learning_rate=1e-3) policies = { "learned_policy": learned_policy, "no_treatment": NoTreatmentPolicy(), "max_treatment": MaxTreatmentPolicy(), "random": RandomPolicy(), } utils.plot_sample_traject...
Total reward for learned_policy: 4802102.5 Total reward for no_treatment: 1762320.5 Total reward for max_treatment: 2147030.5 Total reward for random: 2171225.0
MIT
examples/reinforcement_learning/hiv_simulator.ipynb
yoshavit/whynot
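A standard ingredient of policy-gradient estimators like the one trained above is the discounted reward-to-go, $q_t = \sum_{t' \ge t} \gamma^{t'-t} r_{t'}$. A minimal sketch of it in plain Python follows (a hypothetical helper for illustration; the notebook's actual implementation lives in the `scripts.utils` module and may differ):

```python
import numpy as np

def rewards_to_go(rewards, gamma=0.99):
    """Compute q_t = sum over t' >= t of gamma^(t'-t) * r_{t'} for each step t."""
    q = np.zeros(len(rewards))
    running = 0.0
    # Accumulate backwards so each step reuses the suffix sum already computed.
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        q[t] = running
    return q

print(rewards_to_go([1.0, 1.0, 1.0], gamma=0.5))  # [1.75 1.5  1.  ]
```

These per-step returns are what get (optionally normalized and) multiplied against the log-probabilities in the gradient estimate.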
Testing Fastpages Notebook Blog Post > A tutorial of fastpages for Jupyter notebooks. - toc: true - badges: true - comments: true - categories: [jupyter] - image: images/chart-preview.png About This notebook is a demonstration of some of the capabilities of [fastpages](https://github.com/fastai/fastpages) with notebooks. With `...
#hide_input print('The comment #hide_input was used to hide the code that produced this.')
The comment #hide_input was used to hide the code that produced this.
Apache-2.0
_notebooks/2020-02-20-test.ipynb
hasanaliyev/notes
put a `collapse-hide` flag at the top of any cell if you want to **hide** that cell by default, but give the reader the option to show it:
#collapse-hide import pandas as pd import altair as alt
_____no_output_____
Apache-2.0
_notebooks/2020-02-20-test.ipynb
hasanaliyev/notes
put a `collapse-show` flag at the top of any cell if you want to **show** that cell by default, but give the reader the option to hide it:
#collapse-show cars = 'https://vega.github.io/vega-datasets/data/cars.json' movies = 'https://vega.github.io/vega-datasets/data/movies.json' sp500 = 'https://vega.github.io/vega-datasets/data/sp500.csv' stocks = 'https://vega.github.io/vega-datasets/data/stocks.csv' flights = 'https://vega.github.io/vega-datasets/data/...
_____no_output_____
Apache-2.0
_notebooks/2020-02-20-test.ipynb
hasanaliyev/notes
Interactive Charts With Altair Charts made with Altair remain interactive. Example charts taken from [this repo](https://github.com/uwdata/visualization-curriculum), specifically [this notebook](https://github.com/uwdata/visualization-curriculum/blob/master/altair_interaction.ipynb).
# hide df = pd.read_json(movies) # load movies data genres = df['Major_Genre'].unique() # get unique field values genres = list(filter(lambda d: d is not None, genres)) # filter out None values genres.sort() # sort alphabetically #hide mpaa = ['G', 'PG', 'PG-13', 'R', 'NC-17', 'Not Rated']
_____no_output_____
Apache-2.0
_notebooks/2020-02-20-test.ipynb
hasanaliyev/notes
Example 1: DropDown
# single-value selection over [Major_Genre, MPAA_Rating] pairs # use specific hard-wired values as the initial selected values selection = alt.selection_single( name='Select', fields=['Major_Genre', 'MPAA_Rating'], init={'Major_Genre': 'Drama', 'MPAA_Rating': 'R'}, bind={'Major_Genre': alt.binding_selec...
_____no_output_____
Apache-2.0
_notebooks/2020-02-20-test.ipynb
hasanaliyev/notes
Example 2: Tooltips
alt.Chart(movies).mark_circle().add_selection( alt.selection_interval(bind='scales', encodings=['x']) ).encode( x='Rotten_Tomatoes_Rating:Q', y=alt.Y('IMDB_Rating:Q', axis=alt.Axis(minExtent=30)), # use min extent to stabilize axis title placement tooltip=['Title:N', 'Release_Date:N', 'IMDB_Rating:Q', '...
_____no_output_____
Apache-2.0
_notebooks/2020-02-20-test.ipynb
hasanaliyev/notes
Example 3: More Tooltips
# select a point for which to provide details-on-demand label = alt.selection_single( encodings=['x'], # limit selection to x-axis value on='mouseover', # select on mouseover events nearest=True, # select data point nearest the cursor empty='none' # empty selection includes no data points ) # d...
_____no_output_____
Apache-2.0
_notebooks/2020-02-20-test.ipynb
hasanaliyev/notes
Data Tables You can display tables the usual way in your blog:
movies = 'https://vega.github.io/vega-datasets/data/movies.json' df = pd.read_json(movies) # display table with pandas df[['Title', 'Worldwide_Gross', 'Production_Budget', 'Distributor', 'MPAA_Rating', 'IMDB_Rating', 'Rotten_Tomatoes_Rating']].head()
_____no_output_____
Apache-2.0
_notebooks/2020-02-20-test.ipynb
hasanaliyev/notes
Lambda School Data Science - Making Data-backed Assertions This is, for many, the main point of data science - to create and support reasoned arguments based on evidence. It's not a topic to master in a day, but it is worth some focused time thinking about and structuring your approach to it. Assignment - what's goin...
import pandas as pd df = pd.read_csv('https://raw.githubusercontent.com/davidanagy/DS-Unit-1-Sprint-1-Dealing-With-Data/master/module3-databackedassertions/persons.csv') df.head() # !pip install pandas==0.23.4 # Weight seems to make the most sense as a dependent variable. We would expect weight to go down as exercis...
_____no_output_____
MIT
module3-databackedassertions/LS_DS_113_Making_Data_backed_Assertions_Assignment.ipynb
davidanagy/DS-Unit-1-Sprint-1-Dealing-With-Data
1. Finding Correlation from Scratch
factor = [] for i in df.values: factor.append(round(i[4]/i[2],5)) # i[2] = Views, i[4] = Comments df['view_to_comments'] = factor df.head() print("Minimum : ", min(df['view_to_comments'])) print("Maximum : ", max(df['view_to_comments'])) print(df['view_to_comments'].mode())
Minimum : 0.00013 Maximum : 0.05427 0 0.00137 dtype: float64
Apache-2.0
Week -12/TED Talk - Correlation - Comments/Correlation - Comments.ipynb
AshishJangra27/Data-Science-Specialization
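The section is titled "Finding Correlation from Scratch", but the cell above computes a per-talk views-to-comments ratio rather than a correlation coefficient. For reference, the Pearson correlation itself can be computed from scratch in plain Python (an illustrative sketch with made-up numbers, not part of the original notebook):

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed directly from the definition."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x ** 0.5 * var_y ** 0.5)

print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))   # 1.0  (perfect positive linear relation)
print(pearson_r([1, 2, 3, 4], [8, 6, 4, 2]))   # -1.0 (perfect negative linear relation)
```

Applied to the notebook's data, `pearson_r(list(df['views']), list(df['comments']))` would quantify how tightly views and comments move together.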
2. Adding Predicted Comments Column
comments = [] for i in df['views']: comments.append(int(i * .00137)) df['pred_comments'] = comments df.head()
_____no_output_____
Apache-2.0
Week -12/TED Talk - Correlation - Comments/Correlation - Comments.ipynb
AshishJangra27/Data-Science-Specialization
Plotting Correlation 3. Correlation b/w Comments and Views
data = [] for i in df.values: data.append([i[2],i[4]]) df_ = pd.DataFrame(data, columns = ['views','comments']) views = list(df_.sort_values(by = 'views')['views']) comments = list(df_.sort_values(by = 'views')['comments']) fig, ax = plt.subplots(figsize = (15,4)) ax.plot(views,comments) plt.show() df.he...
_____no_output_____
Apache-2.0
Week -12/TED Talk - Correlation - Comments/Correlation - Comments.ipynb
AshishJangra27/Data-Science-Specialization
4. Correlation b/w Views & [Comments, Predicted Comments]
data = [] for i in df.values: data.append([i[2],i[4],i[10]]) df_ = pd.DataFrame(data, columns = ['views','comments','pred_comments']) views = list(df_.sort_values(by = 'views')['views']) likes = list(df_.sort_values(by = 'views')['comments']) likes_ = list(df_.sort_values(by = 'views')['pred_comments...
_____no_output_____
Apache-2.0
Week -12/TED Talk - Correlation - Comments/Correlation - Comments.ipynb
AshishJangra27/Data-Science-Specialization
5. Finding Loss Using MAE (Mean Absolute Error) 5.1) Finding the Mean Error
total_error = [] for i in df.values: total_error.append(abs(i[4] - i[10])) # i[4] is Actual Comments, i[10] is Predicted Comments sum(total_error) / len(total_error)
_____no_output_____
Apache-2.0
Week -12/TED Talk - Correlation - Comments/Correlation - Comments.ipynb
AshishJangra27/Data-Science-Specialization
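The cell above averages absolute differences, i.e. the mean absolute error (MAE); the mean squared error would square the differences instead. Both can be sketched side by side (a small illustration with made-up numbers, not the notebook's data):

```python
def mae(actual, predicted):
    """Mean absolute error: average of |a - p|."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def mse(actual, predicted):
    """Mean squared error: average of (a - p)^2, penalizing large misses more."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

actual = [100, 200, 300]
predicted = [110, 190, 330]

print(mae(actual, predicted))  # (10 + 10 + 30) / 3 = 16.67
print(mse(actual, predicted))  # (100 + 100 + 900) / 3 = 366.67
```

Note how the single 30-comment miss dominates the MSE but contributes only linearly to the MAE; that difference is why the choice of loss matters.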
5.2) Finding View to Comments Ratio
view_to_comments = [] for i in df.values: view_to_comments.append(round(i[4]/i[2],5)) df['view_to_comments'] = view_to_comments st = int(df['view_to_comments'].min() * 100000) end = int(df['view_to_comments'].max() * 100000) factors = [] for i in range(st,end + 1 , 1): factors.append(i/100000)
_____no_output_____
Apache-2.0
Week -12/TED Talk - Correlation - Comments/Correlation - Comments.ipynb
AshishJangra27/Data-Science-Specialization