The for loop lets us repeat an action for every value in a list. Using a for loop, the example above can be rewritten more simply:
for k in range(1, 12):
    ...
NotesDeCours/13-boucle-for.ipynb
seblabbe/MATH2010-Logiciels-mathematiques
gpl-3.0
To tell the lines apart, we can display more information:
from sympy import Eq

for k in range(2, 10):
    ...
Assigning a variable To assign a value to a variable, recall that this is done in Python, as in C, C++, or Java, with the syntax:
a = 5
The syntax a == 5 is reserved for the equality test. Updating a variable When an assignment statement is executed, the expression on the right (namely, the expression that comes after the = sign) is evaluated first. This produces a value. Then the assignment is made, so that the variable on the left refers to this new value.
n = 5
n = 3 * n + 1
Line 2 means: take the current value of n, multiply it by three, add one, and assign the result to n. So, after executing the two lines above, n points to / refers to the integer 16. If you try to get the value of a variable that has never been assigned, you get an error:
W = x + 1
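To see the error without stopping a script, the exception can be caught; a minimal sketch:

```python
# x was never assigned, so evaluating it raises NameError
try:
    W = x + 1
except NameError as e:
    msg = str(e)
print(msg)  # name 'x' is not defined
```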
Before you can update a variable, you must initialize it to a starting value, usually a simple one:
sous_total = 0
sous_total = sous_total + 1
Updating a variable by adding 1 to it is very common. This is called incrementing the variable; subtracting 1 is called decrementing. The code sous_total = sous_total + 1 computes the result of the right-hand side in a new memory location, and this new value is then assigned to the variable sous_total. The same update can be written more concisely:
sous_total += 1
A few examples The following example shows how to compute the sum of the elements of a list, using a variable s initialized to zero before the loop:
L = [134, 13614, 73467, 1451, 134, 88]
s = 0
for a in L:
    s = s + a
s
We write the same thing using the += operator to increment the variable s:
s = 0
for a in L:
    s += a
s
We check that the computation is correct:
sum(L)
The following example doubles each letter of a string:
s = 'gaston'
t = ''
for lettre in s:
    t = t + 2 * lettre
t
When the loop variable is not used inside the statement block, the convention is to use an underscore (_) to indicate it. Here, we compute the powers of the number 3. Note that the assignment expression k *= 3 is equivalent to k = k * 3:
k = 1
for _ in range(10):
    k *= 3
k
<img src="image/mean_variance.png" style="height: 75%;width: 75%; position: relative; right: 5%"> Problem 1 The first problem involves normalizing the features for your training and test data. Implement Min-Max scaling in the normalize() function to a range of a=0.1 and b=0.9. After scaling, the values of the pixels in...
# Problem 1 - Implement Min-Max scaling for grayscale image data
def normalize_grayscale(image_data):
    """
    Normalize the image data with Min-Max scaling to a range of [0.1, 0.9]
    :param image_data: The image data to be normalized
    :return: Normalized image data
    """
    # TODO: Implement Min-Max scaling...
traffic_sign/tensorflow/CarND-TensorFlow-Lab/lab.ipynb
gon1213/SDC
gpl-3.0
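A possible way to fill in the TODO above, as a sketch only: it assumes image_data is a NumPy array of grayscale values in [0, 255], which is what the lab text implies.

```python
import numpy as np

def normalize_grayscale(image_data):
    """Min-Max scale grayscale pixels from [0, 255] to [0.1, 0.9]."""
    a, b = 0.1, 0.9
    x_min, x_max = 0.0, 255.0
    return a + (image_data - x_min) * (b - a) / (x_max - x_min)

print(normalize_grayscale(np.array([0.0, 127.5, 255.0])))  # [0.1 0.5 0.9]
```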
<img src="image/weight_biases.png" style="height: 60%;width: 60%; position: relative; right: 10%"> Problem 2 For the neural network to train on your data, you need the following <a href="https://www.tensorflow.org/resources/dims_types.html#data-types">float32</a> tensors: - features - Placeholder tensor for feature ...
features_count = 784
labels_count = 10

# TODO: Set the features and labels tensors
features = tf.placeholder(tf.float32)
labels = tf.placeholder(tf.float32)

# TODO: Set the weights and biases tensors
weights = tf.Variable(tf.truncated_normal((features_count, labels_count)))
biases = tf.Variable(tf.zeros(labels_count))
...
<img src="image/learn_rate_tune.png" style="height: 60%;width: 60%"> Problem 3 Below are 3 parameter configurations for training the neural network. In each configuration, one of the parameters has multiple options. For each configuration, choose the option that gives the best accuracy. Parameter configurations: Confi...
# TODO: Find the best parameters for each configuration
epochs = 1
batch_size = 100
learning_rate = 0.1

### DON'T MODIFY ANYTHING BELOW ###
# Gradient Descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)

# The accuracy measured against the validation set
validation_accuracy = 0.0
...
Test Set the epochs, batch_size, and learning_rate with the best learning parameters you discovered in problem 3. You're going to test your model against your hold out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. You should have a test accuracy of at leas...
# TODO: Set the epochs, batch_size, and learning_rate with the best parameters from problem 3
epochs = 10
batch_size = 200
learning_rate = 0.01

### DON'T MODIFY ANYTHING BELOW ###
# The accuracy measured against the test set
test_accuracy = 0.0
with tf.Session() as session:
    session.run(init)
    batch_coun...
Now we will use one of the simplest clustering algorithms, K-means. This is an iterative algorithm which searches for three cluster centers such that the distance from each point to its cluster center is minimized.
from sklearn.cluster import KMeans
from numpy.random import RandomState
import numpy as np

rng = RandomState(42)
kmeans = KMeans(n_clusters=3, random_state=rng).fit(X_pca)
np.round(kmeans.cluster_centers_, decimals=2)
kmeans.labels_[:10]
kmeans.labels_[-10:]
AstroML/notebooks/04_iris_clustering.ipynb
diego0020/va_course_2015
mit
The K-means algorithm has been used to infer cluster labels for the points. Let's call the plot_2D function again, but color the points based on the cluster labels rather than the iris species.
plot_2D(X_pca, kmeans.labels_, ["c0", "c1", "c2"])
plot_2D(X_pca, iris.target, iris.target_names)
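Since K-means cluster indices are arbitrary, they need not line up with the species codes when comparing the two plots. One way to match each cluster to its majority species, sketched with stand-in arrays in place of kmeans.labels_ and iris.target:

```python
import numpy as np

def majority_map(labels, target):
    """Map each cluster index to the most common true label inside it."""
    return {int(c): int(np.bincount(target[labels == c]).argmax())
            for c in np.unique(labels)}

# Toy example: cluster 0 is mostly species 1, cluster 1 is mostly species 0
labels = np.array([0, 0, 0, 1, 1, 1])
target = np.array([1, 1, 0, 0, 0, 2])
print(majority_map(labels, target))  # {0: 1, 1: 0}
```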
Read the data file (taken from http://cosmo.nyu.edu/~eak306/SDSS-LRG.html), converted to ASCII with comoving distance etc. in V01; here we read from pkl files for a faster load.
# Saving the objects:
with open('datDR12Q.pkl', 'w') as f:  # Python 3: open(..., 'wb')
    pickle.dump(dat, f)

# Getting back the objects:
with open('datDR12Q.pkl') as f:  # Python 3: open(..., 'rb')
    dat = pickle.load(f)
dat

Ez = lambda x: 1/m.sqrt(0.3*(1+x)**3 + 0.7)
np.vectorize(Ez)
# Calculate comoving distance...
DR12Q/DR12Q_correl_V01_LCDMr2.ipynb
rohinkumar/galsurveystudy
mit
# Saving the objects:
with open('datDR12Q.pkl', 'w') as f:  # Python 3: open(..., 'wb')
    pickle.dump(dat, f)

# Getting back the objects:
with open('datDR12Q.pkl') as f:  # Python 3: open(..., 'rb')
    dat = pickle.load(f)
dat

bins = np.arange(0., 0.08, 0.005)
print(bins)
binsq = bins**2
binsq
len(dat)
LCDMmetricsq(...
BallTree.two_point_correlation works almost 10 times faster with leaf_size=5! Going with it for the random catalog.
dataR = ascii.read("./output/rand200kDR12Q.dat")
dataR
len(dataR)
len(dat)
rdr12f = open("./output/DR12Qsrarf.dat", 'w')
rdr12f.write("z\t ra\t dec\t s\t rar\t decr \n")
for i in range(0, len(dataR)):
    rdr12f.write("%f\t " % dataR['z'][i])
    rdr12f.write("%f\t %f\t " % (dataR['ra'][i], dataR['dec'][i]))
    rdr12f.w...
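For reference, a minimal sketch of the sklearn call being timed above, on toy 3-D points (the catalog size and separation bins are illustrative only):

```python
import numpy as np
from sklearn.neighbors import BallTree

rng = np.random.RandomState(0)
points = rng.rand(500, 3)            # toy catalog of 500 points
radii = np.linspace(0.05, 0.5, 10)   # separation bins

tree = BallTree(points, leaf_size=5)
counts = tree.two_point_correlation(points, radii)
print(counts.shape)  # one cumulative pair count per radius
```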
Next, let's upload a PNG image which we will apply all kinds of transformations on, and resize it to 500x500.
# Please assign the real file name of the image to image_name.
image_name = ''
uploaded_files = files.upload()
size = (500, 500)  # (width, height)
image = Image.open(BytesIO(uploaded_files[image_name])).resize(size)
display(image)
datathon/nusdatathon18/tutorials/image_preprocessing.ipynb
GoogleCloudPlatform/healthcare
apache-2.0
Now that we have the image uploaded, let's try rotating the image by 90 degrees counter-clockwise.
image = image.transpose(Image.ROTATE_90)
display(image)
Now let's flip the image horizontally.
image = image.transpose(Image.FLIP_LEFT_RIGHT)
display(image)
As a next step, let's adjust the contrast of the image. The base value is 1 and here we are increasing it by 20%.
contrast = ImageEnhance.Contrast(image)
image = contrast.enhance(1.2)
display(image)
And brightness and sharpness.
brightness = ImageEnhance.Brightness(image)
image = brightness.enhance(1.1)
display(image)

sharpness = ImageEnhance.Sharpness(image)
image = sharpness.enhance(1.2)
display(image)
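The whole chain can be tried without an upload; a minimal sketch on a synthetic grayscale gradient (the image content is just a stand-in, and the transpose constants are looked up in a version-tolerant way):

```python
from PIL import Image, ImageEnhance

try:  # Pillow >= 9.1 moved the constants
    ROT90 = Image.Transpose.ROTATE_90
    FLIP_LR = Image.Transpose.FLIP_LEFT_RIGHT
except AttributeError:
    ROT90, FLIP_LR = Image.ROTATE_90, Image.FLIP_LEFT_RIGHT

image = Image.new('L', (500, 500))
image.putdata([(x + y) % 256 for y in range(500) for x in range(500)])

image = image.transpose(ROT90)    # rotate 90 degrees counter-clockwise
image = image.transpose(FLIP_LR)  # mirror horizontally
image = ImageEnhance.Contrast(image).enhance(1.2)
image = ImageEnhance.Brightness(image).enhance(1.1)
image = ImageEnhance.Sharpness(image).enhance(1.2)
print(image.size)  # (500, 500)
```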
We load a famous social graph published in 1977, called Zachary's Karate club graph. This graph represents the friendships between members of a Karate Club. The club's president and the instructor were involved in a dispute, resulting in a split-up of this group. Here, we simply display the graph with matplotlib (using...
g = nx.karate_club_graph()
plt.figure(figsize=(6, 4))
nx.draw(g)
Day05_GraphAlgorithms1/notebooks/02 - Visualization.ipynb
yaoxx151/UCSB_Boot_Camp_copy
cc0-1.0
Now, we're going to display this graph in the notebook with d3.js. The first step is to bring this graph to Javascript. We choose here to export the graph in JSON. Note that d3.js generally expects each edge to be an object with a source and a target. Also, we specify which side each member has taken (club attribute).
from networkx.readwrite import json_graph

data = json_graph.node_link_data(g)
with open('graph.json', 'w') as f:
    json.dump(data, f, indent=4)
The next step is to create an HTML object that will contain the visualization. Here, we create a &lt;div&gt; element in the notebook. We also specify a few CSS styles for nodes and links (also called edges).
%%html
<div id="d3-example"></div>
<style>
.node {stroke: #fff; stroke-width: 1.5px;}
.link {stroke: #999; stroke-opacity: .6;}
</style>
The last step is trickier. We write the Javascript code to load the graph from the JSON file and display it with d3.js. Knowing the basics of d3.js is required here (see the documentation of d3.js). We also give detailed explanations in the code comments below. (http://d3js.org)
%%javascript
// We load the d3.js library from the Web.
require.config({paths: {d3: "http://d3js.org/d3.v3.min"}});
require(["d3"], function(d3) {
    // The code in this block is executed when the
    // d3.js library has been loaded.
    // First, we specify the size of the canvas containing
    // the visualiz...
Importing the Bokeh library Bokeh originally aims to output html files that are rendered as static websites (in computer graphics, rendering means producing a realistic image from a 2D picture by taking external information such as light sources, positions, and colors into account to create a 3D image). Therefore, you must specify the path of the html file where the output will be saved. If you are working in a Jupyter notebook, run the output_notebook command instead, as follows.
import bokeh.plotting as bp

# When writing to a file instead of a Jupyter notebook:
# bp.output_file("../images/msft_1.html", title="Bokeh Example (Static)")

# When running and displaying inside a Jupyter notebook:
bp.output_notebook()
통계, 머신러닝 복습/160517화_4일차_시각화 Visualization/4.웹 플롯을 위한 bokeh 패키지 소개.ipynb
kimkipyo/dss_git_kkp
mit
Plotting The preparation for plotting is now complete. First, we create a Figure class object with the figure command. Let's store it in a variable named p. http://bokeh.pydata.org/en/latest/docs/reference/plotting.html#bokeh.plotting.figure.figure
p = bp.figure(title='Historical Stock Quotes',  # plot title
              x_axis_type='datetime',          # x axis holds dates
              tools='')
Next, we call methods of the Figure class to add the actual plot objects. To draw a line plot, we call the line method. http://bokeh.pydata.org/en/latest/docs/reference/plotting.html#bokeh.plotting.figure.Figure.line
p.line(
    data['Date'],     # x coordinates
    data['Close'],    # y coordinates
    color='#0066cc',  # line color
    legend='MSFT',    # legend label
)
Now we call the show command to actually render the chart.
bp.show(p)
Adding interaction tools If you want to add interaction tools to the chart, set the tools argument when creating the Figure object.
p = bp.figure(title='Historical Stock Quotes',  # plot title
              x_axis_type='datetime',          # x axis holds dates
              tools='pan, wheel_zoom, box_zoom, reset, previewsave')
p.line(
    data['Date'],     # x coordinates
    data['Close'],    # y coordinates
    color='#0066cc',  # line color
    legend='MSFT',    # legend label
)
bp.show(p)
Note that the sequence size Nzc is lower than the number of subcarriers that will carry elements of the Zadoff-Chu sequence. That is, $Nzc \leq 300/2 = 150$. Therefore, we will append new elements (creating a cyclic sequence).
# Considering a_u currently has 139 elements, we need to append 11 elements to make 150
# TODO: Make this automatic depending on the Nsc and Nzc values
a_u1 = np.hstack([a_u1, a_u1[0:11]])
a_u2 = np.hstack([a_u2, a_u2[0:11]])
a_u3 = np.hstack([a_u3, a_u3[0:11]])
ipython_notebooks/ZadoffchuChannelEstimation.ipynb
darcamo/pyphysim
gpl-2.0
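The TODO above could be addressed by computing the extension length from the target size; a minimal sketch (the Nzc = 139 and Nsc/2 = 150 values come from the text; the stand-in sequence is just an index array):

```python
import numpy as np

def cyclic_extend(seq, target_len):
    """Cyclically repeat seq until it has target_len elements."""
    reps = int(np.ceil(target_len / len(seq)))
    return np.tile(seq, reps)[:target_len]

a_u = np.arange(139)          # stand-in for a Zadoff-Chu root sequence
ext = cyclic_extend(a_u, 150)
print(len(ext), ext[139] == a_u[0])  # 150 True
```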
Create shifted sequences for 3 users First we arbitrarily choose some cyclic shift indexes and then we call zadoffchu.getShiftedZF to get the shifted sequences.
m_u1 = 1  # Cyclic shift index
m_u2 = 4
m_u3 = 7
r1 = get_shifted_root_seq(a_u1, m_u1, denominator=8)
r2 = get_shifted_root_seq(a_u2, m_u2, denominator=8)
r3 = get_shifted_root_seq(a_u3, m_u3, denominator=8)
Generate channels from users to the BS Now it's time to transmit the shifted sequences. We need to create the fading channels from the three users to the BS.
speedTerminal = 3/3.6        # Speed in m/s
fcDbl = 2.6e9                # Central carrier frequency (in Hz)
timeTTIDbl = 1e-3            # Time of a single TTI
subcarrierBandDbl = 15e3     # Subcarrier bandwidth (in Hz)
numOfSubcarriersPRBInt = 12  # Number of subcarriers in each PRB

# xxxxxxxxxx Dependent ...
Finally we have a channel (freq. response) for each user.
# Each channel is the frequency response in 300 subcarriers
H1 = freqResponse1[:, 0]
H2 = freqResponse2[:, 0]
H3 = freqResponse3[:, 0]
h1 = np.fft.ifft(H1)
h2 = np.fft.ifft(H2)
h3 = np.fft.ifft(H3)

plt.figure(figsize=(16, 6))
plt.subplot(1, 2, 1)
plt.plot(np.abs(H1))
plt.title('Channel in Freq. Domain')
plt.subplot(1, 2, 2)
...
Perform the transmission First we need to prepare the input data from our shifted Zadoff-Chu sequences. To make things clear, let's start by transmitting a single sequence, and we won't include the white noise. Since we use a comb to transmit the SRS sequence, we will use Nsc/2 subcarriers from the Nsc subcarriers from a c...
comb_indexes = np.arange(0, Nsc, 2)

# Note that this is the received signal in the frequency domain
# Here we are not summing users
Y1 = H1[comb_indexes] * r1
Y2 = H2[comb_indexes] * r2
Y3 = H3[comb_indexes] * r3

# Complete transmit signal summing all users
Y = Y1 + Y2 + Y3
print("Size of Y: {0}".format(Y.size))
According to the paper, ... the received frequency-domain sequence Y is element-wise multiplied with the complex conjugate of the expected root sequence X before the IDFT. This provides in one shot the concatenated CIRs of all UEs multiplexed on the same root sequence. Just for checking let's get the plot of the rec...
# Just for checking, let's plot the received signal when only user 1 transmits.
y1 = np.fft.ifft(np.conj(r1) * Y1)
plt.figure(figsize=(16, 6))
plt.subplot(1, 2, 1)
plt.stem(np.abs(y1[0:40]), use_line_collection=True)
plt.title("Estimated impulse response")
plt.subplot(1, 2, 2)
plt.stem(np.abs(h1[0:40]), use_line_co...
And for user 2.
# Just for checking, let's plot the received signal when only user 2 transmits.
y2 = np.fft.ifft(np.conj(r2) * Y2)
plt.figure(figsize=(16, 6))
plt.subplot(1, 2, 1)
plt.stem(np.abs(y2[0:40]), use_line_collection=True)
plt.title("Estimated impulse response")
plt.subplot(1, 2, 2)
plt.stem(np.abs(h2[0:40]), use_line_co...
And for user 3.
# Just for checking, let's plot the received signal when only user 3 transmits.
y3 = np.fft.ifft(np.conj(r3) * Y3)
plt.figure(figsize=(16, 6))
plt.subplot(1, 2, 1)
plt.stem(np.abs(y3[0:40]), use_line_collection=True)
plt.title("Estimated impulse response")
plt.subplot(1, 2, 2)
plt.stem(np.abs(h3[0:40]), use_line_co...
Now let's get the plot of the signal considering that all users transmitted. Notice how the part due to user 1 in the plot is the same channel as when only user 1 transmitted. This indicates that the Zadoff-Chu zero cross-correlation property is indeed working.
y = np.fft.ifft(np.conj(a_u1) * Y, 150)
plt.figure(figsize=(12, 6))
plt.stem(np.abs(y), use_line_collection=True)
plt.show()
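The zero cross-correlation behind this can be checked numerically; a minimal sketch that builds a Zadoff-Chu root sequence from its textbook definition rather than the pyphysim helpers (Nzc = 139 matches the text, but the root index u = 25 and the 50-sample shift are illustrative):

```python
import numpy as np

Nzc, u = 139, 25  # sequence length and an illustrative root index
n = np.arange(Nzc)
a_u = np.exp(-1j * np.pi * u * n * (n + 1) / Nzc)  # Zadoff-Chu root sequence

r1 = a_u                                        # user 1: no cyclic shift
r2 = a_u * np.exp(-2j * np.pi * 50 * n / Nzc)   # user 2: shift of 50 samples

# Correlate the sum with the root sequence, as done for Y above:
# each user shows up as an isolated peak at its own delay
y = np.fft.ifft(np.conj(a_u) * (r1 + r2))
print(np.round(np.abs(y[0]), 3), np.round(np.abs(y[50]), 3))  # 1.0 1.0
```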
Estimate the channels Since we get a concatenation of the impulse responses of the different users, for each user we need to know the first and the last sample index corresponding to that particular user's impulse response. Since we have Nsc subcarriers, from which we will use $Nsc/2$, and we have 3 user...
m_u1
For an index equal to 1 the starting sample of the first user will be 101 and the ending sample will be 101+50-1=150.
def plot_channel_responses(h, tilde_h):
    """Plot the estimated and true channel responses

    Parameters
    ----------
    h : numpy complex array
        The true channel impulse response
    tilde_h : numpy complex array
        The estimated channel impulse response
    """
    H = np.fft.fft(h)
    tilde_H = n...
Now we will compute the squared error in each subcarrier.
tilde_H1 = np.fft.fft(tilde_h1, Nsc)
plot_normalized_squared_error(H1, tilde_H1)

y = np.fft.ifft(np.conj(r2) * Y, 150)
tilde_h2 = y[0:20]
tilde_H2 = np.fft.fft(tilde_h2, Nsc)
tilde_Y2 = tilde_H2[comb_indexes] * r2
plot_channel_responses(h2, tilde_h2)

tilde_H2 = np.fft.fft(tilde_h2, Nsc)
plot_normalized_squared_err...
Estimate the channels from a signal corrupted by white noise Now we will add some white noise to Y.
# Add white noise
noise_var = 1e-2
Y_noised = Y + np.sqrt(noise_var/2.) * (np.random.randn(Nsc//2) + 1j * np.random.randn(Nsc//2))
y_noised = np.fft.ifft(np.conj(r2) * Y_noised, 150)
tilde_h2_noised = y_noised[0:20]
plot_channel_responses(h2, tilde_h2_noised)
tilde_H2_noised = np.fft.fft(tilde_h2_noised, Nsc)
plot...
Lab 2 - Logistic Regression (LR) with MNIST This lab corresponds to Module 2 of the "Deep Learning Explained" course. We assume that you have successfully completed Lab 1 (Downloading the MNIST data). In this lab we will build and train a Multiclass Logistic Regression model using the MNIST data. Introduction Problem:...
# Figure 1
Image(url="http://3.bp.blogspot.com/_UpN7DfJA0j4/TJtUBWPk0SI/AAAAAAAAABY/oWPMtmqJn3k/s1600/mnist_originals.png",
      width=200, height=200)
DAT236x Deep Learning Explained/Lab2_LogisticRegression.ipynb
bourneli/deep-learning-notes
mit
Goal: Our goal is to train a classifier that will identify the digits in the MNIST dataset. Approach: There are 4 stages in this lab:
- Data reading: We will use the CNTK Text reader.
- Data preprocessing: Covered in part A (suggested extension section).
- Model creation: Multiclass Logistic Regression model.
- Trai...
# Import the relevant components
from __future__ import print_function  # Use a function definition from future version (say 3.x from 2.7 interpreter)
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import numpy as np
import sys
import os
import cntk as C

%matplotlib inline
In the block below, we check if we are running this notebook in the CNTK internal test machines by looking for environment variables defined there. We then select the right target device (GPU vs CPU) to test this notebook. In other cases, we use CNTK's default policy to use the best available device (GPU, if available,...
# Select the right target device when this notebook is being tested:
if 'TEST_DEVICE' in os.environ:
    if os.environ['TEST_DEVICE'] == 'cpu':
        C.device.try_set_default_device(C.device.cpu())
    else:
        C.device.try_set_default_device(C.device.gpu(0))

# Test for CNTK version
if not C.__version__ == "2.0...
Initialization
# Ensure we always get the same amount of randomness
np.random.seed(0)
C.cntk_py.set_fixed_random_seed(1)
C.cntk_py.force_deterministic_algorithms()

# Define the data dimensions
input_dim = 784
num_output_classes = 10
Data reading There are different ways one can read data into CNTK. The easiest way is to load the data in memory using NumPy / SciPy / Pandas readers. However, this can be done only for small data sets. Since deep learning requires large amounts of data, we have chosen in this course to show how to leverage built-in dist...
# Read a CTF formatted text (as mentioned above) using the CTF deserializer from a file
def create_reader(path, is_training, input_dim, num_label_classes):
    labelStream = C.io.StreamDef(field='labels', shape=num_label_classes, is_sparse=False)
    featureStream = C.io.StreamDef(field='features', shape=input_dim...
Model Creation A multiclass logistic regression (LR) network is a simple building block that has been effectively powering many ML applications in the past decade. The figure below summarizes the model in the context of the MNIST data. LR is a simple linear model that takes as input, a vector of numbers describing th...
print(input_dim)
print(num_output_classes)
Network input and output: - input variable (a key CNTK concept): An input variable is a container in which we fill different observations, in this case image pixels, during model learning (a.k.a.training) and model evaluation (a.k.a. testing). Thus, the shape of the input must match the shape of the data that will b...
input = C.input_variable(input_dim)
label = C.input_variable(num_output_classes)
Logistic Regression network setup The CNTK Layers module provides a Dense function that creates a fully connected layer which performs the above operations of weighted input summing and bias addition.
def create_model(features):
    with C.layers.default_options(init=C.glorot_uniform()):
        r = C.layers.Dense(num_output_classes, activation=None)(features)
        #r = C.layers.Dense(num_output_classes, activation=None)(C.ops.splice(C.ops.sqrt(features), features, C.ops.square(features)))
        return r
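As a NumPy stand-in (not CNTK), the evidence the Dense layer computes and the softmax applied later look like this; the shapes match input_dim = 784 and num_output_classes = 10, while the weights are random placeholders:

```python
import numpy as np

rng = np.random.RandomState(0)
x = rng.rand(784)              # one flattened, scaled image
W = rng.randn(784, 10) * 0.01  # stand-in weights
b = np.zeros(10)               # stand-in biases

z = x @ W + b                  # evidence: one value per class
p = np.exp(z - z.max())
p = p / p.sum()                # softmax: a probability distribution
print(p.shape, round(float(p.sum()), 6))  # (10,) 1.0
```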
z will be used to represent the output of a network.
# Scale the input to 0-1 range by dividing each pixel by 255.
input_s = input/255
z = create_model(input_s)
print(input_s)
print(input)
Training Below, we define the Loss function, which is used to guide weight changes during training. As explained in the lectures, we use the softmax function to map the accumulated evidences or activations to a probability distribution over the classes (Details of the softmax function and other activation functions)....
loss = C.cross_entropy_with_softmax(z, label)
loss
Evaluation Below, we define the Evaluation (or metric) function that is used to report a measurement of how well our model is performing. For this problem, we choose the classification_error() function as our metric, which returns the average error over the associated samples (treating a match as "1", where the model's...
label_error = C.classification_error(z, label)
Configure training The trainer strives to reduce the loss function by different optimization approaches, Stochastic Gradient Descent (sgd) being one of the most popular. Typically, one would start with random initialization of the model parameters. The sgd optimizer would calculate the loss or error between the predict...
# Instantiate the trainer object to drive the model training
learning_rate = 0.1  # 0.2
lr_schedule = C.learning_rate_schedule(learning_rate, C.UnitType.minibatch)
learner = C.sgd(z.parameters, lr_schedule)
trainer = C.Trainer(z, (loss, label_error), [learner])
First let us create some helper functions that will be needed to visualize different functions associated with training.
# Define a utility function to compute the moving average sum.
# A more efficient implementation is possible with np.cumsum() function
def moving_average(a, w=5):
    if len(a) < w:
        return a[:]  # Need to send a copy of the array
    return [val if idx < w else sum(a[(idx-w):idx])/w for idx, val in enumerate(a)]
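The np.cumsum() variant that the comment hints at might look like this; a minimal sketch of a plain sliding mean over full windows, which differs from the helper above in how the first w values are handled:

```python
import numpy as np

def moving_average_cumsum(a, w=5):
    """Mean of each length-w sliding window, via one cumulative sum."""
    c = np.cumsum(np.insert(np.asarray(a, dtype=float), 0, 0.0))
    return (c[w:] - c[:-w]) / w

print(moving_average_cumsum([1, 2, 3, 4, 5], w=2))  # [1.5 2.5 3.5 4.5]
```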
<a id='#Run the trainer'></a> Run the trainer We are now ready to train our fully connected neural net. We want to decide what data we need to feed into the training engine. In this example, each iteration of the optimizer will work on minibatch_size sized samples. We would like to train on all 60000 observations. Addi...
# Initialize the parameters for the trainer
minibatch_size = 64
num_samples_per_sweep = 60000
num_sweeps_to_train_with = 10
num_minibatches_to_train = (num_samples_per_sweep * num_sweeps_to_train_with) / minibatch_size
num_minibatches_to_train

# Create the reader to training data set
reader_train = create_reader(tra...
Let us plot the errors over the different training minibatches. Note that as we progress in our training, the loss decreases though we do see some intermediate bumps.
# Compute the moving average loss to smooth out the noise in SGD
plotdata["avgloss"] = moving_average(plotdata["loss"])
plotdata["avgerror"] = moving_average(plotdata["error"])

# Plot the training loss and the training error
import matplotlib.pyplot as plt

plt.figure(1)
plt.subplot(211)
plt.plot(plotdata["batchsize"]...
Evaluation / Testing Now that we have trained the network, let us evaluate the trained network on the test data. This is done using trainer.test_minibatch.
# Read the test data
reader_test = create_reader(test_file, False, input_dim, num_output_classes)
test_input_map = {
    label: reader_test.streams.labels,
    input: reader_test.streams.features,
}

# Test data for trained model
test_minibatch_size = 512
num_samples = 10000
num_minibatches_to_test = num_sampl...
We have so far been dealing with aggregate measures of error. Let us now get the probabilities associated with individual data points. For each observation, the eval function returns the probability distribution across all the classes. The classifier is trained to recognize digits, hence has 10 classes. First let us ro...
out = C.softmax(z)
Let us test a small minibatch sample from the test data.
# Read the data for evaluation
reader_eval = create_reader(test_file, False, input_dim, num_output_classes)
eval_minibatch_size = 25
eval_input_map = {input: reader_eval.streams.features}

data = reader_test.next_minibatch(eval_minibatch_size, input_map=test_input_map)
img_label = data[label].asarray()
img_data = ...
As you can see above, our model is not yet perfect. Let us visualize one of the test images and its associated label. Do they match?
# Plot a random image
sample_number = 5
plt.imshow(img_data[sample_number].reshape(28, 28), cmap="gray_r")
plt.axis('off')

img_gt, img_pred = gtlabel[sample_number], pred[sample_number]
print("Image Label: ", img_gt)
print("Predicted Label: ", img_pred)
Select live births, then make a CDF of <tt>totalwgt_lb</tt>.
live = preg[preg.outcome == 1]
print(live)
wgt_cdf = thinkstats2.Cdf(live.totalwgt_lb, label='')
code/chap04ex.ipynb
goodwordalchemy/thinkstats_notes_and_exercises
gpl-3.0
Display the CDF.
thinkplot.Cdf(wgt_cdf)
thinkplot.Show(xlabel='birthweight',
               ylabel='CDF',
               title='Cumulative Distribution of Birthweights')
Find out how much you weighed at birth, if you can, and compute CDF(x).
wgt_cdf.PercentileRank(8.2)
# wgt_cdf.PercentileRank(live.totalwgt_lb.mean())
If you are a first child, look up your birthweight in the CDF of first children; otherwise use the CDF of other children.
others = live[live.pregordr > 1]
others_wgt_cdf = thinkstats2.Cdf(others.totalwgt_lb)
others_wgt_cdf.PercentileRank(8.2)
Compute the median birth weight by looking up the value associated with p=0.5.
wgt_cdf.Value(0.5)
code/chap04ex.ipynb
goodwordalchemy/thinkstats_notes_and_exercises
gpl-3.0
Compute the interquartile range (IQR) by computing percentiles corresponding to 25 and 75.
iqr = (wgt_cdf.Percentile(25), wgt_cdf.Percentile(75))
iqr
code/chap04ex.ipynb
goodwordalchemy/thinkstats_notes_and_exercises
gpl-3.0
Make a random selection from <tt>cdf</tt>.
wgt_cdf.Random()
code/chap04ex.ipynb
goodwordalchemy/thinkstats_notes_and_exercises
gpl-3.0
Draw a random sample from <tt>cdf</tt>.
wgt_cdf.Sample(10)
code/chap04ex.ipynb
goodwordalchemy/thinkstats_notes_and_exercises
gpl-3.0
Draw a random sample from <tt>cdf</tt>, then compute the percentile rank for each value, and plot the distribution of the percentile ranks.
values = wgt_cdf.Sample(1000)
values_hist = thinkstats2.Hist(values, 'values')
ranks = [wgt_cdf.PercentileRank(v) for v in values]
ranks_hist = thinkstats2.Hist(ranks, 'ranks')

thinkplot.PrePlot(3, rows=3)
thinkplot.SubPlot(1)
thinkplot.Hist(values_hist, label='values Hist')
thinkplot.SubPlot(2)
values_cdf = thinkst...
code/chap04ex.ipynb
goodwordalchemy/thinkstats_notes_and_exercises
gpl-3.0
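The point of this exercise is that if you draw values from a distribution and map each one through that distribution's own CDF, the resulting percentile ranks are approximately uniform. A self-contained sketch with the standard library only (a stand-in gaussian "population", not the pregnancy data):

```python
import bisect
import random

random.seed(0)
# Stand-in distribution: 10,000 gaussian "weights" (illustrative only)
population = sorted(random.gauss(7.0, 1.0) for _ in range(10000))

def percentile_rank(x):
    # Fraction of the population <= x, expressed as a percentage
    return 100.0 * bisect.bisect_right(population, x) / len(population)

# Draw a sample and map each value through the CDF
sample = random.sample(population, 1000)
ranks = [percentile_rank(x) for x in sample]

# Roughly a quarter of the 1000 ranks should land in each quartile
print(sum(1 for r in ranks if r <= 25))
```

This is why the histogram of the ranks in the cell above looks flat while the histogram of the values does not.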
Generate 1000 random values using <tt>random.random()</tt> and plot their PMF.
rand_vals = [np.random.random() for i in range(1000)]
rv_pmf = thinkstats2.Pmf(rand_vals, label="random values")
thinkplot.Hist(rv_pmf)
code/chap04ex.ipynb
goodwordalchemy/thinkstats_notes_and_exercises
gpl-3.0
The PMF doesn't work very well here because the random values are almost all unique, so each one gets the same tiny probability. Try plotting the CDF instead.
rv_cdf = thinkstats2.Cdf(rand_vals, label="random values")
thinkplot.Cdf(rv_cdf)
code/chap04ex.ipynb
goodwordalchemy/thinkstats_notes_and_exercises
gpl-3.0
Note: Make sure you have OpenAI Gym cloned into the same directory as this notebook. I've included gym as a submodule, so you can run git submodule update --init --recursive to pull its contents into the gym directory.
# Create the Cart-Pole game environment
env = gym.make('CartPole-v0')
reinforcement/Q-learning-cart.ipynb
tkurfurst/deep-learning
mit
We interact with the simulation through env. To show the simulation running, you can use env.render() to render one frame. Passing in an action as an integer to env.step will generate the next step in the simulation. You can see how many actions are possible from env.action_space and to get a random action you can use...
env.reset()
rewards = []
for _ in range(100):
    env.render()
    state, reward, done, info = env.step(env.action_space.sample())  # take a random action
    rewards.append(reward)
    if done:
        rewards = []
        env.reset()
env.render(close=True)
env.reset()
reinforcement/Q-learning-cart.ipynb
tkurfurst/deep-learning
mit
To shut the window showing the simulation, use env.close(). If you ran the simulation above, we can look at the rewards:
print(rewards[-20:])
print(sum(rewards))
print(len(rewards))
reinforcement/Q-learning-cart.ipynb
tkurfurst/deep-learning
mit
The game resets after the pole has fallen past a certain angle. For each frame while the simulation is running, it returns a reward of 1.0. The longer the game runs, the more reward we get. Then, our network's goal is to maximize the reward by keeping the pole vertical. It will do this by moving the cart to the left an...
class QNetwork:
    def __init__(self, learning_rate=0.01, state_size=4,
                 action_size=2, hidden_size=10, name='QNetwork'):
        # state inputs to the Q-network
        with tf.variable_scope(name):
            self.inputs_ = tf.placeholder(tf.float32, [None, state_size], name='inpu...
reinforcement/Q-learning-cart.ipynb
tkurfurst/deep-learning
mit
Exploration - Exploitation To learn about the environment and rules of the game, the agent needs to explore by taking random actions. We'll do this by choosing a random action with some probability $\epsilon$ (epsilon). That is, with some probability $\epsilon$ the agent will make a random action and with probability ...
train_episodes = 1000    # max number of episodes to learn from
max_steps = 200          # max steps in an episode
gamma = 0.99             # future reward discount

# Exploration parameters
explore_start = 1.0      # exploration probability at start
explore_stop = 0.01      # minimum expl...
reinforcement/Q-learning-cart.ipynb
tkurfurst/deep-learning
mit
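The annealing schedule described above can be sketched in a few lines (the parameter names mirror the cell above; the exponential-decay form is one common choice and is assumed here, not taken verbatim from the training loop):

```python
import math
import random

explore_start = 1.0    # exploration probability at start
explore_stop = 0.01    # minimum exploration probability
decay_rate = 0.0001    # exponential decay rate (assumed value)

def epsilon(step):
    # Anneal from explore_start toward explore_stop as training progresses
    return explore_stop + (explore_start - explore_stop) * math.exp(-decay_rate * step)

def choose_action(step, greedy_action, n_actions=2):
    # With probability epsilon take a random action, otherwise the greedy one
    if random.random() < epsilon(step):
        return random.randrange(n_actions)
    return greedy_action

print(round(epsilon(0), 3))       # starts at explore_start
print(round(epsilon(50000), 3))   # decays toward explore_stop
```

Early in training almost every action is random; as `step` grows, the agent increasingly exploits the Q-network's greedy action.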
Training Below we'll train our agent. If you want to watch it train, uncomment the env.render() line. This is slow because it's rendering the frames slower than the network can train. But, it's cool to watch the agent get better at the game.
# Now train with experiences
saver = tf.train.Saver()
rewards_list = []
with tf.Session() as sess:
    # Initialize variables
    sess.run(tf.global_variables_initializer())

    step = 0
    for ep in range(1, train_episodes):
        total_reward = 0
        t = 0
        while t < max_steps:
            step += ...
reinforcement/Q-learning-cart.ipynb
tkurfurst/deep-learning
mit
Visualizing training Below I'll plot the total rewards for each episode. I'm plotting the rolling average too, in blue.
%matplotlib inline
import matplotlib.pyplot as plt

def running_mean(x, N):
    cumsum = np.cumsum(np.insert(x, 0, 0))
    return (cumsum[N:] - cumsum[:-N]) / N

eps, rews = np.array(rewards_list).T
smoothed_rews = running_mean(rews, 10)
plt.plot(eps[-len(smoothed_rews):], smoothed_rews)
plt.plot(eps, rews, color='gr...
reinforcement/Q-learning-cart.ipynb
tkurfurst/deep-learning
mit
Conv layer #1: 32 5x5 filters, ReLU
Pooling layer #1: 2x2 filter, stride 2
Conv layer #2: 64 5x5 filters, ReLU
Pooling layer #2: 2x2 filter, stride 2
Dense layer #1: 1024 neurons, with dropout regularization rate of 0.4
Dense layer #2 (logits): 10 neurons, one for each digit target class

tf.layers module: conv2d(), max_pooling2d(...
def cnn_model_fn(features, labels, mode):
    input_layer = tf.reshape(features["x"], [-1, 28, 28, 1])
    conv1 = tf.layers.conv2d(
        inputs=input_layer,
        filters=32,
        kernel_size=[5, 5],
        padding="same",
        activation=tf.nn.relu)
    pool1 = tf.layers.max...
Tensorflow/mnist.ipynb
zzsza/TIL
mit
Load the data
mnist = tf.contrib.learn.datasets.load_dataset("mnist")
train_data = mnist.train.images
train_labels = np.asarray(mnist.train.labels, dtype=np.int32)
eval_data = mnist.test.images
eval_labels = np.asarray(mnist.test.labels, dtype=np.int32)
Tensorflow/mnist.ipynb
zzsza/TIL
mit
Create Estimator
mnist_classifier = tf.estimator.Estimator(model_fn=cnn_model_fn, model_dir="./tmp/mnist_convnet")
tensors_to_log = {"probabilities": "softmax_tensor"}
logging_hook = tf.train.LoggingTensorHook(tensors=tensors_to_log, every_n_iter=50)
Tensorflow/mnist.ipynb
zzsza/TIL
mit
Train model
%%time
train_input_fn = tf.estimator.inputs.numpy_input_fn(
    x={"x": train_data},
    y=train_labels,
    batch_size=100,
    num_epochs=None,
    shuffle=True)
mnist_classifier.train(input_fn=train_input_fn, steps=500, hooks=[logging_hook])

eval_input_fn = tf.estimator.inputs.numpy_input_fn(
    x={"x": eval_data},...
Tensorflow/mnist.ipynb
zzsza/TIL
mit
Reading the data package
The data package can be read directly from the URL:
# Save into Pandas
url = 'https://github.com/dadosgovbr/catalogos-dados-brasil/raw/master/datapackage.json'
storage = Storage.connect('pandas')
package = Package(url)
package.save(storage=storage)
scripts/uso/como-usar-com-o-pandas.ipynb
dadosgovbr/catalogos-dados-brasil
mit
A data package can contain any number of resources. Think of a resource as a table in a database. Each one is a CSV file. In the context of data storage, these resources are also called buckets.
storage.buckets
scripts/uso/como-usar-com-o-pandas.ipynb
dadosgovbr/catalogos-dados-brasil
mit
These are also Pandas DataFrames:
type(storage['catalogos'])
scripts/uso/como-usar-com-o-pandas.ipynb
dadosgovbr/catalogos-dados-brasil
mit
That is why all the operations that can be performed on a Pandas DataFrame work here:
storage['solucao']
scripts/uso/como-usar-com-o-pandas.ipynb
dadosgovbr/catalogos-dados-brasil
mit
For example, showing the first rows of the table.
storage['catalogos'].head()
scripts/uso/como-usar-com-o-pandas.ipynb
dadosgovbr/catalogos-dados-brasil
mit
Or seeing how many portals exist per type of solution, per state (UF), per branch of government, and so on.
By type of solution
tipo_solucao = storage['catalogos'].groupby('Solução').count()['URL'].rename('quantidade')
tipo_solucao

px.bar(
    pd.DataFrame(tipo_solucao).reset_index(),
    x='Solução',
    y='quantidade',
    color='Solução',
    color_discrete_sequence=py.colors.qualitative.Set2
)
scripts/uso/como-usar-com-o-pandas.ipynb
dadosgovbr/catalogos-dados-brasil
mit
By branch of government
poder = storage['catalogos'].groupby('Poder').count()['URL'].rename('quantidade')
poder

go.Figure(
    data=go.Pie(
        labels=poder.index,
        values=poder.values,
        hole=.4
    )
).show()
scripts/uso/como-usar-com-o-pandas.ipynb
dadosgovbr/catalogos-dados-brasil
mit
By sphere of government
esfera = storage['catalogos'].groupby('Esfera').count()['URL'].rename('quantidade')
esfera

go.Figure(
    data=go.Pie(
        labels=esfera.index,
        values=esfera.values,
        hole=.4
    )
).show()
scripts/uso/como-usar-com-o-pandas.ipynb
dadosgovbr/catalogos-dados-brasil
mit
By federative unit (state)
uf = storage['catalogos'].groupby('UF').count()['URL'].rename('quantidade')
uf

px.bar(
    pd.DataFrame(uf).reset_index(),
    x='UF',
    y='quantidade',
    color='UF',
    color_discrete_sequence=py.colors.qualitative.Set3
)
scripts/uso/como-usar-com-o-pandas.ipynb
dadosgovbr/catalogos-dados-brasil
mit
All of the SHARPpy routines (parcel lifting, composite indices, etc.) reside within the SHARPTAB module. SHARPTAB contains the following submodules: params, winds, thermo, utils, interp, fire, constants, and watch_type. Each module has different functions: interp - interpolates different variables (temperature, dewpoint, wind, etc.) to a ...
import sharppy.sharptab as tab
tutorials/SHARPpy_basics.ipynb
djgagne/SHARPpy
bsd-3-clause
Step 3: Making a Profile object. Before running any analysis routines on the data, we have to create a Profile object first. A Profile object describes the vertical thermodynamic and kinematic profiles and is the key object that all SHARPpy routines need to run. Any data source can be passed into a Profile object (i....
import numpy as np
from io import StringIO  # Python 3; in Python 2 use: from StringIO import StringIO

def parseSPC(spc_file):
    ## read in the file
    data = np.array([l.strip() for l in spc_file.split('\n')])

    ## necessary index points
    title_idx = np.where(data == '%TITLE%')[0][0]
    start_idx = np.where(data == '%RAW%')[0] + 1
    finish_idx = np.where(...
tutorials/SHARPpy_basics.ipynb
djgagne/SHARPpy
bsd-3-clause