expand: convert tabs to spaces (1)

expand [-i, --initial] [-t NUMBER, --tabs=NUMBER] [FILE...]

Options:
-i, --initial: do not convert tabs that follow a non-blank character
-t NUMBER, --tabs=NUMBER: set the tab width to NUMBER (default: 8)
FILE: the text file(s) to process
%%bash
expand -t 1 hightemp.txt
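Python's built-in str.expandtabs performs the same conversion as expand; a minimal sketch (the sample strings in the test are illustrative, not taken from hightemp.txt):

```python
def expand_tabs(text, tabsize=8):
    # mirror `expand -t NUMBER`: pad each tab out to the next multiple of tabsize
    return "\n".join(line.expandtabs(tabsize) for line in text.split("\n"))
```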
chapter2/UNIX command.ipynb
KUrushi/knocks
mit
12. Save column 1 to col1.txt and column 2 to col2.txt
Extract only the first column of each line and save it as col1.txt, and the second column as col2.txt. Use the cut command to verify.
def write_col(col):
    with open("hightemp.txt", 'r') as f:
        writing = [i.split('\t')[col - 1] + "\n" for i in f.readlines()]
    with open('col{}.txt'.format(col), 'w') as f:
        f.write("".join(writing))

write_col(1)
write_col(2)
cut: extract parts of each line of a text file

cut [OPTION]... [FILE]...

Options:
-b, --bytes byte-list: print only the bytes at the positions listed in byte-list
-c, --characters character-list: print only the characters at the positions listed in character-list
-d, --delimiter delim: set the field delimiter (default: tab)
-f, --fields field-list: ...
%%bash
cut -f 1 hightemp.txt > cut_col1.txt
cut -f 2 hightemp.txt > cut_col2.txt
13. Merge col1.txt and col2.txt
Combine the col1.txt and col2.txt created in problem 12 into a single text file whose lines contain the first and second columns of the original file, separated by a tab. Use the paste command to verify.
with open('col1.txt', 'r') as f1:
    col1 = [i.strip('\n') for i in f1.readlines()]
with open('col2.txt', 'r') as f2:
    col2 = [i.strip('\n') for i in f2.readlines()]

writing = ""
for i in range(len(col1)):
    writing += col1[i] + '\t' + col2[i] + '\n'

with open('marge.txt', 'w') as f:
    f.write(writing)
paste: merge files horizontally

paste [OPTION] [FILE]

Options:
-d, --delimiters=LIST: use the characters in LIST instead of tabs as delimiters
-s, --serial: concatenate vertically (one file at a time)
FILE: the files to merge
%%bash
paste col1.txt col2.txt > paste_marge.txt
14. Output the first N lines
Receive a natural number N (for example, as a command-line argument) and display only the first N lines of the input. Use the head command to verify.
def head(N):
    with open('marge.txt') as f:
        # slice [:N], not [:N+1]: the original off-by-one returned N+1 lines
        return "".join(f.readlines()[:N])

print(head(3))
head: print the first part of a file

head [-c N[bkm]] [-n N] [-qv] [--bytes=N[bkm]] [--lines=N] [--quiet] [--silent] [--verbose] [file...]

Options:
-c N, --bytes N: print the first N bytes of the file; a trailing b multiplies N by 512, k by 1024, and m by 1048576
-n N, --lines N: print the first N lines of the file
-q, --quiet, --silent: ...
%%bash
head -n 3 marge.txt
15. Output the last N lines
Receive a natural number N (for example, as a command-line argument) and display only the last N lines of the input. Use the tail command to verify.
def tail(N):
    with open('marge.txt') as f:
        # slice [-N:]: the original [-1:-N:-1] dropped a line and reversed the order
        return "".join(f.readlines()[-N:])

print(tail(3))
16. Split a file into N parts
Receive a natural number N (for example, as a command-line argument) and split the input file line-wise into N parts. Achieve the same with the split command.
def split_file(name, N):
    with open(name, 'r') as f:
        lines = f.readlines()
    # split line-wise into chunks of N lines each (the original only returned the first chunk)
    return ["".join(lines[i:i + N]) for i in range(0, len(lines), N)]

print(split_file("marge.txt", 3)[0])
split: split a file

split [-lines] [-l lines] [-b bytes[bkm]] [-C bytes[bkm]] [--lines=lines] [--bytes=bytes[bkm]] [--line-bytes=bytes[bkm]] [infile [outfile-prefix]]

Options:
-lines, -l lines, --lines=lines: cut the input every lines lines, writing each piece to an output file
-b bytes[bkm], --bytes=bytes[bkm]: split the input into pieces of bytes bytes each; a letter appended to the number sets the unit ...
%%bash
split -l 3 marge.txt split_marge.txt
17. Distinct strings in the first column
Find the set of distinct strings (the different values) in the first column. Use the sort and uniq commands to verify.
def kinds_col(file_name):
    with open(file_name, 'r') as f:
        return set(i.strip('\n') for i in f.readlines())

print(kinds_col('col1.txt'))
18. Sort lines in descending order of the value in column 3
Sort the lines in reverse (descending) order of the numeric value in the third column (note: rearrange the lines without changing their contents). Use the sort command to verify (the result need not match the command's output exactly).
def sorted_list(filename, col):
    with open(filename, 'r') as f:
        return_list = [i.strip("\n").split('\t') for i in f.readlines()]
    # sort numerically on the given (0-indexed) column, descending;
    # the original compared the values as strings
    return sorted(return_list, key=lambda x: float(x[col]), reverse=True)

print(sorted_list("hightemp.txt", 2))
19. Frequency of strings in the first column, sorted descending
Find how often each string appears in the first column and display the strings in descending order of frequency. Use the cut, uniq, and sort commands to verify.
from collections import Counter

def frequency_sort(filename, col):
    with open(filename, 'r') as f:
        return_list = [i.strip("\n").split('\t')[col - 1] for i in f.readlines()]
    return [i[0] for i in Counter(return_list).most_common()]

print(frequency_sort("hightemp.txt", 1))
Inserting elements at the beginning of a list
%matplotlib inline
from matplotlib.pyplot import plot
from time import time

data = []
for i in range(1000, 50001, 1000):
    L = []
    before = time()
    for _ in range(i):
        L.insert(0, None)
    after = time()
    data.append((i, after - before))

plot(*tuple(zip(*data)))
print()
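The quadratic blow-up measured above comes from list.insert(0, x) shifting every existing element on each call. As a point of comparison (not part of the lecture code), collections.deque supports constant-time appends at both ends:

```python
from collections import deque

def prepend_all_list(n):
    L = []
    for i in range(n):
        L.insert(0, i)   # each insert shifts the whole list: O(n) per call
    return L

def prepend_all_deque(n):
    d = deque()
    for i in range(n):
        d.appendleft(i)  # constant time per call
    return list(d)
```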
Lectures/Lecture_3/cost_of_operations_on_lists.ipynb
YufeiZhang/Principles-of-Programming-Python-3
gpl-3.0
Size of a list initialised to a given number of elements
%matplotlib inline
from matplotlib.pyplot import plot
from sys import getsizeof

data = []
for i in range(1, 51):
    data.append((i, getsizeof([None] * i)))

plot(*tuple(zip(*data)))
print()
Size of a list to which elements are appended incrementally
%matplotlib inline
from matplotlib.pyplot import plot
from sys import getsizeof

data = []
L = []
for i in range(1, 51):
    L.append(None)
    data.append((i, getsizeof(L)))

plot(*tuple(zip(*data)))
print()
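The staircase shape of the resulting plot reflects CPython's over-allocation: append resizes the underlying buffer only occasionally, so most appends reuse spare capacity. A sketch that counts the distinct allocation sizes a growing list passes through (this relies on CPython-specific behavior, an assumption about the interpreter):

```python
from sys import getsizeof

def distinct_sizes(n):
    # count how many different byte sizes a list passes through while growing to n elements
    L, sizes = [], set()
    for _ in range(n):
        L.append(None)
        sizes.add(getsizeof(L))
    return len(sizes)
```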
Any components not passed automatically default to 0. REBOUND can also accept orbital elements. Reference bodies As a reminder, there is a one-to-one mapping between (x,y,z,vx,vy,vz) and orbital elements, and one should always specify what the orbital elements are referenced against (e.g., the central star, the syst...
sim.add(m=1., a=1.)
sim.status()
ipython_examples/OrbitalElements.ipynb
dtamayo/rebound
gpl-3.0
We always have to pass a semimajor axis (to set a length scale), but any other elements are by default set to 0. Notice that our second star has the same vz as the first one due to the default Jacobi elements. Now we could add a distant planet on a circular orbit,
sim.add(m=1.e-3, a=100.)
This planet is set up relative to the binary center of mass (again due to the Jacobi coordinates), which is probably what we want. But imagine we now want to place a test mass in a tight orbit around the second star. If we passed things as above, the orbital elements would be referenced to the binary/outer-planet cen...
sim.add(primary=sim.particles[1], a=0.01)
All simulations are performed in Cartesian elements, so to avoid the overhead, REBOUND does not update particles' orbital elements as the simulation progresses. However, you can always access any orbital element through, e.g., sim.particles[1].inc (see the diagram, and table of orbital elements under the Orbit structu...
print(sim.particles[1].a)

orbits = sim.calculate_orbits()
for orbit in orbits:
    print(orbit)
Notice that there is always one less orbit than there are particles, since orbits are only defined between pairs of particles. We see that we got the first two orbits right, but the last one is way off. The reason is that again the REBOUND default is that we always get Jacobi elements. But we initialized the last pa...
print(sim.particles[3].calculate_orbit(primary=sim.particles[1]))
though we could have simply avoided this problem by adding bodies from the inside out (second star, test mass, first star, circumbinary planet). When you access orbital elements individually, e.g., sim.particles[1].inc, you always get Jacobi elements. If you need to specify the primary, you have to do it with sim.calc...
sim = rebound.Simulation()
sim.add(m=1.)
sim.add(a=1., e=0., inc=0.1, Omega=0.3, omega=0.1)
print(sim.particles[1].orbit)
The problem here is that $\omega$ (the angle from the ascending node to pericenter) is ill-defined for a circular orbit, so it's not clear what we mean when we pass it, and we get spurious results for both $\omega$ and $f$, since the latter is also undefined as the angle from pericenter to the particle's position. How...
print(sim.particles[1].theta)
To be clearer and ensure we get the results we expect, we could instead pass theta to specify the longitude we want, e.g.
sim = rebound.Simulation()
sim.add(m=1.)
sim.add(a=1., e=0., inc=0.1, Omega=0.3, theta=0.4)
print(sim.particles[1].theta)

import rebound
sim = rebound.Simulation()
sim.add(m=1.)
sim.add(a=1., e=0.2, Omega=0.1)
print(sim.particles[1].orbit)
Here we have a planar orbit, in which case the line of nodes becomes ill-defined, so $\Omega$ is not a good variable, but we pass it anyway! In this case, $\omega$ is also undefined since it is referenced to the ascending node. We find that these two ill-defined variables get swapped. The appropriate variabl...
print(sim.particles[1].pomega)
We can specify the pericenter of the orbit with either $\omega$ or $\varpi$:
import rebound

sim = rebound.Simulation()
sim.add(m=1.)
sim.add(a=1., e=0.2, pomega=0.1)
print(sim.particles[1].orbit)
Note that if the inclination is exactly zero, REBOUND sets $\Omega$ (which is undefined) to 0, so $\omega = \varpi$. Finally, we can specify the position of the particle along its orbit using mean (rather than true) longitudes or anomalies (for example, this might be useful for resonances). We can either use the me...
sim = rebound.Simulation()
sim.add(m=1.)
sim.add(a=1., e=0.1, Omega=0.3, M=0.1)
sim.add(a=1., Omega=0.3, l=0.4)
print(sim.particles[1].l)
print(sim.particles[2].l)
REBOUND calculates the mean longitude in such a way that it smoothly approaches $\theta$ in the limit of $e\rightarrow0$:
sim.particles[2].theta

import rebound
sim = rebound.Simulation()
sim.add(m=1.)
sim.add(a=1., e=0.1, omega=1.)
print(sim.particles[1].orbit)
In summary, you can specify the phase of the orbit through any one of the angles M, f, theta or l=$\lambda$. Additionally, one can instead use the time of pericenter passage T. This time should be set in the appropriate time units, and you'd initialize sim.t to the appropriate time you want to start the simulation. A...
import random
import numpy as np

def simulation(par):
    e, f = par
    e = 10**e
    f = 10**f
    sim = rebound.Simulation()
    sim.add(m=1.)
    a = 1.
    inc = random.random() * np.pi
    Omega = random.random() * 2 * np.pi
    sim.add(m=0., a=a, e=e, inc=inc, Omega=Omega, f=f)
    o = sim.particles[1].orbit
    if o.f < 0: ...
We see that the behavior is poor, which is physically due to $f$ becoming poorly defined at low $e$. If instead we initialize the orbits with the true longitude $\theta$ as discussed above, we get much better results:
def simulation(par):
    e, theta = par
    e = 10**e
    theta = 10**theta
    sim = rebound.Simulation()
    sim.add(m=1.)
    a = 1.
    inc = random.random() * np.pi
    Omega = random.random() * 2 * np.pi
    omega = random.random() * 2 * np.pi
    sim.add(m=0., a=a, e=e, inc=inc, Omega=Omega, theta=theta)
    o = sim.particles[1]...
Hyperbolic & Parabolic Orbits REBOUND can also handle hyperbolic orbits, which have negative $a$ and $e>1$:
sim.add(a=-0.2, e=1.4)
sim.status()
Currently there is no support for exactly parabolic orbits, but we can get a close approximation by passing a nearby hyperbolic orbit where we can specify the pericenter = $|a|(e-1)$ with $a$ and $e$. For example, for a 0.1 AU pericenter,
sim = rebound.Simulation()
sim.add(m=1.)
q = 0.1
a = -1.e14
e = 1. + q/np.fabs(a)
sim.add(a=a, e=e)
print(sim.particles[1].orbit)
Retrograde Orbits Orbital elements can be counterintuitive for retrograde orbits, but REBOUND tries to sort them out consistently. This can lead to some initially surprising results. For example,
sim = rebound.Simulation()
sim.add(m=1.)
sim.add(a=1., inc=np.pi, e=0.1, Omega=0., pomega=1.)
print(sim.particles[1].orbit)
We passed $\Omega=0$ and $\varpi=1.$. For prograde orbits, $\varpi = \Omega + \omega$, so we'd expect $\omega = 1$, but instead we get $\omega=-1$. If we think about things physically, $\varpi$ is the angle from the $x$ axis to pericenter, measured in the positive direction (counterclockwise) defined by $z$. $\Omega...
import rebound

sim = rebound.Simulation()
sim.add(m=1.)
sim.add(m=1.e-3, a=1., jacobi_masses=True)
sim.add(m=1.e-3, a=5., jacobi_masses=True)
sim.move_to_com()
The Jacobi mass and the default mass assigned by REBOUND always agree for the first particle, but differ for all the rest.
print(sim.particles[1].a, sim.particles[2].a)
We can calculate orbital elements using Jacobi masses by passing the same flag to sim.calculate_orbits:
o = sim.calculate_orbits(jacobi_masses=True)
print(o[0].a, o[1].a)
1. Support Vector Classification 1.1 Load the Iris dataset
iris = datasets.load_iris()
X = iris.data[:, :2]
y = iris.target

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)
lecture4/ML-Anirban_Tutorial4.ipynb
Santara/ML-MOOC-NPTEL
gpl-3.0
1.2 Use Support Vector Machine with different kinds of kernels and evaluate performance
def evaluate_on_test_data(model=None):
    predictions = model.predict(X_test)
    correct_classifications = 0
    for i in range(len(y_test)):
        if predictions[i] == y_test[i]:
            correct_classifications += 1
    accuracy = 100 * correct_classifications / len(y_test)  # accuracy as a percentage
    return accuracy
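The counting loop above amounts to this compact helper (a sketch; sklearn.metrics.accuracy_score computes the same fraction on a 0-1 scale):

```python
def accuracy_percent(predictions, labels):
    # fraction of matching labels, expressed as a percentage
    correct = sum(p == t for p, t in zip(predictions, labels))
    return 100.0 * correct / len(labels)
```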
1.3 Visualize the decision boundaries
# Train SVMs with different kernels
svc = svm.SVC(kernel='linear').fit(X_train, y_train)
rbf_svc = svm.SVC(kernel='rbf', gamma=0.7).fit(X_train, y_train)
poly_svc = svm.SVC(kernel='poly', degree=3).fit(X_train, y_train)

# Create a mesh to plot in
h = .02  # step size in the mesh
x_min, x_max = X[:, 0].min() - 1, X[:, 0]...
1.4 Check the support vectors
# Checking the support vectors of the polynomial kernel (for example)
print("The support vectors are:\n", poly_svc.support_vectors_)
2. Support Vector Regression 2.1 Load data from the Boston dataset
boston = datasets.load_boston()
X = boston.data
y = boston.target

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)
2.2 Use Support Vector Machine with different kinds of kernels and evaluate performance
def evaluate_on_test_data(model=None):
    predictions = model.predict(X_test)
    sum_of_squared_error = 0
    for i in range(len(y_test)):
        err = (predictions[i] - y_test[i]) ** 2
        sum_of_squared_error += err
    mean_squared_error = sum_of_squared_error / len(y_test)
    RMSE = np.sqrt(mean_squared_error)
    ...
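The same computation as a compact helper (a sketch; sklearn.metrics.mean_squared_error followed by a square root gives the same value):

```python
import math

def rmse(predictions, targets):
    # root of the mean squared error between two equal-length sequences
    mse = sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(targets)
    return math.sqrt(mse)
```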
Let's geocode the cities by looking them up with Google's Geocoding API. You need to obtain an API key: https://console.cloud.google.com/apis/ If you prefer, you can use the geocoded file I saved.
import requests
import json

cidades = {}
cidades_nao_encontradas = []
url = 'https://maps.googleapis.com/maps/api/geocode/json'
params = dict(
    key='** USE SUA API KEY**'
)
inx = 0
start = None  # Inform None to process all cities. Inform a city to begin processing after it
start_saving = False
for city in list_cidades...
covid19_Brasil/Covid19_no_Brasil.ipynb
cleuton/datascience
apache-2.0
I saved a file called "cidades.json" with the geocoding, so we can avoid hitting the API again. Now I need to download a map of Brazil. I picked the center using Google Maps and adjusted the zoom, size, and scale. Note that you will need an API key.
#-13.6593766,-58.6914406
latitude = -13.6593766
longitude = -50.6914406
zoom = 4
size = 530
scale = 2
apikey = "** SUA API KEY**"
gmapas = "https://maps.googleapis.com/maps/api/staticmap?center=" + str(latitude) + "," + str(longitude) + \
    "&zoom=" + str(zoom) + \
    "&scale=" + str(scale) + \
    "&size=" + str(s...
Now I need a function that uses the Mercator projection to compute the borders of the map I downloaded. I took it from this StackOverflow answer: https://stackoverflow.com/questions/12507274/how-to-get-bounds-of-a-google-static-map It works better than the one I was using before.
import MercatorProjection

centerLat = latitude
centerLon = longitude
mapWidth = size
mapHeight = size

centerPoint = MercatorProjection.G_LatLng(centerLat, centerLon)
corners = MercatorProjection.getCorners(centerPoint, zoom, mapWidth, mapHeight)
print(corners)
I generated a new DataFrame with the latitudes, longitudes, and case counts:
casos = df.groupby("city")['confirmed'].sum()
df2 = pd.DataFrame.from_dict(cidades, orient='index')
df2['casos'] = casos
print(df2.head())
Now I'll add an attribute with the color of each point, based on a simple heuristic for the number of cases: the more cases, the redder the point:
def calcular_cor(valor):
    cor = 'r'
    if valor <= 10:
        cor = '#ffff00'
    elif valor <= 30:
        cor = '#ffbf00'
    elif valor <= 50:
        cor = '#ff8000'
    return cor

# the column created above is 'casos' (the original referenced a nonexistent 'quantidade')
df2['cor'] = [calcular_cor(casos) for casos in df2['casos']]
df2.head()
Let's sort by the number of cases:
df2 = df2.sort_values(['casos'])
We have some outliers, that is, coordinates far outside the country, probably due to geocoding errors. Let's remove them:
print(df2.loc[(df2['latitude'] > 20) | (df2['longitude'] < -93)])
df3 = df2.drop(df2[(df2.latitude > 20) | (df2.longitude < -93)].index)
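The same bounding-box rule as a standalone predicate (a sketch; the +20 latitude and -93 longitude cutoffs are the ones used in the cell above):

```python
def inside_rough_bbox(lat, lon, max_lat=20.0, min_lon=-93.0):
    # keep points at or south of +20 latitude and at or east of -93 longitude
    return lat <= max_lat and lon >= min_lon
```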
Now we can plot a chart over the downloaded image. I had to adjust the coordinates to the bounds of the rectangle computed by the Mercator projection:
import matplotlib.image as mpimg

mapa = mpimg.imread('./mapa.png')
fig, ax = plt.subplots(figsize=(10, 10))
# {'N': 20.88699826581544, 'E': -15.535190599999996, 'S': -43.89198802990045, 'W': -85.84769059999999}
plt.imshow(mapa, extent=[corners['W'], corners['E'], corners['S'], corners['N']], alpha=1.0, aspect='auto')
ax.scat...
Get & inspect the data First, we'll get some sample data from the built-in collection. Fisher's famous iris dataset is a great place to start. The datasets are of type bunch, which is dictionary-like.
# import the 'iris' dataset from sklearn
from sklearn import datasets

iris = datasets.load_iris()
sklearn-101/sklearn-101.ipynb
fionapigott/Data-Science-45min-Intros
unlicense
Use some of the keys in the bunch to poke around the data and get a sense for what we have to work with. Generally, the sklearn built-in data has data and target keys whose values (arrays) we'll use for our machine learning.
print("data dimensions (array): {} \n ".format(iris.data.shape))
print("bunch keys: {} \n".format(iris.keys()))
print("feature names: {} \n".format(iris.feature_names))

# the "description" is a giant text blob that will tell you more
# about the contents of the bunch
print(iris.DESCR)
Have a look at the actual data we'll be using. Note that the data array has four features for each sample (consistent with the "feature names" above), and the labels can only take on the values 0-2. I'm still getting familiar with slicing, indexing, etc. of numpy arrays, so I find it helpful to have the docs open somewhere...
# preview 'idx' rows/cols of the data
idx = 6
print("example iris features: \n {} \n".format(iris.data[:idx]))
print("example iris labels: {} \n".format(iris.target[:idx]))
print("all possible labels: {} \n".format(np.unique(iris.target)))
It's always a good idea to throw out some scatter plots (if the data is appropriate) to see the space our data covers. Since we have four features, we can just grab some pairs of the data and make simple scatter plots.
plt.figure(figsize=(16, 4))

plt.subplot(131)
plt.scatter(iris.data[:, 0:1], iris.data[:, 1:2])
plt.xlabel("sepal length (cm)")
plt.ylabel("sepal width (cm)")
plt.axis("tight")

plt.subplot(132)
plt.scatter(iris.data[:, 1:2], iris.data[:, 2:3])
plt.xlabel("sepal width (cm)")
plt.ylabel("petal length (cm)")
plt.axis("t...
And, what the heck, this is a perfectly good opportunity to try out a 3D plot, too, right? We'll also add in the target from the dataset - that is, the actual labels we're trying to predict - as the coloring. Since we only have three dimensions to plot, we have to leave something out.
from mpl_toolkits.mplot3d import Axes3D

fig = plt.figure(figsize=(10, 8))
ax = plt.axes(projection='3d')
ax.view_init(15, 60)  # (elev, azim) : adjust these to change viewing angle!

x = iris.data[:, 0:1]
y = iris.data[:, 1:2]
z = iris.data[:, 2:3]

# add the last dimension for use in e.g. the color!
label1 = iris....
Since this dataset has a collection of "ground truth" labels (label2 in the previous graph), this is an example of supervised learning. We tell the algorithm the right answer a whole bunch of times, and look for it to figure out the best way to predict labels of future data samples. Learning & predicting In sklearn, mo...
iris_X = iris.data
iris_y = iris.target

r = random.randint(0, 100)
np.random.seed(r)
idx = np.random.permutation(len(iris_X))

subset = 25
iris_X_train = iris_X[idx[:-subset]]  # all but the last 'subset' rows
iris_y_train = iris_y[idx[:-subset]]
iris_X_test = iris_X[idx[-subset:]]  # the last 'subset' rows
iris_y_te...
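The permutation-based split above can be captured in a small helper (a hypothetical function, not from the notebook; sklearn's train_test_split does the same job):

```python
import numpy as np

def permutation_split(X, y, n_test, seed=0):
    # shuffle indices once, then carve off the last n_test rows as the test set
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    return X[idx[:-n_test]], y[idx[:-n_test]], X[idx[-n_test:]], y[idx[-n_test:]]
```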
To see that we're choosing the training and test samples, we can again plot them to see how they're distributed. If you re-run the random.seed code, it should choose a new random collection of the data.
plt.figure(figsize=(6, 6))
plt.scatter(iris_X_train[:, 0:1],
            iris_X_train[:, 1:2],
            c="blue",
            s=30,
            label="train data")
plt.scatter(iris_X_test[:, 0:1],
            iris_X_test[:, 1:2],
            c="red",
            s=30,
            label="test data")
plt.x...
Now, we use the sklearn package and create a nearest-neighbor classification estimator (with all the default values). After instantiating the object, we use its fit() method and pass it the training data - both features and labels. Note that the __repr__() here tells you about the default values if you want to adjust t...
from sklearn.neighbors import KNeighborsClassifier

knn = KNeighborsClassifier()

# fit the model using the training data and labels
knn.fit(iris_X_train, iris_y_train)
At this point, the trained knn estimator object has, internally, the "best" mapping from input to output. It can now be used to predict new data via the predict() method. In this case, the prediction is which class the new samples' features best fit - a simple 1D array of labels.
# predict the labels for the test data, using the trained model
iris_y_predict = knn.predict(iris_X_test)

# show the results (labels)
iris_y_predict
Since this data came labeled, we can look at the actual correct answers. As this list grows in size, it gets trickier to spot the differences or similarities by eye. But, still, it looks like it did a pretty good job.
iris_y_test
Thankfully, sklearn also has many built-in ways to gauge the "accuracy" of our trained estimator. The simplest is just "what fraction of our classifications did we get right?" Clearly easier than inspecting by eye. Note: occasionally, this estimator actually reaches 100%. If you increase the "subset" that's cut out for...
from sklearn.metrics import accuracy_score

accuracy_score(iris_y_test, iris_y_predict)
So we know the successful prediction percentage, but we can also do a visual inspection of how the labels differ. Even for this small dataset, it can be tricky to spot the differences between the true values and predicted values; an accuracy_score in the 90% range means that only one or two samples will be incorrectly ...
plt.figure(figsize=(12, 6))

plt.subplot(221)
plt.scatter(iris_X_test[:, 0:1],  # real data
            iris_X_test[:, 1:2],
            c=iris_y_test,  # real labels
            s=100,
            alpha=0.6)
plt.ylabel("sepal width (cm)")
plt.title("real labels")

plt.subplot(223)
plt.scatter(iris_X_test[:, 0:...
Alternatively, if you have more numpy and matplotlib skillz than I currently have, you can also visualize the resulting model of a similar estimator like so: (source)
from IPython.core.display import Image

Image(filename='./iris_knn.png', width=500)
Once more, but with model persistence. Now let's work another supervised-learning example (regression on labeled data). After some time with exploration like in the last example, we'll get a handle on our data, the features that will be helpful, and the general pipeline of analysis. In order to make the analysis more portable (and also when issues ...
# boston home sales in the 1970s
boston = datasets.load_boston()
boston.keys()

# get the full info
print(boston.DESCR)
The feature_names are less obvious in this dataset, but if you print out the DESCR above, there's a more detailed explanation of the features.
print("dimensions: {}, {} \n".format(boston.data.shape, boston.target.shape))
print("features (defs in DESCR): {} \n".format(boston.feature_names))
print("first row: {}\n".format(boston.data[:1]))
While a better model would incorporate all of these features (with appropriate normalization), let's focus on just one for the moment. Column 5 is the number of rooms in each house. It seems reasonable that this would be a decent predictor of sale price. Let's have a quick look at the data.
rooms = boston.data[:, 5]

plt.figure(figsize=(6, 6))
plt.scatter(rooms, boston.target, alpha=0.5)
plt.xlabel("room count")
plt.ylabel("cost ($1000s)")
plt.axis("tight")
Ok, so we can work with this - there's definitely some correlation between these two variables. Let's imagine that we just knew we wanted to fit this immediately, without all the inspection. Furthermore, we wanted to build a model, fit the estimator, and then keep it around for use in an analysis later on (via pickle)....
# here comes the data
boston = datasets.load_boston()

# a little goofyness to get a "column vector" for the estimator
b_X_tmp = boston.data[:, np.newaxis]
b_X = b_X_tmp[:, :, 5]
b_y = boston.target

# split it out into train/test again
r = random.randint(0, 100)
np.random.seed(r)
idx = np.random.permutation(len(b_X))
...
Fit the estimator (and this time let's print out the fitted model parameters)...
# get our estimator & fit to the training data
from sklearn.linear_model import LinearRegression

lr = LinearRegression()
print(lr.fit(b_X_train, b_y_train))
print("Coefficients: {},{} \n".format(lr.coef_, lr.intercept_))
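For a single feature, the coefficients that LinearRegression finds satisfy the closed-form ordinary-least-squares solution; a minimal pure-Python sketch for the 1-D case:

```python
def ols_1d(x, y):
    # slope and intercept minimizing the sum of squared residuals for 1-D data
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    slope = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) \
        / sum((xi - mean_x) ** 2 for xi in x)
    intercept = mean_y - slope * mean_x
    return slope, intercept
```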
And now let's imagine we spent a ton of time building this model, so we want to save it to disk:
import pickle

p = pickle.dumps(lr)
print(p)  # looks super useful, right?
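The byte string from pickle.dumps can be turned straight back into a live object with pickle.loads; a self-contained round-trip sketch using a hypothetical TinyModel stand-in (pickling a real fitted estimator works the same way):

```python
import pickle

class TinyModel:
    def __init__(self, coef, intercept):
        self.coef = coef
        self.intercept = intercept

    def predict(self, x):
        return self.coef * x + self.intercept

model = TinyModel(2.5, 1.0)
restored = pickle.loads(pickle.dumps(model))  # serialize, then deserialize
```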
Write the pickle to disk, and then navigate to this location in another shell and cat the file. It's pretty much what you see above. Pretty un-helpful to the eye.
# write this fitted estimator (python object) to a byte stream
pickle.dump(lr, open('./lin-reg.pkl', 'wb'))

!cat ./lin-reg.pkl
But now imagine we had another process with some new data, and we wanted to use this pre-existing model to predict some results. We just read in the file and deserialize it. You can even check the coefficients to see that it's "the same" model.
# now, imagine you've previously created this file and stored it off somewhere...
new_lr = pickle.load(open('./lin-reg.pkl', 'rb'))
print(new_lr)
print("Coefficient (compare to previous output): {}, {} \n".format(new_lr.coef_, new_lr.intercept_))
And now we can use it to predict the target for our housing data (remember, we use the "test" data for measuring the success of our estimator).
b_y_predict = new_lr.predict(b_X_test)
#b_y_predict  # you can have a look at the result if you want
Now, we can have a look at how we did. Below, we can look at the best-fit line through all of the data (both "test" and "train"). Then, we also compare the predicted fit results (test data) to the actual true results.
plt.figure(figsize=(12, 5))

plt.subplot(121)
plt.scatter(b_X, b_y, c="red")
plt.plot(b_X, new_lr.predict(b_X), '-k')
plt.axis('tight')
plt.xlabel('room count')
plt.ylabel('predicted price ($1000s)')
plt.title("fit to all data")

plt.subplot(122)
plt.scatter(b_y_test, b_y_predict, c="green")
plt.plot([0, 50], [0, 50], ...
Load raw data
!ls -l ../data/taxi-traffic*
!head ../data/taxi-traffic*
courses/machine_learning/deepdive2/building_production_ml_systems/solutions/4a_streaming_data_training.ipynb
turbomanage/training-data-analyst
apache-2.0
Use tf.data to read the CSV files. These functions for reading data from the csv files are similar to what we used in the Introduction to TensorFlow module. Note that here we have an additional feature, traffic_last_5min.
CSV_COLUMNS = [
    'fare_amount',
    'dayofweek',
    'hourofday',
    'pickup_longitude',
    'pickup_latitude',
    'dropoff_longitude',
    'dropoff_latitude',
    'traffic_last_5min'
]
LABEL_COLUMN = 'fare_amount'
DEFAULTS = [[0.0], [0.0], [0.0], [0.0], [0.0], [0.0], [0.0], [0.0]]

def features_and_labels(row_da...
courses/machine_learning/deepdive2/building_production_ml_systems/solutions/4a_streaming_data_training.ipynb
turbomanage/training-data-analyst
apache-2.0
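The features/labels split that the truncated cell performs (pop the fare column out of each parsed row) can be sketched without TensorFlow. This assumes a plain dict per row rather than a dict of tensors, and the row values are made up:

```python
CSV_COLUMNS = ['fare_amount', 'dayofweek', 'hourofday',
               'pickup_longitude', 'pickup_latitude',
               'dropoff_longitude', 'dropoff_latitude',
               'traffic_last_5min']
LABEL_COLUMN = 'fare_amount'

def features_and_labels(row):
    """Split one parsed CSV row into (features, label)."""
    features = dict(row)                 # copy so the input row is untouched
    label = features.pop(LABEL_COLUMN)   # remove the target from the features
    return features, label

row = dict(zip(CSV_COLUMNS, [9.5, 3.0, 14.0, -73.99, 40.75, -73.97, 40.76, 0.2]))
feats, label = features_and_labels(row)
print(label, sorted(feats))  # 9.5 and the seven remaining feature names
```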
Build a simple keras DNN model
# Build a keras DNN model using Sequential API def build_model(dnn_hidden_units): model = Sequential(DenseFeatures(feature_columns=feature_columns.values())) for num_nodes in dnn_hidden_units: model.add(Dense(units=num_nodes, activation="relu")) model.add(Dense(units=1, activation="linear"...
courses/machine_learning/deepdive2/building_production_ml_systems/solutions/4a_streaming_data_training.ipynb
turbomanage/training-data-analyst
apache-2.0
Next, we can call build_model to create the model. Here we'll have two hidden layers before our final output layer, and we'll train with the same parameters we used before.
HIDDEN_UNITS = [32, 8] model = build_model(dnn_hidden_units=HIDDEN_UNITS) BATCH_SIZE = 1000 NUM_TRAIN_EXAMPLES = 10000 * 6 # training dataset will repeat, wrap around NUM_EVALS = 60 # how many times to evaluate NUM_EVAL_EXAMPLES = 10000 # enough to get a reasonable sample trainds = create_dataset( pattern='.....
courses/machine_learning/deepdive2/building_production_ml_systems/solutions/4a_streaming_data_training.ipynb
turbomanage/training-data-analyst
apache-2.0
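The relationship between those training constants can be checked with a little arithmetic: since the dataset repeats, the total number of optimizer steps is the example budget divided by the batch size, spread across NUM_EVALS evaluations. This is a back-of-the-envelope sketch, not part of the notebook:

```python
BATCH_SIZE = 1000
NUM_TRAIN_EXAMPLES = 10000 * 6  # training dataset repeats, wraps around
NUM_EVALS = 60                  # how many times to evaluate

# total optimizer steps implied by the example budget
total_steps = NUM_TRAIN_EXAMPLES // BATCH_SIZE
# steps run between successive evaluations
steps_per_epoch = total_steps // NUM_EVALS
print(total_steps, steps_per_epoch)  # 60 steps total, 1 step per evaluation
```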
Export and deploy model
OUTPUT_DIR = "./export/savedmodel" shutil.rmtree(OUTPUT_DIR, ignore_errors=True) EXPORT_PATH = os.path.join(OUTPUT_DIR, datetime.datetime.now().strftime("%Y%m%d%H%M%S")) tf.saved_model.save(model, EXPORT_PATH) # with default serving function os.environ['EXPORT_PATH'] = EXPORT_PATH %%bash PR...
courses/machine_learning/deepdive2/building_production_ml_systems/solutions/4a_streaming_data_training.ipynb
turbomanage/training-data-analyst
apache-2.0
Load and prepare data for modeling
# load data
path = '../../03_regression/data/train.csv'
frame = h2o.import_file(path=path)

# assign target and inputs
y = 'SalePrice'
X = [name for name in frame.columns if name not in [y, 'Id']]
10_model_interpretability/src/sensitivity_analysis.ipynb
jphall663/GWU_data_mining
apache-2.0
Impute missing values
# determine column types # impute reals, enums = [], [] for key, val in frame.types.items(): if key in X: if val == 'enum': enums.append(key) else: reals.append(key) _ = frame[reals].impute(method='median') _ = frame[enums].impute(method='mode') # split into tr...
10_model_interpretability/src/sensitivity_analysis.ipynb
jphall663/GWU_data_mining
apache-2.0
Train a predictive model
# train GBM model model = H2OGradientBoostingEstimator(ntrees=100, max_depth=10, distribution='huber', learn_rate=0.1, stopping_rounds=5, ...
10_model_interpretability/src/sensitivity_analysis.ipynb
jphall663/GWU_data_mining
apache-2.0
Determine important variables for use in sensitivity analysis
model.varimp_plot()
10_model_interpretability/src/sensitivity_analysis.ipynb
jphall663/GWU_data_mining
apache-2.0
Helper function for finding quantile indices
def get_quantile_dict(y, id_, frame): """ Returns the percentiles of a column y as the indices for another column id_. Args: y: Column in which to find percentiles. id_: Id column that stores indices for percentiles of y. frame: H2OFrame containing y and id_. Returns: ...
10_model_interpretability/src/sensitivity_analysis.ipynb
jphall663/GWU_data_mining
apache-2.0
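A pure-Python sketch of the same idea, assuming a list of (id, value) pairs instead of an H2OFrame: sort by value, then map each requested percentile to the id sitting at that rank. The pairs below are arbitrary example data:

```python
def quantile_ids(pairs, percentiles):
    """Map each percentile (0-100) to the id whose value sits at that rank."""
    ranked = sorted(pairs, key=lambda p: p[1])  # sort (id, value) by value
    out = {}
    for q in percentiles:
        idx = min(len(ranked) - 1, int(round(q / 100 * (len(ranked) - 1))))
        out[q] = ranked[idx][0]
    return out

pairs = [(10, 5.0), (11, 1.0), (12, 9.0), (13, 3.0), (14, 7.0)]
print(quantile_ids(pairs, [0, 50, 100]))  # {0: 11, 50: 10, 100: 12}
```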
Get validation data ranges
print('lowest SalePrice:\n', preds[preds['Id'] == int(sale_quantile_dict[0])]['SalePrice']) print('lowest prediction:\n', preds[preds['Id'] == int(pred_quantile_dict[0])]['predict']) print('highest SalePrice:\n', preds[preds['Id'] == int(sale_quantile_dict[99])]['SalePrice']) print('highest prediction:\n', preds[preds[...
10_model_interpretability/src/sensitivity_analysis.ipynb
jphall663/GWU_data_mining
apache-2.0
This result alone is interesting. The model appears to be struggling to accurately predict low and high values for SalePrice. This behavior should be corrected to increase the accuracy of predictions. A strategy for improving predictions for these homes with extreme values might be to weight them higher during training...
# look at current row print(preds[preds['Id'] == int(pred_quantile_dict[0])]) # find current error observed = preds[preds['Id'] == int(pred_quantile_dict[0])]['SalePrice'][0,0] predicted = preds[preds['Id'] == int(pred_quantile_dict[0])]['predict'][0,0] print('Error: %.2f%%' % (100*(abs(observed - predicted)/observed)...
10_model_interpretability/src/sensitivity_analysis.ipynb
jphall663/GWU_data_mining
apache-2.0
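The error metric printed in the cell above is simply the absolute percentage error. A minimal standalone version, with made-up observed/predicted values rather than actual rows from the data:

```python
def abs_pct_error(observed, predicted):
    # 100 * |observed - predicted| / observed, as in the notebook's print
    return 100 * abs(observed - predicted) / observed

# hypothetical SalePrice values, not taken from the dataset
print('Error: %.2f%%' % abs_pct_error(100000.0, 85000.0))  # Error: 15.00%
```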
While the model does not seem to handle low-valued homes very well, making the home with the lowest predicted price less appealing does not seem to make the model's predictions any worse. While this prediction behavior appears somewhat stable, which would normally be desirable, this is not particularly good news as th...
# look at current row print(preds[preds['Id'] == int(pred_quantile_dict[99])]) # find current error observed = preds[preds['Id'] == int(pred_quantile_dict[99])]['SalePrice'][0,0] predicted = preds[preds['Id'] == int(pred_quantile_dict[99])]['predict'][0,0] print('Error: %.2f%%' % (100*(abs(observed - predicted)/observ...
10_model_interpretability/src/sensitivity_analysis.ipynb
jphall663/GWU_data_mining
apache-2.0
This result may point to unstable predictions for the higher end of SalePrice. Shutdown H2O
h2o.cluster().shutdown(prompt=True)
10_model_interpretability/src/sensitivity_analysis.ipynb
jphall663/GWU_data_mining
apache-2.0
Generate a left cerebellum volume source space Generate a volume source space of the left cerebellum and plot its vertices relative to the left cortical surface source space and the freesurfer segmentation file.
# Author: Alan Leggitt <alan.leggitt@ucsf.edu> # # License: BSD (3-clause) import numpy as np from scipy.spatial import ConvexHull from mayavi import mlab from mne import setup_source_space, setup_volume_source_space from mne.datasets import sample print(__doc__) data_path = sample.data_path() subjects_dir = data_pa...
0.13/_downloads/plot_left_cerebellum_volume_source.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Setup the source spaces
# setup a cortical surface source space and extract left hemisphere surf = setup_source_space(subj, subjects_dir=subjects_dir, add_dist=False, overwrite=True) lh_surf = surf[0] # setup a volume source space of the left cerebellum cortex volume_label = 'Left-Cerebellum-Cortex' sphere = (0, 0, ...
0.13/_downloads/plot_left_cerebellum_volume_source.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Plot the positions of each source space
# extract left cortical surface vertices, triangle faces, and surface normals x1, y1, z1 = lh_surf['rr'].T faces = lh_surf['use_tris'] normals = lh_surf['nn'] # normalize for mayavi normals /= np.sum(normals * normals, axis=1)[:, np.newaxis] # extract left cerebellum cortex source positions x2, y2, z2 = lh_cereb[0]['r...
0.13/_downloads/plot_left_cerebellum_volume_source.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Compare volume source locations to segmentation file in freeview
# Export source positions to NIfTI file nii_fname = data_path + '/MEG/sample/mne_sample_lh-cerebellum-cortex.nii' # Combine the source spaces src = surf + lh_cereb src.export_volume(nii_fname, mri_resolution=True) # Uncomment the following lines to display source positions in freeview. ''' # display image in freeview...
0.13/_downloads/plot_left_cerebellum_volume_source.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Response of the Single Degree of Freedom (SDOF) System The response is given by the following formula: $$ a(t) = e^{-\zeta \omega t} \left[ 2 \zeta \cos \left(\omega \sqrt{1-\zeta^2}\,t \right) + \frac{1-2\zeta^2}{\sqrt{1-\zeta^2}} \sin\left(\omega \sqrt{1-\zeta^2}\,t \right) \right] $$
# time and frequency vectors t = np.arange(0, 30, 0.002) f = np.arange(2, 45, 2**(1.0/6.0)) #frequency change by 1/6 octave # relative damping factor dz = 0.05 # omega w1 = 2*np.pi*f[1] # acceleration response calculation a1 = np.exp(-dz*w1*t) * (2*dz*np.cos(w1*np.sqrt(1-dz**2)*t) + (1-2*dz**2)/np.sqrt(1-dz**2)*np.si...
Seismic/SDOF.ipynb
rafburzy/Python_EE
bsd-3-clause
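As a sanity check on the response formula, at t = 0 the exponential and cosine terms are 1 and the sine term is 0, so a(0) = 2ζ. A stdlib-only sketch with the notebook's ζ = 0.05 (the frequency chosen here is an arbitrary example, not f[1] from the cell):

```python
import math

def sdof_response(t, zeta, omega):
    """Acceleration response of the damped SDOF system (formula above)."""
    wd = omega * math.sqrt(1 - zeta**2)  # damped angular frequency
    return math.exp(-zeta * omega * t) * (
        2 * zeta * math.cos(wd * t)
        + (1 - 2 * zeta**2) / math.sqrt(1 - zeta**2) * math.sin(wd * t)
    )

zeta, omega = 0.05, 2 * math.pi * 4.0  # damping ratio and angular frequency (example values)
print(sdof_response(0.0, zeta, omega))  # 0.1, i.e. 2*zeta
```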
Excitation of the test object, given by the formula $$ a(t) = \sum\limits_{i} A_i \sin \left( \omega_i t + \phi_i \right) + \Psi(t) $$
#Aux variables definition ZPA = 0.4 #Amplitudes AA = [np.random.random()*ZPA for k in f] #Random angle fi = [np.random.random()*np.pi/2 for k in f] # excitation for frequency range B = np.empty((len(t), len(f))) for j in range(len(w)): for i in range(len(t)): B[i,j] = AA[j]*np.sin(w[j]*t[i] + fi[j]) ...
Seismic/SDOF.ipynb
rafburzy/Python_EE
bsd-3-clause
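The double loop in the cell builds one sine per frequency and sums nothing yet; the sum over i in the formula can be sketched directly. Frequencies, amplitudes, and phases here are placeholder values (ZPA = 0.4 as in the cell), and the window term Ψ(t) is left out:

```python
import math
import random

random.seed(0)
freqs = [2.0, 4.0, 8.0]                                 # example frequencies [Hz]
amps = [random.random() * 0.4 for _ in freqs]           # random amplitudes up to ZPA = 0.4
phis = [random.random() * math.pi / 2 for _ in freqs]   # random phases

def excitation(t):
    # sum_i A_i * sin(omega_i * t + phi_i), with omega_i = 2*pi*f_i
    return sum(a * math.sin(2 * math.pi * f * t + p)
               for a, f, p in zip(amps, freqs, phis))

print(excitation(0.0))  # at t = 0 this reduces to sum_i A_i * sin(phi_i)
```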
Window functions definition (Psi function)
window = np.hamming(len(C))
window2 = np.bartlett(len(C))
window3 = np.blackman(len(C))
window4 = np.hanning(len(C))
window5 = np.kaiser(len(C), beta=3.5)

plt.figure(figsize=(12,6))
plt.plot(t, window, label='hamming')
plt.plot(t, window2, label='bartlett')
plt.plot(t, window3, label='blackman')
plt.plot(t, window4, la...
Seismic/SDOF.ipynb
rafburzy/Python_EE
bsd-3-clause
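For reference, the Hann window used above (np.hanning) follows a simple cosine formula. A stdlib version, assuming the same symmetric N-point convention as NumPy:

```python
import math

def hanning(n):
    """Symmetric N-point Hann window: w[k] = 0.5 - 0.5*cos(2*pi*k/(n-1))."""
    if n == 1:
        return [1.0]
    return [0.5 - 0.5 * math.cos(2 * math.pi * k / (n - 1)) for k in range(n)]

w = hanning(5)
print(w)  # endpoints are 0, midpoint is 1
```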
Excitation with window function
plt.figure(figsize=(12,10))
plt.plot(t, C*window5, color='orange', alpha=0.35)
plt.plot(t, C*window, alpha=0.35)
plt.xlabel('Time [s]')
plt.ylabel('Acceleration [g]');
Seismic/SDOF.ipynb
rafburzy/Python_EE
bsd-3-clause
Frequency Response Function (FRF) of SDOF Spatial parameter model <img src='FRF.png'>
# definition of parameters # spring constant [N/m] k = 40 # mass [kg] m = 2 # damping c = 5 # frequency f1 = np.arange(0, 45, 0.001) # angular frequency omega = 2*np.pi*f1 # frequency response function (FRF) H = 1 / (-omega**2*m + 1j*omega*c + k) plt.figure(figsize=(10,8)) plt.subplot(211) plt.semilogx(omega, np...
Seismic/SDOF.ipynb
rafburzy/Python_EE
bsd-3-clause
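The FRF is a single complex expression, so it can be checked pointwise with Python's built-in complex type; at ω = 0 it reduces to the static compliance 1/k. A sketch with the cell's parameter values:

```python
def frf(omega, m=2.0, c=5.0, k=40.0):
    """Frequency response H(omega) = 1 / (-omega^2 * m + j*omega*c + k)."""
    return 1.0 / (-omega**2 * m + 1j * omega * c + k)

print(frf(0.0))       # (0.025+0j), i.e. 1/k
print(abs(frf(4.0)))  # magnitude at an arbitrary angular frequency
```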
Modal parameter model (because in reality mass, spring constant and damping coefficient are not known) <br> <img src='modal.png'>
# exemplary data from testing
# modal constant
C = 1.4
# resonance frequency (undamped nat freq)
omega_00 = 2*np.pi*5.8
# damping
dzeta = 6.6/100
# damped natural freq
omega_d = omega_00 * np.sqrt(1 - dzeta**2)
# residue
R = -1j* C * 0.54
R_con = np.conj(R)
# sigma
sigma = np.sqrt(omega_00**2 - omega_d**2)
# po...
Seismic/SDOF.ipynb
rafburzy/Python_EE
bsd-3-clause
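The modal relations used in the cell can be checked with its own numbers (ω_00 = 2π·5.8, ζ = 0.066): ω_d = ω_00·sqrt(1 − ζ²), and the decay rate sqrt(ω_00² − ω_d²) collapses algebraically to ζ·ω_00. A stdlib check:

```python
import math

omega_00 = 2 * math.pi * 5.8   # undamped natural frequency [rad/s]
dzeta = 6.6 / 100              # damping ratio

omega_d = omega_00 * math.sqrt(1 - dzeta**2)   # damped natural frequency
sigma = dzeta * omega_00                       # decay rate; equals sqrt(omega_00**2 - omega_d**2)

print(omega_d, sigma)
```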
The model follows the concepts described in https://www.w3.org/TR/prov-primer/
from IPython.display import display, Image Image(filename='key-concepts.png') import sys #sys.path.append('/home/stephan/Repos/ENES-EUDAT/submission_forms') sys.path.append('C:\\Users\\Stephan Kindermann\\Documents\\GitHub\\submission_forms') from dkrz_forms import form_handler from dkrz_forms.config import * name_s...
test/prov/old/prov-submission.ipynb
IS-ENES-Data/submission_forms
apache-2.0
Example namespaces (from DOI: 10.3390/ijgi5030038, more at https://github.com/tsunagun/vocab/blob/master/all_20130125.csv):
owl: Web Ontology Language, http://www.w3.org/2002/07/owl#
dctype: DCMI Type Vocabulary, http://purl.org/dc/dcmitype/
dco: DCO Ontology, http://info.deepcarbon.net/sch...
# later: organize things in bundles data_manager_ats = {'foaf:givenName':'Peter','foaf:mbox':'lenzen@dkzr.de'} d1.entity('sub:empty') def add_stage(agent,activity,in_state,out_state): # in_stage exists, out_stage is generated d1.agent(agent, data_manager_ats) d1.activity(activity) d1.entity(out_state) ...
test/prov/old/prov-submission.ipynb
IS-ENES-Data/submission_forms
apache-2.0
Assign information to provenance graph nodes and edges
%matplotlib inline
d1.plot()
d1.serialize()

import json
ingest_prov_file = open('ingest_prov_1.json', 'w')
prov_data = d1.serialize()
prov_data_json = json.dumps(prov_data)
ingest_prov_file.write(prov_data)
ingest_prov_file.close()

#d1.wasAttributedTo(data_submission,'????')
test/prov/old/prov-submission.ipynb
IS-ENES-Data/submission_forms
apache-2.0