Indexing Lists (...and Text objects)
text4[173]
text4.index('awaken')
text5[16715:16735]
text6[1600:1625]
01_language_processing_and_python.ipynb
sandipchatterjee/nltk_book_notes
mit
Computing with Language: Simple Statistics
saying = 'After all is said and done more is said than done'.split()
tokens = sorted(set(saying))
tokens[-2:]
01_language_processing_and_python.ipynb
sandipchatterjee/nltk_book_notes
mit
Frequency Distributions
fdist1 = FreqDist(text1)
print(fdist1)
fdist1.most_common(50)
fdist1['whale']
01_language_processing_and_python.ipynb
sandipchatterjee/nltk_book_notes
mit
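FreqDist behaves much like Python's built-in collections.Counter, so its core operations can be sketched without NLTK. The toy token list below is hypothetical; it stands in for text1 (Moby Dick in the NLTK book):

```python
from collections import Counter

# A toy token list standing in for text1
tokens = "the whale the sea the whale and the ship".split()

fdist = Counter(tokens)      # FreqDist is essentially a Counter over tokens
print(fdist.most_common(2))  # most frequent words first
print(fdist["whale"])        # count of a single word
```

Counter lacks FreqDist extras such as `.freq()` and `.plot()`, but `most_common()` and indexing behave the same way.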
The 50 most frequent words account for almost half of the book.
plt.figure(figsize=(18,10))
fdist1.plot(50, cumulative=True)
01_language_processing_and_python.ipynb
sandipchatterjee/nltk_book_notes
mit
Fine-grained Selection of Words: looking at the long words of a text (perhaps these are more meaningful words?)
V = set(text1)
long_words = [w for w in V if len(w) > 15]
sorted(long_words)
01_language_processing_and_python.ipynb
sandipchatterjee/nltk_book_notes
mit
words that are longer than 7 characters and occur more than 7 times
fdist5 = FreqDist(text5)
sorted(w for w in set(text5) if len(w) > 7 and fdist5[w] > 7)
01_language_processing_and_python.ipynb
sandipchatterjee/nltk_book_notes
mit
collocation - a sequence of words that occur together unusually often ("red wine" is a collocation, whereas "the wine" is not)
list(nltk.bigrams(['more', 'is', 'said', 'than', 'done'])) # bigrams() returns a generator
01_language_processing_and_python.ipynb
sandipchatterjee/nltk_book_notes
mit
Collocations are essentially frequent bigrams, but we want to focus on the cases that involve rare words. collocations() returns bigrams that occur more often than expected, given the individual word frequencies.
text4.collocations()
text8.collocations()
01_language_processing_and_python.ipynb
sandipchatterjee/nltk_book_notes
mit
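One standard way to score bigrams that occur "more often than expected" is pointwise mutual information (PMI), which compares a bigram's observed frequency with what the independent word frequencies would predict. NLTK's collocations() applies its own filtering and scoring, so this is only an illustrative sketch on a hypothetical toy corpus:

```python
import math
from collections import Counter

tokens = "red wine and red wine but the wine and the glass".split()
unigrams = Counter(tokens)
bigrams = Counter(zip(tokens, tokens[1:]))
n = len(tokens)

def pmi(w1, w2):
    # log of the observed bigram probability over the product of unigram probabilities
    p_xy = bigrams[(w1, w2)] / (n - 1)
    p_x, p_y = unigrams[w1] / n, unigrams[w2] / n
    return math.log2(p_xy / (p_x * p_y))

# "red" always precedes "wine", so the pair scores higher than the looser "the wine"
print(pmi("red", "wine"), pmi("the", "wine"))
```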
Counting other things: the word-length distribution in text1
[len(w) for w in text1][:10]
fdist = FreqDist(len(w) for w in text1)
print(fdist)
fdist
fdist.most_common()
fdist.max()
fdist[3]
fdist.freq(3)
01_language_processing_and_python.ipynb
sandipchatterjee/nltk_book_notes
mit
Words of length 3 (~50k of them) make up ~20% of all words in the book. Back to Python: Making Decisions and Taking Control (skipping the basic Python material). More accurate vocabulary-size counting: convert all strings to lowercase first.
len(text1)
len(set(text1))
len(set(word.lower() for word in text1))
01_language_processing_and_python.ipynb
sandipchatterjee/nltk_book_notes
mit
Only include alphabetic words -- no punctuation
len(set(word.lower() for word in text1 if word.isalpha()))
01_language_processing_and_python.ipynb
sandipchatterjee/nltk_book_notes
mit
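The effect of the two normalizations above (lowercasing, then dropping non-alphabetic tokens) can be seen on a small hypothetical token list:

```python
tokens = ["The", "whale", ",", "the", "Whale", "!", "sea"]

raw_vocab = set(tokens)
lower_vocab = set(w.lower() for w in tokens)
alpha_vocab = set(w.lower() for w in tokens if w.isalpha())

# lowercasing merges "The"/"the" and "Whale"/"whale"; isalpha() drops punctuation
print(len(raw_vocab), len(lower_vocab), len(alpha_vocab))
```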
Structure Prediction. In this notebook, we test the structure-substitution algorithm implemented in SMACT. Before we can make predictions, we need to create our cation mutator, a database with a table, and a list of hypothetical compositions. Procedure: 1. Create a list of hypothetical compositions 2. Create a database of structures 3. Create the cation mutator 4. Predict structures. Part 1: Compositions. These compositions were generated in a different notebook and the results are loaded here.
comps = pd.read_csv("Li-Garnet_Comps_sus.csv")
comps.head()
examples/Structure_Prediction/Li-Garnets_SP-Pym-new.ipynb
WMD-group/SMACT
mit
Part 2: Database. Procedure: 1. Create a SMACT database and add a table containing the result of our query.
DB = StructureDB("Test.db")
# CM is the CationMutator created in Part 3
SP = StructurePredictor(CM, DB, "Garnets")
examples/Structure_Prediction/Li-Garnets_SP-Pym-new.ipynb
WMD-group/SMACT
mit
Part 3: Cation Mutator. To set up the cation mutator, we must: 1. Generate a dataframe of lambda values for all species we wish to consider 2. Instantiate the CationMutator class with the lambda dataframe
# Create the CationMutator instance
# Here we use the default lambda table
CM = CationMutator.from_json()
examples/Structure_Prediction/Li-Garnets_SP-Pym-new.ipynb
WMD-group/SMACT
mit
Part 4: Structure Prediction. Prerequisites: Parts 1, 2 & 3. Procedure: 1. Instantiate the StructurePredictor class with the cation mutator (Part 3) and the database and table (Part 2) 2. Predict structures 3. Compare the predicted structures with the database
# Corrections: keep only compositions whose species appear in the lambda table
cond_df = CM.complete_cond_probs()
species = list(cond_df.columns)

comps_copy = comps[['A', 'B', 'C', 'D']]
df_copy_bool = comps_copy.isin(species)
x = comps_copy[df_copy_bool].fillna(0)
x = x[x.A != 0]
x = x[x.B != 0]
x = x[x.C != 0]
x = x[x.D != 0]
x = x.reset_index(drop=True)
# x.to_csv("./Garnet_Comps_Corrected_Pym.csv", index=False)
x.head()

inner_merged = pd.merge(x, comps)
inner_merged.to_csv("./Li-Garnet_Comps_Corrected_Pym.csv", index=False)
print(inner_merged.head())
print("")
print(f"We have reduced our search space from {comps.shape[0]} to {inner_merged.shape[0]}")

# x = x[:100]

# Create a list of test species
test_specs_list = [[parse_spec(inner_merged["A"][i]),
                    parse_spec(inner_merged["B"][i]),
                    parse_spec(inner_merged["C"][i]),
                    parse_spec(inner_merged["D"][i])]
                   for i in range(inner_merged.shape[0])]

# Predict structures for every composition and store the results
from datetime import datetime
from operator import itemgetter

start = datetime.now()
preds = []
parents_list = []
probs_list = []
for test_specs in test_specs_list:
    # probability threshold of 1e-4 (0.0001)
    predictions = list(SP.predict_structs(test_specs, thresh=1e-4, include_same=False))
    predictions.sort(key=itemgetter(1), reverse=True)
    parents = [p[2].composition() for p in predictions]
    probs = [p[1] for p in predictions]
    preds.append(predictions)
    parents_list.append(parents)
    probs_list.append(probs)

print(f"Time taken to predict the crystal structures of our search space of {inner_merged.shape[0]} with a threshold of 0.0001 is {datetime.now() - start}")
examples/Structure_Prediction/Li-Garnets_SP-Pym-new.ipynb
WMD-group/SMACT
mit
Part 5: Storing the results
# Add predictions to the dataframe
import pymatgen as mg

pred_structs = []
probs = []
parent_structs = []
parent_pretty_formula = []
for i in preds:
    if len(i) == 0:
        pred_structs.append(None)
        probs.append(None)
        parent_structs.append(None)
        parent_pretty_formula.append(None)
    else:
        pred_structs.append(i[0][0].as_poscar())
        probs.append(i[0][1])
        parent_structs.append(i[0][2].as_poscar())
        parent_pretty_formula.append(
            mg.Structure.from_str(i[0][2].as_poscar(), fmt="poscar").composition.reduced_formula)

inner_merged["predicted_structure"] = pred_structs
inner_merged["probability"] = probs
inner_merged["Parent formula"] = parent_pretty_formula
inner_merged["parent_structure"] = parent_structs
inner_merged[35:40]

# Output the intermediary results to a CSV file
outdir = "./Li-SP_results"
if not os.path.exists(outdir):
    os.mkdir(outdir)
fullpath = os.path.join(outdir, "pred_results.csv")
inner_merged.to_csv(fullpath)

# Filter the dataframe to remove blank entries
results = inner_merged.dropna()
results = results.reset_index(drop=True)
results.head()

# Check whether each predicted composition exists in our local database
in_db = []
for i in results["predicted_structure"]:
    comp = SmactStructure.from_poscar(i).composition()
    if len(DB.get_structs(comp, "Garnets")) != 0:
        in_db.append("Yes")
    else:
        in_db.append("No")
results["In DB?"] = in_db
print(results["In DB?"].value_counts())

# Plot the ratio of structures in the database vs. not in the database
in_db_count = results["In DB?"].value_counts()
from matplotlib import pyplot as plt
import numpy as np

fig = plt.figure()
ax = fig.add_axes([0, 0, 1, 1])
ax.axis('equal')
ax.pie(in_db_count, labels=["No", "Yes"], autopct='%1.2f%%')
plt.savefig(f"{outdir}/Li-Garnets_Li-In_DB_SP.png")
plt.show()

import seaborn as sns

plt.figure(figsize=(8, 6))
ax1 = sns.histplot(data=results, x="probability", hue="In DB?", multiple="stack")
# plt.savefig("Prediction_Probability_Distribution_pym.png")

g = sns.FacetGrid(results, col="In DB?", height=6, aspect=1)
g.map(sns.histplot, "probability")
g.savefig("./Li-SP_results/Prob_dist.png")
results.head()

# Periodic table heatmaps of element counts on each site
from pymatgen.util.plotting import periodic_table_heatmap

A_els = pd.Series([parse_spec(i)[0] for i in results["A"]])
B_els = pd.Series([parse_spec(i)[0] for i in results["B"]])
C_els = pd.Series([parse_spec(i)[0] for i in results["C"]])

# Get dicts of counts per element
A_els_counts = A_els.value_counts().to_dict()
B_els_counts = B_els.value_counts().to_dict()
C_els_counts = C_els.value_counts().to_dict()

ax1 = periodic_table_heatmap(elemental_data=A_els_counts, cbar_label="Counts")
ax1.savefig(f"{outdir}/periodic_table_A.png")
ax2 = periodic_table_heatmap(elemental_data=B_els_counts, cbar_label="Counts")
ax2.savefig(f"{outdir}/periodic_table_B.png")
ax3 = periodic_table_heatmap(elemental_data=C_els_counts, cbar_label="Counts")
ax3.savefig(f"{outdir}/periodic_table_C.png")
examples/Structure_Prediction/Li-Garnets_SP-Pym-new.ipynb
WMD-group/SMACT
mit
Accessing Text from the Web and from Disk. Electronic Books. Every e-book on Project Gutenberg has a number; once you know the number, you can download the book. For example, "Crime and Punishment" is number 2554 and can be fetched as follows.
from urllib.request import urlopen  # in Python 2 this was `from urllib import urlopen`

url = "http://www.gutenberg.org/files/2554/2554.txt"
raw = urlopen(url).read().decode('utf-8')
type(raw)
NLP_With_Python/Ch3.ipynb
banyh/ShareIPythonNotebook
gpl-3.0
Note: to download through a proxy in Python 2 you could pass a proxies argument: proxy = {'http': 'http://www.someproxy.com:3128'}; raw = urlopen(url, proxies=proxy).read(). (In Python 3, install a ProxyHandler via urllib.request instead.) The raw text we just downloaded is a single string containing many characters we don't need, so we tokenize it to separate the words.
tokens = nltk.word_tokenize(raw)
tokens[:10]
NLP_With_Python/Ch3.ipynb
banyh/ShareIPythonNotebook
gpl-3.0
The resulting tokens can be converted to an nltk Text object for further processing.
text = nltk.Text(tokens)
text
text.collocations()
NLP_With_Python/Ch3.ipynb
banyh/ShareIPythonNotebook
gpl-3.0
Dealing with HTML
url = "http://news.bbc.co.uk/2/hi/health/2284783.stm"
html = urlopen(url).read()
html[:60]

from bs4 import BeautifulSoup
bs = BeautifulSoup(html, "lxml")
tokens = nltk.word_tokenize(bs.get_text())
tokens[:5]
NLP_With_Python/Ch3.ipynb
banyh/ShareIPythonNotebook
gpl-3.0
concordance() lists every occurrence of a word together with its context.
text = nltk.Text(tokens)
text.concordance('gene')
NLP_With_Python/Ch3.ipynb
banyh/ShareIPythonNotebook
gpl-3.0
Strings: Lowest Level Text. Strings are immutable: once a value is assigned it cannot be modified, so every string method returns a new copy rather than changing the original. Triple quotes (""") define a string that can span multiple lines, with the line breaks included.
s = """hello,
my friend"""
print(s)
NLP_With_Python/Ch3.ipynb
banyh/ShareIPythonNotebook
gpl-3.0
str supports the + and * operators: + concatenates two strings, and * repeats a string.
'must' + 'maybe' + 'hello'
'bug-' * 5
NLP_With_Python/Ch3.ipynb
banyh/ShareIPythonNotebook
gpl-3.0
s[i] accesses the (i+1)-th character; for a string of length n, i ranges from 0 to n-1. s[-i] accesses the i-th character counting from the end; i ranges from -1 to -n.
s = 'hello, world'
s[0], s[1], s[:5], s[-5:]
NLP_With_Python/Ch3.ipynb
banyh/ShareIPythonNotebook
gpl-3.0
Other common string methods: s.find(t): position of the first occurrence of t in s (0..n-1), or -1 if not found; s.rfind(t): like find, but searching from the right; s.index(t): like s.find(t), but raises ValueError when t is not found; s.rindex(t): like s.rfind(t), but raises ValueError when t is not found; s.join([a,b,c]): joins the strings a, b, c with s in between, giving asbsc; s.split(t): splits s on the separator t (whitespace by default); s.splitlines(): splits s into a list of lines; s.lower(): converts all letters to lowercase; s.upper(): converts all letters to uppercase; s.title(): capitalizes the first letter of each word; s.strip(): removes leading and trailing whitespace; s.replace(t, u): replaces occurrences of t in s with u. Text Processing with Unicode
path = nltk.data.find('corpora/unicode_samples/polish-lat2.txt')
import codecs
f = codecs.open(path, encoding='latin2')
for line in f:
    print(line, line.encode('utf-8'), line.encode('unicode_escape'), '\n')

# ord() returns the Unicode code point of a character
print(ord(u'許'), ord(u'洪'), ord(u'蓋'))
print(repr(u'許洪蓋'))
NLP_With_Python/Ch3.ipynb
banyh/ShareIPythonNotebook
gpl-3.0
The unicodedata module can print the Unicode description of a character.
import unicodedata

lines = codecs.open(path, encoding='latin2').readlines()
line = lines[2]
print(line.encode('unicode_escape'))
for c in line:
    if ord(c) > 127:
        print('%s U+%04x %s' % (c, ord(c), unicodedata.name(c)))
NLP_With_Python/Ch3.ipynb
banyh/ShareIPythonNotebook
gpl-3.0
Regular Expressions .: matches any single character ^abc: string starts with abc abc$: string ends with abc [abc]: matches any character listed in the brackets, e.g. a, b, or c [A-Z0-9]: matches any character in the given ranges, e.g. A-Z or 0-9 ed|ing|s: matches any one of the given strings *: previous element repeated zero or more times +: previous element repeated one or more times ?: previous element repeated zero or one time {n}: previous element repeated exactly n times {n,}: previous element repeated at least n times {,n}: previous element repeated at most n times {m,n}: previous element repeated at least m and at most n times a(b|c)+: parentheses delimit the scope of the | operator
import re

# re.search returns a match object (or None); converting it to bool
# tells us whether a match was found
bool(re.search('ed$', 'played')), bool(re.search('ed$', 'happy'))

# re.findall extracts substrings; the part to extract is delimited by ()
re.findall(r'\[\[(.+?)[\]\|]+', 'My Name is [[Bany]] Hung, this is my [[Dog|Animal]]')

# re.sub replaces matches
re.sub(r'\[\[.+?[\]\|]+', '###', 'My Name is [[Bany]] Hung, this is my [[Dog|Animal]]')
NLP_With_Python/Ch3.ipynb
banyh/ShareIPythonNotebook
gpl-3.0
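A few of the operators from the table above can be exercised in a quick, self-contained session (the strings here are toy examples, not from the book):

```python
import re

# anchors and alternation
assert bool(re.search(r'^abc', 'abcdef'))
assert re.findall(r'(?:ed|ing|s)$', 'playing') == ['ing']

# repetition: {m,n} bounds how many times the previous element repeats
assert re.findall(r'o{1,2}', 'good food god') == ['oo', 'oo', 'o']

# parentheses delimit the scope of the | operator
assert bool(re.match(r'a(b|c)+$', 'abcbc'))
print("all regex checks passed")
```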
Normalize Text. Normalization means converting text to a uniform format, for example lowercasing or stemming.
porter = nltk.PorterStemmer()
lanc = nltk.LancasterStemmer()
w = ['I', 'was', 'playing', 'television', 'in', 'the', 'painted', 'garden']
[(a, porter.stem(a), lanc.stem(a)) for a in w]
NLP_With_Python/Ch3.ipynb
banyh/ShareIPythonNotebook
gpl-3.0
Before stemming, we store an index of the original text in a dict-like structure so it can be looked up conveniently.
class IndexedText(object):

    def __init__(self, stemmer, text):
        self._text = text
        self._stemmer = stemmer
        self._index = nltk.Index((self._stem(word), i)
                                 for (i, word) in enumerate(text))

    def concordance(self, word, width=40):
        key = self._stem(word)
        wc = int(width/4)  # words of context
        for i in self._index[key]:
            lcontext = ' '.join(self._text[i-wc:i])
            rcontext = ' '.join(self._text[i:i+wc])
            ldisplay = '%*s' % (width, lcontext[-width:])
            rdisplay = '%-*s' % (width, rcontext[:width])
            print(ldisplay, rdisplay)

    def _stem(self, word):
        return self._stemmer.stem(word).lower()

grail = nltk.corpus.webtext.words('grail.txt')
text = IndexedText(porter, grail)
text.concordance('lie')
NLP_With_Python/Ch3.ipynb
banyh/ShareIPythonNotebook
gpl-3.0
Lemmatization
s = ['women', 'are', 'living']
wnl = nltk.WordNetLemmatizer()
# list() is needed in Python 3, where zip returns an iterator
list(zip(s, [wnl.lemmatize(t, 'v') for t in s], [wnl.lemmatize(t, 'n') for t in s]))
NLP_With_Python/Ch3.ipynb
banyh/ShareIPythonNotebook
gpl-3.0
Formatting: From Lists to Strings
s = ['We', 'called', 'him', 'Tortoise', 'because', 'he', 'taught', 'us', '.']
' '.join(s)
';'.join(s)
NLP_With_Python/Ch3.ipynb
banyh/ShareIPythonNotebook
gpl-3.0
Let's first get familiar with different unsupervised anomaly detection approaches and algorithms. To visualise the output of the different algorithms, we consider a toy data set consisting of a two-dimensional Gaussian mixture. Generating the data set
from sklearn.datasets import make_blobs

X, y = make_blobs(n_features=2, centers=3, n_samples=500, random_state=42)
X.shape

plt.figure()
plt.scatter(X[:, 0], X[:, 1])
plt.show()
notebooks/22.Unsupervised_learning-anomaly_detection.ipynb
amueller/scipy-2017-sklearn
cc0-1.0
Anomaly detection with density estimation
from sklearn.neighbors import KernelDensity  # the sklearn.neighbors.kde module path is deprecated

# Estimate density with a Gaussian kernel density estimator
kde = KernelDensity(kernel='gaussian')
kde = kde.fit(X)
kde

kde_X = kde.score_samples(X)
print(kde_X.shape)  # contains the log-likelihood of the data; the smaller it is, the rarer the sample

from scipy.stats.mstats import mquantiles
alpha_set = 0.95
tau_kde = mquantiles(kde_X, 1. - alpha_set)

n_samples, n_features = X.shape
X_range = np.zeros((n_features, 2))
X_range[:, 0] = np.min(X, axis=0) - 1.
X_range[:, 1] = np.max(X, axis=0) + 1.

h = 0.1  # step size of the mesh
x_min, x_max = X_range[0]
y_min, y_max = X_range[1]
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
                     np.arange(y_min, y_max, h))
grid = np.c_[xx.ravel(), yy.ravel()]

Z_kde = kde.score_samples(grid)
Z_kde = Z_kde.reshape(xx.shape)

plt.figure()
c_0 = plt.contour(xx, yy, Z_kde, levels=tau_kde, colors='red', linewidths=3)
plt.clabel(c_0, inline=1, fontsize=15, fmt={tau_kde[0]: str(alpha_set)})
plt.scatter(X[:, 0], X[:, 1])
plt.show()
notebooks/22.Unsupervised_learning-anomaly_detection.ipynb
amueller/scipy-2017-sklearn
cc0-1.0
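The thresholding step above, keeping the alpha_set fraction of most-likely samples, can be sketched in pure Python in place of scipy's mquantiles. The scores below are synthetic stand-ins for the kde.score_samples log-likelihoods:

```python
import random

random.seed(0)
scores = [random.gauss(0.0, 1.0) for _ in range(1000)]  # stand-in for log-likelihood scores

alpha_set = 0.95
# tau is the (1 - alpha_set) empirical quantile: ~5% of scores fall below it
tau = sorted(scores)[int((1.0 - alpha_set) * len(scores))]

outlier_fraction = sum(s < tau for s in scores) / len(scores)
print(outlier_fraction)  # close to 1 - alpha_set
```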
now with One-Class SVM The problem with density-based estimation is that it tends to become inefficient as the dimensionality of the data increases. This is the so-called curse of dimensionality, which affects density estimation algorithms in particular. The one-class SVM algorithm can be used in such cases.
from sklearn.svm import OneClassSVM

nu = 0.05  # theory says it should be an upper bound on the fraction of outliers
ocsvm = OneClassSVM(kernel='rbf', gamma=0.05, nu=nu)
ocsvm.fit(X)

X_outliers = X[ocsvm.predict(X) == -1]

Z_ocsvm = ocsvm.decision_function(grid)
Z_ocsvm = Z_ocsvm.reshape(xx.shape)

plt.figure()
c_0 = plt.contour(xx, yy, Z_ocsvm, levels=[0], colors='red', linewidths=3)
plt.clabel(c_0, inline=1, fontsize=15, fmt={0: str(alpha_set)})
plt.scatter(X[:, 0], X[:, 1])
plt.scatter(X_outliers[:, 0], X_outliers[:, 1], color='red')
plt.show()
notebooks/22.Unsupervised_learning-anomaly_detection.ipynb
amueller/scipy-2017-sklearn
cc0-1.0
Support vectors - Outliers The so-called support vectors of the one-class SVM form the outliers
X_SV = X[ocsvm.support_]
n_SV = len(X_SV)
n_outliers = len(X_outliers)

# nu should lie between the outlier fraction and the support-vector fraction
print('{0:.2f} <= {1:.2f} <= {2:.2f}?'.format(1./n_samples*n_outliers, nu, 1./n_samples*n_SV))
notebooks/22.Unsupervised_learning-anomaly_detection.ipynb
amueller/scipy-2017-sklearn
cc0-1.0
Only the support vectors are involved in the decision function of the One-Class SVM. Plot the level sets of the One-Class SVM decision function as we did for the true density, emphasizing the support vectors.
plt.figure()
plt.contourf(xx, yy, Z_ocsvm, 10, cmap=plt.cm.Blues_r)
plt.scatter(X[:, 0], X[:, 1], s=1.)
plt.scatter(X_SV[:, 0], X_SV[:, 1], color='orange')
plt.show()
notebooks/22.Unsupervised_learning-anomaly_detection.ipynb
amueller/scipy-2017-sklearn
cc0-1.0
<div class="alert alert-success"> <b>EXERCISE</b>: <ul> <li> **Change** the `gamma` parameter and see its influence on the smoothness of the decision function. </li> </ul> </div>
# %load solutions/22_A-anomaly_ocsvm_gamma.py
notebooks/22.Unsupervised_learning-anomaly_detection.ipynb
amueller/scipy-2017-sklearn
cc0-1.0
Isolation Forest Isolation Forest is an anomaly detection algorithm based on trees. The algorithm builds a number of random trees, and the rationale is that an isolated sample should end up alone in a leaf after very few random splits. Isolation Forest builds an abnormality score based on the depth of the tree at which samples end up.
from sklearn.ensemble import IsolationForest

iforest = IsolationForest(n_estimators=300, contamination=0.10)
iforest = iforest.fit(X)

Z_iforest = iforest.decision_function(grid)
Z_iforest = Z_iforest.reshape(xx.shape)

plt.figure()
# In recent scikit-learn versions the threshold_ attribute was removed;
# the decision boundary sits at decision_function == 0 instead
c_0 = plt.contour(xx, yy, Z_iforest, levels=[0], colors='red', linewidths=3)
plt.clabel(c_0, inline=1, fontsize=15, fmt={0: str(alpha_set)})
plt.scatter(X[:, 0], X[:, 1], s=1.)
plt.show()
notebooks/22.Unsupervised_learning-anomaly_detection.ipynb
amueller/scipy-2017-sklearn
cc0-1.0
<div class="alert alert-success"> <b>EXERCISE</b>: <ul> <li> Illustrate graphically the influence of the number of trees on the smoothness of the decision function. </li> </ul> </div>
# %load solutions/22_B-anomaly_iforest_n_trees.py
notebooks/22.Unsupervised_learning-anomaly_detection.ipynb
amueller/scipy-2017-sklearn
cc0-1.0
Illustration on Digits data set We will now apply the IsolationForest algorithm to spot digits written in an unconventional way.
from sklearn.datasets import load_digits

digits = load_digits()
notebooks/22.Unsupervised_learning-anomaly_detection.ipynb
amueller/scipy-2017-sklearn
cc0-1.0
The digits data set consists of 8 x 8 images of digits.
images = digits.images
labels = digits.target
images.shape

i = 102
plt.figure(figsize=(2, 2))
plt.title('{0}'.format(labels[i]))
plt.axis('off')
plt.imshow(images[i], cmap=plt.cm.gray_r, interpolation='nearest')
plt.show()
notebooks/22.Unsupervised_learning-anomaly_detection.ipynb
amueller/scipy-2017-sklearn
cc0-1.0
To use the images as a training set we need to flatten the images.
n_samples = len(digits.images)
data = digits.images.reshape((n_samples, -1))
data.shape

X = data
y = digits.target
X.shape
notebooks/22.Unsupervised_learning-anomaly_detection.ipynb
amueller/scipy-2017-sklearn
cc0-1.0
Let's focus on digit 5.
X_5 = X[y == 5]
X_5.shape

fig, axes = plt.subplots(1, 5, figsize=(10, 4))
for ax, x in zip(axes, X_5[:5]):
    img = x.reshape(8, 8)
    ax.imshow(img, cmap=plt.cm.gray_r, interpolation='nearest')
    ax.axis('off')
notebooks/22.Unsupervised_learning-anomaly_detection.ipynb
amueller/scipy-2017-sklearn
cc0-1.0
Let's use IsolationForest to find the top 5% most abnormal images, and plot them!
from sklearn.ensemble import IsolationForest

iforest = IsolationForest(contamination=0.05)
iforest = iforest.fit(X_5)
notebooks/22.Unsupervised_learning-anomaly_detection.ipynb
amueller/scipy-2017-sklearn
cc0-1.0
Compute the level of "abnormality" with iforest.decision_function. The lower, the more abnormal.
iforest_X = iforest.decision_function(X_5)
plt.hist(iforest_X);
notebooks/22.Unsupervised_learning-anomaly_detection.ipynb
amueller/scipy-2017-sklearn
cc0-1.0
Let's plot the strongest inliers
X_strong_inliers = X_5[np.argsort(iforest_X)[-10:]]

fig, axes = plt.subplots(2, 5, figsize=(10, 5))
for i, ax in zip(range(len(X_strong_inliers)), axes.ravel()):
    ax.imshow(X_strong_inliers[i].reshape((8, 8)),
              cmap=plt.cm.gray_r, interpolation='nearest')
    ax.axis('off')
notebooks/22.Unsupervised_learning-anomaly_detection.ipynb
amueller/scipy-2017-sklearn
cc0-1.0
Let's plot the strongest outliers
fig, axes = plt.subplots(2, 5, figsize=(10, 5))

X_outliers = X_5[iforest.predict(X_5) == -1]
for i, ax in zip(range(len(X_outliers)), axes.ravel()):
    ax.imshow(X_outliers[i].reshape((8, 8)),
              cmap=plt.cm.gray_r, interpolation='nearest')
    ax.axis('off')
notebooks/22.Unsupervised_learning-anomaly_detection.ipynb
amueller/scipy-2017-sklearn
cc0-1.0
<div class="alert alert-success"> <b>EXERCISE</b>: <ul> <li> Rerun the same analysis with all the other digits </li> </ul> </div>
# %load solutions/22_C-anomaly_digits.py
notebooks/22.Unsupervised_learning-anomaly_detection.ipynb
amueller/scipy-2017-sklearn
cc0-1.0
Select Directories and file paths
start_year = interactive(return_input,
                         value=widgets.Dropdown(
                             options=[2009, 2010, 2011, 2012, 2013],
                             value=2009,
                             description='Select start year:',
                             disabled=False))

end_year = interactive(return_input,
                       value=widgets.Dropdown(
                           options=[2011, 2012, 2013, 2014, 2015, 2016],
                           value=2015,
                           description='Select end year:',
                           disabled=False))

from IPython.display import display
display(start_year, end_year)
print(start_year.result, end_year.result)

test_widget = core.jupyter_eventhandlers.MultiCheckboxWidget(
    ['Bottenfauna', 'Växtplankton', 'Siktdjup', 'Näringsämnen'])
test_widget  # Display the widget

if __name__ == '__main__':
    nr_marks = 60
    print('='*nr_marks)
    print('Running module "lv_test_file.py"')
    print('-'*nr_marks)
    print('')

    # root_directory = os.path.dirname(os.path.abspath(__file__))  # works in a script
    root_directory = os.getcwd()  # works in a notebook
    resources_directory = root_directory + '/resources'
    filter_directory = root_directory + '/workspaces/default/filters'
    data_directory = root_directory + '/workspaces/default/data'

    # est_core.StationList(root_directory + '/test_data/Stations_inside_med_typ_attribute_table_med_delar_av_utsjö.txt')
    core.ParameterList()

    #--------------------------------------------------------------------------
    print('{}\nSet directories and file paths'.format('*'*nr_marks))
    raw_data_file_path = data_directory + '/raw_data/data_BAS_2000-2009.txt'
    first_filter_data_directory = data_directory + '/filtered_data'
    first_data_filter_file_path = filter_directory + '/selection_filters/first_data_filter.txt'
    winter_data_filter_file_path = filter_directory + '/selection_filters/winter_data_filter.txt'
    summer_data_filter_file_path = filter_directory + '/selection_filters/summer_data_filter.txt'
    tolerance_filter_file_path = filter_directory + '/tolerance_filters/tolerance_filter_template.txt'
.ipynb_checkpoints/lv_notebook-checkpoint.ipynb
ekostat/ekostat_calculator
mit
Set up filters. TODO: Store selection filters as attributes, or something that allows us to avoid calling them before calculating indicators/quality factors.
print('{}\nInitiating filters'.format('*'*nr_marks))
first_filter = core.DataFilter('First filter', file_path=first_data_filter_file_path)

winter_filter = core.DataFilter('winter_filter', file_path=winter_data_filter_file_path)
winter_filter.save_filter_file(filter_directory + '/selection_filters/winter_data_filter_save.txt')  # method available

summer_filter = core.DataFilter('summer_filter', file_path=summer_data_filter_file_path)
summer_filter.save_filter_file(filter_directory + '/selection_filters/summer_data_filter_save.txt')  # method available

tolerance_filter = core.ToleranceFilter('test_tolerance_filter', file_path=tolerance_filter_file_path)
print('done\n{}.'.format('*'*nr_marks))
.ipynb_checkpoints/lv_notebook-checkpoint.ipynb
ekostat/ekostat_calculator
mit
Load reference values
print('{}\nLoading reference values'.format('*'*nr_marks))
core.RefValues()
core.RefValues().add_ref_parameter_from_file('DIN_winter', resources_directory + '/classboundaries/nutrients/classboundaries_din_vinter.txt')
core.RefValues().add_ref_parameter_from_file('TOTN_winter', resources_directory + '/classboundaries/nutrients/classboundaries_totn_vinter.txt')
core.RefValues().add_ref_parameter_from_file('TOTN_summer', resources_directory + '/classboundaries/nutrients/classboundaries_totn_summer.txt')
print('done\n{}.'.format('*'*nr_marks))
.ipynb_checkpoints/lv_notebook-checkpoint.ipynb
ekostat/ekostat_calculator
mit
Data: select data and create a DataHandler instance.
# Handler (raw data)
raw_data = core.DataHandler('raw')
raw_data.add_txt_file(raw_data_file_path, data_type='column')
.ipynb_checkpoints/lv_notebook-checkpoint.ipynb
ekostat/ekostat_calculator
mit
Apply filters to selected data
# Use the first filter
filtered_data = raw_data.filter_data(first_filter)

# Save filtered data (first filter) as a test
filtered_data.save_data(first_filter_data_directory)

# Load filtered data (first filter) as a test
loaded_filtered_data = core.DataHandler('first_filtered')
loaded_filtered_data.load_data(first_filter_data_directory)
.ipynb_checkpoints/lv_notebook-checkpoint.ipynb
ekostat/ekostat_calculator
mit
Calculate quality elements: create an instance of the NP QualityFactor class.
qf_NP = core.QualityFactorNP()
# use set_data_handler to load the selected data into the QualityFactor
qf_NP.set_data_handler(data_handler=loaded_filtered_data)
.ipynb_checkpoints/lv_notebook-checkpoint.ipynb
ekostat/ekostat_calculator
mit
Filter parameters in QualityFactorNP THIS SHOULD BE DEFAULT
print('{}\nApply season filters to parameters in QualityFactor\n'.format('*'*nr_marks))

# First general filter
qf_NP.filter_data(data_filter_object=first_filter)

# Winter filter
qf_NP.filter_data(data_filter_object=winter_filter, indicator='TOTN_winter')
qf_NP.filter_data(data_filter_object=winter_filter, indicator='DIN_winter')

# Summer filter
qf_NP.filter_data(data_filter_object=summer_filter, indicator='TOTN_summer')

print('done\n{}.'.format('*'*nr_marks))
.ipynb_checkpoints/lv_notebook-checkpoint.ipynb
ekostat/ekostat_calculator
mit
Calculate Quality Factor EQR
print('{}\nApply tolerance filters to all indicators in QualityFactor and get result\n'.format('*'*nr_marks))
qf_NP.get_EQR(tolerance_filter)
print(qf_NP.class_result)
print('-'*nr_marks)
print('done')
print('-'*nr_marks)
.ipynb_checkpoints/lv_notebook-checkpoint.ipynb
ekostat/ekostat_calculator
mit
Creating features Next, we need to create a set of features for the development set. py_entitymatching provides a way to automatically generate features based on the attributes in the input tables. For the purposes of this guide, we use the automatically generated features.
# Generate features
feature_table = em.get_features_for_matching(A, B)
notebooks/guides/end_to_end_em_guides/.ipynb_checkpoints/Basic EM Workflow Restaurants - 2-checkpoint.ipynb
anhaidgroup/py_entitymatching
bsd-3-clause
Selecting the best matcher using cross-validation Now, we select the best matcher using k-fold cross-validation. For the purposes of this guide, we use five-fold cross-validation and the 'precision' and 'recall' metrics to select the best matcher.
# Select the best ML matcher using CV
result = em.select_matcher([dt, rf, svm, ln, lg, nb], table=H,
                           exclude_attrs=['_id', 'ltable_id', 'rtable_id', 'gold'],
                           k=5,
                           target_attr='gold', metric='precision', random_state=0)
result['cv_stats']

result = em.select_matcher([dt, rf, svm, ln, lg, nb], table=H,
                           exclude_attrs=['_id', 'ltable_id', 'rtable_id', 'gold'],
                           k=5,
                           target_attr='gold', metric='recall', random_state=0)
result['cv_stats']
notebooks/guides/end_to_end_em_guides/.ipynb_checkpoints/Basic EM Workflow Restaurants - 2-checkpoint.ipynb
anhaidgroup/py_entitymatching
bsd-3-clause
1. Model-based parametric regression 1.1. The regression problem Given an observation vector ${\bf x}$, the goal of the regression problem is to find a function $f({\bf x})$ providing good predictions about some unknown variable $s$. To do so, we assume that a set of labelled training examples, ${{\bf x}^{(k)}, s^{(k)}}_{k=1}^K$ is available. The predictor function should make good predictions for new observations ${\bf x}$ not used during training. In practice, this is tested using a second set (the test set) of labelled samples. 1.2. The i.i.d. assumption Most regression algorithms are grounded on the idea that all samples from the training and test sets have been generated independently by some common stochastic process. This is the reason why a model adjusted with the training data can make good predictions over new test samples. Mathematically, this means that all pairs $({\bf x}^{(k)}, s^{(k)})$ from the training and test sets are independent and identically distributed (i.i.d.) samples from some distribution $p_{{\bf X}, S}({\bf x}, s)$. Unfortunately, this distribution is generally unknown. <img src="figs/DataModel.png", width=180> NOTE: In the following, we will use capital letters, ${\bf X}$, $S$, ..., to denote random variables, and lower-case letters ${\bf x}$, s, ..., to the denote the values they can take. When there is no ambigüity, we will remove subindices of the density functions, $p_{{\bf X}, S}({\bf x}, s)= p({\bf x}, s)$ to simplify the mathematical notation. 1.3. Model-based regression If $p({\bf x}, s)$ were know, we could apply estimation theory to estimate $s$ from $p$. 
For instance, we could apply any of the following classical estimates: Maximum A Posterior (MAP): $$\hat{s}_{\text{MAP}} = \arg\max_s p(s| {\bf x})$$ Minimum Mean Square Error (MSE): $$\hat{s}_{\text{MSE}} = \mathbb{E}{S |{\bf x}}$$ Note that, since these estimators depend on $p(s |{\bf x})$, knowing the posterior distribution of the target variable is enough, and we do not need to know the joint distribution. Model based-regression methods exploit the idea of using the training data to estimate the posterior distribution $p(s|{\bf x})$ and then apply estimation theory to make predictions. <img src="figs/ModelBasedReg.png", width=280> 1.4. Parametric model-based regression How can we estimate the posterior probability function of the target variable $s$? In this section we will explore a parametric estimation method: let us assume that $p$ belongs to a parametric family of distributions $p(s|{\bf x},{\bf w})$, where ${\bf w}$ is some unknown parameter. We will use the training data to estimate ${\bf w}$ <img src="figs/ParametricReg.png", width=300> The estimation of ${\bf w}$ from a given dataset $\mathcal{D}$ is the goal of the following sections 1.5. Maximum Likelihood parameter estimation. The ML (Maximum Likelihood) principle is well-known in statistics and can be stated as follows: take the value of the parameter to be estimated (in our case, ${\bf w}$) that best explains the given observations (in our case, the training dataset $\mathcal{D}$). Mathematically, this can be expressed as follows: $$ \hat{\bf w}{\text{ML}} = \arg \max{\bf w} p(\mathcal{D}|{\bf w}) $$ To be more specific: let us group the target variables into a vector $$ {\bf s} = \left(s^{(1)}, \dots, s^{(K)}\right)^\top $$ and the input vectors into a matrix $$ {\bf X} = \left({\bf x}^{(1)}, \dots, {\bf x}^{(K)}\right)^\top $$ To compute the likelihood function we will assume that the inputs do not depend on ${\bf w}$ (i.e., $p({\bf x}|{\bf w}) = p({\bf x})$. 
This means that ${\bf w}$ is a parameter of the posterior distribution of $s$ and not a parameter of the marginal distribution. Then we can write
$$p(\mathcal{D}|{\bf w}) = p({\bf s}, {\bf X}|{\bf w}) = p({\bf s} | {\bf X}, {\bf w}) p({\bf X}|{\bf w}) = p({\bf s} | {\bf X}, {\bf w}) p({\bf X})$$
and we can express the estimation problem as the computation of
$$
\hat{\bf w}_{\text{ML}} = \arg \max_{\bf w} p({\bf s}|{\bf X},{\bf w})
$$

1.6. Summary

Let's summarize what we need to do in order to design a regression algorithm:

- Assume a parametric data model $p(s| {\bf x},{\bf w})$.
- Using the data model and the i.i.d. assumption, compute $p({\bf s}| {\bf X},{\bf w})$.
- Find an expression for ${\bf w}_{\text{ML}}$.
- Assuming ${\bf w} = {\bf w}_{\text{ML}}$, compute the MAP or the minimum MSE estimate of $s$ given ${\bf x}$.

2. ML estimation for a Gaussian model

2.1. Step 1: The Gaussian generative model

Let us assume that the target variables $s^{(k)}$ in dataset $\mathcal{D}$ are given by
$$
s^{(k)} = {\bf w}^\top {\bf z}^{(k)} + \varepsilon^{(k)}
$$
where ${\bf z}^{(k)}$ is the result of some transformation of the inputs, ${\bf z}^{(k)} = T({\bf x}^{(k)})$, and the $\varepsilon^{(k)}$ are i.i.d. instances of a Gaussian random variable with mean zero and variance $\sigma_\varepsilon^2$, i.e.,
$$
p_E(\varepsilon) = \frac{1}{\sqrt{2\pi}\sigma_\varepsilon} \exp\left(-\frac{\varepsilon^2}{2\sigma_\varepsilon^2}\right)
$$

Assuming that the noise variables are independent of ${\bf x}$ and ${\bf w}$, then, for a given ${\bf x}$ and ${\bf w}$, the target variable is Gaussian with mean ${\bf w}^\top {\bf z}$ and variance $\sigma_\varepsilon^2$:
$$
p(s|{\bf x}, {\bf w}) = p_E(s-{\bf w}^\top{\bf z}) = \frac{1}{\sqrt{2\pi}\sigma_\varepsilon} \exp\left(-\frac{(s-{\bf w}^\top{\bf z})^2}{2\sigma_\varepsilon^2}\right)
$$

2.2. Step 2: Likelihood function

Now we need to compute the likelihood function $p({\bf s} | {\bf X}, {\bf w})$. Since the samples are i.i.d.,
we can write
$$
p({\bf s}| {\bf X}, {\bf w}) = \prod_{k=1}^{K} p(s^{(k)}| {\bf x}^{(k)}, {\bf w}) = \prod_{k=1}^{K} \frac{1}{\sqrt{2\pi}\sigma_\varepsilon} \exp\left(-\frac{\left(s^{(k)}-{\bf w}^\top{\bf z}^{(k)}\right)^2}{2\sigma_\varepsilon^2}\right) = \left(\frac{1}{\sqrt{2\pi}\sigma_\varepsilon}\right)^K \exp\left(-\sum_{k=1}^K \frac{\left(s^{(k)}-{\bf w}^\top{\bf z}^{(k)}\right)^2}{2\sigma_\varepsilon^2}\right)
$$

Finally, grouping the variables ${\bf z}^{(k)}$ in
$$
{\bf Z} = \left({\bf z}^{(1)}, \dots, {\bf z}^{(K)}\right)^\top
$$
we get
$$
p({\bf s}| {\bf X}, {\bf w}) = \left(\frac{1}{\sqrt{2\pi}\sigma_\varepsilon}\right)^K \exp\left(-\frac{1}{2\sigma_\varepsilon^2}\|{\bf s}-{\bf Z}{\bf w}\|^2\right)
$$

2.3. Step 3: ML estimation

The <b>maximum likelihood</b> solution is then given by:
$$
{\bf w}_{\text{ML}} = \arg \max_{\bf w} p({\bf s}|{\bf X},{\bf w}) = \arg \min_{\bf w} \|{\bf s} - {\bf Z}{\bf w}\|^2
$$

Note that this is exactly the same optimization problem as in the Least Squares (LS) regression algorithm. The solution is
$$
{\bf w}_{\text{ML}} = ({\bf Z}^\top{\bf Z})^{-1}{\bf Z}^\top{\bf s}
$$

2.4. Step 4: Prediction function

The last step consists of computing an estimate of $s$ by assuming that the true value of the weight parameters is ${\bf w}_\text{ML}$. In particular, the minimum MSE estimate is
$$
\hat{s}_\text{MSE} = \mathbb{E}\{s|{\bf x},{\bf w}_\text{ML}\}
$$

Knowing that, given ${\bf x}$ and ${\bf w}$, $s$ is normally distributed with mean ${\bf w}^\top {\bf z}$, we can write
$$
\hat{s}_\text{MSE} = {\bf w}_\text{ML}^\top {\bf z}
$$

Exercise 1: Assume that the targets in the one-dimensional dataset given by
import numpy as np

X = np.array([0.15, 0.41, 0.53, 0.80, 0.89, 0.92, 0.95])
s = np.array([0.09, 0.16, 0.63, 0.44, 0.55, 0.82, 0.95])
R4.ML_Regression/.ipynb_checkpoints/Regression_ML_student-checkpoint.ipynb
ML4DS/ML4all
mit
1.1. Represent a scatter plot of the data points
# <SOL> # </SOL>
R4.ML_Regression/.ipynb_checkpoints/Regression_ML_student-checkpoint.ipynb
ML4DS/ML4all
mit
1.2. Compute the ML estimate.
# Note that, to use lstsq, the input matrix must be K x 1 Xcol = X[:,np.newaxis] # Compute the ML estimate using linalg.lstsq from Numpy. # wML = <FILL IN>
R4.ML_Regression/.ipynb_checkpoints/Regression_ML_student-checkpoint.ipynb
ML4DS/ML4all
mit
1.3. Plot the likelihood as a function of parameter $w$ along the interval $-0.5\le w \le 2$, verifying that the ML estimate takes the maximum value.
K = len(s) wGrid = np.arange(-0.5, 2, 0.01) p = [] for w in wGrid: d = s - X*w # p.append(<FILL IN>) # Compute the likelihood for the ML parameter wML # d = <FILL IN> # pML = [<FILL IN>] # Plot the likelihood function and the optimal value plt.figure() plt.plot(wGrid, p) plt.stem(wML, pML) plt.xlabel('$w$') plt.ylabel('Likelihood function') plt.show()
R4.ML_Regression/.ipynb_checkpoints/Regression_ML_student-checkpoint.ipynb
ML4DS/ML4all
mit
1.4. Plot the prediction function over the data scatter plot
x = np.arange(0, 1.2, 0.01) # sML = <FILL IN> plt.figure() plt.scatter(X, s) # plt.plot(<FILL IN>) plt.xlabel('x') plt.ylabel('s') plt.axis('tight') plt.show()
R4.ML_Regression/.ipynb_checkpoints/Regression_ML_student-checkpoint.ipynb
ML4DS/ML4all
mit
Exercise 2: Assume the dataset $\mathcal{D} = \{(x^{(k)}, s^{(k)}),\ k=1,\ldots, K\}$ contains i.i.d. samples from a distribution with posterior density given by
$$
p(s|x, w) = w x \exp(- w x s), \qquad s\ge0, \,\, x\ge 0, \,\, w\ge 0
$$

2.1. Determine an expression for the likelihood function

Solution:
<SOL>
</SOL>

2.2. Draw the likelihood function for the dataset in Exercise 1 in the range $0\le w\le 6$.
K = len(s) wGrid = np.arange(0, 6, 0.01) p = [] Px = np.prod(X) xs = np.dot(X,s) for w in wGrid: # p.append(<FILL IN>) plt.figure() # plt.plot(<FILL IN>) plt.xlabel('$w$') plt.ylabel('Likelihood function') plt.show()
R4.ML_Regression/.ipynb_checkpoints/Regression_ML_student-checkpoint.ipynb
ML4DS/ML4all
mit
2.3. Determine the coefficient $w_\text{ML}$ of the linear prediction function (i.e., using ${\bf Z}={\bf X}$). (Hint: you can maximize the log of the likelihood function instead of the likelihood function in order to simplify the differentiation) Solution: <SOL> </SOL> 2.4. Compute $w_\text{ML}$ for the dataset in Exercise 1
# wML = <FILL IN> print(wML)
R4.ML_Regression/.ipynb_checkpoints/Regression_ML_student-checkpoint.ipynb
ML4DS/ML4all
mit
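The closed-form solution derived in Step 3 above can be checked numerically on synthetic data. The sketch below (not part of the original notebook) compares the normal-equations formula ${\bf w}_{\text{ML}} = ({\bf Z}^\top{\bf Z})^{-1}{\bf Z}^\top{\bf s}$ against numpy's least-squares solver; the data model and parameter values are illustrative:

```python
import numpy as np

rng = np.random.RandomState(0)
K = 200
# Design matrix Z with a bias column and one random feature
Z = np.column_stack([np.ones(K), rng.rand(K)])
w_true = np.array([0.5, 2.0])
s = Z @ w_true + 0.1 * rng.randn(K)   # targets with Gaussian noise

# ML estimate via the normal equations
w_ml = np.linalg.solve(Z.T @ Z, Z.T @ s)

# Same estimate via numpy's least-squares solver
w_ls, *_ = np.linalg.lstsq(Z, s, rcond=None)

print(w_ml, w_ls)
```

Both routes give the same weights, confirming that the ML problem under the Gaussian model is exactly the LS problem.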
Series Validator
# Create a validator instance
validator = pv.IntegerSeriesValidator(min_value=0, max_value=10)

series = pd.Series([0, 3, 6, 9])
# This series is valid.
print(validator.is_valid(series))

series = pd.Series([0, 4, 8, 12])
# This series is invalid because it includes the number 12, which exceeds max_value.
print(validator.is_valid(series))
example/pandas_validator_example_en.ipynb
c-bata/pandas-validator
mit
DataFrame Validator

The DataFrameValidator class can validate pandas DataFrame objects. Validators are defined declaratively, in a style similar to Django's model definitions.
# Define validator
class SampleDataFrameValidator(pv.DataFrameValidator):
    row_num = 5
    column_num = 2
    label1 = pv.IntegerColumnValidator('label1', min_value=0, max_value=10)
    label2 = pv.FloatColumnValidator('label2', min_value=0, max_value=10)

# Create a validator instance
validator = SampleDataFrameValidator()

df = pd.DataFrame({'label1': [0, 1, 2, 3, 4], 'label2': [5.0, 6.0, 7.0, 8.0, 9.0]})
# This data frame is valid.
print(validator.is_valid(df))

df = pd.DataFrame({'label1': [11, 12, 13, 14, 15], 'label2': [5.0, 6.0, 7.0, 8.0, 9.0]})
# This data frame is invalid: the label1 values exceed max_value.
print(validator.is_valid(df))

df = pd.DataFrame({'label1': [0, 1, 2], 'label2': [5.0, 6.0, 7.0]})
# This data frame is invalid: it has 3 rows instead of the required 5.
print(validator.is_valid(df))
example/pandas_validator_example_en.ipynb
c-bata/pandas-validator
mit
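For comparison, the same kind of range check can be written with plain pandas, without the pandas-validator library. A minimal sketch (the function name and behavior here are our own, not part of either library's API):

```python
import pandas as pd

def validate_int_range(series, min_value, max_value):
    """Return True if the series has an integer dtype and every entry
    lies within [min_value, max_value]."""
    return bool(
        pd.api.types.is_integer_dtype(series)
        and series.between(min_value, max_value).all()
    )

print(validate_int_range(pd.Series([0, 3, 6, 9]), 0, 10))   # valid
print(validate_int_range(pd.Series([0, 4, 8, 12]), 0, 10))  # 12 is out of range
```

The library's value over such ad-hoc checks is mainly the declarative, reusable validator classes shown above.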
Step 2: Fetch BTE response mapping file using BTE client
from biothings_explorer.registry import Registry reg = Registry() mapping = reg.registry['DISEASES']['mapping'] mapping
jupyter notebooks/Demo of JSON Transform.ipynb
biothings/biothings_explorer
apache-2.0
Step 3: Transform the output from DISEASES API to Biolink model
from biothings_explorer.json_transformer import Transformer tf = Transformer(json_doc, mapping) tf.transform()
jupyter notebooks/Demo of JSON Transform.ipynb
biothings/biothings_explorer
apache-2.0
Use Case 2: Stanford Biosample API

Step 1: Fetch API response from Stanford Biosample API
import requests json_doc = requests.get('http://api.kp.metadatacenter.org/biosample/search?q=biolink:Disease=MONDO:0007915&limit=10').json() json_doc
jupyter notebooks/Demo of JSON Transform.ipynb
biothings/biothings_explorer
apache-2.0
Step 2: Fetch BTE response mapping file using BTE client
from biothings_explorer.registry import Registry reg = Registry() mapping = reg.registry['stanford_biosample_disease2sample']['mapping'] mapping
jupyter notebooks/Demo of JSON Transform.ipynb
biothings/biothings_explorer
apache-2.0
Step 3: Transform the output from Stanford Biosample API to Biolink model
from biothings_explorer.json_transformer import Transformer tf = Transformer(json_doc=json_doc, mapping=mapping) tf.transform()
jupyter notebooks/Demo of JSON Transform.ipynb
biothings/biothings_explorer
apache-2.0
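The idea behind the mapping-driven transforms above can be illustrated with a toy stand-in. This is not the real biothings_explorer Transformer API — just a sketch of renaming fields of a flat JSON document according to a mapping dict, with made-up field names:

```python
def toy_transform(doc, mapping):
    """Rename the keys of a flat JSON-like dict according to mapping {old: new}.
    Keys not present in the mapping are kept unchanged."""
    return {mapping.get(k, k): v for k, v in doc.items()}

doc = {"doid": "DOID:678", "name": "some disease"}
mapping = {"doid": "id", "name": "label"}
print(toy_transform(doc, mapping))  # {'id': 'DOID:678', 'label': 'some disease'}
```

The real Transformer additionally handles nested documents and Biolink semantic types, but the core operation is this kind of declarative key remapping.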
Downloading RR Lyrae Data

gatspy includes loaders for some convenient datasets. Here we'll take a look at some RR Lyrae light curves from Sesar 2010:
from gatspy.datasets import fetch_rrlyrae rrlyrae = fetch_rrlyrae()
examples/SingleBand.ipynb
astroML/gatspy
bsd-2-clause
This dataset object has an ids attribute showing the light curve ids available, and a get_lightcurve method which returns the light curve:
len(rrlyrae.ids) lcid = rrlyrae.ids[0] t, y, dy, filts = rrlyrae.get_lightcurve(lcid)
examples/SingleBand.ipynb
astroML/gatspy
bsd-2-clause
Let's quickly visualize this data, to see what we're working with:
for filt in 'ugriz': mask = (filts == filt) plt.errorbar(t[mask], y[mask], dy[mask], fmt='.', label=filt) plt.gca().invert_yaxis() plt.legend(ncol=3, loc='upper left');
examples/SingleBand.ipynb
astroML/gatspy
bsd-2-clause
The dataset has a metadata attribute, which can tell us things like the period (as determined by Sesar 2010):
period = rrlyrae.get_metadata(lcid)['P'] period
examples/SingleBand.ipynb
astroML/gatspy
bsd-2-clause
Let's fold the lightcurve on this period and re-plot the points:
phase = t % period for filt in 'ugriz': mask = (filts == filt) plt.errorbar(phase[mask], y[mask], dy[mask], fmt='.', label=filt) plt.gca().invert_yaxis() plt.legend(ncol=3, loc='upper left');
examples/SingleBand.ipynb
astroML/gatspy
bsd-2-clause
Now the characteristic shape of the RR Lyrae light curve can be seen!

Lomb-Scargle Fits to the Data

Lomb-Scargle Period Finding

We can now use the Lomb-Scargle periodogram to attempt to fit the period. We'll use the classic (single-band) version and examine the best period for each band individually. Because of the large (~10 year) baseline, the fit takes a few seconds per band:
from gatspy.periodic import LombScargle, LombScargleFast # LombScargleFast is slightly faster than LombScargle for the simplest case # Both have the same interface. model = LombScargleFast() model.optimizer.set(quiet=True, period_range=(0.2, 1.2)) print("Sesar 2010:", period) for filt in 'ugriz': mask = (filts == filt) model.fit(t[mask], y[mask], dy[mask]) print(filt + ': ', model.best_period)
examples/SingleBand.ipynb
astroML/gatspy
bsd-2-clause
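The core idea of the periodogram can be sketched in plain numpy: at each trial period, fit a single sinusoid by least squares and record how much it reduces the residuals relative to a constant model. This is a simplified stand-in for gatspy's implementation (no noise weighting, no fast algorithm), run here on synthetic data with an assumed true period of 0.6:

```python
import numpy as np

rng = np.random.RandomState(42)
t = 100 * rng.rand(300)                        # irregular sampling times
true_period = 0.6
y = np.sin(2 * np.pi * t / true_period) + 0.1 * rng.randn(300)

periods = np.linspace(0.4, 1.0, 3000)
power = np.empty_like(periods)
for i, P in enumerate(periods):
    omega = 2 * np.pi / P
    # Least-squares fit of offset + sin + cos at this trial frequency
    X = np.column_stack([np.ones_like(t), np.sin(omega * t), np.cos(omega * t)])
    resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    # Fraction of variance explained by the sinusoid
    power[i] = 1 - np.sum(resid**2) / np.sum((y - y.mean())**2)

best = periods[np.argmax(power)]
print(best)
```

The recovered best period lands on the injected value, and the narrow peak width is the same phenomenon discussed below for the real data.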
We see that within each band, the Lomb-Scargle periodogram lands on the correct peak. The model.best_period attribute hides some computation: to find the best period, the model's optimizer determines the required resolution for a linear grid search and computes the periodogram at each of these values. We can view the periodogram for each model by calling the periodogram method. We'll put this in a function for later use:
def plot_periodogram(model, lcid): plt.figure() rrlyrae = fetch_rrlyrae() t, y, dy, filts = rrlyrae.get_lightcurve(lcid) period = rrlyrae.get_metadata(lcid)['P'] periods = np.linspace(0.2, 1.0, 5000) for i, filt in enumerate('ugriz'): mask = (filts == filt) model.fit(t[mask], y[mask], dy[mask]) power = model.periodogram(periods) plt.plot(periods, i + power, lw=1, label=filt) plt.xlim(periods[0], periods[-1]) plt.ylim(0, 5) plt.legend(ncol=3) plot_periodogram(LombScargle(), lcid)
examples/SingleBand.ipynb
astroML/gatspy
bsd-2-clause
The periodogram has very narrow peaks, of width approximately $\Delta P \approx P^2 / T$, where $T = (t_{max} - t_{min})$ is the baseline of the measurements. For 10 years of data, the width near the 0.6 day peak turns out to be around $10^{-4}$ days (about 8 seconds), which is why we need such fine sampling of the periodogram to see the peaks.

Lomb-Scargle Model Fitting

The Lomb-Scargle model corresponds to a single-term sinusoid fit to the data. We can plot these model fits using the predict() method. We'll encapsulate this in a function in order to reuse it below:
def plot_model(model, lcid): plt.figure() rrlyrae = fetch_rrlyrae() t, y, dy, filts = rrlyrae.get_lightcurve(lcid) period = rrlyrae.get_metadata(lcid)['P'] phase = t % period tfit = np.linspace(0, period, 100) for filt in 'ugriz': mask = (filts == filt) pts = plt.errorbar(phase[mask], y[mask], dy[mask], fmt='.', label=filt) model.fit(t[mask], y[mask], dy[mask]) yfit = model.predict(tfit, period=period) plt.plot(tfit, yfit, color=pts[0].get_color(), alpha=0.5) plt.gca().invert_yaxis() plt.legend(ncol=3, loc='upper left'); plot_model(LombScargle(), lcid)
examples/SingleBand.ipynb
astroML/gatspy
bsd-2-clause
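As an aside, the peak-width estimate quoted earlier ($\Delta P \approx P^2/T$, giving roughly $10^{-4}$ days near the 0.6 day peak) can be verified with quick arithmetic, assuming an approximate period of 0.61 days and a 10-year baseline:

```python
P = 0.61          # days, approximate RR Lyrae period
T = 10 * 365.25   # days, ~10 year baseline
dP = P**2 / T
print(dP, dP * 86400)   # ~1e-4 days, i.e. under 10 seconds
```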
The model is clearly fairly biased: that is, it doesn't have enough flexibility to truly fit the data. We can do a bit better by adding more terms to the Fourier series with the Nterms parameter
plot_model(LombScargle(Nterms=4), lcid)
examples/SingleBand.ipynb
astroML/gatspy
bsd-2-clause
This model can also be used within the periodogram, using the same interface as above:
plot_periodogram(LombScargle(Nterms=4), lcid)
examples/SingleBand.ipynb
astroML/gatspy
bsd-2-clause
SuperSmoother Periodogram and Fits

The package includes the SuperSmoother algorithm using the same API. Here we'll plot the periodogram and the model fits for the same data using this model. Because the interface is identical, we can simply swap in the SuperSmoother model in the functions from above:
from gatspy.periodic import SuperSmoother plot_periodogram(SuperSmoother(), lcid) plot_model(SuperSmoother(), lcid)
examples/SingleBand.ipynb
astroML/gatspy
bsd-2-clause
2. Using indexing

Periodic translation can also be performed using array indexing. The operation that makes the index computation periodic is the modulo, implemented by NumPy's % operator. This is how the function is implemented in the ia898 toolbox: ptrans.
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg

def ptrans(f, t):
    # Periodic (circular) translation of a 1-D, 2-D or 3-D array by offset t
    g = np.empty_like(f)
    if f.ndim == 1:
        W = f.shape[0]
        col = np.arange(W)
        g = f[(col - t) % W]
    elif f.ndim == 2:
        H, W = f.shape
        rr, cc = t
        row, col = np.indices(f.shape)
        g = f[(row - rr) % H, (col - cc) % W]
    elif f.ndim == 3:
        Z, H, W = f.shape
        zz, rr, cc = t
        z, row, col = np.indices(f.shape)
        g = f[(z - zz) % Z, (row - rr) % H, (col - cc) % W]
    return g

def ptrans2d(f, t):
    # 2-D periodic translation implemented with four block copies (no index arrays)
    rr, cc = t
    H, W = f.shape
    r = rr % H
    c = cc % W
    g = np.empty_like(f)
    g[:r, :c] = f[H-r:H, W-c:W]
    g[:r, c:] = f[H-r:H, 0:W-c]
    g[r:, :c] = f[0:H-r, W-c:W]
    g[r:, c:] = f[0:H-r, 0:W-c]
    return g

f = mpimg.imread('../data/cameraman.tif')
f5 = ptrans(f, np.array(f.shape)//3)
plt.imshow(f, cmap='gray')
plt.title('Original 2D image - Cameraman')
plt.imshow(f5, cmap='gray')
plt.title('Cameraman periodically translated')
2S2018/09 Translacao periodica.ipynb
robertoalotufo/ia898
mit
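The modulo-indexing trick used by ptrans produces the same result as NumPy's built-in np.roll — a quick check in 1-D and 2-D:

```python
import numpy as np

# 1-D: shift by t with periodic wrap-around
f = np.arange(6)
t = 2
g = f[(np.arange(6) - t) % 6]            # ptrans-style periodic shift
print(g)                                  # [4 5 0 1 2 3]
print(np.array_equal(g, np.roll(f, t)))   # True

# 2-D: shift rows by 1 and columns by 2
f2 = np.arange(12).reshape(3, 4)
row, col = np.indices(f2.shape)
g2 = f2[(row - 1) % 3, (col - 2) % 4]
print(np.array_equal(g2, np.roll(f2, (1, 2), axis=(0, 1))))  # True
```

np.roll is the idiomatic choice in practice; the explicit modulo version is shown here because it generalizes to arbitrary index arithmetic.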
The Leibniz formula states that: $$ 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \frac{1}{9} - \ldots = \frac{\pi}{4} $$ So in other words: $$ \sum^{\infty}_{n = 0}\frac{(-1)^n}{2n + 1} = \frac{\pi}{4} $$ Let's see if we can understand the sum formula first. What we'll do is just take the top part of the fraction $(-1)^n$ and the bottom part $2n + 1$ and plot them seperately for some values of $n$:
with plt.xkcd(): x = np.arange(0, 10, 1) fig, axes = plt.subplots(1, 2, figsize=(10, 4)) for ax in axes: pu.setup_axes(ax) axes[0].plot(x, (-1)**x, 'bo', zorder=10) axes[0].set_xlim(0, 9) axes[0].set_ylim(-5, 5) axes[1].plot(x, (2*x) + 1) axes[1].set_xlim(0, 9) axes[1].set_ylim(0, 10)
leibniz_formula.ipynb
basp/notes
mit
So as $n$ gets bigger we have two things: one term flips between $1$ and $-1$, and the other is just a linear value $y = 2n + 1$ that keeps growing. Now the equation above tells us that if we take a near-infinite sum of these terms we will get closer and closer to the value of $\frac{\pi}{4}$, so let's see if that's true. Below are two lines: one represents $y = \frac{(-1)^n}{2n + 1}$ and the other is the sum of all values of that equation for $n = 0, 1, 2, \ldots, n$. You can see that it (slowly) converges to some value, namely $\frac{\pi}{4}$.
n = np.arange(0, 10, 1) f = lambda x: ((-1)**x) / (2*x + 1) with plt.xkcd(): fig, axes = plt.subplots(1, figsize=(8, 8)) pu.setup_axes(axes, xlim=(-1, 9), ylim=(-0.5, 1.2), yticks=[1], yticklabels=[1], xticks=[1,2,3,4,5,6,7,8]) plt.plot(n, f(n), zorder=10, label='THE INFINITE SERIES') plt.plot(n, [nsum(f, [0, n]) for n in n], label='SUMMATION OF THE SERIES') plt.annotate('THE LEIBNIZ FORMULA FOR PI', (1, 1)) axes.set_aspect(4.0) axes.legend(loc=4)
leibniz_formula.ipynb
basp/notes
mit
Now if we sum up all the terms of that line above for $n = 0, 1, 2, 3, \ldots, n$ we'll get closer and closer to $\frac{\pi}{4}$, so multiplying the sum by 4 gives $\pi$. Using mpmath we can calculate $\pi$ to high precision, using the mp.dps setting to control the number of digits.
leibniz = lambda n: ((-1)**n) / (2 * n + 1) mp.dps = 50 nsum(leibniz, [0, inf]) * 4
leibniz_formula.ipynb
basp/notes
mit
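The same convergence can be checked with nothing but the standard library. A partial sum of the series, multiplied by 4, approaches $\pi$; for an alternating series the error is bounded by the first omitted term, here $\frac{1}{2N+1}$:

```python
import math

N = 100000
# Partial sum of the Leibniz series: sum of (-1)^n / (2n + 1)
partial = sum((-1)**n / (2*n + 1) for n in range(N))
approx = 4 * partial
print(approx, abs(approx - math.pi))
```

With $N = 100000$ terms the error is on the order of $10^{-5}$ — this series converges very slowly compared to other formulas for $\pi$.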
Of course we can compute it symbolically as well. These fractions get pretty crazy real quickly.
leibniz = S('((-1)^n)/(2*n+1)') n = S('n') sum([leibniz.subs(n, i) for i in range(100)])
leibniz_formula.ipynb
basp/notes
mit
Bonus solution - custom repr methods

The __repr__ method returns a string that represents the object, for example when it is displayed in the interactive console. This method can return an arbitrary string.
class CeleCislo(int): def je_sude(self): return self % 2 == 0 def __repr__(self): return "<Cele cislo {}>".format(self) y = 5 y x = CeleCislo(5) x
original/v1/s014-class/ostrava/Feedback_homeworks.ipynb
PyLadiesCZ/pyladies.cz
mit
Compatibility

Thanks to inheritance, compatibility with the built-in int is preserved.
a = 5 b = CeleCislo(7) a + b a - b a < b b >= a
original/v1/s014-class/ostrava/Feedback_homeworks.ipynb
PyLadiesCZ/pyladies.cz
mit
Custom objects and special methods

This wasn't needed for the homework project, but let's try creating our own class that implements the special methods behind Python's operators.
class Pizza:
    def __init__(self, jmeno, ingredience):
        self.jmeno = jmeno
        self.ingredience = ingredience

    def __repr__(self):
        return "<Pizza '{}' na které je '{}'>".format(self.jmeno, self.ingredience)

p = Pizza("salámová", ["sýr", "paprikáš", "suchý salám"])
p
original/v1/s014-class/ostrava/Feedback_homeworks.ipynb
PyLadiesCZ/pyladies.cz
mit
Mathematics

There are special methods for the mathematical operators; their names correspond to the operators/operations used.
class Pizza:
    def __init__(self, jmeno, ingredience):
        self.jmeno = jmeno
        self.ingredience = ingredience

    def __repr__(self):
        return "<Pizza '{}' na které je '{}'>".format(self.jmeno, self.ingredience)

    def __add__(self, other):
        jmeno = self.jmeno + " " + other.jmeno
        ingredience = self.ingredience + other.ingredience
        return Pizza(jmeno, ingredience)

p1 = Pizza("salámová", ["sýr", "paprikáš", "suchý salám"])
p2 = Pizza("hawai", ["máslo", "ananas"])
p1 + p2
p2 - p1  # raises TypeError: __sub__ is not defined
original/v1/s014-class/ostrava/Feedback_homeworks.ipynb
PyLadiesCZ/pyladies.cz
mit
Comparison

There are also special methods for comparisons - one for each operator.
class Pizza:
    def __init__(self, jmeno, ingredience):
        self.jmeno = jmeno
        self.ingredience = ingredience

    def __repr__(self):
        return "<Pizza '{}' na které je '{}'>".format(self.jmeno, self.ingredience)

    def __add__(self, other):
        jmeno = self.jmeno + " " + other.jmeno
        ingredience = self.ingredience + other.ingredience
        return Pizza(jmeno, ingredience)

    def __lt__(self, other):
        return len(self.ingredience) < len(other.ingredience)

p1 = Pizza("salámová", ["sýr", "paprikáš", "suchý salám"])
p2 = Pizza("hawai", ["máslo", "ananas"])
p3 = Pizza("pro chude", ["eidam"])
p4 = p1 + p2

p3 < p1
p3 > p1
p4 < p2
original/v1/s014-class/ostrava/Feedback_homeworks.ipynb
PyLadiesCZ/pyladies.cz
mit
Sorting

Sorting is just a special case of comparison: all elements are compared with each other and ordered according to the results of those individual comparisons. For that it is enough to have at least the __lt__ method implemented.
pizzy = [p1, p2, p3, p4] pizzy sorted(pizzy)
original/v1/s014-class/ostrava/Feedback_homeworks.ipynb
PyLadiesCZ/pyladies.cz
mit
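The same mechanism works for any class — sorted only needs __lt__ between the elements. A minimal standalone example (the Box class is our own illustration, not from the lesson):

```python
class Box:
    def __init__(self, weight):
        self.weight = weight

    def __repr__(self):
        return "Box({})".format(self.weight)

    def __lt__(self, other):
        # sorted() uses this to order Box instances by weight
        return self.weight < other.weight

boxes = [Box(3), Box(1), Box(2)]
print(sorted(boxes))   # [Box(1), Box(2), Box(3)]
```

If you also want the other comparison operators for free, functools.total_ordering can derive them from __lt__ and __eq__.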
The first part of this tutorial shows you how to get the data you need from the FHD output directories. This relies mostly on functions in the fhd_pype module. The function fp.fhd_base() returns the full path to the directory where the FHD output is located. It defaults to the common Aug23 output folder on the MIT cluster, but it can be pointed at another location using fp.set_fhd_base(<new_path>).
print 'The default path: '+fp.fhd_base() fp.set_fhd_base(os.getcwd().strip('scripts')+'katalogss/data') print 'Our path: '+fp.fhd_base()
scripts/source-finding.ipynb
EoRImaging/katalogss
bsd-2-clause
We also need to define the version specifying the name of the run. This is equivalent to the case string in eor_firstpass_versions.pro, or the string following "fhd_" in the output directory name. The katalogss data directory includes a mock FHD output run and the relevant files for a single obsid.
fhd_run = 'mock_run' s = '%sfhd_%s'%(fp.fhd_base(),fhd_run) !ls -R $s
scripts/source-finding.ipynb
EoRImaging/katalogss
bsd-2-clause
Next we want to define the list of obsids we want to process. You can do this manually, or you can easily grab all obsids with deconvolution output for your run using:
obsids = fp.get_obslist(fhd_run) obsids
scripts/source-finding.ipynb
EoRImaging/katalogss
bsd-2-clause
Next we'll grab the source components and metadata for each obsid in the list. Note that if you don't supply the list of obsids, it will automatically run for all obsids.
comps = fp.fetch_comps(fhd_run, obsids=obsids) meta = fp.fetch_meta(fhd_run,obsids=obsids)
scripts/source-finding.ipynb
EoRImaging/katalogss
bsd-2-clause
This returns a dictionary of dictionaries. Since it takes some work to run, let's cache it in a new katalogss output directory.
kgs_out = '%sfhd_%s/katalogss/'%(fp.fhd_base(),fhd_run)
if not os.path.exists(kgs_out): os.mkdir(kgs_out)
print 'saving %scomponents.p' % kgs_out
pickle.dump([comps,meta], open(kgs_out+'components.p','w'))
comps
scripts/source-finding.ipynb
EoRImaging/katalogss
bsd-2-clause
If you need to come back to this later, restore it with comps, meta = pickle.load(open(kgs_out+'components.p'))
#comps, meta = pickle.load(open(kgs_out+'components.p'))
scripts/source-finding.ipynb
EoRImaging/katalogss
bsd-2-clause