| markdown | code | path | repo_name | license |
|---|---|---|---|---|
Indexing Lists (...and Text objects) | text4[173]
text4.index('awaken')
text5[16715:16735]
text6[1600:1625] | 01_language_processing_and_python.ipynb | sandipchatterjee/nltk_book_notes | mit |
Computing with Language: Simple Statistics | saying = 'After all is said and done more is said than done'.split()
tokens = sorted(set(saying))
tokens[-2:] | 01_language_processing_and_python.ipynb | sandipchatterjee/nltk_book_notes | mit |
Frequency Distributions | fdist1 = FreqDist(text1)
print(fdist1)
fdist1.most_common(50)
fdist1['whale'] | 01_language_processing_and_python.ipynb | sandipchatterjee/nltk_book_notes | mit |
50 most frequent words account for almost half of the book | plt.figure(figsize=(18,10))
fdist1.plot(50, cumulative=True)
| 01_language_processing_and_python.ipynb | sandipchatterjee/nltk_book_notes | mit |
Fine-grained Selection of Words
Looking at long words of a text (maybe these will be more meaningful words?) | V = set(text1)
long_words = [w for w in V if len(w) > 15]
sorted(long_words) | 01_language_processing_and_python.ipynb | sandipchatterjee/nltk_book_notes | mit |
words that are longer than 7 characters and occur more than 7 times | fdist5 = FreqDist(text5)
sorted(w for w in set(text5) if len(w) > 7 and fdist5[w] > 7) | 01_language_processing_and_python.ipynb | sandipchatterjee/nltk_book_notes | mit |
collocation - sequence of words that occur together unusually often (red wine is a collocation, vs. the wine is not) | list(nltk.bigrams(['more', 'is', 'said', 'than', 'done'])) # bigrams() returns a generator | 01_language_processing_and_python.ipynb | sandipchatterjee/nltk_book_notes | mit |
collocations are just frequent bigrams -- we want to focus on the cases that involve rare words
collocations() returns bigrams that occur more often than expected, based on word frequency | text4.collocations()
text8.collocations() | 01_language_processing_and_python.ipynb | sandipchatterjee/nltk_book_notes | mit |
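To make the "focus on rare words" idea above concrete, here is a hedged sketch of scoring bigrams explicitly with NLTK's collocation finder and the PMI measure, which favours word pairs that are individually rare but co-occur often. The toy `words` list is made up for illustration, and this is not necessarily the exact scoring that `collocations()` uses internally.

```python
import nltk
from nltk.collocations import BigramCollocationFinder

# Hypothetical token list; any list of word strings (e.g. list(text4)) would work here.
words = ['red', 'wine', 'is', 'a', 'collocation', 'the', 'wine', 'is', 'not']

bigram_measures = nltk.collocations.BigramAssocMeasures()
finder = BigramCollocationFinder.from_words(words)

# Rank candidate bigrams by pointwise mutual information.
print(finder.nbest(bigram_measures.pmi, 5))
```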
counting other things
word length distribution in text1 | [len(w) for w in text1][:10]
fdist = FreqDist(len(w) for w in text1)
print(fdist)
fdist
fdist.most_common()
fdist.max()
fdist[3]
fdist.freq(3) | 01_language_processing_and_python.ipynb | sandipchatterjee/nltk_book_notes | mit |
words of length 3 (~50k) make up ~20% of all words in the book
Back to Python: Making Decisions and Taking Control
skipping basic python stuff
More accurate vocabulary size counting -- convert all strings to lowercase | len(text1)
len(set(text1))
len(set(word.lower() for word in text1)) | 01_language_processing_and_python.ipynb | sandipchatterjee/nltk_book_notes | mit |
Only include alphabetic words -- no punctuation | len(set(word.lower() for word in text1 if word.isalpha())) | 01_language_processing_and_python.ipynb | sandipchatterjee/nltk_book_notes | mit |
Structure Prediction
In this notebook, we aim to test the structure-substitution algorithm implemented in SMACT
Before we can make predictions, we need to create our cation mutator, a database with a table, and a list of hypothetical compositions
Procedure
Create a list of hypothetical compositions
Create a databas... | comps=pd.read_csv("Li-Garnet_Comps_sus.csv")
comps.head() | examples/Structure_Prediction/Li-Garnets_SP-Pym-new.ipynb | WMD-group/SMACT | mit |
Part 2: Database
Let's follow this procedure
1. Create a SMACT database and add a table which contains the result of our query | DB=StructureDB("Test.db")
SP=StructurePredictor(CM, DB, "Garnets") | examples/Structure_Prediction/Li-Garnets_SP-Pym-new.ipynb | WMD-group/SMACT | mit |
Part 3: Cation Mutator
In order to set up the cation mutator, we must do the following:
1. Generate a dataframe of lambda values for all species we wish to consider
2. Instantiate the CationMutator class with the lambda dataframe | #Create the CationMutator class
#Here we use the default lambda table
CM=CationMutator.from_json() | examples/Structure_Prediction/Li-Garnets_SP-Pym-new.ipynb | WMD-group/SMACT | mit |
Part 4: Structure Prediction
Prerequisites: Part 1, Part 2 & Part 3
Procedure:
1. Instantiate the StructurePredictor class with the cation mutator (part 1), database (part 2) and table (part 2)
2. Predict Structures
3. Compare predicted structures with Database | #Corrections
cond_df=CM.complete_cond_probs()
species=list(cond_df.columns)
comps_copy=comps[['A','B','C','D']]
df_copy_bool=comps_copy.isin(species)
x=comps_copy[df_copy_bool].fillna(0)
x=x[x.A != 0]
x=x[x.B != 0]
x=x[x.C != 0]
x=x[x.D != 0]
x=x.reset_index(drop=True)
#x.to_csv("./Garnet_Comps_Corrected_Pym.csv... | examples/Structure_Prediction/Li-Garnets_SP-Pym-new.ipynb | WMD-group/SMACT | mit |
Part 5: Storing the results | #Add predictions to dataframe
import pymatgen as mg
pred_structs=[]
probs=[]
parent_structs=[]
parent_pretty_formula=[]
for i in preds:
if len(i)==0:
pred_structs.append(None)
probs.append(None)
parent_structs.append(None)
parent_pretty_formula.append(None)
else:
pred_st... | examples/Structure_Prediction/Li-Garnets_SP-Pym-new.ipynb | WMD-group/SMACT | mit |
Accessing Text from the Web and from Disk
Electronic Books
Every e-book on Gutenberg has a number; as long as you know the number you can download the book. For example, "Crime and Punishment" is number 2554 and can be fetched as follows. | from urllib import urlopen
url = "http://www.gutenberg.org/files/2554/2554.txt"
raw = urlopen(url).read()
type(raw) | NLP_With_Python/Ch3.ipynb | banyh/ShareIPythonNotebook | gpl-3.0 |
Note: to download through a proxy, you can use
proxy = {'http': 'http://www.someproxy.com:3128'}
raw = urlopen(url, proxies=proxy).read()
The raw we just downloaded is a string and contains many unwanted characters, so it needs tokenization to separate the individual words. | tokens = nltk.word_tokenize(raw)
tokens[:10] | NLP_With_Python/Ch3.ipynb | banyh/ShareIPythonNotebook | gpl-3.0 |
The resulting tokens can be converted into an nltk Text object for further processing. | text = nltk.Text(tokens)
text
text.collocations() | NLP_With_Python/Ch3.ipynb | banyh/ShareIPythonNotebook | gpl-3.0 |
Dealing with HTML | url = "http://news.bbc.co.uk/2/hi/health/2284783.stm"
html = urlopen(url).read()
html[:60]
from bs4 import BeautifulSoup
bs = BeautifulSoup(html, "lxml")
tokens = nltk.word_tokenize(bs.get_text())
tokens[:5] | NLP_With_Python/Ch3.ipynb | banyh/ShareIPythonNotebook | gpl-3.0 |
concordance lists every place where a word occurs | text = nltk.Text(tokens)
text.concordance('gene') | NLP_With_Python/Ch3.ipynb | banyh/ShareIPythonNotebook | gpl-3.0 |
Strings: Lowest Level Text
Strings are an immutable data type: once a value is set it cannot be modified. Every string function therefore returns a copied string instead of modifying the original.
Defining a string with triple quotes """ lets you include line breaks in it. | s = """hello, my
friend"""
print s | NLP_With_Python/Ch3.ipynb | banyh/ShareIPythonNotebook | gpl-3.0 |
str supports the + and * operators: + concatenates two strings, and * repeats a string. | 'must' + 'maybe' + 'hello'
'bug-' * 5 | NLP_With_Python/Ch3.ipynb | banyh/ShareIPythonNotebook | gpl-3.0 |
Using [i] accesses the (i+1)-th character; if the string length is n, i ranges from 0 to n-1.
Using [-i] accesses the i-th character counting from the end; if the string length is n, the index ranges from -1 to -n. | s = 'hello, world'
s[0], s[1], s[:5], s[-5:] | NLP_With_Python/Ch3.ipynb | banyh/ShareIPythonNotebook | gpl-3.0 |
Other common string functions (a short example follows after this cell):
s.find(t): position of the first occurrence of t in s, between 0 and n-1; returns -1 if not found
s.rfind(t): position of the first occurrence of t in s, searching from the right, between 0 and n-1; returns -1 if not found
s.index(t): like s.find(t), but raises ValueError if not found
s.rindex(t): like s.rfind(t), but raises ValueError if not found
s.join([a,b,c]): joins the strings a, b, c with s inserted between them, giving asbsc
s.split(t): splits s into multiple strings using t as the separator; by default t is whitespace
s.splitlines(): splits s into multiple strings at line breaks
s.lower(): converts all ... | path = nltk.data.find('corpora/unicode_samples/polish-lat2.txt')
import codecs
f = codecs.open(path, encoding='latin2')
for line in f:
print line, line.encode('utf-8'), line.encode('unicode_escape'), '\n'
# return the Unicode code points
print ord(u'許'), ord(u'洪'), ord(u'蓋')
print repr(u'許洪蓋') | NLP_With_Python/Ch3.ipynb | banyh/ShareIPythonNotebook | gpl-3.0 |
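A quick sketch exercising the string methods listed above; this is plain Python and independent of the NLTK corpus used in the cell.

```python
s = 'hello, world'

print(s.find('o'))                  # 4: first 'o' from the left
print(s.rfind('o'))                 # 8: first 'o' from the right
print(s.index('o'))                 # 4: like find(), but a missing target raises ValueError
print('-'.join(['a', 'b', 'c']))    # 'a-b-c'
print('a b  c'.split())             # ['a', 'b', 'c']: default separator is whitespace
print('line1\nline2'.splitlines())  # ['line1', 'line2']
print('HeLLo'.lower())              # 'hello'
```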
unicodedata can print the Unicode description of a character | import unicodedata
lines = codecs.open(path, encoding='latin2').readlines()
line = lines[2]
print line.encode('unicode_escape')
for c in line:
if ord(c) > 127:
print '%s U+%04x %s' % (c.encode('utf8'), ord(c), unicodedata.name(c)) | NLP_With_Python/Ch3.ipynb | banyh/ShareIPythonNotebook | gpl-3.0 |
Regular Expressions
.: matches any single character
^abc: the string starts with abc
abc$: the string ends with abc
[abc]: matches any of the characters inside the brackets, e.g. a, b, or c
[A-Z0-9]: matches any character in the given ranges, e.g. A to Z and 0 to 9
ed|ing|s: matches any of the listed alternatives
*: the preceding symbol repeated zero or more times
+: the preceding symbol repeated one or more times
?: the preceding symbol repeated zero or one time
{n}: the preceding symbol repeated exactly n times
{n,}: the preceding symbol repeated at least n times
{,n}: the preceding symbol repeated at most n times
{m,n}: the preceding symbol repeated at least m and at most n times
a(b|c)+: parentheses delimit the scope of the | operator (a short example exercising these patterns follows below) | import re
# re.search normally returns an _sre.SRE_Match object
# converting the object to bool: True means a matching string was found
bool(re.search('ed$', 'played')), bool(re.search('ed$', 'happy'))
# re.findall can extract parts of the string; the extracted parts are determined by the () groups
re.findall('\[\[(.+?)[\]|\|]+', 'My Name is [[Bany]] Hung, this is my [[Dog|Animal]]')
# re.sub replaces matched substrings
re.sub('\[\[.+?[\]|\|]+', '#... | NLP_With_Python/Ch3.ipynb | banyh/ShareIPythonNotebook | gpl-3.0 |
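A short sketch exercising a few of the regular-expression metacharacters listed above, using only the standard re module:

```python
import re

print(bool(re.search('^un', 'unhappy')))           # True: string starts with 'un'
print(bool(re.search('ing$', 'playing')))          # True: string ends with 'ing'
print(bool(re.search('[aeiou]', 'rhythm')))        # False: no vowel character present
print(bool(re.search('lo{2}k', 'look')))           # True: 'o' repeated exactly twice
print(re.findall('[A-Z0-9]+', 'Route 66 to NYC'))  # ['R', '66', 'NYC']
```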
Normalize Text
Normalization means converting text into a uniform format, for example lowercasing or stemming (taking word roots). | porter = nltk.PorterStemmer()
lanc = nltk.LancasterStemmer()
w = ['I','was','playing','television','in','the','painted','garden']
[(a, porter.stem(a), lanc.stem(a)) for a in w] | NLP_With_Python/Ch3.ipynb | banyh/ShareIPythonNotebook | gpl-3.0 |
Before stemming, store the original data in a dict so it is convenient to use. | class IndexedText(object):
def __init__(self, stemmer, text):
self._text = text
self._stemmer = stemmer
self._index = nltk.Index((self._stem(word), i) for (i, word) in enumerate(text))
def concordance(self, word, width=40):
key = self._stem(word)
wc = int(width/4... | NLP_With_Python/Ch3.ipynb | banyh/ShareIPythonNotebook | gpl-3.0 |
Lemmatization | s = ['women', 'are', 'living']
wnl = nltk.WordNetLemmatizer()
zip(s, [wnl.lemmatize(t, 'v') for t in s], [wnl.lemmatize(t, 'n') for t in s]) | NLP_With_Python/Ch3.ipynb | banyh/ShareIPythonNotebook | gpl-3.0 |
Formatting: From Lists to Strings | s = ['We', 'called', 'him', 'Tortoise', 'because', 'he', 'taught', 'us', '.']
' '.join(s)
';'.join(s) | NLP_With_Python/Ch3.ipynb | banyh/ShareIPythonNotebook | gpl-3.0 |
Let's first get familiar with different unsupervised anomaly detection approaches and algorithms. In order to visualise the output of the different algorithms we consider a toy data set consisting of a two-dimensional Gaussian mixture.
Generating the data set | from sklearn.datasets import make_blobs
X, y = make_blobs(n_features=2, centers=3, n_samples=500,
random_state=42)
X.shape
plt.figure()
plt.scatter(X[:, 0], X[:, 1])
plt.show() | notebooks/22.Unsupervised_learning-anomaly_detection.ipynb | amueller/scipy-2017-sklearn | cc0-1.0 |
Anomaly detection with density estimation | from sklearn.neighbors.kde import KernelDensity
# Estimate density with a Gaussian kernel density estimator
kde = KernelDensity(kernel='gaussian')
kde = kde.fit(X)
kde
kde_X = kde.score_samples(X)
print(kde_X.shape) # kde_X contains the log-likelihood of the data; the smaller it is, the rarer the sample
from scipy.stat... | notebooks/22.Unsupervised_learning-anomaly_detection.ipynb | amueller/scipy-2017-sklearn | cc0-1.0 |
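The log-likelihood scores in kde_X can be turned into an outlier flag by thresholding at a chosen quantile. A minimal sketch, assuming kde_X and X from the cells above and an arbitrary 5% contamination level:

```python
import numpy as np

alpha = 0.05  # assumed fraction of samples to flag as anomalous
threshold = np.percentile(kde_X, 100 * alpha)

# Samples whose log-likelihood falls below the 5th percentile are flagged as rare.
outliers = X[kde_X < threshold]
print(outliers.shape)
```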
now with One-Class SVM
The problem with density-based estimation is that it tends to become inefficient as the dimensionality of the data increases. This is the so-called curse of dimensionality, which particularly affects density estimation algorithms. The one-class SVM algorithm can be used in such cases. | from sklearn.svm import OneClassSVM
nu = 0.05 # theory says it should be an upper bound of the fraction of outliers
ocsvm = OneClassSVM(kernel='rbf', gamma=0.05, nu=nu)
ocsvm.fit(X)
X_outliers = X[ocsvm.predict(X) == -1]
Z_ocsvm = ocsvm.decision_function(grid)
Z_ocsvm = Z_ocsvm.reshape(xx.shape)
plt.figure()
c_0 =... | notebooks/22.Unsupervised_learning-anomaly_detection.ipynb | amueller/scipy-2017-sklearn | cc0-1.0 |
Support vectors - Outliers
The so-called support vectors of the one-class SVM form the outliers | X_SV = X[ocsvm.support_]
n_SV = len(X_SV)
n_outliers = len(X_outliers)
print('{0:.2f} <= {1:.2f} <= {2:.2f}?'.format(1./n_samples*n_outliers, nu, 1./n_samples*n_SV)) | notebooks/22.Unsupervised_learning-anomaly_detection.ipynb | amueller/scipy-2017-sklearn | cc0-1.0 |
Only the support vectors are involved in the decision function of the One-Class SVM.
Plot the level sets of the One-Class SVM decision function as we did for the true density.
Emphasize the Support vectors. | plt.figure()
plt.contourf(xx, yy, Z_ocsvm, 10, cmap=plt.cm.Blues_r)
plt.scatter(X[:, 0], X[:, 1], s=1.)
plt.scatter(X_SV[:, 0], X_SV[:, 1], color='orange')
plt.show() | notebooks/22.Unsupervised_learning-anomaly_detection.ipynb | amueller/scipy-2017-sklearn | cc0-1.0 |
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>
**Change** the `gamma` parameter and see its influence on the smoothness of the decision function.
</li>
</ul>
</div> | # %load solutions/22_A-anomaly_ocsvm_gamma.py | notebooks/22.Unsupervised_learning-anomaly_detection.ipynb | amueller/scipy-2017-sklearn | cc0-1.0 |
Isolation Forest
Isolation Forest is an anomaly detection algorithm based on trees. The algorithm builds a number of random trees, and the rationale is that an isolated sample should end up alone in a leaf after very few random splits. Isolation Forest builds a score of abnormality based on the depth of the tree at which ... | from sklearn.ensemble import IsolationForest
iforest = IsolationForest(n_estimators=300, contamination=0.10)
iforest = iforest.fit(X)
Z_iforest = iforest.decision_function(grid)
Z_iforest = Z_iforest.reshape(xx.shape)
plt.figure()
c_0 = plt.contour(xx, yy, Z_iforest,
levels=[iforest.threshold_],
... | notebooks/22.Unsupervised_learning-anomaly_detection.ipynb | amueller/scipy-2017-sklearn | cc0-1.0 |
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>
Illustrate graphically the influence of the number of trees on the smoothness of the decision function?
</li>
</ul>
</div> | # %load solutions/22_B-anomaly_iforest_n_trees.py | notebooks/22.Unsupervised_learning-anomaly_detection.ipynb | amueller/scipy-2017-sklearn | cc0-1.0 |
Illustration on Digits data set
We will now apply the IsolationForest algorithm to spot digits written in an unconventional way. | from sklearn.datasets import load_digits
digits = load_digits() | notebooks/22.Unsupervised_learning-anomaly_detection.ipynb | amueller/scipy-2017-sklearn | cc0-1.0 |
The digits data set consists of images (8 x 8) of digits. | images = digits.images
labels = digits.target
images.shape
i = 102
plt.figure(figsize=(2, 2))
plt.title('{0}'.format(labels[i]))
plt.axis('off')
plt.imshow(images[i], cmap=plt.cm.gray_r, interpolation='nearest')
plt.show() | notebooks/22.Unsupervised_learning-anomaly_detection.ipynb | amueller/scipy-2017-sklearn | cc0-1.0 |
To use the images as a training set we need to flatten the images. | n_samples = len(digits.images)
data = digits.images.reshape((n_samples, -1))
data.shape
X = data
y = digits.target
X.shape | notebooks/22.Unsupervised_learning-anomaly_detection.ipynb | amueller/scipy-2017-sklearn | cc0-1.0 |
Let's focus on digit 5. | X_5 = X[y == 5]
X_5.shape
fig, axes = plt.subplots(1, 5, figsize=(10, 4))
for ax, x in zip(axes, X_5[:5]):
img = x.reshape(8, 8)
ax.imshow(img, cmap=plt.cm.gray_r, interpolation='nearest')
ax.axis('off') | notebooks/22.Unsupervised_learning-anomaly_detection.ipynb | amueller/scipy-2017-sklearn | cc0-1.0 |
Let's use IsolationForest to find the top 5% most abnormal images.
Let's plot them! | from sklearn.ensemble import IsolationForest
iforest = IsolationForest(contamination=0.05)
iforest = iforest.fit(X_5) | notebooks/22.Unsupervised_learning-anomaly_detection.ipynb | amueller/scipy-2017-sklearn | cc0-1.0 |
Compute the level of "abnormality" with iforest.decision_function. The lower, the more abnormal. | iforest_X = iforest.decision_function(X_5)
plt.hist(iforest_X); | notebooks/22.Unsupervised_learning-anomaly_detection.ipynb | amueller/scipy-2017-sklearn | cc0-1.0 |
Let's plot the strongest inliers | X_strong_inliers = X_5[np.argsort(iforest_X)[-10:]]
fig, axes = plt.subplots(2, 5, figsize=(10, 5))
for i, ax in zip(range(len(X_strong_inliers)), axes.ravel()):
ax.imshow(X_strong_inliers[i].reshape((8, 8)),
cmap=plt.cm.gray_r, interpolation='nearest')
ax.axis('off') | notebooks/22.Unsupervised_learning-anomaly_detection.ipynb | amueller/scipy-2017-sklearn | cc0-1.0 |
Let's plot the strongest outliers | fig, axes = plt.subplots(2, 5, figsize=(10, 5))
X_outliers = X_5[iforest.predict(X_5) == -1]
for i, ax in zip(range(len(X_outliers)), axes.ravel()):
ax.imshow(X_outliers[i].reshape((8, 8)),
cmap=plt.cm.gray_r, interpolation='nearest')
ax.axis('off') | notebooks/22.Unsupervised_learning-anomaly_detection.ipynb | amueller/scipy-2017-sklearn | cc0-1.0 |
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>
Rerun the same analysis with all the other digits
</li>
</ul>
</div> | # %load solutions/22_C-anomaly_digits.py | notebooks/22.Unsupervised_learning-anomaly_detection.ipynb | amueller/scipy-2017-sklearn | cc0-1.0 |
Select
Directories and file paths | start_year = interactive(return_input,
value = widgets.Dropdown(
options=[2009, 2010, 2011, 2012, 2013],
value=2009,
description='Select start year:',
disabled=False)
)
end_year = interactive(return_input,
value = widgets.Dropdown(
options=[2011, 2012, 2... | .ipynb_checkpoints/lv_notebook-checkpoint.ipynb | ekostat/ekostat_calculator | mit |
Set up filters
TODO: Store selection filters as attributes or something that allows us to not have to call them before calculating indicators/qualityfactors | print('{}\nInitiating filters'.format('*'*nr_marks))
first_filter = core.DataFilter('First filter', file_path = first_data_filter_file_path)
winter_filter = core.DataFilter('winter_filter', file_path = winter_data_filter_file_path)
winter_filter.save_filter_file(filter_directory + '/selection_filters/wi... | .ipynb_checkpoints/lv_notebook-checkpoint.ipynb | ekostat/ekostat_calculator | mit |
Load reference values | print('{}\nLoading reference values'.format('*'*nr_marks))
core.RefValues()
core.RefValues().add_ref_parameter_from_file('DIN_winter', resources_directory + '/classboundaries/nutrients/classboundaries_din_vinter.txt')
core.RefValues().add_ref_parameter_from_file('TOTN_winter', resources_directory + '/cl... | .ipynb_checkpoints/lv_notebook-checkpoint.ipynb | ekostat/ekostat_calculator | mit |
Data
Select data and create DataHandler instance | # Handler (raw data)
raw_data = core.DataHandler('raw')
raw_data.add_txt_file(raw_data_file_path, data_type='column') | .ipynb_checkpoints/lv_notebook-checkpoint.ipynb | ekostat/ekostat_calculator | mit |
Apply filters to selected data | # Use first filter
filtered_data = raw_data.filter_data(first_filter)
# Save filtered data (first filter) as a test
filtered_data.save_data(first_filter_data_directory)
# Load filtered data (first filter) as a test
loaded_filtered_data = core.DataHandler('first_filtered')
loaded_... | .ipynb_checkpoints/lv_notebook-checkpoint.ipynb | ekostat/ekostat_calculator | mit |
Calculate Quality elements
Create an instance of NP Qualityfactor class | qf_NP = core.QualityFactorNP()
# use set_data_handler to load the selected data to the QualityFactor
qf_NP.set_data_handler(data_handler = loaded_filtered_data) | .ipynb_checkpoints/lv_notebook-checkpoint.ipynb | ekostat/ekostat_calculator | mit |
Filter parameters in QualityFactorNP
THIS SHOULD BE DEFAULT | print('{}\nApply season filters to parameters in QualityFactor\n'.format('*'*nr_marks))
# First general filter
qf_NP.filter_data(data_filter_object = first_filter)
# winter filter
qf_NP.filter_data(data_filter_object = winter_filter, indicator = 'TOTN_winter')
qf_NP.filter_data(data_filter_ob... | .ipynb_checkpoints/lv_notebook-checkpoint.ipynb | ekostat/ekostat_calculator | mit |
Calculate Quality Factor EQR | print('{}\nApply tolerance filters to all indicators in QualityFactor and get result\n'.format('*'*nr_marks))
qf_NP.get_EQR(tolerance_filter)
print(qf_NP.class_result)
print('-'*nr_marks)
print('done')
print('-'*nr_marks) | .ipynb_checkpoints/lv_notebook-checkpoint.ipynb | ekostat/ekostat_calculator | mit |
Creating features
Next, we need to create a set of features for the development set. py_entitymatching provides a way to automatically generate features based on the attributes in the input tables. For the purposes of this guide, we use the automatically generated features. | # Generate features
feature_table = em.get_features_for_matching(A, B) | notebooks/guides/end_to_end_em_guides/.ipynb_checkpoints/Basic EM Workflow Restaurants - 2-checkpoint.ipynb | anhaidgroup/py_entitymatching | bsd-3-clause |
Selecting the best matcher using cross-validation
Now, we select the best matcher using k-fold cross-validation. For the purposes of this guide, we use five-fold cross-validation and the 'precision' and 'recall' metrics to select the best matcher. | # Select the best ML matcher using CV
result = em.select_matcher([dt, rf, svm, ln, lg, nb], table=H,
exclude_attrs=['_id', 'ltable_id', 'rtable_id', 'gold'],
k=5,
target_attr='gold', metric='precision', random_state=0)
result['cv_stats']
result = em.select_matcher([dt, rf, svm, ln, lg, nb], ta... | notebooks/guides/end_to_end_em_guides/.ipynb_checkpoints/Basic EM Workflow Restaurants - 2-checkpoint.ipynb | anhaidgroup/py_entitymatching | bsd-3-clause |
1. Model-based parametric regression
1.1. The regression problem
Given an observation vector ${\bf x}$, the goal of the regression problem is to find a function $f({\bf x})$ providing good predictions about some unknown variable $s$. To do so, we assume that a set of labelled training examples, ${{\bf x}^{(k)}, s^{(k)}... | X = np.array([0.15, 0.41, 0.53, 0.80, 0.89, 0.92, 0.95])
s = np.array([0.09, 0.16, 0.63, 0.44, 0.55, 0.82, 0.95]) | R4.ML_Regression/.ipynb_checkpoints/Regression_ML_student-checkpoint.ipynb | ML4DS/ML4all | mit |
1.1. Represent a scatter plot of the data points | # <SOL>
# </SOL> | R4.ML_Regression/.ipynb_checkpoints/Regression_ML_student-checkpoint.ipynb | ML4DS/ML4all | mit |
1.2. Compute the ML estimate. | # Note that, to use lstsq, the input matrix must be K x 1
Xcol = X[:,np.newaxis]
# Compute the ML estimate using linalg.lstsq from Numpy.
# wML = <FILL IN>
| R4.ML_Regression/.ipynb_checkpoints/Regression_ML_student-checkpoint.ipynb | ML4DS/ML4all | mit |
1.3. Plot the likelihood as a function of parameter $w$ along the interval $-0.5\le w \le 2$, verifying that the ML estimate takes the maximum value. | K = len(s)
wGrid = np.arange(-0.5, 2, 0.01)
p = []
for w in wGrid:
d = s - X*w
# p.append(<FILL IN>)
# Compute the likelihood for the ML parameter wML
# d = <FILL IN>
# pML = [<FILL IN>]
# Plot the likelihood function and the optimal value
plt.figure()
plt.plot(wGrid, p)
plt.stem(wML, pML)
plt.xlabel('$w$')
... | R4.ML_Regression/.ipynb_checkpoints/Regression_ML_student-checkpoint.ipynb | ML4DS/ML4all | mit |
1.4. Plot the prediction function over the data scatter plot | x = np.arange(0, 1.2, 0.01)
# sML = <FILL IN>
plt.figure()
plt.scatter(X, s)
# plt.plot(<FILL IN>)
plt.xlabel('x')
plt.ylabel('s')
plt.axis('tight')
plt.show() | R4.ML_Regression/.ipynb_checkpoints/Regression_ML_student-checkpoint.ipynb | ML4DS/ML4all | mit |
Exercise 2:
Assume the dataset $\mathcal{D} = {(x^{(k)}, s^{(k)}, k=1,\ldots, K}$ contains i.i.d. samples from a distribution with posterior density given by
$$
p(s|x, w) = w x \exp(- w x s), \qquad s\ge0, \,\, x\ge 0, \,\, w\ge 0
$$
2.1. Determine an expression for the likelihood function
Solution:
<SOL>
</SOL>
2.2... | K = len(s)
wGrid = np.arange(0, 6, 0.01)
p = []
Px = np.prod(X)
xs = np.dot(X,s)
for w in wGrid:
# p.append(<FILL IN>)
plt.figure()
# plt.plot(<FILL IN>)
plt.xlabel('$w$')
plt.ylabel('Likelihood function')
plt.show() | R4.ML_Regression/.ipynb_checkpoints/Regression_ML_student-checkpoint.ipynb | ML4DS/ML4all | mit |
2.3. Determine the coefficient $w_\text{ML}$ of the linear prediction function (i.e., using ${\bf Z}={\bf X}$).
(Hint: you can maximize the log of the likelihood function instead of the likelihood function in order to simplify the differentiation)
Solution:
<SOL>
</SOL>
2.4. Compute $w_\text{ML}$ for the dataset in ... | # wML = <FILL IN>
print(wML) | R4.ML_Regression/.ipynb_checkpoints/Regression_ML_student-checkpoint.ipynb | ML4DS/ML4all | mit |
Series Validator | # Create validator's instance
validator = pv.IntegerSeriesValidator(min_value=0, max_value=10)
series = pd.Series([0, 3, 6, 9]) # This series is valid.
print(validator.is_valid(series))
series = pd.Series([0, 4, 8, 12]) # This series is invalid because it includes the number 12.
print(validator.is_valid(series)) | example/pandas_validator_example_en.ipynb | c-bata/pandas-validator | mit |
DataFrame Validator
The DataFrameValidator class can validate pandas DataFrame objects.
It can be defined easily, like a Django model definition. | # Define validator
class SampleDataFrameValidator(pv.DataFrameValidator):
row_num = 5
column_num = 2
label1 = pv.IntegerColumnValidator('label1', min_value=0, max_value=10)
label2 = pv.FloatColumnValidator('label2', min_value=0, max_value=10)
# Create validator's instance
validator = SampleDataFrameVal... | example/pandas_validator_example_en.ipynb | c-bata/pandas-validator | mit |
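A hedged usage sketch mirroring the series example earlier, assuming the truncated cell above creates an instance named validator and that DataFrameValidator exposes the same is_valid interface as the series validator; the DataFrame contents are made up for illustration:

```python
import pandas as pd

# Five rows and two columns, matching row_num/column_num declared above.
df = pd.DataFrame({'label1': [0, 2, 4, 6, 8],
                   'label2': [0.5, 1.5, 2.5, 3.5, 4.5]})
print(validator.is_valid(df))      # expected True: shape and value ranges match

bad_df = pd.DataFrame({'label1': [0, 2, 4, 6, 99],  # 99 exceeds max_value=10
                       'label2': [0.5, 1.5, 2.5, 3.5, 4.5]})
print(validator.is_valid(bad_df))  # expected False
```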
Step 2: Fetch BTE response mapping file using BTE client | from biothings_explorer.registry import Registry
reg = Registry()
mapping = reg.registry['DISEASES']['mapping']
mapping | jupyter notebooks/Demo of JSON Transform.ipynb | biothings/biothings_explorer | apache-2.0 |
Step 3: Transform the output from DISEASES API to Biolink model | from biothings_explorer.json_transformer import Transformer
tf = Transformer(json_doc, mapping)
tf.transform() | jupyter notebooks/Demo of JSON Transform.ipynb | biothings/biothings_explorer | apache-2.0 |
Use Case 2: Stanford Biosample API
Step 1: Fetch API response from Stanford Biosample API | import requests
json_doc = requests.get('http://api.kp.metadatacenter.org/biosample/search?q=biolink:Disease=MONDO:0007915&limit=10').json()
json_doc | jupyter notebooks/Demo of JSON Transform.ipynb | biothings/biothings_explorer | apache-2.0 |
Step 2: Fetch BTE response mapping file using BTE client | from biothings_explorer.registry import Registry
reg = Registry()
mapping = reg.registry['stanford_biosample_disease2sample']['mapping']
mapping | jupyter notebooks/Demo of JSON Transform.ipynb | biothings/biothings_explorer | apache-2.0 |
Step 3: Transform the output from Stanford Biosample API to Biolink model | from biothings_explorer.json_transformer import Transformer
tf = Transformer(json_doc=json_doc, mapping=mapping)
tf.transform() | jupyter notebooks/Demo of JSON Transform.ipynb | biothings/biothings_explorer | apache-2.0 |
Downloading RR Lyrae Data
gatspy includes loaders for some convenient datasets.
Here we'll take a look at some RR Lyrae light curves from Sesar 2010 | from gatspy.datasets import fetch_rrlyrae
rrlyrae = fetch_rrlyrae() | examples/SingleBand.ipynb | astroML/gatspy | bsd-2-clause |
This dataset object has an ids attribute showing the light curve ids available, and a get_lightcurve method which returns the light curve: | len(rrlyrae.ids)
lcid = rrlyrae.ids[0]
t, y, dy, filts = rrlyrae.get_lightcurve(lcid) | examples/SingleBand.ipynb | astroML/gatspy | bsd-2-clause |
Let's quickly visualize this data, to see what we're working with: | for filt in 'ugriz':
mask = (filts == filt)
plt.errorbar(t[mask], y[mask], dy[mask], fmt='.', label=filt)
plt.gca().invert_yaxis()
plt.legend(ncol=3, loc='upper left'); | examples/SingleBand.ipynb | astroML/gatspy | bsd-2-clause |
The dataset has a metadata attribute, which can tell us things like the period (as determined by Sesar 2010): | period = rrlyrae.get_metadata(lcid)['P']
period | examples/SingleBand.ipynb | astroML/gatspy | bsd-2-clause |
Let's fold the lightcurve on this period and re-plot the points: | phase = t % period
for filt in 'ugriz':
mask = (filts == filt)
plt.errorbar(phase[mask], y[mask], dy[mask], fmt='.', label=filt)
plt.gca().invert_yaxis()
plt.legend(ncol=3, loc='upper left'); | examples/SingleBand.ipynb | astroML/gatspy | bsd-2-clause |
Now the characteristic shape of the RR Lyrae light curve can be seen!
Lomb-Scargle Fits to the Data
Lomb-Scargle Period finding
We can now use the Lomb-Scargle periodogram to attempt to fit the period.
We'll use the classic (single-band) version, and examine the best period for each band individually. Because of the la... | from gatspy.periodic import LombScargle, LombScargleFast
# LombScargleFast is slightly faster than LombScargle for the simplest case
# Both have the same interface.
model = LombScargleFast()
model.optimizer.set(quiet=True, period_range=(0.2, 1.2))
print("Sesar 2010:", period)
for filt in 'ugriz':
mask = (filts ==... | examples/SingleBand.ipynb | astroML/gatspy | bsd-2-clause |
We see that within each band, the Lomb-Scargle periodogram lands on the correct peak.
This model.best_period hides some computation. To find the best period, the model optimizer determines the required resolution for a linear scan search, and computes the periodogram at each of these values.
We can view the periodogram... | def plot_periodogram(model, lcid):
plt.figure()
rrlyrae = fetch_rrlyrae()
t, y, dy, filts = rrlyrae.get_lightcurve(lcid)
period = rrlyrae.get_metadata(lcid)['P']
periods = np.linspace(0.2, 1.0, 5000)
for i, filt in enumerate('ugriz'):
mask = (filts == filt)
model.fit(t[mask]... | examples/SingleBand.ipynb | astroML/gatspy | bsd-2-clause |
The periodogram has very narrow peaks, of width approximately $\Delta P \approx P^2 / T$, where $T = (t_{max} - t_{min})$ is the baseline of the measurements. For 10 years of data, the width near the 0.6 day peak turns out to be around $10^{-4}$ days (about 8 seconds), which is why we need such fine sampling of the per... | def plot_model(model, lcid):
plt.figure()
rrlyrae = fetch_rrlyrae()
t, y, dy, filts = rrlyrae.get_lightcurve(lcid)
period = rrlyrae.get_metadata(lcid)['P']
phase = t % period
tfit = np.linspace(0, period, 100)
for filt in 'ugriz':
mask = (filts == filt)
pts = plt.errorbar(ph... | examples/SingleBand.ipynb | astroML/gatspy | bsd-2-clause |
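A quick numerical check of the peak-width estimate $\Delta P \approx P^2 / T$ quoted above, for a 0.6-day period observed over roughly a 10-year baseline:

```python
P = 0.6            # period in days
T = 10 * 365.25    # ~10-year baseline in days

dP = P ** 2 / T
print(dP)          # ~1e-4 days
print(dP * 86400)  # ~8.5 seconds
```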
The model is clearly fairly biased: that is, it doesn't have enough flexibility to truly fit the data. We can do a bit better by adding more terms to the Fourier series with the Nterms parameter | plot_model(LombScargle(Nterms=4), lcid) | examples/SingleBand.ipynb | astroML/gatspy | bsd-2-clause |
This model can also be used within the periodogram, using the same interface as above: | plot_periodogram(LombScargle(Nterms=4), lcid) | examples/SingleBand.ipynb | astroML/gatspy | bsd-2-clause |
Supersmoother periodogram and fits
The package includes the SuperSmoother algorithm using the same API.
Here we'll plot the periodogram and the model fits for the same data using this model.
Because the interface is identical, we can simply swap-in the supersmoother model in the function from above: | from gatspy.periodic import SuperSmoother
plot_periodogram(SuperSmoother(), lcid)
plot_model(SuperSmoother(), lcid) | examples/SingleBand.ipynb | astroML/gatspy | bsd-2-clause |
2. Using indexing
It is also possible to perform the periodic translation using array indexing. The operation that provides periodicity in the index computation is the modulo, implemented by NumPy's % operator. This is how the function is implemented in the ia898 toolbox: ptrans. | def ptrans(f,t):
import numpy as np
g = np.empty_like(f)
if f.ndim == 1:
W = f.shape[0]
col = np.arange(W)
g = f[(col-t)%W]
elif f.ndim == 2:
H,W = f.shape
rr,cc = t
row,col = np.indices(f.shape)
g = f[(row-rr)%H, (col-cc)%W]
elif f.ndim == 3:
Z,H,W =... | 2S2018/09 Translacao periodica.ipynb | robertoalotufo/ia898 | mit |
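A minimal 1-D illustration of the modulo-based periodic indexing used inside ptrans, independent of the rest of the toolbox:

```python
import numpy as np

f = np.array([0, 1, 2, 3, 4])
t = 2                 # shift amount
W = f.shape[0]
col = np.arange(W)

g = f[(col - t) % W]  # indices wrap around periodically thanks to %
print(g)              # [3 4 0 1 2]
```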
The Leibniz formula states that:
$$
1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \frac{1}{9} - \ldots = \frac{\pi}{4}
$$
So in other words:
$$
\sum^{\infty}_{n = 0}\frac{(-1)^n}{2n + 1} = \frac{\pi}{4}
$$
Let's see if we can understand the sum formula first. What we'll do is just take the top part of the fraction $(-1... | with plt.xkcd():
x = np.arange(0, 10, 1)
fig, axes = plt.subplots(1, 2, figsize=(10, 4))
for ax in axes: pu.setup_axes(ax)
axes[0].plot(x, (-1)**x, 'bo', zorder=10)
axes[0].set_xlim(0, 9)
axes[0].set_ylim(-5, 5)
axes[1].plot(x, (2*x) + 1)
axes[1].set_xlim(0, 9)
axes[1].set_ylim(0, 10... | leibniz_formula.ipynb | basp/notes | mit |
So as $n$ gets bigger and bigger we have two things. One flips between $1$ and $-1$ and the other is just a linear value $y = 2n + 1$ that just keeps on getting bigger and bigger. Now the equation above tells us that if we take a near-infinite sum of these values we will get closer and closer to the value of $\frac{\p... | n = np.arange(0, 10, 1)
f = lambda x: ((-1)**x) / (2*x + 1)
with plt.xkcd():
fig, axes = plt.subplots(1, figsize=(8, 8))
pu.setup_axes(axes, xlim=(-1, 9), ylim=(-0.5, 1.2), yticks=[1], yticklabels=[1], xticks=[1,2,3,4,5,6,7,8])
plt.plot(n, f(n), zorder=10, label='THE INFINITE SERIES')
plt.plot(n, [nsum(... | leibniz_formula.ipynb | basp/notes | mit |
Now if we sum up all the terms of that line above for $x = 0, 1, 2, 3, \ldots, n$ we'll get closer and closer to $\frac{\pi}{4}$. Using mpmath we can calculate $\pi$ with pretty good detail using the mp.dps setting to control the precision. | leibniz = lambda n: ((-1)**n) / (2 * n + 1)
mp.dps = 50
nsum(leibniz, [0, inf]) * 4 | leibniz_formula.ipynb | basp/notes | mit |
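A plain-Python check of the same value, without mpmath; the Leibniz series converges slowly, so a partial sum with many terms is still only accurate to a few digits:

```python
# Partial sum of the Leibniz series with 100000 terms, multiplied by 4.
approx_pi = 4 * sum((-1.0) ** n / (2 * n + 1) for n in range(100000))
print(approx_pi)  # ~3.1416, close to pi but converging slowly
```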
Of course we can compute it symbolically as well. These fractions get pretty crazy real quickly. | leibniz = S('((-1)^n)/(2*n+1)')
n = S('n')
sum([leibniz.subs(n, i) for i in range(100)]) | leibniz_formula.ipynb | basp/notes | mit |
Bonus solution - custom repr methods
The __repr__ method returns a string that represents the object, for example when it is displayed in the interactive console. This method can return any string. | class CeleCislo(int):
def je_sude(self):
return self % 2 == 0
def __repr__(self):
return "<Cele cislo {}>".format(self)
y = 5
y
x = CeleCislo(5)
x | original/v1/s014-class/ostrava/Feedback_homeworks.ipynb | PyLadiesCZ/pyladies.cz | mit |
Compatibility
Thanks to inheritance, compatibility is preserved. | a = 5
b = CeleCislo(7)
a + b
a - b
a < b
b >= a | original/v1/s014-class/ostrava/Feedback_homeworks.ipynb | PyLadiesCZ/pyladies.cz | mit |
Custom objects and special methods
This wasn't needed for the homework project, but let's try creating our own class with methods implemented to make use of Python's operators. | class Pizza:
def __init__(self, jmeno, ingredience):
self.jmeno = jmeno
self.ingredience = ingredience
def __repr__(self):
return "<Pizza '{}' na které je '{}'".format(self.jmeno, self.ingredience)
p = Pizza("salámová", ["sýr", "paprikáš", "suchý salám"])
p | original/v1/s014-class/ostrava/Feedback_homeworks.ipynb | PyLadiesCZ/pyladies.cz | mit |
Mathematics
For the mathematical operators there are special methods whose names correspond to the operators/operations used. | class Pizza:
def __init__(self, jmeno, ingredience):
self.jmeno = jmeno
self.ingredience = ingredience
def __repr__(self):
return "<Pizza '{}' na které je '{}'".format(self.jmeno, self.ingredience)
def __add__(self, other):
jmeno = self.jmeno + " " + other.jmeno
... | original/v1/s014-class/ostrava/Feedback_homeworks.ipynb | PyLadiesCZ/pyladies.cz | mit |
Comparison
For comparison we also have special methods - one for each operator. | class Pizza:
def __init__(self, jmeno, ingredience):
self.jmeno = jmeno
self.ingredience = ingredience
def __repr__(self):
return "<Pizza '{}' na které je '{}'".format(self.jmeno, self.ingredience)
def __add__(self, other):
jmeno = self.jmeno + " " + other.jmeno
... | original/v1/s014-class/ostrava/Feedback_homeworks.ipynb | PyLadiesCZ/pyladies.cz | mit |
Sorting
Sorting is just a special case of comparison: all elements are compared with each other and ordered according to the results of the individual comparisons. For that it is enough to have at least the __lt__ method implemented. | pizzy = [p1, p2, p3, p4]
pizzy
sorted(pizzy) | original/v1/s014-class/ostrava/Feedback_homeworks.ipynb | PyLadiesCZ/pyladies.cz | mit |
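A minimal sketch of what an __lt__ implementation could look like for a class of this kind, so that sorted() has something to compare on; ordering by the number of ingredients is an arbitrary choice for illustration, not taken from the original notebook:

```python
class Pizza:
    def __init__(self, jmeno, ingredience):
        self.jmeno = jmeno
        self.ingredience = ingredience

    def __repr__(self):
        return "<Pizza '{}'>".format(self.jmeno)

    def __lt__(self, other):
        # Arbitrary ordering key: a pizza with fewer ingredients sorts first.
        return len(self.ingredience) < len(other.ingredience)

pizzas = [Pizza('a', ['cheese', 'ham']), Pizza('b', ['cheese'])]
print(sorted(pizzas))  # [<Pizza 'b'>, <Pizza 'a'>]
```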
The first part of this tutorial shows you how to get the data you need from the FHD output directories. This relies mostly on functions in the fhd_pype module.
The function fp.fhd_base() returns the full path to the directory where the FHD output is located. It defaults to the common Aug23 output folder on the MIT clu... | print 'The default path: '+fp.fhd_base()
fp.set_fhd_base(os.getcwd().strip('scripts')+'katalogss/data')
print 'Our path: '+fp.fhd_base()
| scripts/source-finding.ipynb | EoRImaging/katalogss | bsd-2-clause |
We also need to define the version specifying the name of the run. This is equivalent to the case string in eor_firstpass_versions.pro, or the string following "fhd_" in the output directory name.
The katalogss data directory includes a mock FHD output run and the relevant files for a single obsid. | fhd_run = 'mock_run'
s = '%sfhd_%s'%(fp.fhd_base(),fhd_run)
!ls -R $s | scripts/source-finding.ipynb | EoRImaging/katalogss | bsd-2-clause |
Next we want to define the list of obsids we want to process. You can do this manually, or you can easily grab all obsids with deconvolution output for your run using: | obsids = fp.get_obslist(fhd_run)
obsids | scripts/source-finding.ipynb | EoRImaging/katalogss | bsd-2-clause |
Next we'll grab the source components and metadata for each obsid in the list. Note that if you don't supply the list of obsids, it will automatically run for all obsids. | comps = fp.fetch_comps(fhd_run, obsids=obsids)
meta = fp.fetch_meta(fhd_run,obsids=obsids) | scripts/source-finding.ipynb | EoRImaging/katalogss | bsd-2-clause |
This returns a dictionary of dictionaries. Since it takes some work to run, let's cache it in a new katalogss output directory. | kgs_out = '%sfhd_%s/katalogss/'%(fp.fhd_base(),fhd_run)
if not os.path.exists(kgs_out): os.mkdir(kgs_out)
print 'saving %scomponents.p' % kgs_out
pickle.dump([comps,meta], open(kgs_out+'components.p','w'))
comps | scripts/source-finding.ipynb | EoRImaging/katalogss | bsd-2-clause |
If you need to come back to this later, restore it with
comps, meta = pickle.load(open(kgs_out+'components.p')) | #comps, meta = pickle.load(open(kgs_out+'components.p'))