Misc noodles
np.indices?
"0"*4
"-".join([str(x) for x in [2,3,4]])
print("a"); print("b")
nbs/format_rhythms_for_rtstretch.ipynb
usdivad/fibonaccistretch
mit
Question: Is it necessary to know the order of parameters to send values to a function?
straight_line(x=2,intercept=7,slope=3)
Week 3/Lecture_5_Introdution_to_Functions.ipynb
bpgc-cte/python2017
mit
Passing values to functions
list_zeroes = [0 for x in range(0, 5)]
print(list_zeroes)

def case1(list1):
    list1[1] = 1
    print(list1)

case1(list_zeroes)
print(list_zeroes)

# Passing variables to a function
list_zeroes = [0 for x in range(0, 5)]
print(list_zeroes)

def case2(list1):
    list1 = [2, 3, 4, 5, 6]
    print(list1)

case2(l...
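The difference between the two cases can be made visible by comparing id() of the list before and after the call, a minimal sketch:

```python
def mutate(lst):
    lst[1] = 1          # modifies the same object in place
    return id(lst)

def rebind(lst):
    lst = [2, 3, 4]     # binds the local name to a brand-new object
    return id(lst)

zeroes = [0] * 5
print(id(zeroes) == mutate(zeroes))  # True: same object, caller sees the change
print(id(zeroes) == rebind(zeroes))  # False: new object, caller's list untouched
print(zeroes)                        # [0, 1, 0, 0, 0]
```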
Conclusion: If the input is a mutable datatype and we make changes to it (mutation), the changes are reflected back on the original variable (Case 1). If the input is a mutable datatype and we assign a new value to it (rebinding), the changes are not reflected back on the original variable (Case 2). Default Parameters
def calculator(num1, num2, operator='+'):
    if (operator == '+'):
        result = num1 + num2
    elif (operator == '-'):
        result = num1 - num2
    return result

n1 = int(input("Enter value 1: "))
n2 = int(input("Enter value 2: "))
v_1 = calculator(n1, n2)
print(v_1)
v_2 = calculator(n1, n2, '-')
print(v_...
Initialization of variables within function definition
def f(a, L=[]):
    L.append(a)
    return L

print(f(1))
print(f(2))
print(f(3))
# Caution! The list L was initialised only once.
# Parameter initialisation to the default value happens at function definition, not at function call.
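A common way to avoid this pitfall (a standard idiom, not part of the lecture code) is to use None as the default and create the list inside the function body, so a fresh list is built on every call:

```python
def f(a, L=None):
    if L is None:   # runs at call time, so each call gets its own list
        L = []
    L.append(a)
    return L

print(f(1))  # [1]
print(f(2))  # [2]
print(f(3))  # [3]
```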
* operator: unpacks a list or tuple into positional arguments. ** operator: unpacks a dictionary into keyword arguments. Types of parameters: formal parameters (done above, repeat); keyword arguments (done above, repeat); *variable_name interprets the arguments as a tuple; **variable_name interprets the arguments ...
def sum(*values):
    s = 0
    for v in values:
        s = s + v
    return s

s = sum(1, 2, 3, 4, 5)
print(s)

def get_a(**values):
    return values['a']

s = get_a(a=1, b=2)  # returns 1
print(s)

def sum(*values, **options):
    s = 0
    for i in values:
        s = s + i
    if "neg" in options:
        if ...
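The definitions above collect arguments; the same operators also work in the other direction at the call site. A small sketch reusing the straight_line example from earlier (its body is assumed here, since the lecture does not show it):

```python
def straight_line(x, slope, intercept):
    # assumed body: y = slope * x + intercept
    return slope * x + intercept

args = [2]                               # * unpacks into positional arguments
kwargs = {'slope': 3, 'intercept': 7}    # ** unpacks into keyword arguments
print(straight_line(*args, **kwargs))    # same as straight_line(2, slope=3, intercept=7) -> 13
```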
2. Data Structure The data structure was interesting. It may be adequate for OSM, but it is certainly not how I would like it to figure in the MongoDB collection. Some information is represented directly as attributes of a main XML data primitive element, while other information appears as attributes of child elements tagged "tag". ...
def smarter_nestify(l, record):
    """Takes a list [a1, a2, a3, ..., an, value] and a pre-existing dictionary structure;
    returns a nested dictionary object {a1: {a2: {a3: ... {an: value} ...}}},
    respecting the pre-existing dictionary records, that is, for each recursion step,
    if a dictionary ai already exis...
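The nesting described in the docstring can be sketched as follows (a hypothetical, simplified reimplementation for illustration; the project's actual smarter_nestify also merges into pre-existing records):

```python
def nestify(keys_and_value):
    """[a1, a2, ..., an, value] -> {a1: {a2: {... {an: value}}}}."""
    *keys, value = keys_and_value
    result = value
    for key in reversed(keys):   # build from the innermost pair outward
        result = {key: result}
    return result

print(nestify(['addr', 'street', 'name', 'Rua A']))
# {'addr': {'street': {'name': 'Rua A'}}}
```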
DANDP3.ipynb
asignor/DANDP3
mit
3. Repeated Attribute Keys This was an interesting issue that stems from not discarding the tags containing colons. The problem is tag 'k' attributes that have different functions but are named exactly the same in the XML data. For example, we had: <tag 'k'='lanes:psv:forward' 'v'='1'> and <tag 'k'='lanes' 'v...
from pymongo import MongoClient
import pprint

client = MongoClient()
db = client
sp = db.my_osm.cme  # shorthand, since all the queries will be in the same collection
1. Bad Key 'type' Creating Incorrect Parsing At first, the desired data format had a 'type' key in the main JSON node, to designate a map node, way or relation (please note the word "node" here has two very different meanings). So I ran the query below to find out how many relations were in the dataset.
relations = sp.find({'type' : 'relation'})
The original output of this query was 4. It seems odd that so big a metropolis would have only 4 relations. Investigating further, I found there were 261671 ways and 1900291 nodes.
4 + 261671 + 1900291 - 2168319 #this should return 0
What this effectively means is that there is some kind of discrepancy, and it is not small.
types = sp.distinct('type')
pprint.pprint(types)
The above output is totally unexpected, as we should see only "node", "relation" or "way". A little research was helpful in pinpointing the issue: relation data primitives were being translated into documents with incorrect 'type' assignment, because many of the relations themselves had a 'type' attribute or tag chil...
types = sp.distinct('data_prim')
pprint.pprint(types)

cursor = sp.find({'data_prim' : 'way'})
a = len(list(cursor))
print a, 'ways'
cursor = sp.find({'data_prim' : 'node'})
b = len(list(cursor))
print b, 'nodes'
cursor = sp.find({'data_prim' : 'relation'})
c = len(list(cursor))
print c, 'relations'
print 'discrepancy...
2. Seamarks and unexpected postcodes One of the strange things in my data was the occurrence of the tag "seamark". This is one of the features that makes heavy use of the colon structure in the OSM XML schema, so it was in the back of my head. A simple query reveals how many of them there are in the data.
sp.find({'seamark' : {'$exists' : 1}}).count()
The reason this is strange is that, upon further investigation in the OSM Wiki, I found these are features that should occur on oceanic coasts. Querying the database for examples, I found some were lighthouses and buoys. The problem is that the metropolitan area I was supposed to be analysing contains NO sea coast. The query ...
cities = sp.distinct('address.city')
cities.sort()
print cities
len(cities)
This seems to indicate the area is, in fact, what is called the Expanded Metropolitan Complex of São Paulo or Complexo Metropolitano Expandido (in this terminology "São Paulo" is implied), also called Paulistan Macrometropolis or Macrometrópole Paulista. It is not, as I thought, São Paulo Metropolitan Area, or Região Metro...
sp.find({'address.postcode' : {'$regex' : '^1[2-9]'}}).count()
sp.find_one({'address.postcode' : {'$regex' : '^1[2-9]'}})
2. Postcode Format Inconsistencies Using the '$regex' operator, I was able to audit the postcode format. By Brazilian convention, the format we should see is 'ddddd-ddd', or the regex '^([0-9]){5}([-])([0-9]){3}$'.
sp.find({'address.postcode' : {'$exists' : 1}}).count()
sp.find({'address.postcode' : {'$regex' : '^([0-9]){5}([-])([0-9]){3}$'}}).count()
The query shows there are some inconsistencies. Let's peek at 10 examples to see some cases.
pipe = [{'$match' : {'address.postcode' : {'$regex' : '^(?!^^([0-9]){5}([-])([0-9]){3}$).*$'}}},
        {'$limit' : 10},
        {'$project' : {'address' : 1}}]
list(sp.aggregate(pipe))
It seems there is a mix of incorrect format, such as '05025010' instead of '05025-010', and typos like an extra number or a missing one. My solution: in the first case, reformat; in the second, discard. This was included in shape_element, and the data re-parsed and re-loaded into MongoDB.
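The cleaning rule described above can be sketched like this (a hypothetical helper for illustration; the actual shape_element implementation differs):

```python
import re

def clean_postcode(cep):
    """Reformat 8-digit CEPs to ddddd-ddd; discard anything else."""
    if re.fullmatch(r'\d{5}-\d{3}', cep):
        return cep                       # already well formatted
    digits = re.sub(r'\D', '', cep)
    if len(digits) == 8:                 # missing hyphen: reformat
        return digits[:5] + '-' + digits[5:]
    return None                          # extra or missing digit: discard

print(clean_postcode('05025010'))   # 05025-010
print(clean_postcode('05025-010'))  # 05025-010
print(clean_postcode('0502501'))    # None
```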
pipe = [{'$match' : {'address.postcode' : {'$regex' : '^(?!^^([0-9]){5}([-])([0-9]){3}$).*$'}}},
        {'$limit' : 10},
        {'$project' : {'address' : 1}}]
list(sp.aggregate(pipe))
As seen above, a query for 10 postcodes that do not fit the format now returns an empty list, showing the problem was fixed. OBS: A similar process could be done for phone numbers; however, it is an intricate process, as Brazilian phone numbers have many different formats. (At this point numbers can have 8 to 10 ...
t = sp.find({'address.postcode' : {'$regex' : '^2'}})
pprint.pprint(list(t))
The above postcode is incorrect; it is supposed to be 02545-000. 4. Missing Postcodes (CEPs)
a = sp.find({'address' : {'$exists' : 1}}).count()
b = sp.find({'address' : {'$exists' : 1}, 'address.postcode' : {'$exists' : 1}}).count()
c = sp.find({'address' : {'$exists' : 1}, 'address.postcode' : {'$exists' : 0}}).count()
print 'number of addresses:', a
print 'number of addresses with CEP:', b
print 'number of ...
Almost half of the addresses do not have postcodes (called "CEP" in Brazil). One follow-up project would be to scrape the CEPs from a reputable website (like Correios) and feed them back into the database. A good measure would be to obtain them both by coordinates and by address, and analyse discrepancies. Statistic...
sp.find().count()
number of documents by data primitive:
cursor = sp.find({'data_prim' : 'way'})
a = len(list(cursor))
print a, 'ways'
cursor = sp.find({'data_prim' : 'node'})
b = len(list(cursor))
print b, 'nodes'
cursor = sp.find({'data_prim' : 'relation'})
c = len(list(cursor))
print c, 'relations'
unique user ids:
len(sp.distinct('created.uid'))
the document that has the highest number of versions:
versions = sp.distinct('created.version')
print max(versions)
pprint.pprint(list(sp.find({'created.version' : max(versions)})))
'amenities' that occur the most, top 10
pipe = [{'$match' : {'amenity': {'$exists' : 1}}},
        {'$group': {'_id': '$amenity', 'count': {'$sum': 1}}},
        {'$sort' : {'count': -1}},
        {'$limit' : 10},
        {'$project' : {'amenity' : 1, 'count': 1}}]
c = sp.aggregate(pipe)
pprint.pprint(list(c))
top 10 religions
pipe = [{'$match' : {'amenity': 'place_of_worship'}},
        {'$group': {'_id': '$religion', 'count': {'$sum': 1}}},
        {'$sort' : {'count': -1}},
        {'$limit' : 10},
        {'$project' : {'religion' : 1, 'count': 1}}]
c = sp.aggregate(pipe)
pprint.pprint(list(c))
This shows the data is severely incomplete and does not lend itself to statistics, not on religion, anyway. There are plenty more than 2 Jewish places of worship in São Paulo. number of pizza places:
sp.find({'cuisine': 'pizza'}).count()
Again, I am sure the actual number is higher. Examples of Curiosity Queries This section contains queries performed out of personal curiosity. If the choices seem arbitrary, it is because they are. How many street names have a military rank in the name?
def street_starts_with(letters):
    """takes a string and returns a regex string to be used with operator $regex
    to query sp collections for streets starting with the string"""
    a = ['Acostamento', u'Pra\xe7a', 'Alameda', 'Viela', 'Estrada', 'Rua', 'Ac...
How many street names start with X? And Z?
x = sp.find({'data_prim':'way', 'name': {'$regex': street_starts_with('X')}}).count()
print 'Starting with X:', x
z = sp.find({'data_prim':'way', 'name': {'$regex': street_starts_with('Z')}}).count()
print 'Starting with Z:', z
Unarchive
import tempfile
import zipfile
import os.path

zipFile = "./openSubtitles-5000.json.zip"
print( "Unarchiving ...")
temp_dir = tempfile.mkdtemp()
zip_ref = zipfile.ZipFile(zipFile, 'r')
zip_ref.extractall(temp_dir)
zip_ref.close()
openSubtitlesFile = os.path.join(temp_dir, "openSubtitles-5000.json")
print ("file unarc...
python-sklearn-kmeans/kmeans_clustering.ipynb
david-hagar/NLP-Analytics
mit
Tokenizing and Filtering a Vocabulary
import json from sklearn.feature_extraction.text import CountVectorizer #from log_progress import log_progress maxDocsToload = 50000 titles = [] def make_corpus(file): with open(file) as f: for i, line in enumerate(f): doc = json.loads(line) titles.append(doc.get('Title','')) ...
Feature Vocabulary
print( "Vocabulary length = ", len(count_vectorizer.vocabulary_))
word = "data"
rainingIndex = count_vectorizer.vocabulary_[word]
print( "token index for \"%s\" = %d" % (word, rainingIndex))
feature_names = count_vectorizer.get_feature_names()
print( "feature_names[%d] = %s" % (rainingIndex, feature_names[rainingIndex...
TFIDF Weighting This applies TF-IDF weighting to the matrix: each word count is scaled by an inverse-document-frequency factor, so words that appear in fewer documents receive a higher weight. The document vectors are also normalized so they have a Euclidean magnitude of 1.0.
from sklearn.feature_extraction.text import TfidfTransformer

tfidf = TfidfTransformer(norm="l2")
tfidf.fit(term_freq_matrix)
tf_idf_matrix = tfidf.transform(term_freq_matrix)
print( tf_idf_matrix)
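To make the weighting concrete, here is a minimal numpy-only sketch using the plain idf = log(N / df) definition (sklearn's TfidfTransformer uses a smoothed variant, so the exact numbers differ):

```python
import numpy as np

counts = np.array([[3, 0, 1],      # 3 documents x 3 terms
                   [2, 0, 0],
                   [0, 1, 1]], dtype=float)
n_docs = counts.shape[0]
df = (counts > 0).sum(axis=0)          # number of documents containing each term
idf = np.log(n_docs / df)              # rarer terms get a higher weight
tfidf = counts * idf
tfidf /= np.linalg.norm(tfidf, axis=1, keepdims=True)  # l2-normalize each document
print(np.linalg.norm(tfidf, axis=1))   # each row now has magnitude 1.0
```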
K-Means
%%time
from sklearn.cluster import KMeans, MiniBatchKMeans
import numpy

num_clusters = 5
#km = KMeans(n_clusters=num_clusters, verbose=True, init='k-means++', n_init=3, n_jobs=-1)
km = MiniBatchKMeans(n_clusters=num_clusters, verbose=True, init='k-means++',
                     n_init=25, batch_size=2000)
km.fit(tf_idf_matrix)
clusters =...
Question 1.a
height=np.array([150,163,167,168,170,178])
bkds-datachallenge-Winter-John/Data_Challenge_Stats.ipynb
JohnWinter/JohnWinter.github.io
mit
By default, numpy uses linear interpolation when computing percentiles.
print 'min', np.min(height)
print '1st', np.percentile(height, 25)
print 'median', np.median(height)
print '3rd', np.percentile(height, 75)
print 'max', np.max(height)
We can also force numpy to use nearest values
print 'min', np.min(height)
print '1st', np.percentile(height, 25, interpolation='lower')
print 'median', np.median(height)
print '3rd', np.percentile(height, 75, interpolation='higher')
print 'max', np.max(height)
Question 1.b
print 'mean',np.mean(height)
Question 1.c
l_q75 = np.percentile(height, 75)
l_q25 = np.percentile(height, 25)
l_iqr = l_q75 - l_q25
print 'Linear interpolation IQR', l_iqr

q75 = np.percentile(height, 75, interpolation='higher')
q25 = np.percentile(height, 25, interpolation='lower')
iqr = q75 - q25
print 'IQR', iqr
Question 1.d
l_q25 - l_iqr*1.5
l_q75 + l_iqr*1.5
150 and 178 are both possible outliers based on the IQR 'fence' definition using linear interpolation.
q25 - iqr*1.5
q75 + iqr*1.5
150 is a possible outlier based on the IQR 'fence' definition using the nearest value. Question 1.e
seaborn.boxplot(height, whis=1.5, vert='True')
seaborn.plt.title('Linear Interpolation - Height')

item = {}
item["label"] = 'box'
item["med"] = 167.5
item["q1"] = 163
item["q3"] = 170
item["whislo"] = 163
item["whishi"] = 178
item["fliers"] = []
stats = [item]

fig, axes = plt.subplots(1, 1)
axes.bxp(stats)
axe...
Question 1.f
print 'Variance', height.var()
print "Standard Deviation", height.std()
Question 2 i. Metric, Interval/Discrete. ii. Non-metric, Ordinal. iii. Non-metric, Nominal/Categorical. iv. Possibly in between; without more information I would categorize it as Non-metric, Ordinal. v. Possible to argue it is in between; I would categorize it as Non-metric, Ordinal. Question 3 Shorthand used to remember the percentage of...
print 'z-score:',(90-100)/16.0
Question 5 Probability of Silver given Silver = P(Find S|SC) / (P(Find S|SC) + P(Find S|GC) + P(Find S|MC)); P(S|S) = 1/(1+0+.5) = 1/1.5 = 2/3. Question 6 In order for the longer piece to be more than twice the length of the shorter piece, the line must be cut below 1/3 of the length or above 2/3. Therefore the probability is the unio...
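The stick-cutting argument in Question 6 can be checked with a quick Monte Carlo simulation (a sketch using the stdlib random module):

```python
import random

random.seed(0)
trials = 100_000
hits = 0
for _ in range(trials):
    cut = random.random()                         # uniform cut point on a unit stick
    short, long = min(cut, 1 - cut), max(cut, 1 - cut)
    if long > 2 * short:                          # equivalent to cut < 1/3 or cut > 2/3
        hits += 1
print(hits / trials)   # should be close to 2/3
```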
total = 0
for x in range(0, 100):
    total += scipy.stats.distributions.poisson.pmf(x, 100)
print 'probability:', 1 - total
51% probability of 100 or more critical failures over the next 50 years. Question 9.a
# SE = STD / sqrt(n)
.8/math.sqrt(100)

**Question 9.b**
Assuming a normal distribution, we use the z table to find the corresponding number of standard deviations. The 95% confidence interval is: Lower = 1.6 - 0.08*1.96 = 1.443; Upper = 1.6 + 0.08*1.96 = 1.757. Question 9.c U = Umbrellas/Apartment * Apartments = 12,800. Question 9.d SE = STD/sqrt(n)
.8*8000/math.sqrt(100)
Question 9.d Assuming a normal distribution we use the z table to find the corresponding number of standard deviations. The 95% confidence interval is composed of the following: Lower = 12,800-640*1.96 = 11,545.6 Upper = 12,800+640*1.96 = 14,054.4 Question 10 First I randomly sample from a normal distribution using nu...
random_normal = np.random.normal(size=1000)
seaborn.distplot(random_normal)
within = ((0 < random_normal) & (random_normal < 1)).sum()
within / 1000.0
33.3% of the randomly generated values were in the interval [0, 1]. Question 11 There is not enough information available to determine if the promotion was effective. The month-to-month variance may be such that 350 is a typical occurrence, in which case the observation would have nothing to do with the promotion. Question 12...
t, p_value = scipy.stats.ttest_ind(
    [79.98, 80.04, 80.02, 80.04, 80.03, 80.03, 80.04, 79.97, 80.05, 80.03, 80.02],
    [80.02, 79.94, 79.98, 79.97, 79.97, 80.03, 79.95, 79.97])
p_value
1 - p_value
This p_value is quite small: it is outside a 99% confidence interval, so we reject the null hypothesis and say that the results from the two methods differ. As a check, I plot the two data sets and see for myself that the distributions do in fact look significantly different.
data = {'a': [79.98, 80.04, 80.02, 80.04, 80.03, 80.03, 80.04, 79.97, 80.05, 80.03, 80.02],
        'b': [80.02, 79.94, 79.98, 79.97, 79.97, 80.03, 79.95, 79.97]}
ax = seaborn.boxplot(data['a'])
ax.set_xlim([79.94, 80.05])
ax = seaborn.boxplot(data['b'])
ax.set_xlim([79.94, 80.05])

**Question 13**
First, perform a chi-squared test. We will also do a visual check.
air = pd.DataFrame(['Jan','Feb','Mar','Apr','May','Jun','Jul','Aug','Sep','Oct','Nov','Dec'],
                   [1668, 1407, 1370, 1309, 1341, 1338, 1406, 1446, 1332, 1363, 1410, 1526])
air = air.reset_index()
air.columns = ['guests', 'month']
air['expected'] = [sum(air['guests'])/12]*12
scipy.stats.chisquare(air['guests'], air['expected'])
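The statistic scipy reports here is just sum((observed - expected)^2 / expected); a self-contained numpy sketch with the same monthly counts:

```python
import numpy as np

guests = np.array([1668, 1407, 1370, 1309, 1341, 1338,
                   1406, 1446, 1332, 1363, 1410, 1526], dtype=float)
expected = np.full(12, guests.sum() / 12)        # uniform-bookings hypothesis
chi2 = ((guests - expected) ** 2 / expected).sum()
print(chi2)   # compare against a chi-square distribution with 11 degrees of freedom
```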
The low p-value indicates that it is extremely unlikely that the bookings are uniformly distributed. It is possible that there is a seasonal pattern.
air.plot()

air_season = pd.DataFrame(['Spring', 'Summer', 'Fall', 'Winter'],
                          [(1370+1309+1341), (1338+1406+1446), (1332+1363+1410), (1526+1668+1407)])
air_season = air_season.reset_index()
air_season.columns = ['guests', 'month']
air_season['expected'] = [sum(air_season['guests'])/4]*4
air_season.plot()
The visuals reinforce the evidence that the bookings are not uniformly distributed.
x = np.dot([[2, 3], [2, 1]], [[3], [2]])
x = np.dot([[3, 0, 1], [-4, 1, 2], [-6, 0, -2]], [[-1], [1], [3]])
x
x/4
Question 14.a
q14 = pd.DataFrame([16, 12, 13, 11, 10, 9, 8, 7, 5, 3, 2, 0],
                   [8, 10, 6, 2, 8, -1, 4, 6, -3, -1, -3, 0])
q14 = q14.reset_index()
q14.columns = ['x2', 'x1']
np.corrcoef(q14['x1'], q14['x2'])
plt.scatter(q14['x1'], q14['x2'])
plt.axis('equal');
The correlation of X1 and X2 is .74, and PCA will provide information about the nature of the linear relationship. If the question is whether we can use PCA to cut out data and only retain the component with the highest variance, I would lean towards no, but it is difficult to say without background information on the...
X = np.column_stack((q14['x1'], q14['x2']))
pca = PCA(n_components=2)
pca.fit(X)
pca.get_covariance()  # Uses n vs. n-1
print(pca.components_)  # Eigenvectors

**Question 14.c**

print(pca.explained_variance_)  # Eigenvalues, variance

def draw_vector(v0, v1, ax=None):
    ax = ax or plt.gca()
    arrowprops = dict(arrowsty...
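The PCA result can be cross-checked with a plain eigendecomposition of the sample covariance matrix (a numpy-only sketch of the same data; note it uses the n-1 denominator, unlike pca.get_covariance above):

```python
import numpy as np

x1 = np.array([8, 10, 6, 2, 8, -1, 4, 6, -3, -1, -3, 0], dtype=float)
x2 = np.array([16, 12, 13, 11, 10, 9, 8, 7, 5, 3, 2, 0], dtype=float)
r = np.corrcoef(x1, x2)[0, 1]
print(r)                               # the correlation quoted in the text

X = np.column_stack((x1, x2))
cov = np.cov(X, rowvar=False)           # 2x2 sample covariance (n-1 denominator)
eigvals, eigvecs = np.linalg.eigh(cov)  # eigh returns eigenvalues in ascending order
print(eigvals[::-1])                    # explained variance, largest component first
```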
Visualizing epoched data This tutorial shows how to plot epoched data as time series, how to plot the spectral density of epoched data, how to plot epochs as an imagemap, and how to plot the sensor locations and projectors stored in ~mne.Epochs objects. We'll start by importing the modules we need, loading the continuo...
import os
import mne

sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
                                    'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file, verbose=False).crop(tmax=120)
0.23/_downloads/e41b6a898e7a75f8a9f1a6c00ca73857/20_visualize_epochs.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
To create the ~mne.Epochs data structure, we'll extract the event IDs stored in the :term:stim channel, map those integer event IDs to more descriptive condition labels using an event dictionary, and pass those to the ~mne.Epochs constructor, along with the ~mne.io.Raw data and the desired temporal limits of our epochs...
events = mne.find_events(raw, stim_channel='STI 014')
event_dict = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,
              'visual/right': 4, 'face': 5, 'button': 32}
epochs = mne.Epochs(raw, events, tmin=-0.2, tmax=0.5, event_id=event_dict,
                    preload=True)
del raw
Plotting Epochs as time series .. sidebar:: Interactivity in pipelines and scripts To use the interactive features of the `~mne.Epochs.plot` method when running your code non-interactively, pass the ``block=True`` parameter, which halts the Python interpreter until the figure window is closed. That way, any channels or...
catch_trials_and_buttonpresses = mne.pick_events(events, include=[5, 32])
epochs['face'].plot(events=catch_trials_and_buttonpresses, event_id=event_dict,
                    event_color=dict(button='red', face='blue'))
To see all sensors at once, we can use butterfly mode and group by selection:
epochs['face'].plot(events=catch_trials_and_buttonpresses, event_id=event_dict, event_color=dict(button='red', face='blue'), group_by='selection', butterfly=True)
Just as we saw in the tut-section-raw-plot-proj section, we can plot the projectors present in an ~mne.Epochs object using the same ~mne.Epochs.plot_projs_topomap method. Since the original three empty-room magnetometer projectors were inherited from the ~mne.io.Raw file, and we added two ECG projectors for each sensor...
epochs.plot_projs_topomap(vlim='joint')
Note that these field maps illustrate aspects of the signal that have already been removed (because projectors in ~mne.io.Raw data are applied by default when epoching, and because we called ~mne.Epochs.apply_proj after adding additional ECG projectors from file). You can check this by examining the 'active' field of t...
print(all(proj['active'] for proj in epochs.info['projs']))
Plotting sensor locations Just like ~mne.io.Raw objects, ~mne.Epochs objects keep track of sensor locations, which can be visualized with the ~mne.Epochs.plot_sensors method:
epochs.plot_sensors(kind='3d', ch_type='all')
epochs.plot_sensors(kind='topomap', ch_type='all')
Plotting the power spectrum of Epochs Again, just like ~mne.io.Raw objects, ~mne.Epochs objects have a ~mne.Epochs.plot_psd method for plotting the spectral density_ of the data.
epochs['auditory'].plot_psd(picks='eeg')
It is also possible to plot spectral estimates across sensors as a scalp topography, using ~mne.Epochs.plot_psd_topomap. The default parameters will plot five frequency bands (δ, θ, α, β, γ), will compute power based on magnetometer channels, and will plot the power estimates in decibels:
epochs['visual/right'].plot_psd_topomap()
Just like ~mne.Epochs.plot_projs_topomap, ~mne.Epochs.plot_psd_topomap has a vlim='joint' option for fixing the colorbar limits jointly across all subplots, to give a better sense of the relative magnitude in each band. You can change which channel type is used via the ch_type parameter, and if you want to view differ...
bands = [(10, '10 Hz'), (15, '15 Hz'), (20, '20 Hz'), (10, 20, '10-20 Hz')]
epochs['visual/right'].plot_psd_topomap(bands=bands, vlim='joint', ch_type='grad')
If you prefer untransformed power estimates, you can pass dB=False. It is also possible to normalize the power estimates by dividing by the total power across all frequencies, by passing normalize=True. See the docstring of ~mne.Epochs.plot_psd_topomap for details. Plotting Epochs as an image map A convenient way to vi...
epochs['auditory'].plot_image(picks='mag', combine='mean')
To plot an image map for all sensors, use ~mne.Epochs.plot_topo_image, which is optimized for plotting a large number of image maps simultaneously, and (in interactive sessions) allows you to click on each small image map to pop open a separate figure with the full-sized image plot (as if you had called ~mne.Epochs.plo...
reject_criteria = dict(mag=3000e-15,   # 3000 fT
                       grad=3000e-13,  # 3000 fT/cm
                       eeg=150e-6)     # 150 µV
epochs.drop_bad(reject=reject_criteria)

for ch_type, title in dict(mag='Magnetometers', grad='Gradiometers').items():
    layout = mne.channels.find_layout(epochs.i...
To plot image maps for all EEG sensors, pass an EEG layout as the layout parameter of ~mne.Epochs.plot_topo_image. Note also here the use of the sigma parameter, which smooths each image map along the vertical dimension (across epochs) which can make it easier to see patterns across the small image maps (by smearing no...
layout = mne.channels.find_layout(epochs.info, ch_type='eeg')
epochs['auditory/left'].plot_topo_image(layout=layout, fig_facecolor='w',
                                       font_color='k', sigma=1)
Vector represents a Euclidean vector; it is implemented using a NumPy array of coordinates and a reference to the frame those coordinates are defined in.
class FrameError(ValueError):
    """Indicates an error related to Frames."""

class Vector:
    def __init__(self, array, frame=None):
        """A vector is an array of coordinates and a frame of reference.

        array: sequence of coordinates
        frame: Frame object
        """
        self.array = np.asarray...
frame_example.ipynb
AllenDowney/ProbablyOverthinkingIt
mit
Rotation represents a rotation matrix, one of several kinds of transformation matrices. We'll use it as part of the implementation of Transform.
class Rotation:
    def __init__(self, array):
        self.array = array

    def __str__(self):
        return 'Rotation\n%s' % str(self.array)

    __repr__ = __str__

    def __neg__(self):
        return Rotation(-self.array)

    def __mul__(self, other):
        """Apply the rotation to a Vector."""
        ...
A Transform is a rotation (represented by a Rotation object) and an origin (represented by a Vector). The destination of the transform is the frame of the origin vector. The source of the transform is provided as an argument. When you create a transform, it adds itself to the source frame.
class Transform:
    """Represents a transform from one Frame to another."""

    def __init__(self, rot, org, source=None):
        """Instantiates a Transform.

        rot: Rotation object
        org: origin Vector
        source: source Frame
        """
        self.rot = rot
        self.org = org
        self.d...
A Frame has a name and a dictionary that includes the frames we can reach directly from this frame, and the transform that gets there. The roster is a list of all frames.
class Frame:
    """Represents a frame of reference."""

    # list of Frames
    roster = []

    def __init__(self, name):
        """Instantiate a Frame.

        name: string
        """
        self.name = name
        self.transforms = {}
        Frame.roster.append(self)

    def __str__(self):
        retu...
We'll start with one frame that is not defined relative to any other frame.
origin = Frame('O')
origin
Now we'll create Frame A, which is defined by a transform relative to O. The string representation of a Frame is in LaTeX.
import numpy as np

theta = np.pi/2
xhat = Vector([1, 0, 0], origin)
rx = Rotation.from_axis(xhat, theta)
a = Frame('A')
t_ao = Transform(rx, xhat, a)
t_ao
We can use IPython.display to render the LaTeX:
from IPython.display import Math

def render(obj):
    return Math(str(obj))
Here's the usual notation for the transform from A to O.
render(t_ao)
Here's Frame B, defined relative to A by a rotation around the yhat axis.
yhat = Vector([0, 1, 0], a)
ry = Rotation.from_axis(yhat, theta)
b = Frame('B')
t_ba = Transform(ry, yhat, b)
render(t_ba)
A Frame C, defined relative to B by a rotation around the zhat axis.
zhat = Vector([0, 0, 1], b)
rz = Rotation.from_axis(zhat, theta)
c = Frame('C')
t_cb = Transform(rz, zhat, c)
render(t_cb)
Now let's make a vector defined in C.
p_c = Vector([1, 1, 1], c)
render(p_c)
And we can transform it to B:
p_b = t_cb(p_c)
render(p_b)
Then to A:
p_a = t_ba(p_b)
render(p_a)
frame_example.ipynb
AllenDowney/ProbablyOverthinkingIt
mit
And finally to O.
p = t_ao(p_a)
render(p)
frame_example.ipynb
AllenDowney/ProbablyOverthinkingIt
mit
If we didn't know how to get from one frame to another, we could search for the shortest path from the start frame to the destination. I'll use NetworkX.
import networkx as nx
frame_example.ipynb
AllenDowney/ProbablyOverthinkingIt
mit
The following function adds the edges from a given frame to the graph.
def add_edges(G, frame):
    for neighbor, transform in frame.transforms.items():
        G.add_edge(frame, neighbor, transform=transform)
frame_example.ipynb
AllenDowney/ProbablyOverthinkingIt
mit
And here's how we can make a graph from a list of frames.
def make_graph(frames):
    G = nx.DiGraph()
    for frame in frames:
        add_edges(G, frame)
    return G
frame_example.ipynb
AllenDowney/ProbablyOverthinkingIt
mit
Here's the list of frames:
frames = Frame.roster
frames
frame_example.ipynb
AllenDowney/ProbablyOverthinkingIt
mit
And a dictionary that maps from each frame to its label:
labels = dict([(frame, str(frame)) for frame in frames])
labels
frame_example.ipynb
AllenDowney/ProbablyOverthinkingIt
mit
So we can show the frames, and transforms between them, graphically.
G = make_graph(Frame.roster)
nx.draw(G, labels=labels)
nx.shortest_path(G, c, origin)
frame_example.ipynb
AllenDowney/ProbablyOverthinkingIt
mit
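The path search can also drive the transforms themselves: walk the shortest path and apply each edge's transform in turn. Here's a self-contained sketch in which plain numpy rotation matrices and string frame names stand in for the notebook's Frame/Transform classes (the 90-degree rotations below are assumptions chosen to mirror the x, y, and z rotations used above):

```python
import numpy as np
import networkx as nx

# Stand-ins for the notebook's frames: string names, with a 3x3 rotation
# matrix stored on each edge as its transform.
Rx = np.array([[1, 0, 0], [0, 0, -1], [0, 1, 0]])   # 90 deg about x
Ry = np.array([[0, 0, 1], [0, 1, 0], [-1, 0, 0]])   # 90 deg about y
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]])   # 90 deg about z

G = nx.DiGraph()
G.add_edge('C', 'B', transform=Rz)
G.add_edge('B', 'A', transform=Ry)
G.add_edge('A', 'O', transform=Rx)

def transform_along_path(G, start, end, v):
    """Walk the shortest path and apply each edge's transform in turn."""
    path = nx.shortest_path(G, start, end)
    for u, w in zip(path, path[1:]):
        v = G.edges[u, w]['transform'] @ v
    return v

p = transform_along_path(G, 'C', 'O', np.array([1.0, 1.0, 1.0]))
print(p)
```

This way the route from C to O never has to be hard-coded; adding or removing edges just changes the path that gets walked.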
When we apply a transform to a vector, we get a vector in a new frame. When we apply a transform to another transform, we get a new transform that composes the two. For example, cbao, below, composes the transforms from C to B, B to A, and A to O. The result is a transform directly from C to O.
cbao = t_ao(t_ba(t_cb))
render(cbao)

p = cbao(p_c)
render(p)
frame_example.ipynb
AllenDowney/ProbablyOverthinkingIt
mit
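With plain rotation matrices, the same composition is just a matrix product applied right-to-left. This sketch uses assumed 90-degree rotations as stand-ins for t_cb, t_ba, and t_ao rather than the notebook's Transform objects:

```python
import numpy as np

Rx = np.array([[1, 0, 0], [0, 0, -1], [0, 1, 0]])   # stand-in for t_ao
Ry = np.array([[0, 0, 1], [0, 1, 0], [-1, 0, 0]])   # stand-in for t_ba
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]])   # stand-in for t_cb

# Composing C->B, then B->A, then A->O collapses to one direct C->O matrix.
R_co = Rx @ Ry @ Rz

p_c = np.array([1.0, 1.0, 1.0])
step_by_step = Rx @ (Ry @ (Rz @ p_c))   # apply the three transforms in turn
direct = R_co @ p_c                      # apply the composed transform once
```

Both routes give the same vector, which is the point of precomposing: one matrix-vector product instead of three.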
When we create the new transform, it gets added to the network, creating shortcuts. If we draw the network again, we can see the new links.
G = make_graph([origin, a, b, c])
nx.draw(G, labels=labels)
frame_example.ipynb
AllenDowney/ProbablyOverthinkingIt
mit
And if we find the shortest path again, it's shorter now.
nx.shortest_path(G, c, origin)
frame_example.ipynb
AllenDowney/ProbablyOverthinkingIt
mit
We can also compute an inverse transform that goes in the other direction.
inv = cbao.inverse()
render(inv)
frame_example.ipynb
AllenDowney/ProbablyOverthinkingIt
mit
And confirm that it gets us back where we started.
p_c = inv(p)
render(p_c)
frame_example.ipynb
AllenDowney/ProbablyOverthinkingIt
mit
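For pure rotations the inverse is cheap: an orthonormal rotation matrix's inverse is its transpose. A quick standalone check of that identity (again with a plain matrix, not the notebook's Transform class):

```python
import numpy as np

Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]])   # 90 deg about z

p = np.array([1.0, 2.0, 3.0])
q = Rz @ p          # forward transform
p_back = Rz.T @ q   # inverse: the transpose undoes the rotation
```

The round trip recovers the original vector exactly, with no matrix inversion needed.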
Version 1: The Obvious
pair_sum_eq = lambda n, start=0: ((i, n-i) for i in range(start, (n >> 1) + 1))

list(pair_sum_eq(21, 5))
problem-9-special-pythagorean-triplet.ipynb
ltiao/project-euler
unlicense
Note that $3a < a + b + c = 1000$, so $a < \frac{1000}{3} \Leftrightarrow a \leq \lfloor \frac{1000}{3} \rfloor = 333$ so $1 \leq a \leq 333$. Therefore, we need only iterate up to 333 in the outermost loop. Now, $b + c = 1000 - a$, so $667 \leq b + c \leq 999$, so we look at all pairs $333 \leq b < c$ such that $b + c...
from functools import reduce

def pythagorean_triplet_sum_eq(n):
    for a in range(1, n//3 + 1):
        for b, c in pair_sum_eq(n - a, start=n//3):
            if a*a + b*b == c*c:
                yield a, b, c

list(pythagorean_triplet_sum_eq(1000))

prod = lambda iterable: reduce(lambda x, y: x*y, iterable)
prod(next(pythagorean_triplet_sum_eq(1000)))
problem-9-special-pythagorean-triplet.ipynb
ltiao/project-euler
unlicense
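As a sanity check, a naive brute force over all pairs finds the same triple, which the problem statement says is unique (the expected values assume the well-known answer to this problem):

```python
def find_triplet(total):
    # Naive search: for each a < b, let c = total - a - b and test the
    # Pythagorean condition directly.
    for a in range(1, total):
        for b in range(a + 1, total - a):
            c = total - a - b
            if a * a + b * b == c * c:
                return a, b, c

a, b, c = find_triplet(1000)
print(a, b, c, a * b * c)
```

The bounded version above does the same work with far fewer iterations, which is the payoff of the inequality reasoning.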
Version 2: Euclid's Formula
# TODO
problem-9-special-pythagorean-triplet.ipynb
ltiao/project-euler
unlicense
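One way the TODO might be filled in (a sketch, not necessarily the author's intended version): Euclid's formula generates triples as a = k(m² − n²), b = 2kmn, c = k(m² + n²) for m > n ≥ 1, and their sum simplifies to 2km(m + n), so we only need to search for m, n such that 2m(m + n) divides the target:

```python
def euclid_triplet_sum_eq(target):
    m = 2
    while 2 * m * (m + 1) <= target:       # smallest possible sum for this m
        for n in range(1, m):
            s = 2 * m * (m + n)            # sum of the k = 1 triple
            if target % s == 0:
                k = target // s
                a = k * (m * m - n * n)
                b = 2 * k * m * n
                c = k * (m * m + n * n)
                return tuple(sorted((a, b, c)))
        m += 1

print(euclid_triplet_sum_eq(1000))
```

For target 1000 the first hit is m = 4, n = 1, k = 25, which recovers the same triple as Version 1 while examining only a handful of (m, n) pairs.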
Make the notebook reproducible.
np.random.seed(3424)
v0.12.2/examples/notebooks/generated/mediation_survival.ipynb
statsmodels/statsmodels.github.io
bsd-3-clause
Specify a sample size.
n = 1000
v0.12.2/examples/notebooks/generated/mediation_survival.ipynb
statsmodels/statsmodels.github.io
bsd-3-clause
Generate an exposure variable.
exp = np.random.normal(size=n)
v0.12.2/examples/notebooks/generated/mediation_survival.ipynb
statsmodels/statsmodels.github.io
bsd-3-clause
Generate a mediator variable.
def gen_mediator():
    mn = np.exp(exp)
    mtime0 = -mn * np.log(np.random.uniform(size=n))
    ctime = -2 * mn * np.log(np.random.uniform(size=n))
    mstatus = (ctime >= mtime0).astype(int)
    mtime = np.where(mtime0 <= ctime, mtime0, ctime)
    return mtime0, mtime, mstatus
v0.12.2/examples/notebooks/generated/mediation_survival.ipynb
statsmodels/statsmodels.github.io
bsd-3-clause
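The mtime0 and ctime draws above use inverse transform sampling: if U is Uniform(0, 1), then −mean·log(U) is exponentially distributed with that mean. A quick standalone check of the identity (a sketch; the seed and sample size here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
mean = 3.0
u = rng.uniform(size=200_000)
draws = -mean * np.log(u)    # Exponential(mean) via the inverse CDF

print(draws.mean())          # close to 3.0 for a large sample
```

The sample mean lands near the target mean, confirming that the −mean·log(U) trick produces exponential event times.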