| markdown | code | path | repo_name | license |
|---|---|---|---|---|
We could now simply type points, and Bokeh will attempt to display this data as a standard Bokeh plot. Before doing that, however, remember that we have 12 million rows of data, and no current plotting program will handle this well! Instead of letting Bokeh see this data, let's convert it to something far more tractabl... | %opts RGB [width=600 height=500 bgcolor="black"]
datashade(points) | notebooks/07-working-with-large-datasets.ipynb | ioam/scipy-2017-holoviews-tutorial | bsd-3-clause |
If you zoom in you will note that the plot rerenders depending on the zoom level, which allows the full dataset to be explored interactively even though only an image of it is ever sent to the browser. The way this works is that datashade is a dynamic operation that also declares some linked streams. These linked stre... | datashade.streams
# Exercise: Plot the taxi pickup locations ('pickup_x' and 'pickup_y' columns)
# Warning: Don't try to display hv.Points() directly; it's too big! Use datashade() for any display
# Optional: Change the cmap on the datashade operation to inferno
from datashader.colors import inferno
| notebooks/07-working-with-large-datasets.ipynb | ioam/scipy-2017-holoviews-tutorial | bsd-3-clause |
Adding a tile source
Using the GeoViews (geographic) extension for HoloViews, we can display a map in the background. Just declare a Bokeh WMTSTileSource and pass it to the gv.WMTS Element, then we can overlay it: | %opts RGB [xaxis=None yaxis=None]
import geoviews as gv
from bokeh.models import WMTSTileSource
url = 'https://server.arcgisonline.com/ArcGIS/rest/services/World_Imagery/MapServer/tile/{Z}/{Y}/{X}.jpg'
wmts = WMTSTileSource(url=url)
gv.WMTS(wmts) * datashade(points)
# Exercise: Overlay the taxi pickup data on top of t... | notebooks/07-working-with-large-datasets.ipynb | ioam/scipy-2017-holoviews-tutorial | bsd-3-clause |
Aggregating with a variable
So far we have simply been counting taxi dropoffs, but our dataset is much richer than that. We have information about a number of variables including the total cost of a taxi ride, the total_amount. Datashader provides a number of aggregator functions, which you can supply to the datashade ... | selected = points.select(total_amount=(None, 1000))
selected.data = selected.data.persist()
gv.WMTS(wmts) * datashade(selected, aggregator=ds.mean('total_amount'))
# Exercise: Use the ds.min or ds.max aggregator to visualize ``tip_amount`` by dropoff location
# Optional: Eliminate outliers by using select
| notebooks/07-working-with-large-datasets.ipynb | ioam/scipy-2017-holoviews-tutorial | bsd-3-clause |
Grouping by a variable
Because datashading happens only just before visualization, you can use any of the techniques shown in previous sections to select, filter, or group your data before visualizing it, such as grouping it by the hour of day: | %opts Image [width=600 height=500 logz=True xaxis=None yaxis=None]
taxi_ds = hv.Dataset(ddf)
grouped = taxi_ds.to(hv.Points, ['dropoff_x', 'dropoff_y'], groupby=['hour'], dynamic=True)
aggregate(grouped).redim.values(hour=range(24))
# Exercise: Facet the trips in the morning hours as an NdLayout using aggregate(groupe... | notebooks/07-working-with-large-datasets.ipynb | ioam/scipy-2017-holoviews-tutorial | bsd-3-clause |
Additional features
The actual points are never given directly to Bokeh, and so the normal Bokeh hover (and other) tools will not normally be useful with Datashader output. However, we can easily overlay an invisible QuadMesh to reveal information on hover, providing information about values in a local area while still... | %%opts QuadMesh [width=800 height=400 tools=['hover']] (alpha=0 hover_line_alpha=1 hover_fill_alpha=0)
hover_info = aggregate(points, width=40, height=20, streams=[hv.streams.RangeXY]).map(hv.QuadMesh, hv.Image)
gv.WMTS(wmts) * datashade(points) * hover_info | notebooks/07-working-with-large-datasets.ipynb | ioam/scipy-2017-holoviews-tutorial | bsd-3-clause |
Given the following LP
$\begin{gather}
\min\quad -x_1 - 4x_2\\
\begin{aligned}
\text{s.t.}\quad
2x_1 - x_2 &\geq 0\\
x_1 - 3x_2 &\leq 0 \\
x_1 + x_2 &\leq 4 \\
x_1, x_2 &\geq 0
\end{aligned}
\end{gather}$ | x = np.linspace(0, 4, 100)
y1 = 2*x
y2 = x/3
y3 = 4 - x
plt.figure(figsize=(8, 6))
plt.plot(x, y1)
plt.plot(x, y2)
plt.plot(x, y3)
plt.xlim((0, 3.5))
plt.ylim((0, 4))
plt.xlabel('x1')
plt.ylabel('x2')
y5 = np.minimum(y1, y3)
plt.fill_between(x[:-25], y2[:-25], y5[:-25], color='red', alpha=0.5)
| Interior_Point_Method_Example.ipynb | jomavera/Work | mit |
LP in standard form
$\begin{gather}
\min \quad -x_1 - 4x_2\\
\begin{aligned}
\text{s.t.}\quad
2x_1 - x_2 - x_3 &= 0\\
x_1 - 3x_2 + x_4 &= 0 \\
x_1 + x_2 + x_5 &= 4 \\
x_1, x_2, x_3, x_4, x_5 &\geq 0
\end{aligned}
\end{gather}$
We see that ($x_1,x_2$) = (1,1) is an interior point, so we choose it as the initial point $x_0$
x_0=$\be... | x = np.linspace(0, 4, 100)
y1 = 2*x
y2 = x/3
y3 = 4 - x
plt.figure(figsize=(8, 6))
plt.plot(x, y1)
plt.plot(x, y2)
plt.plot(x, y3)
plt.xlim((0, 3.5))
plt.ylim((0, 4))
plt.xlabel('x1')
plt.ylabel('x2')
y5 = np.minimum(y1, y3)
plt.fill_between(x[:-25], y2[:-25], y5[:-25], color='red', alpha=0.5)
plt.scatter([1],[1],... | Interior_Point_Method_Example.ipynb | jomavera/Work | mit |
Iteration 1 | mu = 100
gamma = 0.8
A = np.array([[2,-1,-1,0,0],[1,-3,0,1,0],[1,1,0,0,1]])
X = np.array([[1,0,0,0,0],[0,1,0,0,0],[0,0,1,0,0],[0,0,0,2,0],[0,0,0,0,2]])
vector_1 = np.ones((5,1))
c = np.array([[-1],[-4],[0],[0],[0]])
x_0 = np.array([[1],[1],[1],[2],[2]]) # Initial point
#RESOLVER ECUACI... | Interior_Point_Method_Example.ipynb | jomavera/Work | mit |
Iteration 2 | mu = mu*gamma
X = np.array([[x_1[0,0],0,0,0,0],[0,x_1[1,0],0,0,0],
[0,0,x_1[2,0],0,0],[0,0,0,x_1[3,0],0],[0,0,0,0,x_1[4,0]]])
# SOLVE EQUATION 4
#------ Left-hand side
izq_ec_4 = np.matmul( A, np.matmul( np.power(X,2), A.T ) )
#------ Right-hand side
# -mu*A*X*1 ... | Interior_Point_Method_Example.ipynb | jomavera/Work | mit |
Now let's write a function to run $n$ iterations | mu = 100
gamma = 0.8
A = np.array([[2,-1,-1,0,0],[1,-3,0,1,0],[1,1,0,0,1]])
vector_1 = np.ones((5,1))
c = np.array([[-1],[-4],[0],[0],[0]])
x = np.array([[1],[1],[1],[2],[2]]) # Initial point
x1s = [] # Empty list to store x_1 values
x2s = [] # Empty list to store x_2 values
x1s.a... | Interior_Point_Method_Example.ipynb | jomavera/Work | mit |
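The truncated cells above set up and repeatedly solve "equation 4" of the log-barrier interior-point method. As a rough, self-contained sketch of what one such update might look like — assuming the standard primal log-barrier Newton step; the function name `barrier_newton_step` and the damping rule are illustrative choices, not the notebook's own code:

```python
import numpy as np

def barrier_newton_step(A, x, c, mu):
    """One damped Newton step for min c^T x - mu * sum(log x) s.t. A x = b.

    Solves (A X^2 A^T) w = A X^2 c - mu * A X 1 for the dual estimate w,
    then moves along the projected Newton direction d (which satisfies A d = 0).
    """
    X = np.diag(x.ravel())
    ones = np.ones((x.size, 1))
    lhs = A @ X @ X @ A.T                      # left-hand side: A X^2 A^T
    rhs = A @ X @ X @ c - mu * (A @ X @ ones)  # right-hand side: A X^2 c - mu A X 1
    w = np.linalg.solve(lhs, rhs)
    d = -(X @ X @ (c - A.T @ w)) / mu + X @ ones
    # Damp the step so that x stays strictly positive
    alpha = 1.0
    while np.any(x + alpha * d <= 0):
        alpha *= 0.5
    return x + 0.9 * alpha * d

A = np.array([[2., -1, -1, 0, 0], [1, -3, 0, 1, 0], [1, 1, 0, 0, 1]])
c = np.array([[-1.], [-4], [0], [0], [0]])
x0 = np.array([[1.], [1], [1], [2], [2]])
x1 = barrier_newton_step(A, x0, c, mu=100)
```

Because the direction satisfies $Ad = 0$ by construction, every iterate stays feasible for the equality constraints, and the damping keeps it strictly positive.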
Explain what the cell below will produce and why. Can you change it so the answer is correct? | 2.0/3 | P01Objects and Data Structures Assessment Test.ipynb | jArumugam/python-notes | mit |
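For reference — assuming this notebook targets Python 2, where `/` between two ints is floor division, so `2/3` evaluates to `0` while `2.0/3` gives the expected fraction — here is a quick check written in Python 3 syntax, where `/` is always true division and `//` is floor division:

```python
# In Python 3, "/" is true division and "//" is floor division.
# In Python 2, 2/3 would evaluate to 0, which is why the cell uses 2.0/3.
print(2.0 / 3)   # 0.6666666666666666
print(2 // 3)    # 0
```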
Answer these 3 questions without typing code. Then type code to check your answer.
What is the value of the expression 4 * (6 + 5)
What is the value of the expression 4 * 6 + 5
What is the value of the expression 4 + 6 * 5 | print 4*(6+5)
print 4*6+5
print 4+6*5
print 3+1.5+4 | P01Objects and Data Structures Assessment Test.ipynb | jArumugam/python-notes | mit |
What is the type of the result of the expression 3 + 1.5 + 4?
What would you use to find a number’s square root, as well as its square? | print 2**(0.5) | P01Objects and Data Structures Assessment Test.ipynb | jArumugam/python-notes | mit |
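A sketch of possible answers — exponentiation with `**` covers both the root and the square, and the standard library's `math.sqrt` is an alternative for the root:

```python
import math

n = 9
print(n ** 0.5)       # square root via exponentiation: 3.0
print(math.sqrt(n))   # square root via the math module: 3.0
print(n ** 2)         # square: 81
```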
Strings
Given the string 'hello' give an index command that returns 'e'. Use the code below: | s = 'hello'
# Print out 'e' using indexing
print s[1]
# Code here | P01Objects and Data Structures Assessment Test.ipynb | jArumugam/python-notes | mit |
Reverse the string 'hello' using indexing: | s ='hello'
# Reverse the string using indexing
print s[::-1]
print s[:3:-1]
# Code here | P01Objects and Data Structures Assessment Test.ipynb | jArumugam/python-notes | mit |
Given the string hello, give two methods of producing the letter 'o' using indexing. | s ='hello'
# Print out the
print s[4]
print s[-1]
# Code here | P01Objects and Data Structures Assessment Test.ipynb | jArumugam/python-notes | mit |
Lists
Build this list [0,0,0] two separate ways. | a = list([0,0,0])
print a
a = list([0,0])
print a
a.append(0)
print a | P01Objects and Data Structures Assessment Test.ipynb | jArumugam/python-notes | mit |
Reassign the 'hello' item in this nested list to say 'goodbye': | l = [1,2,[3,4,'hello']]
l[2][2] = 'goodbye'
print l | P01Objects and Data Structures Assessment Test.ipynb | jArumugam/python-notes | mit |
Sort the list below: | l = [3,4,5,5,6,1]
print l
l.sort()
print l
l = [3,4,5,5,6,1]
print sorted(l)
print l | P01Objects and Data Structures Assessment Test.ipynb | jArumugam/python-notes | mit |
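The two cells illustrate an important difference: `list.sort()` sorts in place and returns `None`, while `sorted()` returns a new list and leaves the original untouched. A compact demonstration (in Python 3 syntax):

```python
l = [3, 4, 5, 5, 6, 1]
result = l.sort()      # sorts in place and returns None
print(result)          # None
print(l)               # [1, 3, 4, 5, 5, 6]

l2 = [3, 4, 5, 5, 6, 1]
print(sorted(l2))      # a new sorted list: [1, 3, 4, 5, 5, 6]
print(l2)              # the original is untouched: [3, 4, 5, 5, 6, 1]
```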
Dictionaries
Using keys and indexing, grab the 'hello' from the following dictionaries: | d = {'simple_key':'hello'}
# Grab 'hello'
print d['simple_key']
d = {'k1':{'k2':'hello'}}
# Grab 'hello'
print d['k1']['k2']
# Getting a little tricker
d = {'k1':[{'nest_key':['this is deep',['hello']]}]}
# Grab hello
print d['k1'][0]['nest_key'][1][0]
# This will be hard and annoying!
d = {'k1':[1,2,{'k2':['this i... | P01Objects and Data Structures Assessment Test.ipynb | jArumugam/python-notes | mit |
Can you sort a dictionary? Why or why not?
A dictionary is a mapping, not a sequence, so it cannot be sorted in place; you can, however, sort its keys or items with sorted(d) or sorted(d.items()).
Tuples
What is the major difference between tuples and lists?
How do you create a tuple?
Sets
What is unique about a set?
Use a set to find the unique values of the list below: | l = [1,2,2,33,4,4,11,22,3,3,2]
set(l) | P01Objects and Data Structures Assessment Test.ipynb | jArumugam/python-notes | mit |
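As a sketch of answers to the questions above: a tuple is created with parentheses (or just commas) and, unlike a list, is immutable; a set keeps only unique elements (written in Python 3 syntax):

```python
t = (1, 2, 3)          # a tuple is created with parentheses (or just commas)
try:
    t[0] = 99          # tuples are immutable, so this raises TypeError
except TypeError as e:
    print("tuples are immutable:", e)

l = [1, 2, 2, 33, 4, 4, 11, 22, 3, 3, 2]
print(set(l))          # a set keeps only the unique values
```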
Step 1: provide generic data submission related information
Type of submission
please specify the type of this data submission:
- "initial_version" for first submission of data
- "new _version" for a re-submission of previousliy submitted data
- "retract" for the request to retract previously submitted data | form.submission_type = "init" # example: sf.submission_type = "initial_version" | dkrz_forms/Templates/CMIP6_submission_form.ipynb | IS-ENES-Data/submission_forms | apache-2.0 |
CMOR compliance
please provide information on the software and tools you used to make sure your data is CMIP6 CMOR3 compliant | form.cmor = '..' ## options: 'CMOR', 'CDO-CMOR', etc.
form.cmor_compliance_checks = '..' ## please name the tool you used to check your files with respect to CMIP6 compliance
## 'PREPARE' for the CMOR PREPARE checker and "DKRZ" for the DKRZ tool. | dkrz_forms/Templates/CMIP6_submission_form.ipynb | IS-ENES-Data/submission_forms | apache-2.0 |
Documentation availability
please provide information with respect to availability of es-doc model documentation
in case this form addresses a new version replacing older versions: provide info on the availability of errata information, especially errata provided via the CMIP6 errata web fronten... | form.es_doc = " .. " # 'yes' related esdoc model information is/will be available, 'no' otherwise
form.errata = " .. " # 'yes' if errata information was provided based on the CMIP6 errata mechanism
# fill the following info only in case this form refers to new versions of already published ESGF data
form.errat... | dkrz_forms/Templates/CMIP6_submission_form.ipynb | IS-ENES-Data/submission_forms | apache-2.0 |
Uniqueness of tracking_id and creation_date
Do all your files have unique tracking_ids assigned in the structure required by CMIP6?
In case any of your files is replacing a file already published, it must not have the same tracking_id nor the same creation_date as the file it replaces.
Did you make sure that this i... | form.uniqueness_of_tracking_id = "..." # example: form.uniqueness_of_tracking_id = "yes"
Generic content characterization based on CMIP6 directory structure
Please name the respective directory names characterizing your submission:
- all files within the specified directory pattern are subject to ESGF publication
CMIP6 directory structure:
<pre><code>
<CMIP6>/<activity_id>/<institutio... | form.data_dir_1 = " ... "
# uncomment for additional entries ...
# form.data_dir_2 = " ... "
# form.data_dir_3 = " ... "
# ... | dkrz_forms/Templates/CMIP6_submission_form.ipynb | IS-ENES-Data/submission_forms | apache-2.0 |
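As an illustration of how such a directory path is assembled from its facets — the component values below are purely hypothetical, and the facet order follows a common CMIP6 layout rather than the (truncated) template above:

```python
import os.path as op

# Hypothetical facet values -- purely illustrative, not from a real submission
facets = ["CMIP6", "CMIP", "MPI-M", "MPI-ESM1-2-LR", "historical",
          "r1i1p1f1", "Amon", "tas", "gn", "v20190710"]
data_dir = op.join(*facets)
print(data_dir)
```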
Provide specific additional information for this submission
variables, grid, calendar, ...
example file name
.. what do we need ..? | form.time_period = "..." # example: sf.time_period = "197901-201412"
# ["time_period_a","time_period_b"] in case of multiple values
form.grid = ".." | dkrz_forms/Templates/CMIP6_submission_form.ipynb | IS-ENES-Data/submission_forms | apache-2.0 |
Exclude variable list
In each CMIP6 file there may be only one variable which shall be published and searchable at the ESGF portal (target variable). In order to facilitate publication, all non-target variables are included in a list used by the publisher to avoid publication. A list of known non-target variables is [t... | form.exclude_variables_list = "..." # example: sf.exclude_variables_list=["bnds", "vertices"] | dkrz_forms/Templates/CMIP6_submission_form.ipynb | IS-ENES-Data/submission_forms | apache-2.0 |
CMIP6 terms of use
please note explicitly that you agree to the CMIP6 terms of use | form.terms_of_use = "..." # has to be "ok" | dkrz_forms/Templates/CMIP6_submission_form.ipynb | IS-ENES-Data/submission_forms | apache-2.0 |
Step 2: provide information on the data handover mechanism
please provide the following information (and other information needed for data transport and data publication) | form.data_path = "..." # example: sf.data_path = "mistral.dkrz.de:/mnt/lustre01/work/bm0021/k204016/CORDEX/archive/"
form.data_information = "..." # ...any info where data can be accessed and transfered to the data center ... " | dkrz_forms/Templates/CMIP6_submission_form.ipynb | IS-ENES-Data/submission_forms | apache-2.0 |
Example file name
Please provide an example file name of a file in your data collection,
this name will be used to derive the others | form.example_file_name = "..." # example: sf.example_file_name = "tas_AFR-44_MPI-M-MPI-ESM-LR_rcp26_r1i1p1_MPI-CSC-REMO2009_v1_mon_yyyymm-yyyymm.nc"
Step 3: Check your submission before submission | # simple consistency check report for your submission form - not completed
report = checks.check_report(sf,"sub")
checks.display_report(report) | dkrz_forms/Templates/CMIP6_submission_form.ipynb | IS-ENES-Data/submission_forms | apache-2.0 |
Step 4: Save and review your form
your form will be stored (the form name consists of your last name plus your keyword) | form_handler.save_form(sf,"any comment you want") # add a comment
# evaluate this cell if you want a reference (provided by email)
# (only available if you access this form via the DKRZ hosting service)
form_handler.email_form_info(sf) | dkrz_forms/Templates/CMIP6_submission_form.ipynb | IS-ENES-Data/submission_forms | apache-2.0 |
Step 5: officially submit your form
the form will be submitted to the DKRZ team for processing
you also receive a confirmation email with a reference to your online form for future modifications | #form_handler.email_form_info(sf)
form_handler.form_submission(sf) | dkrz_forms/Templates/CMIP6_submission_form.ipynb | IS-ENES-Data/submission_forms | apache-2.0 |
As you can see, the tag we extracted from the page is nested in the second layer of the BeautifulSoup object. However, when we extract the h1 tag from the object, we can call it directly:
bsObj.h1
In fact, all of the following calls produce the same result:
bsObj.html.body.h1
bsObj.body.h1
bsObj.html.h1
Handling network connection exceptions
Let's look at the second line of the scraper:
python
html = urlopen("http://www.pythonscraping.com/exercises/exercise1.html")
This line of code can raise two kinds of exceptions:
The page does not exist on the server (or an error occurred while retrieving it)
The server does not exist
When the first kind of exception occurs, urlop... | from urllib.request import urlopen
from urllib.error import HTTPError
from bs4 import BeautifulSoup
import sys
def getTitle(url):
    try:
        html = urlopen(url)
    except HTTPError as e:
        print(e)
        return None
    try:
        bsObj = BeautifulSoup(html.read(), "html.parser")
        title = bsObj... | Python_Spider_Tutorial_05.ipynb | yttty/python3-scraper-tutorial | gpl-3.0 |
When writing a scraper, it is important to think about the overall structure of your code, so that it both catches exceptions and stays easy to read. Having a general-purpose function like getTitle (with thorough exception handling) makes fast, reliable web data collection simple.
Parsing complex HTML
Basically, every website uses Cascading Style Sheets (CSS). CSS lets HTML elements be styled differently, so that elements with otherwise identical markup can appear in distinct styles. For example, some tags look like this:
css
<span class="green"></span>
while other tags look like this:
css
<span class="red"></span>
A scraper can easily tell these two kinds of tags apart by the value of their class attribute; for example, you can use Beautifu... | from urllib.request import urlopen
from bs4 import BeautifulSoup
html = urlopen("http://www.pythonscraping.com/pages/warandpeace.html")
bsObj = BeautifulSoup(html, "html.parser")
nameList = bsObj.findAll("span", {"class":"green"})
for name in nameList:
    print(name.get_text()) | Python_Spider_Tutorial_05.ipynb | yttty/python3-scraper-tutorial | gpl-3.0 |
find() and findAll()
The documentation defines them as follows:
BeautifulSoup.find(self, name=None, attrs={}, recursive=True, text=None, **kwargs)
BeautifulSoup.findAll(self, name=None, attrs={}, recursive=True, text=None, limit=None, **kwargs)
Let's look at how they are used through a few examples: | bsObj.find({'span'})
bsObj.findAll({'span'})
bsObj.find({'h1','h2'})
bsObj.findAll({'h1','h2'}) | Python_Spider_Tutorial_05.ipynb | yttty/python3-scraper-tutorial | gpl-3.0 |
As you can see, find() returns only a single match, while findAll() returns every matching element.
In addition, the keyword arguments can select tags that have a specified attribute (the same functionality as the findAll call shown above; this overlap is a deliberate redundancy in the API). | allText = bsObj.findAll(id='text')
print(allText[0].get_text()) | Python_Spider_Tutorial_05.ipynb | yttty/python3-scraper-tutorial | gpl-3.0 |
Working with HTML tags
findAll finds tags by their name and attributes. If you need to find tags by their position in the document, use the navigation below to collect tags vertically through the tree: | html = urlopen("http://www.pythonscraping.com/pages/page3.html")
bsObj = BeautifulSoup(html, "html.parser")
# child tags
for child in bsObj.find("table",{"id":"giftList"}).children:
    print(child)
# sibling tags
for sibling in bsObj.find("table",{"id":"giftList"}).tr.next_siblings:
    print(sibling)
# parent tag
print(... | Python_Spider_Tutorial_05.ipynb | yttty/python3-scraper-tutorial | gpl-3.0 |
Using regular expressions in BeautifulSoup
Of course, bs4 supports regular expressions. For examples of their use in bs4, please refer to the documentation; we won't go into detail here and just give one example: | from urllib.request import urlopen
from bs4 import BeautifulSoup
import re
html = urlopen("http://www.pythonscraping.com/pages/page3.html")
bsObj = BeautifulSoup(html, "html.parser")
images = bsObj.findAll("img", {"src":re.compile("\.\.\/img\/gifts/img.*\.jpg")})
for image in images:
    print(image["src"]) | Python_Spider_Tutorial_05.ipynb | yttty/python3-scraper-tutorial | gpl-3.0 |
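The regular expression passed to findAll can be tested on its own with the re module (using a raw string avoids escape-sequence issues); the file names below are made up for illustration:

```python
import re

# Raw-string version of the pattern used above; test file names are made up
pattern = re.compile(r"\.\./img/gifts/img.*\.jpg")
print(bool(pattern.match("../img/gifts/img1.jpg")))   # True
print(bool(pattern.match("../img/gifts/logo.png")))   # False
```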
Make a histogram of the distribution of planetary masses. This will reproduce Figure 2 in the original paper.
Customize your plot to follow Tufte's principles of visualizations.
Customize the box, grid, spines and ticks to match the requirements of this data.
Pick the number of bins for the histogram appropriately. | masses = data[:,2][np.logical_not(np.isnan(data[:,2]))]
masses.max() #does not equal ~ 15...
plt.hist(masses,bins = 60)
plt.show()
import pandas as pd
pd.DataFrame(data)
assert True # leave for grading | assignments/assignment04/MatplotlibEx02.ipynb | LimeeZ/phys292-2015-work | mit |
Make a scatter plot of the orbital eccentricity (y) versus the semimajor axis. This will reproduce Figure 4 of the original paper. Use a log scale on the x axis.
Customize your plot to follow Tufte's principles of visualizations.
Customize the box, grid, spines and ticks to match the requirements of this data. | y = data[:,6]
x = data[:,5]
x=np.log(x)
plt.scatter(x, y, label = 'Orbital Eccentricity vs. Semimajor Axis', c = u'k', marker = u'o')
plt.title('Orbital Eccentricity vs. Semimajor Axis')
plt.box(False)
plt.xlabel('Semimajor Axis')
plt.ylabel('Orbital Eccentricity (AU)')
plt.xlim(0, 1.5)
plt.ylim(0, 1.0)
plt.xticks([0.... | assignments/assignment04/MatplotlibEx02.ipynb | LimeeZ/phys292-2015-work | mit |
Part 1: Experimental design, pattern estimation, and data representation
Before you can do any fancy machine learning analysis (or any other pattern analysis), there are several decisions you need to make and steps to take in (pre)processing and structuring your data. Roughly, there are three steps to take:
Design you... | # First, we need to import some Python packages
import numpy as np
import pandas as pd
import os.path as op
import warnings
import matplotlib.pyplot as plt
plt.style.use('classic')
warnings.filterwarnings("ignore")
%matplotlib inline
# The onset times are loaded as pandas dataframe with three columns:
# onset times (... | tutorial/ICON2017_tutorial.ipynb | lukassnoek/ICON2017 | mit |
Remember, the onsets (and duration) are here defined in seconds (not TRs). Let's assume that the fMRI-run has a TR of 2. Now, we can convert (very easily!) the onsets/durations-in-seconds to onsets/durations-in-TRs. | # We repeat loading in the dataframe to avoid dividing the onsets by 2 multiple times ...
stim_info = pd.read_csv(op.join('example_data', 'onsets.csv'), sep='\t',
names=['onset', 'duration', 'trial_type'])
# This is where we divide the onsets/durations by the TR
stim_info[['onset', 'duration']]... | tutorial/ICON2017_tutorial.ipynb | lukassnoek/ICON2017 | mit |
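The same seconds-to-TR conversion can be tried on a tiny fabricated dataframe (the onset values below are made up, not the tutorial's data):

```python
import pandas as pd

TR = 2.0  # repetition time in seconds
stim_info = pd.DataFrame({'onset': [0.0, 6.0, 13.0],
                          'duration': [2.0, 2.0, 2.0],
                          'trial_type': ['active', 'passive', 'active']})
# Convert seconds to TR units by dividing by the TR
stim_info[['onset', 'duration']] = stim_info[['onset', 'duration']] / TR
print(stim_info['onset'].tolist())     # [0.0, 3.0, 6.5]
```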
To perform the first-level analysis, for each regressor (trial) we need to create a regressor of zeros and ones, in which the ones represent the moments in which the particular trial was presented. Let's assume that our moment of interest is the encoding phase, which lasts only 2 seconds; we thus can model it as an "im... | # ToDo
n_timepoints = 162
n_trials = 40
stim_vec = np.zeros((n_timepoints, n_trials))
# Fill the stim_vec variable with ones at the indices of the onsets per trial!
| tutorial/ICON2017_tutorial.ipynb | lukassnoek/ICON2017 | mit |
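To see the idea without spoiling the exercise, here is the same construction on fabricated onsets (a toy illustration, not the tutorial's data):

```python
import numpy as np

n_timepoints, n_trials = 10, 2
onsets_in_trs = [2, 7]           # fabricated onsets (in TRs), one per trial

stim_vec = np.zeros((n_timepoints, n_trials))
for trial, onset in enumerate(onsets_in_trs):
    stim_vec[onset, trial] = 1   # a single 1 at each trial's onset

print(stim_vec[:, 0].astype(int).tolist())   # [0, 0, 1, 0, 0, 0, 0, 0, 0, 0]
```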
Check your ToDo
Run the cell below to check whether you implemented the ToDo correctly! If it gives no errors, your implementation is correct! | np.testing.assert_array_equal(stim_vec, np.load(op.join('example_data', 'stim_vec.npy')))
print("Well done!") | tutorial/ICON2017_tutorial.ipynb | lukassnoek/ICON2017 | mit |
Skip this ToDo?
If you want to skip this ToDo, load the correct stim_vec variable below, because we'll need that for the rest of the tutorial! | stim_vec = np.load(op.join('example_data', 'stim_vec.npy')) | tutorial/ICON2017_tutorial.ipynb | lukassnoek/ICON2017 | mit |
Visualizing the design
Now, we only need to convolve an HRF with the stimulus-vectors (using the numpy function convolve()) and we'll have a complete single-trial design!* Don't worry, we do this for you. We'll also plot it to see how it looks (blue = active trials, red = passive trials): | from functions import double_gamma
hrf = double_gamma(np.arange(162))
# List comprehension (fancy for-loop) + stack results back to a matrix
X = np.vstack([np.convolve(hrf, stim_vec[:, i], 'full')[:162] for i in range(40)]).T
plt.figure(figsize=(30, 10))
for plot in range(40):
    is_active = True if stim_info['tria... | tutorial/ICON2017_tutorial.ipynb | lukassnoek/ICON2017 | mit |
* Note: This specific data-set does not have a proper design for a machine-learning analysis on single-trial patterns, because the short ISI leads to temporal autocorrelation between subsequent trials and may thus result in inflated accuracy scores (cf. Mumford et al., 2014). It is generally recommended to use a long I... | from niwidgets import NiWidget
tstat_widget = NiWidget(op.join('..', 'data', 'pi0070', 'wm.feat', 'stats', 'tstat1.nii.gz'))
tstat_widget.nifti_plotter() | tutorial/ICON2017_tutorial.ipynb | lukassnoek/ICON2017 | mit |
1.4.1 Data representation
In pattern analyses, there is a specific way to 'store' and represent brain patterns: as 2D matrices of shape N-samples $\times$ K-voxels. Important: often (and confusingly), people refer to voxels as (brain) 'features' in pattern analyses. So in articles people often refer to samples-by-featu... | from glob import glob | tutorial/ICON2017_tutorial.ipynb | lukassnoek/ICON2017 | mit |
As you can see, it returns a list with all the files/directories that matched the search-string. Note that you can also search files outside of the current directory. To do so, we can simply specify the relative or absolute path to it.
<div class='alert alert-warning'>
**ToDo**: Now you have the skills to actually "glo... | # Implement your ToDo here
| tutorial/ICON2017_tutorial.ipynb | lukassnoek/ICON2017 | mit |
Check your ToDo!
Run the cell below. | # To check your answer, run this cell
assert(len(tstat_paths) == 40)
print("Well done! You globbed all the 40 tstat-files correctly!") | tutorial/ICON2017_tutorial.ipynb | lukassnoek/ICON2017 | mit |
Skip this ToDo?
To skip this ToDo, run the cell below. | from functions import glob_tstats
tstat_paths = glob_tstats() | tutorial/ICON2017_tutorial.ipynb | lukassnoek/ICON2017 | mit |
Warning: glob returns unsorted paths (so in seemingly random order). It's better if we sort the paths before loading them in, so the order of the paths is more intuitive (the first file is tstat1, the seconds tstat2, etc.). Python has a builtin function sorted(), which takes a list and sorts it alphabetically. The prob... | print(sorted(tstat_paths)) | tutorial/ICON2017_tutorial.ipynb | lukassnoek/ICON2017 | mit |
To fix this issue, we wrote a little function (sort_nifti_paths()) that sorts the paths correctly. (If you're interested in how it works, check out the functions.py file.) | # Let's fix it
from functions import sort_nifti_paths
tstat_paths = sort_nifti_paths(tstat_paths)
print(tstat_paths) | tutorial/ICON2017_tutorial.ipynb | lukassnoek/ICON2017 | mit |
1.4.3. Loading in nifti-data using nibabel
Now, we have the paths to all the nifti-files of a single subject. To load a nifti-file into a numpy-array in Python, we can use the awesome nibabel package. This package has two useful methods you can use to load your data: load and get_data. You need to use them consecutivel... | import nibabel as nib
data = nib.load(tstat_paths[0]).get_data()
voxel_dims = data.shape | tutorial/ICON2017_tutorial.ipynb | lukassnoek/ICON2017 | mit |
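The flattening step needed for the samples-by-features matrix can be tried on a toy array first — ravel() collapses a 3D volume to 1D, and reshape() restores it:

```python
import numpy as np

# A toy 3D "volume" standing in for one subject's statistic map
volume = np.arange(24).reshape(2, 3, 4)
flat = volume.ravel()            # flatten to 1D so it fits one row of X
print(flat.shape)                # (24,)

# And back again, if you ever need the spatial structure:
restored = flat.reshape(volume.shape)
print(np.array_equal(volume, restored))   # True
```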
<div class='alert alert-warning'>
**ToDo**: in the code block below, write a loop that loads in the tstat nifti-files one by one (using nibabel) and store them in the already preallocated array "X". Note that "X" is a 2D matrix (samples-by-features), but each tstat-file contains a 3D array, so you need to "flatten" the... | # Implement your ToDo here
X = np.zeros((40, np.prod(voxel_dims)))
# Start your loop here:
| tutorial/ICON2017_tutorial.ipynb | lukassnoek/ICON2017 | mit |
Check your ToDo
Run the cell below to see whether you implemented the ToDo below correctly. If you don't get errors, you did it correctly! | # Can we check if X is correct here? Would be a good check before continuing to part 2
np.testing.assert_almost_equal(X, np.load(op.join('example_data', 'X_section1.npz'))['X'])
print("Well done!") | tutorial/ICON2017_tutorial.ipynb | lukassnoek/ICON2017 | mit |
Want to skip this ToDo?
Run the cell below. | X = np.load(op.join('example_data', 'X_section1.npz'))['X'] | tutorial/ICON2017_tutorial.ipynb | lukassnoek/ICON2017 | mit |
You've reached a milestone!
Starting with 4D fMRI data, you now have patterns in the right format (a 2D-matrix of N-samples x N-features). Now, let's start talking about machine learning!
Part 2. Multivoxel pattern analysis
In Part 1, we discussed within-subject single-trial designs, and showed how to load single-trial... | # Implement your ToDo here | tutorial/ICON2017_tutorial.ipynb | lukassnoek/ICON2017 | mit |
Check your answer | np.testing.assert_equal(np.array(y), np.load(op.join('example_data', 'y.npy')))
print('Well done!') | tutorial/ICON2017_tutorial.ipynb | lukassnoek/ICON2017 | mit |
Run the cell below if you want to skip this ToDo | y = np.load(op.join('example_data', 'y.npy')) | tutorial/ICON2017_tutorial.ipynb | lukassnoek/ICON2017 | mit |
And that's all you need for machine learning-based analyses in terms of preparation!
Commercial* break: skbold
While it's still relatively easy to load in, structure, and preprocess all of the data necessary for pattern-based analyses, there's quite a lot of "boilerplate code", especially when you need to loop your ana... | from sklearn.preprocessing import StandardScaler
scaler = StandardScaler() # Here we initialize the StandardScaler object
scaler.fit(X) # Here we "fit" the StandardScaler to our entire dataset (i.e. calculates means and stds of each feature)
X = scaler.transform(X) # And here we transform the dataset usin... | tutorial/ICON2017_tutorial.ipynb | lukassnoek/ICON2017 | mit |
Did the scaling procedure work? Let's check that below (by asserting that the mean of each column is 0, and the std of each column is 1): | means = np.mean(X, axis=0)
np.testing.assert_almost_equal(means, np.zeros(X.shape[1]))
print("Each column (feature) has mean 0!")
stds = X.std(axis=0)
np.testing.assert_almost_equal(stds[stds != 0], np.ones((stds != 0).sum()))
print("Each column (feature) has std 1!") | tutorial/ICON2017_tutorial.ipynb | lukassnoek/ICON2017 | mit |
Sweet! We correctly preprocessed out patterns. Now it's time for the more interesting stuff.
2.4. Model fitting & cross-validation
First, we'll show you how to use scikit-learn models and associated functionality. In fact, the most useful functionality of scikit-learn is probably that they made fitting and cross-valida... | # Scikit-learn is always imported as 'sklearn'
from sklearn.svm import SVC | tutorial/ICON2017_tutorial.ipynb | lukassnoek/ICON2017 | mit |
Like most of scikit-learn's functionality, SVC is a class (not a function!). So, let's initialize an SVC-object! One important argument that this object needs upon initialization is the "kernel" you want the model to have. Basically, the kernel determines how to treat/process your features: linearly (such as kernel='li... | # clf = CLassiFier
clf = SVC(kernel='linear') | tutorial/ICON2017_tutorial.ipynb | lukassnoek/ICON2017 | mit |
Alright, we initialized an SVC-object (named clf, short for "classifier") with a linear kernel. Now, you need to do two things to get the prediction for each sample (i.e. whether they're predicted as 0 or 1): fit the model using the method fit(X, y), and predict the class (i.e. 0 or 1) for each sample using the method predict(X).... | print('Fitting SVC ...', end='')
clf.fit(X, y)
print(' done.') | tutorial/ICON2017_tutorial.ipynb | lukassnoek/ICON2017 | mit |
After calling the fit() method, the clf-object contains an attribute coef_ that represent the model's parameters ('coefficients' in scikit-learn lingo, i.e. $\beta$). Let's check that out: | coefs = clf.coef_
print("Shape of coefficients: %r" % (coefs.shape,)) | tutorial/ICON2017_tutorial.ipynb | lukassnoek/ICON2017 | mit |
Ah, just like we expected: the coef_ attribute is exactly the same size as the number of voxels in our X-matrix (i.e. 80*80*37).
Usually, though, you don't have to work with the weights directly (except perhaps for feature visualization). Scikit-learn handles the calculation of the pre... | y_hat = clf.predict(X)
print("The predictions for my samples are:\n %r" % y_hat.tolist()) | tutorial/ICON2017_tutorial.ipynb | lukassnoek/ICON2017 | mit |
<div class='alert alert-warning'>
**ToDo**: The cool thing about scikit-learn is that their objects have a very consistent API and have sensible defaults. As a consequence, *every* model ("estimator" in scikit-learn terms) is used in the same way using the `fit(X, y)` and `predict(X)` methods. Try it out yourself below... | # Try out different estimators below (call fit and predict on X and y!)
| tutorial/ICON2017_tutorial.ipynb | lukassnoek/ICON2017 | mit |
2.5. Model evaluation
A logical next step is to assess how good the model was in predicting the class of the samples. A straightforward metric to summarize performance is accuracy * which can be defined as:
\begin{align}
accuracy = \frac{number\ of\ correct\ predictions}{number\ of\ predictions}
\end{align}
* There a... | # Implement your to-do here!
| tutorial/ICON2017_tutorial.ipynb | lukassnoek/ICON2017 | mit |
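A toy illustration of the accuracy formula above, with made-up labels and predictions (np.mean over the boolean comparison implements the fraction directly):

```python
import numpy as np

y_true = np.array([0, 1, 0, 1, 1, 0, 1, 0])
y_hat  = np.array([0, 1, 1, 1, 0, 0, 1, 0])

# accuracy = number of correct predictions / number of predictions
accuracy = np.mean(y_hat == y_true)
print(accuracy)   # 0.75 (6 out of 8 correct)
```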
If you've done the ToDo above correctly, you should have found out that the accuracy was 1.0 - a perfect score! "Awesome! Nature Neuroscience material!", you might think. But, as is almost always the case: if it seems too good to be true, it probably is indeed too good to be true.
So, what is the issue here?
Well, we ... | from sklearn.model_selection import train_test_split
if not isinstance(y, np.ndarray):
    y = np.array(y)
# The argument "test_size" indicates the test-size as a proportion
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, stratify=y,
random_... | tutorial/ICON2017_tutorial.ipynb | lukassnoek/ICON2017 | mit |
As you can see, the train_test_split function nicely partitions our data into two sets: a train-set and a test-set. Note the stratify argument in the function, which is set to y; this ensures that the class proportion (here: the ratio of "active" trials to "passive" trials) is the same in the train-set and the test-set.
<div ... | # Implement the ToDo here | tutorial/ICON2017_tutorial.ipynb | lukassnoek/ICON2017 | mit |
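To see what stratification buys you, here is a small sketch on toy data (deliberately imbalanced, not the tutorial's data); both halves end up with the same class proportions:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Toy data: 60 samples of class 0 and 20 of class 1 (75/25 imbalance)
X_toy = np.random.randn(80, 5)
y_toy = np.concatenate([np.zeros(60), np.ones(20)])

X_tr, X_te, y_tr, y_te = train_test_split(X_toy, y_toy, test_size=0.5,
                                          stratify=y_toy, random_state=42)

# Both halves keep the original 25% proportion of class 1
print(y_tr.mean(), y_te.mean())  # 0.25 0.25
```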
Now it's up to you to actually implement the cross-validated equivalent of the fit/predict procedure we showed you before, in the next ToDo!
<div class='alert alert-warning'>
**ToDo**: Fit your model on `X_train` and `y_train` and then predict `X_test`. Calculate both the accuracy on the train-set (fit and predict on t... | # Implement your ToDo here
| tutorial/ICON2017_tutorial.ipynb | lukassnoek/ICON2017 | mit |
If you implemented the ToDo correctly, you should see that the test-set accuracy is quite a bit lower (by about 10 percentage points) than the train-set accuracy! This test-set accuracy is still slightly biased due to imbalanced classes, which is discussed in the next section.
Advanced: evaluation of imbalanced datasets/models (optional, but re... | # The roc-auc-curve function is imported as follows:
from sklearn.metrics import roc_auc_score
# Implement your ToDo here
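To illustrate why ROC-AUC is the safer metric here, consider a toy example (an illustration, not part of the original tutorial) with imbalanced classes and a "lazy" classifier that always predicts the majority class:

```python
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

# Imbalanced toy labels: 8 "passive" (0) trials vs 2 "active" (1) trials
y_true = np.array([0] * 8 + [1] * 2)

# A lazy classifier that always predicts the majority class
y_pred = np.zeros(10)

print(accuracy_score(y_true, y_pred))  # 0.8 -- looks impressive ...
print(roc_auc_score(y_true, y_pred))   # 0.5 -- ... but it is chance level
```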
| tutorial/ICON2017_tutorial.ipynb | lukassnoek/ICON2017 | mit |
If you've done the ToDo correctly, you should see that the performance drops to 0.75 if using the ROC-AUC score! While this thus leads to worse results, it's more likely to be based on information from your patterns rather than just the class-imbalance of your dataset.
That said, it's, of course, always best to avoid i... | # scikit-learn is imported as 'sklearn'
from sklearn.model_selection import StratifiedKFold
# They call folds 'splits' in scikit-learn
skf = StratifiedKFold(n_splits=5) | tutorial/ICON2017_tutorial.ipynb | lukassnoek/ICON2017 | mit |
Alright, we have a StratifiedKFold object now, but not yet any indices for our folds (i.e. indices to split X and y into different samples). To do that, we need to call the split(X, y) method: | folds = skf.split(X, y) | tutorial/ICON2017_tutorial.ipynb | lukassnoek/ICON2017 | mit |
Now we have created the variable folds, which is technically a generator object, but you can just think of it as a list-like object (of indices) specialized for looping over. Each entry in folds is a tuple with two elements: an array of train-indices and an array of test-indices. Let's demonstrate that*:
* Note that y... | # Notice how we "unpack" the train- and test-indices at the start of the loop
i = 1
for train_idx, test_idx in folds:
    print("Processing fold %i" % i)
    print("Train-indices: %s" % train_idx)
    print("Test-indices: %s\n" % test_idx)
    i += 1 | tutorial/ICON2017_tutorial.ipynb | lukassnoek/ICON2017 | mit |
As you can see, StratifiedKFold determined for each fold which samples should be held out for testing (remember, Python uses 0-based indexing!) and which should be used for training.
Now, we know how to access the train- and test-indices, but we haven't... | # Implement the ToDo here by completing the statements in the loop!
from sklearn.linear_model import LogisticRegression
# clf now is a logistic regression model
clf = LogisticRegression()
# run split() again to generate folds
folds = skf.split(X, y)
performance = np.zeros(skf.n_splits)
for i, (train_idx, test_idx) i... | tutorial/ICON2017_tutorial.ipynb | lukassnoek/ICON2017 | mit |
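A filled-in version of such a loop could look as follows; note that this sketch runs on synthetic data (random noise), not the tutorial's fMRI data, so performance should hover around chance:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import accuracy_score

rng = np.random.RandomState(42)
X_syn = rng.randn(40, 10)          # 40 samples, 10 features of pure noise
y_syn = np.tile([0, 1], 20)

skf_demo = StratifiedKFold(n_splits=5)
clf_demo = LogisticRegression()
performance = np.zeros(5)

for i, (train_idx, test_idx) in enumerate(skf_demo.split(X_syn, y_syn)):
    clf_demo.fit(X_syn[train_idx], y_syn[train_idx])
    preds = clf_demo.predict(X_syn[test_idx])
    performance[i] = accuracy_score(y_syn[test_idx], preds)

print(performance.mean())  # around 0.5, since the features are random noise
```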
Implementing K-fold cross-validation instead of hold-out cross-validation allowed us to make our analysis a little more efficient (by reusing samples). Another method that (usually) improves performance in decoding analyses is feature selection/extraction, which is discussed in the next section (after the optional run-... | X_r = np.random.randn(80, 1000)
print("Shape of X: %s" % (X_r.shape, ), '\n')
y_r = np.tile([0, 1], 40)
print("Shape of y: %s" % (y_r.shape, ))
print("Y labels:\n%r" % y_r.tolist(), '\n')
runs = np.repeat([1, 2, 3, 4], 20)
print("Shape of runs: %s" % (runs.shape, ))
print("Run-indices: \n%r" % runs.tolist()) | tutorial/ICON2017_tutorial.ipynb | lukassnoek/ICON2017 | mit |
As you can see in the example above, the GroupKFold cross-validator effectively partitioned our data such that in every fold one group was left out for testing (group 4 in the first fold, group 3 in the second fold, etc.)! So keep in mind to use this type of cross-validation object when you want to employ such a run-w... | from sklearn.model_selection import StratifiedShuffleSplit
# Try implementing the ToDo here | tutorial/ICON2017_tutorial.ipynb | lukassnoek/ICON2017 | mit |
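The run-wise scheme described above can be sketched with GroupKFold, using toy data shaped like the runs example from before (an illustration, not the exercise's answer):

```python
import numpy as np
from sklearn.model_selection import GroupKFold

X_demo = np.random.randn(80, 10)
y_demo = np.tile([0, 1], 40)
runs_demo = np.repeat([1, 2, 3, 4], 20)   # 4 runs of 20 trials each

gkf = GroupKFold(n_splits=4)
test_groups = []
for train_idx, test_idx in gkf.split(X_demo, y_demo, groups=runs_demo):
    # Each fold holds out exactly one run for testing
    test_groups.append(np.unique(runs_demo[test_idx]))

print(test_groups)  # each entry contains a single run number
```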
2.8. Feature selection/extraction
While a complete discussion on (causes of) the phenomenon of "overfitting" is definitely beyond the scope of this workshop, it is generally agreed upon that a small sample/feature-ratio will likely lead to overfitting of your models (optimistic performance estimates on train-set). A sm... | from sklearn.feature_selection import SelectKBest, f_classif
# f_classif is a scikit-learn specific implementation of the F-test
select2000best = SelectKBest(score_func=f_classif, k=2000) | tutorial/ICON2017_tutorial.ipynb | lukassnoek/ICON2017 | mit |
Another example of a selector is SelectFwe, which selects only features whose test statistics correspond to p-values lower than a set alpha-level. For example, the following transformer would only select features with p-values from a chi-square test lower than 0.01: | from sklearn.feature_selection import SelectFwe, chi2
selectfwe_transformer = SelectFwe(score_func=chi2, alpha=0.01) | tutorial/ICON2017_tutorial.ipynb | lukassnoek/ICON2017 | mit |
But how does this work in practice? We'll show you a (not cross-validated!) example using the select2000best transformer initialized earlier: | # Fit the transformer ...
select2000best.fit(X, y)
# ... which calculates the following attributes (.scores_ and .pvalues_)
# Let's check them out
scores = select2000best.scores_
pvalues = select2000best.pvalues_
# As you can see, each voxel gets its own score (in this case: an F-score)
print(scores.size)
# and its... | tutorial/ICON2017_tutorial.ipynb | lukassnoek/ICON2017 | mit |
We can also visualize these scores in brain space: | import matplotlib.pyplot as plt
%matplotlib inline
scores_3d = scores.reshape((80, 80, 37))
plt.figure(figsize=(20, 5))
for i, slce in enumerate(np.arange(15, 65, 5)):
    plt.subplot(2, 5, (i+1))
    plt.title('X = %i' % slce, fontsize=20)
    plt.imshow(scores_3d[slce, :, :].T, origin='lower', cmap='hot')
    plt.a... | tutorial/ICON2017_tutorial.ipynb | lukassnoek/ICON2017 | mit |
Thus, in the images above, brighter (more yellow) colors represent voxels with higher scores on the univariate test. If we subsequently apply (or "transform" in scikit-learn lingo) our X-matrix using, for example, the select2000best selector, we'll select only the 2000 "most yellow" (i.e. highest F-scores) voxels.
<div... | print("Shape of X before transform: %r ..." % (X.shape,))
X_after_ufs = select2000best.transform(X)
print("... and shape of X after transform: %r." % (X_after_ufs.shape,)) | tutorial/ICON2017_tutorial.ipynb | lukassnoek/ICON2017 | mit |
As you can see, the transformer correctly selected a subset of 2000 voxels from our X matrix! Now, both selectors were fit on the entire dataset, which is of course not how it should be done: because they use information from the labels (y), this step should be cross-validated. Thus, what you have to do is:
fit your... | # Implement your ToDo here!
from sklearn.decomposition import PCA
X_train_tmp, X_test_tmp = train_test_split(X, test_size=0.5)
# initialize a PCA object ...
# ... call fit on X_train_tmp ...
# ... and call transform on X_train_tmp and X_test_tmp
X_train_pca_transformed =
X_test_pca_transformed =
# And finally che... | tutorial/ICON2017_tutorial.ipynb | lukassnoek/ICON2017 | mit |
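The fit-on-train-only recipe from above looks like this for SelectKBest; a minimal sketch on synthetic data (not the tutorial's fMRI data):

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)
X_syn = rng.randn(40, 500)
y_syn = np.tile([0, 1], 20)

X_tr, X_te, y_tr, y_te = train_test_split(X_syn, y_syn, test_size=0.5,
                                          stratify=y_syn, random_state=0)

selector = SelectKBest(score_func=f_classif, k=100)
selector.fit(X_tr, y_tr)              # fit on the train-set ONLY (no peeking!)
X_tr_sel = selector.transform(X_tr)   # select the same 100 features ...
X_te_sel = selector.transform(X_te)   # ... in both train- and test-set

print(X_tr_sel.shape, X_te_sel.shape)  # (20, 100) (20, 100)
```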
Integrating feature selection and model fitting/cv!
Now, let's put everything you learned so far together and implement a fully cross-validated pipeline with UFS and model fitting/prediction!
<div class='alert alert-warning'>
**ToDo**: Below, we set up a K-fold cross-validation loop and prespecified a classifier (`clf`... | from sklearn.linear_model import LogisticRegression
clf = LogisticRegression()
folds = skf.split(X, y)
performance = np.zeros(skf.n_splits)
select1000best = SelectKBest(score_func=f_classif, k=1000)
i = 0
for train_idx, test_idx in folds:
    # ToDo: make X_train, X_test, y_train, y_test
    # ToDo: call th... | tutorial/ICON2017_tutorial.ipynb | lukassnoek/ICON2017 | mit |
If you did the above ToDo, your performance is likely a bit higher than in the previous section in which we didn't do any feature selection/extraction!
Advanced: cross-validation using Pipelines (optional)
As you have seen in the previous assignment, the code within the K-fold for-loop becomes quite long (and messy) w... | from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC  # needed for the classifier step below
scaler = StandardScaler()
ufs = SelectKBest(score_func=f_classif, k=1000)
pca = PCA(n_components=10) # we want to reduce the features to 10 components
svc = SVC(kernel='linear') | tutorial/ICON2017_tutorial.ipynb | lukassnoek/ICON2017 | mit |
Now, to initialize a Pipeline-object, we need to give it a list of tuples, in the first entry of each tuple is a name for the step in the pipeline and the second entry of each tuple is the actual transformer/estimator object. Let's do that for our pipeline: | from sklearn.pipeline import Pipeline
pipeline_to_make = [('preproc', scaler),
('ufs', ufs),
('pca', pca),
('clf', svc)]
my_pipe = Pipeline(pipeline_to_make) | tutorial/ICON2017_tutorial.ipynb | lukassnoek/ICON2017 | mit |
Let's test our pipeline-object (my_pipe) on our data. For this example, we'll use a simple hold-out cross-validation scheme (but pipelines are equally valuable in K-fold CV schemes!). | X_train, y_train = X[0::2], y[0::2]
X_test, y_test = X[1::2], y[1::2]
my_pipe.fit(X_train, y_train)
predictions = my_pipe.predict(X_test)
performance = roc_auc_score(y_test, predictions)
print("Cross-validated performance on test-set: %.3f" % performance) | tutorial/ICON2017_tutorial.ipynb | lukassnoek/ICON2017 | mit |
Cool stuff huh? Quite efficient, I would say!
<div class='alert alert-warning'>
**ToDo**: Test your pipeline-skills! Can you build a pipeline that incorporates a [VarianceThreshold](http://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.VarianceThreshold.html#sklearn.feature_selection.VarianceThresh... | # Implement the ToDo here
from sklearn.feature_selection import VarianceThreshold
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
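One possible pipeline along these lines is sketched below (on synthetic data; treat it as a solution sketch, not the official answer). Note that KMeans works as a transformer inside a pipeline: its transform() returns each sample's distances to the cluster centers, here reducing 200 features to 8.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import VarianceThreshold
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.RandomState(1)
X_syn = rng.randn(40, 200)
y_syn = np.tile([0, 1], 20)

pipe = Pipeline([('varthres', VarianceThreshold(threshold=0.0)),
                 ('kmeans', KMeans(n_clusters=8, n_init=10, random_state=1)),
                 ('clf', RandomForestClassifier(n_estimators=50, random_state=1))])

# cross_val_score handles the fold-wise fitting/predicting for us
scores = cross_val_score(pipe, X_syn, y_syn, cv=5)
print(scores.mean())  # around chance, since the data are random noise
```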
| tutorial/ICON2017_tutorial.ipynb | lukassnoek/ICON2017 | mit |
2.9. Assessing significance
Up to now, we examined the model performance of decoding the experimental condition from fMRI data of a single subject. When averaging over folds, we got one scalar estimate of performance. Usually, though, within-subject decoding studies measure multiple subjects and want to test whether th... | participant_numbers = # Fill in the right glob call here!
# Next, we need to extract the participant numbers from the paths you just obtained. We do this for you here.
participant_numbers = [x.split(op.sep)[-1] for x in participant_numbers]
print(participant_numbers) | tutorial/ICON2017_tutorial.ipynb | lukassnoek/ICON2017 | mit |
Want to skip the ToDo?
Run the next cell. | with open(op.join('example_data', 'subject_names'), 'r') as f:
    participant_numbers = [s.replace('\n', '') for s in f.readlines()] | tutorial/ICON2017_tutorial.ipynb | lukassnoek/ICON2017 | mit |
We can now use this participant_numbers list to loop over the participant names!
<div class='alert alert-warning'>
**ToDo** (optional!): This final ToDo is a big one, in which everything we learned so far comes together nicely. Write a loop over all participants, implementing a cross-valdidated classification pipeline... | ### Set your analyis parameters/pipeline ###
# ToDo: initialize a stratified K-fold class (let's use K=5)
skf =
# ToDo: Initialize the SelectKBest Selector with an f_classif score function
select100best =
# ToDo: initialize an SVC-object
clf =
# Optional: build a pipeline-object!
# Keep track of all perfor... | tutorial/ICON2017_tutorial.ipynb | lukassnoek/ICON2017 | mit |
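To give an idea of the overall shape of this analysis, here is a condensed sketch of the per-subject loop on synthetic stand-in data (the real version would load each participant's fMRI data instead):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import accuracy_score

rng = np.random.RandomState(7)
n_subjects = 12
all_performance = []

for sub in range(n_subjects):
    X_sub = rng.randn(40, 500)     # stand-in for this subject's pattern matrix
    y_sub = np.tile([0, 1], 20)

    fold_acc = []
    for train_idx, test_idx in StratifiedKFold(n_splits=5).split(X_sub, y_sub):
        sel = SelectKBest(score_func=f_classif, k=100)
        X_tr = sel.fit_transform(X_sub[train_idx], y_sub[train_idx])
        X_te = sel.transform(X_sub[test_idx])
        clf = SVC(kernel='linear').fit(X_tr, y_sub[train_idx])
        fold_acc.append(accuracy_score(y_sub[test_idx], clf.predict(X_te)))
    all_performance.append(np.mean(fold_acc))

print(len(all_performance))  # 12: one mean accuracy per subject
```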
If everything went well, you now have a list all_performance which contains the accuracies (mean over folds) of all participants! This is all the data we need to do a one-sample t-test.
The one-sample t-test is implemented in the Python package SciPy. Let's import it first: | from scipy.stats import ttest_1samp
The function requires two arguments: the vector of values to test and the hypothesized population mean. In our case, the data vector is all_performance, and the population mean is 0.50. The ttest_1samp function returns a tuple of the t-value and the p-value.
<div class='alert alert-warning'>
**ToDo**... | # Example answer
t, p = ttest_1samp(all_performance, 0.5)
print('The t-value is %.3f, with a p-value of %.5f' % (t, p)) | tutorial/ICON2017_tutorial.ipynb | lukassnoek/ICON2017 | mit |
Finally, we can draw our conclusions: can we decode trial condition, using the fMRI data, with an above-chance accuracy? (Yes we can! But remember, the results are probably positively biased due to autocorrelated samples and, as discussed earlier, this t-test is not a proper random-effects analysis!)
Congratulations! Y... | # we'll reset our namespace to free up some memory
%reset -f
from skbold.core import MvpWithin
import os.path as op | tutorial/ICON2017_tutorial.ipynb | lukassnoek/ICON2017 | mit |
Now, the MvpWithin class is able to quickly load and transform data from FSL-feat (i.e. first-level) directories. It looks for patterns in the stats directory (given a specific statistic) and loads condition-labels from the design.con file, which are all standard contents of FEAT-directories.
Additionally, we can specif... | feat_dir = op.join('..', 'data', 'pi0070', 'wm.feat')
mask_file = None # only works if you have FSL installed (if so, uncomment the next line!)
# UNCOMMENT IF YOU HAVE FSL INSTALLED!)
#mask_file = op.join('..', 'data', 'GrayMatter_prob.nii.gz') # mask all non-gray matter!
read_labels = True # parse labels (tar... | tutorial/ICON2017_tutorial.ipynb | lukassnoek/ICON2017 | mit |