Fill the NaN with 0:
shifted.fillna(0)
shifted.mean()
tutorials/PandasTutorial.ipynb
chi-hung/PythonTutorial
mit
Fill the NaN with the mean:
shifted.fillna(shifted.mean())
Drop the NaN from the series:
shifted.dropna()
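The three NaN-handling strategies above can be compared side by side on a toy series (a sketch; `s` is a hypothetical series, not the tutorial's `shifted`):

```python
import numpy as np
import pandas as pd

# A toy series with one missing value
s = pd.Series([1.0, np.nan, 3.0])

filled_zero = s.fillna(0)         # NaN -> 0
filled_mean = s.fillna(s.mean())  # NaN -> mean of the non-NaN values (2.0)
dropped = s.dropna()              # NaN removed entirely; index shrinks
```

Note that `fillna` keeps the original length while `dropna` shortens the series, which matters if you align it with other columns afterwards.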
Back to index <a id="ex01"/>Exercise 1: How many images are there for each handwritten digit?
def filePathsGen(rootPath):
    paths = []
    dirs = []
    for dirPath, dirNames, fileNames in os.walk(rootPath):
        for fileName in fileNames:
            fullPath = os.path.join(dirPath, fileName)
            paths.append((int(dirPath[len(rootPath):]), fullPath))
        dirs.append(dirNames)
    return dirs, paths

dirs, paths = filePathsGen('mnist/')  # load the image paths
dfPath = pd.DataFrame(paths, columns=['class', 'path'])  # store the paths in a pandas DataFrame
dfPath.head(5)  # show the first 5 rows

# Complete the following code:
...
...
groups.count()
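The groupby/count pattern the exercise asks for can be sketched on a toy table (hypothetical data, not the MNIST paths):

```python
import pandas as pd

# Toy stand-in for dfPath: one 'class' label per image row
df = pd.DataFrame({'class': [0, 0, 1, 2, 2, 2],
                   'path':  ['a', 'b', 'c', 'd', 'e', 'f']})

groups = df.groupby('class')
counts = groups.count()  # number of rows per class, per remaining column
```

Here `counts` has one row per class (0, 1, 2) and the `path` column holds the per-class row counts.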
Notebook

The Jupyter Notebook is a literate programming tool that lets you combine Python code, Markdown text, and other commands. Each notebook is composed of cells, and cells can be either code (e.g. Python) or Markdown. In addition to Python code, the notebook can run so-called magic functions. These begin with a percent (%) symbol and can be used for common tasks like listing files with %ls:
%ls
material/rr-intro-exercise.ipynb
ben-williams/cbb-retreat
cc0-1.0
Both magic functions and Python functions support tab completion:
%ls rr-intro-data-v0.2/intro/data/
Data: In the pandas package, the read_csv function is used to read the data into a pandas DataFrame. Note that the argument provided to this function is the complete path to the dataset from your current working directory (where this notebook is located). Also note that it is provided as a character string, hence the quotation marks. Read the Gapminder 1950-1960 data in from the CSV file:
gap_5060 = pd.read_csv('rr-intro-data-v0.2/intro/data/gapminder-5060.csv')
Task 1: Visualize life expectancy over time for Canada in the 1950s and 1960s using a line plot. Filter the data for Canada only:
gap_5060_CA = gap_5060.loc[gap_5060['country'] == 'Canada']
Visualize: Pandas uses matplotlib by default, and a very common practice is to generate plots inline, meaning within the notebook rather than as separate files. A magic function configures this; you'll often just put it at the top of your notebook, next to import pandas as pd.
%matplotlib inline
gap_5060_CA.plot(kind='line', x='year', y='lifeExp')
pass
Task 2: Something is clearly wrong with this plot! It turns out there's an error in the data file: life expectancy for Canada in the year 1957 is coded as 999999; it should actually be 69.96. Make this correction. Use loc and ['column'] to index into the DataFrame:
gap_5060.loc[(gap_5060['country'] == 'Canada') & (gap_5060['year'] == 1957)]
loc[&lt;rows&gt;, 'column'] allows assignment with =
gap_5060.loc[(gap_5060['country'] == 'Canada') & (gap_5060['year'] == 1957), 'lifeExp'] = 69.96
gap_5060.loc[(gap_5060['country'] == 'Canada') & (gap_5060['year'] == 1957)]
Task 3: Visualize life expectancy over time for Canada again, with the corrected data. Exact same code as before, but note that the contents of gap_5060 are different as it has been updated in the previous task.
gap_5060_CA = gap_5060.loc[gap_5060['country'] == 'Canada']
gap_5060_CA.plot(kind='line', x='year', y='lifeExp')
pass
Task 3 - Stretch goal: Add lines for Mexico and United States. Use .isin() as the logical operator, testing whether a country's name is in the list provided:
mask = gap_5060['country'].isin(['Canada', 'United States', 'Mexico'])
us_mexico_ca = gap_5060.loc[mask]
To get each country as a series, we create a 2-level index
indexed_by_country = us_mexico_ca.set_index(['country','year'])
The data table is now indexed by country and year. See how they appear on the second header row, and how the data is grouped by country?
indexed_by_country
Visualization code is almost the same, but we tell Pandas how to unstack the data into multiple series
indexed_by_country.unstack(level='country').plot(kind='line', y='lifeExp')
pass
In this example, we'll just create a 3-D array of random floating-point data using NumPy:
arr = np.random.random(size=(64,64,64))
yt-demo/example4.ipynb
teuben/pitp2016
gpl-3.0
To load this data into yt, we need to associate it with a field. The data dictionary consists of one or more fields, each a tuple of a NumPy array and a unit string. Then we can call load_uniform_grid:
data = dict(density=(arr, "g/cm**3"))
bbox = np.array([[-1.5, 1.5], [-1.5, 1.5], [-1.5, 1.5]])
ds = yt.load_uniform_grid(data, arr.shape, length_unit="Mpc", bbox=bbox, nprocs=64)
load_uniform_grid takes the following arguments and optional keywords:

data : a dict of NumPy arrays, where the keys are the field names
domain_dimensions : the domain dimensions of the unigrid
length_unit : the unit that corresponds to code_length; can be a string, tuple, or floating-point number
bbox : size of the computational domain in units of code_length
nprocs : if greater than 1, creates this number of subarrays out of data
sim_time : the simulation time in seconds
mass_unit : the unit that corresponds to code_mass; can be a string, tuple, or floating-point number
time_unit : the unit that corresponds to code_time; can be a string, tuple, or floating-point number
velocity_unit : the unit that corresponds to code_velocity
magnetic_unit : the unit that corresponds to code_magnetic, i.e. the internal units used to represent magnetic field strengths
periodicity : a tuple of booleans that determines whether the data will be treated as periodic along each axis

This example creates a yt-native dataset ds that treats your array as a density field in a cubic domain of 3 Mpc edge size, and simultaneously divides the domain into nprocs = 64 chunks, so that you can take advantage of the underlying parallelism. The optional unit keyword arguments set the default units of the dataset. They can be:
* A string, e.g. length_unit="Mpc"
* A tuple, e.g. mass_unit=(1.0e14, "Msun")
* A floating-point value, e.g. time_unit=3.1557e13
In the latter case, the unit is assumed to be cgs. The resulting ds functions exactly like any other dataset yt can handle: it can be sliced, and we can show the grid boundaries:
slc = yt.SlicePlot(ds, "z", ["density"])
slc.set_cmap("density", "Blues")
slc.annotate_grids(cmap=None)
slc.show()
Particle fields are detected as one-dimensional fields. The number of particles is set by the number_of_particles key in data. Particle fields are then added as one-dimensional arrays in a similar manner as the three-dimensional grid fields:
posx_arr = np.random.uniform(low=-1.5, high=1.5, size=10000)
posy_arr = np.random.uniform(low=-1.5, high=1.5, size=10000)
posz_arr = np.random.uniform(low=-1.5, high=1.5, size=10000)
data = dict(density=(np.random.random(size=(64, 64, 64)), "Msun/kpc**3"),
            number_of_particles=10000,
            particle_position_x=(posx_arr, 'code_length'),
            particle_position_y=(posy_arr, 'code_length'),
            particle_position_z=(posz_arr, 'code_length'))
bbox = np.array([[-1.5, 1.5], [-1.5, 1.5], [-1.5, 1.5]])
ds = yt.load_uniform_grid(data, data["density"][0].shape,
                          length_unit=(1.0, "Mpc"), mass_unit=(1.0, "Msun"),
                          bbox=bbox, nprocs=4)
In this example only the particle position fields have been assigned. The number_of_particles value must match the length of the particle arrays. If no particle arrays are supplied, number_of_particles is assumed to be zero. Take a slice, and overlay particle positions:
slc = yt.SlicePlot(ds, "z", ["density"])
slc.set_cmap("density", "Blues")
slc.annotate_particles(0.25, p_size=12.0, col="Red")
slc.show()
Generic AMR Data

In a similar fashion to unigrid data, data gridded into rectangular patches at varying levels of resolution may also be loaded into yt. In this case, a list of grid dictionaries should be provided, with the requisite information about each grid's properties. This example sets up two grids: a top-level grid (level == 0) covering the entire domain and a subgrid at level == 1.
grid_data = [
    dict(left_edge=[0.0, 0.0, 0.0], right_edge=[1.0, 1.0, 1.0],
         level=0, dimensions=[32, 32, 32]),
    dict(left_edge=[0.25, 0.25, 0.25], right_edge=[0.75, 0.75, 0.75],
         level=1, dimensions=[32, 32, 32]),
]
We'll just fill each grid with random density data, scaled by the grid refinement level:
for g in grid_data:
    g["density"] = (np.random.random(g["dimensions"]) * 2**g["level"], "g/cm**3")
Particle fields are supported by adding 1-dimensional arrays to each grid and setting the number_of_particles key in each grid's dict. If a grid has no particles, set number_of_particles = 0; the particle fields still have to be defined, since they are defined on the other grids, so set them to empty NumPy arrays:
grid_data[0]["number_of_particles"] = 0  # no particles in the top-level grid
grid_data[0]["particle_position_x"] = (np.array([]), "code_length")  # no particles, so use empty arrays
grid_data[0]["particle_position_y"] = (np.array([]), "code_length")
grid_data[0]["particle_position_z"] = (np.array([]), "code_length")
grid_data[1]["number_of_particles"] = 1000
grid_data[1]["particle_position_x"] = (np.random.uniform(low=0.25, high=0.75, size=1000), "code_length")
grid_data[1]["particle_position_y"] = (np.random.uniform(low=0.25, high=0.75, size=1000), "code_length")
grid_data[1]["particle_position_z"] = (np.random.uniform(low=0.25, high=0.75, size=1000), "code_length")
Then, call load_amr_grids:
ds = yt.load_amr_grids(grid_data, [32, 32, 32])
load_amr_grids also takes the same keywords bbox and sim_time as load_uniform_grid. We could have also specified the length, time, velocity, and mass units in the same manner as before. Let's take a slice:
slc = yt.SlicePlot(ds, "z", ["density"])
slc.annotate_particles(0.25, p_size=15.0, col="Pink")
slc.show()
Elements in a $list$ can be numbers, strings, a mixture of both, or other types of sequences. An element can be accessed by specifying its position in the $list$ (similar to accessing strings, as discussed in Tutorial 2). The number of elements in a $list$ can be obtained using the <span style="color: #0000FF">$len$&#40;&#41;</span> function.
List_str = ["Blythe","Rafa","Felicity","Kiyoko"]
print "What is the word happy in Arabic?"
print "The answer is %s and it starts with the capital letter %s." % \
(List_str[1],List_str[1][0])
Tutorial 4 - The Sequence.ipynb
megatharun/basic-python-for-researcher
artistic-2.0
In the above example, the elements in the $List$_$str$ $list$ are accessed by specifying the positional index (in square brackets after the variable name) of the $list$; after that, an element of the string is accessed by specifying a second bracketed index, since a string is also a sequence. An accessed element of a $list$ can be operated on (just like a variable):
List_mix = ["The Time Machine", 1895, "The Invisible Man", 1897, \
"The Shape of Things to Come", 1933]
print '"%s" was first published in %d with \n"%s" published %d years \
later.' % (List_mix[0],List_mix[1],List_mix[4],List_mix[5]-List_mix[1])
<span style="color: #F5DA81; background-color: #610B4B">Example 4.1</span>: The following are some of the well-known implementations of the Python programming language: CPython, Cython, PyPy, IronPython, Jython and Unladen Swallow. Put this sequence in a list, then rearrange the sequence according to your preferred implementations into a list that contains only three implementations. Print the new list.
Python_Impl = ['CPython','Cython','PyPy','IronPython','Jython','Unladen Swallow']
New_Python_Impl = [Python_Impl[1],Python_Impl[0],Python_Impl[2]]
print New_Python_Impl
<span style="color: #F5DA81; background-color: #610B4B">Example 4.2</span>: From this list: ['Python','Java','C','Perl','Sed','Awk','Lisp','Ruby'], recreate the original list of Python implementations.
PN = ['Python','Java','C','Perl','Sed','Awk','Lisp','Ruby']
Pyth_ImplN = [PN[2]+PN[0],PN[2]+PN[0][1:],PN[0][:2]*2,\
PN[6][1].upper()+PN[3][2]+PN[0][4:]+PN[0],\
PN[1][0]+PN[0][1:],PN[7][1].upper()+PN[0][-1]+PN[3][-1]+\
PN[1][1]+PN[4][-1]+PN[4][-2]+PN[0][-1]+' '+\
PN[-2][-2].upper()+PN[-3][1]+PN[1][1]+PN[3][3]*2+\
PN[0][-2]+PN[-3][1]]
print Pyth_ImplN
What we have seen are one-dimensional homogeneous and non-homogeneous $lists$.
x = [12,45,78,14,23]
y = ["Dickens","Hardy","Austen","Steinbeck"]
Z = [3E8,'light',"metre"]
$List$ can also be multi-dimensional.
# Homogeneous multi-dimensional list (2D):
# List_name[row][column]
x2 = [[12,32],[43,9]]
print x2
print x2[1]      # Second row
print x2[0][1]   # First row, second column
In a matrix representation, this is: $$\left( \begin{array}{cc} 12 & 32 \\ 43 & 9 \end{array} \right)$$ and to get the matrix determinant:
# Matrix determinant
det_x2 = x2[0][0]*x2[1][1]-x2[0][1]*x2[1][0]
print "Determinant of x2 is %d" % det_x2
A multi-dimensional $list$ is actually $lists$ within $list$:
x1 = [0.1,0.2,0.3,0.4,0.5]
x2 = [0,12,34,15,1]
x = [x1,x2]
print x   # A 2x5 array
$List$ can also be non-homogeneous multi-dimensional:
Data_3D = [[[2,3,5],[1,7,0]],[5,"ArXiv"]]
# print the number 7
print Data_3D[0][1][1]
print 'Mr. Perelman published the solution \
to Poinc%sre conjecture in "%s".' % (u"\u00E1", Data_3D[1][1])
Data_3D is actually $lists$ inside $lists$ inside a $list$, non-homogeneously. <img src="Tutorial3/array.png" width="500" > The elements in the $list$ can be substituted.
# Extracting and substitution
L1 = Data_3D[0]; print L1
L2 = [Data_3D[1]]+[Data_3D[0][0]]
print L2
print L2[0][1]
Data_3D[1][1] = "PlosOne"
print Data_3D
Iterating over the elements in a list requires accessing the list sequentially. This can be done using the <span style="color: #0000FF">$for$</span> and <span style="color: #0000FF">$while$</span> control structures as well as the <span style="color: #0000FF">$enumerate$&#40;&#41;</span> function.
# Looping
dwarf = ["Eris","Pluto","Makemake","Haumea","Sedna"]
print dwarf
for name in dwarf:
    print name
for z in range(len(dwarf)):
    print "%d\t%s" % (z,dwarf[z])
for x,z in enumerate(dwarf,1):
    print "%d\t%s" % (x,z)
z = 0
while z < len(dwarf):
    print "%d\t%s" % (z+1,dwarf[z])
    z = z + 1
<span style="color: #F5DA81; background-color: #610B4B">Example 4.3</span>: Calculate and print each value of x*y with: x = [12.1,7.3,6.2,9.9,0.5] y = [4.5,6.1,3.9,1.7,8.0]
x = [12.1,7.3,6.2,9.9,0.5]
y = [4.5,6.1,3.9,1.7,8.0]
i = 0
xy = []   # Creating an empty list
while i < len(x):
    xy = xy + [x[i]*y[i]]   # Appending the result to the list
    print '%.1f x %.1f = %.2f' % (x[i],y[i],xy[i])
    i = i + 1
print '\n'
print xy
<span style="color: #F5DA81; background-color: #610B4B">Example 4.4</span>: Calculate and print each value of x2*y2 with: x2 = [[12.1,7.3],[6.2,9.9]] y2 = [[4.5,6.1],[3.9,1.7]]
x2 = [[12.1,7.3],[6.2,9.9]]
y2 = [[4.5,6.1],[3.9,1.7]]
j = 0
xy2 = []
xy3 = []
while j < len(x2):
    for k in range(len(x2)):
        xy3 = xy3 + [x2[j][k]*y2[j][k]]
        print '%.1f x %.1f = %.2f' % (x2[j][k],y2[j][k],xy3[k])
    xy2 = xy2 + [xy3]
    xy3 = []
    j = j + 1
print '\n'
print xy2
<span style="color: #F5DA81; background-color: #610B4B">Example 4.5</span>: Create a list that contains the $f(x)$ values of a Gaussian distribution with $\sigma$ = 0.4 and $\mu$ = 5. The Gaussian function: $$f(x) = e^{\frac{-(x-\mu)^2}{2\sigma^2}}$$
from math import *
sigma = 0.4
mu = 5.0
x_val = []
ctr = 3
while ctr < 7:
    x_val = x_val + [ctr]
    ctr = ctr + 0.1
fx = []
for n in range(0,len(x_val),1):
    intensity = exp(-(x_val[n]-mu)**2/(2*sigma**2))
    fx = fx + [intensity]
    print '%f\t%s' % (intensity,int(intensity*50)*'*')
fx
4.1.1 Converting data from a file into a list

Each line in a file can be directly converted into a list element using the <span style="color: #0000FF">$readlines$&#40;&#41;</span> function. For instance, in section 2.4 of Tutorial 2, instead of the <span style="color: #0000FF">$read$&#40;&#41;</span> function, we can use the <span style="color: #0000FF">$readlines$&#40;&#41;</span> function to convert each line in the file $les miserables.txt$ into an element of the list $linecontent$:
# Opening a file
file_read = open("Tutorial2/les miserables.txt")
linecontent = file_read.readlines()
file_read.close()
The elements of $linecontent$ are now the lines in $les miserables.txt$ (including the newline escape characters):
linecontent
4.2 The Tuples

A $tuple$ can be declared using round brackets. A $tuple$ is actually a $list$ whose elements cannot be modified or substituted. Apart from that, it has properties similar to a $list$.
t1 = (1,2,3,4)
t1
Attempting to substitute a $tuple$ element will give an error.
t1[1] = 5
4.3 The Dictionaries

$Dictionaries$ are similar to "associative arrays" in many other programming languages. $Dictionaries$ are indexed by keys, which can be strings or numbers. Accessing data in $dictionaries$ is done by specifying the key instead of an index number. Data can be anything, including other $dictionaries$. $Dictionaries$ can be declared using curly brackets, with each key and its data separated by '$:$' and each key-data pair separated by '$,$'.
# Nearby stars to the earth
Stars = {1:'Sun', 2:'Alpha Centauri', 3:"Barnard's Star",
         4:'Luhman 16', 5:'WISE 0855-0714'}
Stars[3]   # Specify the key instead of an index number
In the above example the keys are made of integers whereas the data are all made of strings. It can also be the opposite:
# Distance of nearby stars to the earth
Stars_Dist = {'Sun':0, 'Alpha Centauri':4.24, "Barnard's Star":6.00,
              'Luhman 16':6.60, 'WISE 0855-0714':7.0}
print 'Alpha Centauri is %.2f light years from earth.' % \
(Stars_Dist['Alpha Centauri'])
This information can be made more structured by using a $list$ as the data in a $dictionary$.
# A more structured dictionary of data
Stars_List = {1:['Sun',0], 2:['Alpha Centauri',4.24],
              3:["Barnard's Star",6.00], 4:['Luhman 16',6.60],
              5:['WISE 0855-0714',7.0]}
print '%s is the fourth closest star at about %.2f light \
\nyears from earth.' % (Stars_List[4][0],Stars_List[4][1])
Below is an example of a $dictionary$ that contains $dictionary$-type data, and the ways to access it.
# Declaring dictionaries of data for the dictionary 'Author'
Coetzee = {1974:'Dusklands', 1977:'In The Heart Of The Country',
           1980:'Waiting For The Barbarians', 1983:'Life & Times Of Michael K'}
McCarthy = {1992:'All the Pretty Horses', 1994:'The Crossing',
            1998:'Cities of the Plain', 2005:'No Country for Old Men',
            2006:'The Road'}
Steinbeck = {1937:'Of Mice And Men', 1939:'The Grapes Of Wrath',
             1945:'Cannery Row', 1952:'East Of Eden',
             1961:'The Winter Of Our Discontent'}
Lewis = {'Narnia Series':{1950:'The Lion, the Witch and the Wardrobe',
                          1951:'Prince Caspian: The Return to Narnia',
                          1952:'The Voyage of the Dawn Treader',
                          1953:'The Silver Chair',
                          1954:'The Horse and His Boy',
                          1955:"The Magician's Nephew",
                          1956:'The Last Battle'}}

# Assigning keys and data for the dictionary 'Author';
# one of the data values is a list of dictionaries
Author = {'South Africa':Coetzee, 'USA':[McCarthy,Steinbeck],
          'British':Lewis}
Author['South Africa'][1983]
Author['USA'][1][1939]
Author['British']['Narnia Series'][1953]
Main objectives of this workshop

Provide you with a first insight into the principal Python tools & libraries used in science:
- conda
- Jupyter Notebook
- NumPy, matplotlib, SciPy

Provide you with the basic skills to face basic tasks, and show other common libraries (some talks & workshops will focus on these packages):
- Pandas, scikit-learn
- SymPy
- Numba

1. Jupyter Notebook

The Jupyter Notebook is a web application that allows you to create and share documents that contain live code, equations, visualizations and explanatory text. Uses include: data cleaning and transformation, numerical simulation, statistical modeling, machine learning and much more. It has been widely recognised as a great way to distribute scientific papers, because of the capability to have an integrated format with text and executable code, highly reproducible. Top-level investigators around the world are already using it, like the team behind the Gravitational Waves discovery (LIGO), whose analysis was translated into an interactive downloadable Jupyter notebook. You can see it here: https://github.com/minrk/ligo-binder/blob/master/GW150914_tutorial.ipynb

2. Using arrays: NumPy

The ndarray object:

| index | 0 | 1 | 2 | 3 | ... | n-1 | n |
| ---------- | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| value | 2.1 | 3.6 | 7.8 | 1.5 | ... | 5.4 | 6.3 |

- N-dimensional data structure.
- Homogeneously typed.
- Efficient!

A universal function (or ufunc for short) is a function that operates on ndarrays. It is a "vectorized" function.
import numpy as np

my_list = list(range(0,100000))
%timeit sum(my_list)

array = np.arange(0, 100000)
%timeit np.sum(array)
workshops_materials/20160408_0900_Basic_Python_Packages_for_Science/Basic Python Packages for Science.ipynb
PyDataMadrid2016/Conference-Info
mit
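Besides timing, it is worth checking that the vectorized np.sum agrees with the builtin on the same data; a quick sketch:

```python
import numpy as np

n = 100000
array = np.arange(n)          # 0, 1, ..., n-1
total = int(np.sum(array))    # single vectorized reduction over the buffer
```

The speed difference comes from np.sum looping in compiled code over a homogeneous buffer, while sum() dispatches Python-level additions element by element.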
Array creation
one_dim_array = np.array([1, 2, 3, 4])
one_dim_array
two_dim_array = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
two_dim_array
two_dim_array.size
two_dim_array.shape
two_dim_array.dtype
zeros_arr = np.zeros([3, 3])
ones_arr = np.ones([10])
eye_arr = np.eye(5)
range_arr = np.arange(15)
range_arr
range_arr.reshape([3, 5])
np.linspace(0, 10, 21)
Basic slicing
one_dim_array[0]
two_dim_array[-1, -1]
[start:stop:step]
my_arr = np.arange(100)
my_arr[0::2]
chess_board = np.zeros([8, 8], dtype=int)
chess_board[0::2, 1::2] = 1
chess_board[1::2, 0::2] = 1
chess_board
3. Drawing: Matplotlib
%matplotlib inline
import matplotlib.pyplot as plt

plt.matshow(chess_board, cmap=plt.cm.gray)
Operations & linalg
# numpy functions
x = np.linspace(1, 10)
y = np.sin(x)
plt.plot(x, y)
y_2 = (1 + np.log(x)) ** 2

# Our first plot
plt.plot(x, y_2, 'r-*')

# Creating a 2d array
two_dim_array = np.array([[10, 25, 33], [40, 25, 16], [77, 68, 91]])
two_dim_array.T

# matrix multiplication
two_dim_array @ two_dim_array

# matrix-vector product
one_dim_array = np.array([2.5, 3.6, 3.8])
two_dim_array @ one_dim_array

# inverse
np.linalg.inv(two_dim_array)

# eigenvalues & eigenvectors
np.linalg.eig(two_dim_array)
Air quality data
from IPython.display import HTML
HTML('<iframe src="http://www.mambiente.munimadrid.es/sica/scripts/index.php" \
width="700" height="400"></iframe>')
Loading the data
# Linux command
!head ./data/barrio_del_pilar-20160322.csv
# Windows equivalent of head:
# !gc log.txt | select -first 10

# Loading the data from ./data/barrio_del_pilar-20160322.csv
data1 = np.genfromtxt('./data/barrio_del_pilar-20160322.csv',
                      skip_header=3, delimiter=';', usecols=(2, 3, 4))
data1
Dealing with missing values
np.mean(data1, axis=0)
np.nanmean(data1, axis=0)

# Masking invalid data
data1 = np.ma.masked_invalid(data1)
np.mean(data1, axis=0)

data2 = np.genfromtxt('./data/barrio_del_pilar-20151222.csv',
                      skip_header=3, delimiter=';', usecols=(2, 3, 4))
data2 = np.ma.masked_invalid(data2)
Plotting the data

Maximum values from: http://www.mambiente.munimadrid.es/opencms/export/sites/default/calaire/Anexos/valores_limite_1.pdf

NO2: annual mean: 40 µg/m³; hourly mean: 200 µg/m³
plt.plot(data1[:, 1], label='2016')
plt.plot(data2[:, 1], label='2015')
plt.legend()
plt.hlines(200, 0, 200, linestyles='--')
plt.ylim(0, 220)

from IPython.display import HTML
HTML('<iframe src="http://ccaa.elpais.com/ccaa/2015/12/24/madrid/1450960217_181674.html" width="700" height="400"></iframe>')
CO: daily maximum of the 8-hour moving averages: 10 mg/m³
# http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.convolve.html
def moving_average(x, N=8):
    return np.convolve(x, np.ones(N)/N, mode='same')

plt.plot(moving_average(data1[:, 0]), label='2016')
plt.plot(moving_average(data2[:, 0]), label='2015')
plt.hlines(10, 0, 250, linestyles='--')
plt.ylim(0, 11)
plt.legend()
O3: daily maximum of the 8-hour moving averages: 120 µg/m³. Information threshold: 180 µg/m³ (hourly mean). Alert threshold: 240 µg/m³.
plt.plot(moving_average(data1[:, 2]), label='2016')
#plt.plot(data1[:, 2])
plt.plot(moving_average(data2[:, 2]), label='2015')
#plt.plot(data2[:, 2])
plt.hlines(180, 0, 250, linestyles='--')
plt.ylim(0, 190)
plt.legend()
4. Scientific functions: SciPy

```
scipy.linalg: ATLAS LAPACK and BLAS libraries
scipy.stats: distributions, statistical functions...
scipy.integrate: integration of functions and ODEs
scipy.optimize: local and global optimization, fitting, root finding...
scipy.interpolate: interpolation, splines...
scipy.fftpack: Fourier transforms
scipy.signal, scipy.special, scipy.io
```

Temperature data

Now, we will use some temperature data from the Spanish Ministry of Agriculture.
HTML('<iframe src="http://eportal.magrama.gob.es/websiar/Ficha.aspx?IdProvincia=28&IdEstacion=1" width="700" height="400"></iframe>')
The file contains data from 2004 to 2015 (inclusive). Each row corresponds to a day of the year, so every 365 lines contain data for a whole year. Note 1: 29th February has been removed for leap years. Note 2: Missing values have been replaced with the immediately prior valid data. These kinds of events are better handled with Pandas!
!head data/M01_Center_Finca_temperature_data_2004_2015.csv

# Loading the data
temp_data = np.genfromtxt('data/M01_Center_Finca_temperature_data_2004_2015.csv',
                          skip_header=1, delimiter=';')

# Importing SciPy stats
import scipy.stats as st

# Applying some functions: describe, mode, mean...
st.describe(temp_data)
st.mode(temp_data)
np.mean(temp_data, axis=0)
np.median(temp_data, axis=0)
We can also get information about percentiles!
st.scoreatpercentile(temp_data, per=25, axis=0)
st.percentileofscore(temp_data[:, 0], score=0)
st.percentileofscore(temp_data[:, 1], score=0)
st.percentileofscore(temp_data[:, 2], score=0)
Rearranging the data
temp_data2 = np.zeros([365, 3, 12])
for year in range(12):
    temp_data2[:, :, year] = temp_data[year*365:(year+1)*365, :]

# Mean of the daily mean temperatures
mean_mean = np.mean(temp_data2[:, 0, :], axis=1)
# Max of the daily maxima
max_max = np.max(temp_data2[:, 1, :], axis=1)
# Min of the daily minima
min_min = np.min(temp_data2[:, 2, :], axis=1)
Let's visualize data! Using matplotlib styles http://matplotlib.org/users/whats_new.html#styles
%matplotlib inline
plt.style.available
plt.style.use('ggplot')

days = np.arange(1, 366)
plt.fill_between(days, max_max, min_min, alpha=0.7)
plt.plot(days, mean_mean)
plt.xlim(1, 365)
Let's see if 2015 was a normal year...
plt.plot(days, temp_data2[:, 0, -1], lw=2)
plt.plot(days, mean_mean)
plt.xlim(1, 365)

plt.fill_between(days, max_max, min_min, alpha=0.7)
plt.fill_between(days, temp_data2[:, 1, -1], temp_data2[:, 2, -1],
                 color='purple', alpha=0.5)
plt.plot(days, temp_data2[:, 0, -1], lw=2)
plt.xlim(1, 365)
For example, let's represent a function over a 2D domain! For this we will use the contour function, which requires some special inputs...
# we will use numpy functions in order to work with numpy arrays
def funcion(x, y):
    return np.cos(x) + np.sin(y)

# 0D: works!
funcion(3, 5)

# 1D: works!
x = np.linspace(0, 5, 100)
plt.plot(x, funcion(x, 1))
workshops_materials/20160408_0900_Basic_Python_Packages_for_Science/Basic Python Packages for Science.ipynb
PyDataMadrid2016/Conference-Info
mit
In order to plot the 2D function, we will need a grid. For the 1D domain, we just needed one 1D array containing the X positions and another 1D array containing the values. Now we will create a grid, a distribution of points covering a surface. For the 2D domain, we will need:
- One 2D array containing the X coordinate of the points.
- One 2D array containing the Y coordinate of the points.
- One 2D array containing the function value at the points.
The three matrices must have exactly the same dimensions, because each cell of them represents a particular point.
# We can create the X and Y matrices by hand, or use a function designed to make it easy:
# we create two 1D arrays of the desired lengths:
x_1d = np.linspace(0, 5, 5)
y_1d = np.linspace(-2, 4, 7)

# And we use the meshgrid function to create the X and Y matrices!
X, Y = np.meshgrid(x_1d, y_1d)
X
Y
workshops_materials/20160408_0900_Basic_Python_Packages_for_Science/Basic Python Packages for Science.ipynb
PyDataMadrid2016/Conference-Info
mit
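Conceptually, `meshgrid` just repeats each 1D axis across the other dimension, so that `X[i][j]` and `Y[i][j]` together give the coordinates of grid point `(i, j)`. A pure-Python sketch of the same idea (toy sizes, no NumPy; the helper name is mine):

```python
def meshgrid_2d(x_1d, y_1d):
    # X repeats the x values along every row;
    # Y repeats each y value across a row (matches NumPy's default 'xy' indexing)
    X = [[x for x in x_1d] for _ in y_1d]
    Y = [[y for _ in x_1d] for y in y_1d]
    return X, Y

X, Y = meshgrid_2d([0, 1, 2], [10, 20])
print(X)  # [[0, 1, 2], [0, 1, 2]]
print(Y)  # [[10, 10, 10], [20, 20, 20]]
```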
Note that with the meshgrid function we can only create rectangular grids
# Using Numpy arrays, calculating the function value at the points is easy!
Z = funcion(X, Y)

# Let's plot it!
plt.contour(X, Y, Z)
plt.colorbar()
workshops_materials/20160408_0900_Basic_Python_Packages_for_Science/Basic Python Packages for Science.ipynb
PyDataMadrid2016/Conference-Info
mit
We can try a little more resolution...
x_1d = np.linspace(0, 5, 100)
y_1d = np.linspace(-2, 4, 100)
X, Y = np.meshgrid(x_1d, y_1d)
Z = funcion(X, Y)
plt.contour(X, Y, Z)
plt.colorbar()
workshops_materials/20160408_0900_Basic_Python_Packages_for_Science/Basic Python Packages for Science.ipynb
PyDataMadrid2016/Conference-Info
mit
The contourf function is similar, but it also fills the regions between the lines with colour. In both functions, we can manually adjust the number of lines/zones we want to distinguish on the plot.
plt.contourf(X, Y, Z, np.linspace(-2, 2, 6), cmap=plt.cm.Spectral)  # With cmap, a color map is specified
plt.colorbar()

plt.contourf(X, Y, Z, np.linspace(-2, 2, 100), cmap=plt.cm.Spectral)
plt.colorbar()

# We can even combine them!
plt.contourf(X, Y, Z, np.linspace(-2, 2, 100), cmap=plt.cm.Spectral)
plt.colorbar()
cs = plt.contour(X, Y, Z, np.linspace(-2, 2, 9), colors='k')
plt.clabel(cs)
workshops_materials/20160408_0900_Basic_Python_Packages_for_Science/Basic Python Packages for Science.ipynb
PyDataMadrid2016/Conference-Info
mit
These functions can be enormously useful when you want to visualize something. And remember! Always visualize data! Let's try it with real data!
time_vector = np.loadtxt('data/ligo_tiempos.txt')
frequency_vector = np.loadtxt('data/ligo_frecuencias.txt')
intensity_matrix = np.loadtxt('data/ligo_datos.txt')
workshops_materials/20160408_0900_Basic_Python_Packages_for_Science/Basic Python Packages for Science.ipynb
PyDataMadrid2016/Conference-Info
mit
The time and frequency vectors contain the values at which the instrument was reading, and the intensity matrix the postprocessed strength measured for each frequency at each time. Again, we need to create the 2D arrays of coordinates.
time_2D, freq_2D = np.meshgrid(time_vector, frequency_vector)

plt.figure(figsize=(10, 6))  # We can manually adjust the size of the picture
plt.contourf(time_2D, freq_2D, intensity_matrix, np.linspace(0, 0.02313, 200), cmap='bone')
plt.xlabel('time (s)')
plt.ylabel('Frequency (Hz)')
plt.colorbar()
workshops_materials/20160408_0900_Basic_Python_Packages_for_Science/Basic Python Packages for Science.ipynb
PyDataMadrid2016/Conference-Info
mit
Wow! What is that? Let's zoom into it!
plt.figure(figsize=(10, 6))
plt.contourf(time_2D, freq_2D, intensity_matrix, np.linspace(0, 0.02313, 200), cmap=plt.cm.Spectral)
plt.colorbar()
plt.contour(time_2D, freq_2D, intensity_matrix, np.linspace(0, 0.02313, 9), colors='k')
plt.xlabel('time (s)')
plt.ylabel('Frequency (Hz)')
plt.axis([9.9, 10.05, 0, 300])
workshops_materials/20160408_0900_Basic_Python_Packages_for_Science/Basic Python Packages for Science.ipynb
PyDataMadrid2016/Conference-Info
mit
IPython Widgets The IPython Widgets are interactive tools to use in the notebook. They are fun and very useful to quickly understand how different parameters affect a certain function. This is based on a section of the PyConEs 14 talk by Kiko Correoso "Hacking the notebook": http://nbviewer.jupyter.org/github/kikocorreoso/PyConES14_talk-Hacking_the_Notebook/blob/master/notebooks/Using%20Interact.ipynb
from ipywidgets import interact

# Let's define an extremely simple function:
def ejemplo(x):
    print(x)

interact(ejemplo, x=10)
# Try changing the value of x to True, 'Hello' or ['hello', 'world']

# We can control the slider values with more precision:
interact(ejemplo, x=(9, 10, 0.1))
workshops_materials/20160408_0900_Basic_Python_Packages_for_Science/Basic Python Packages for Science.ipynb
PyDataMadrid2016/Conference-Info
mit
If you want a dropdown menu that passes non-string values to the Python function, you can pass a dictionary. The keys in the dictionary are used for the names in the dropdown menu UI and the values are the arguments that are passed to the underlying Python function.
interact(ejemplo, x={'one': 10, 'two': 20})
workshops_materials/20160408_0900_Basic_Python_Packages_for_Science/Basic Python Packages for Science.ipynb
PyDataMadrid2016/Conference-Info
mit
Let's have some fun! We talked before about frequencies and waves. Have you ever learned about AM and FM modulation? It's the process used to send radio communications!
x = np.linspace(-1, 7, 1000)

fig = plt.figure()
plt.subplot(211)  # This allows us to display multiple sub-plots, and where to put them
plt.plot(x, np.sin(x))
plt.grid(False)
plt.title("Audio signal: modulator")

plt.subplot(212)
plt.plot(x, np.sin(50 * x))
plt.grid(False)
plt.title("Radio signal: carrier")

# AM modulation simply works like this:
am_wave = np.sin(50 * x) * (0.5 + 0.5 * np.sin(x))
plt.plot(x, am_wave)
workshops_materials/20160408_0900_Basic_Python_Packages_for_Science/Basic Python Packages for Science.ipynb
PyDataMadrid2016/Conference-Info
mit
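In the cell above the envelope `0.5 + 0.5*sin(x)` swings between 0 and 1, so the modulated wave never leaves the [-1, 1] band of the unmodulated carrier. A quick stdlib check of that bound — the sampling of the interval and the function name are my own sketch, not part of the notebook:

```python
import math

def am_sample(t, f_carr=50.0, f_mod=1.0):
    # carrier sin(f_carr*t) scaled by an envelope that varies between 0 and 1
    return math.sin(f_carr * t) * (0.5 + 0.5 * math.sin(f_mod * t))

# sample the interval [-1, 7] like the notebook's linspace, just more densely
samples = [am_sample(-1 + 8 * i / 99999) for i in range(100000)]
print(max(samples), min(samples))  # stays within the [-1, 1] carrier band
```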
In order to interact with it, we will need to transform it into a function
def am_mod(f_carr=50, f_mod=1, depth=0.5):
    # The default values will be the starting points of the sliders
    x = np.linspace(-1, 7, 1000)
    am_wave = np.sin(f_carr * x) * (1 - depth/2 + depth/2 * np.sin(f_mod * x))
    plt.plot(x, am_wave)

interact(am_mod, f_carr=(1, 100, 2), f_mod=(0.2, 2, 0.1), depth=(0, 1, 0.1))
workshops_materials/20160408_0900_Basic_Python_Packages_for_Science/Basic Python Packages for Science.ipynb
PyDataMadrid2016/Conference-Info
mit
Other options... 5. Other packages Symbolic calculations with SymPy SymPy is a Python package for symbolic math. We will not cover it in depth, but let's take a picture of the basics!
# Importing and initializing SymPy
from sympy import init_session
init_session(use_latex='matplotlib')  # We must start by calling this function
workshops_materials/20160408_0900_Basic_Python_Packages_for_Science/Basic Python Packages for Science.ipynb
PyDataMadrid2016/Conference-Info
mit
The basic unit of this package is the symbol. A symbol object has a name and a graphic representation, which can be different:
coef_traccion = symbols('c_T')
coef_traccion

w = symbols('omega')
W = symbols('Omega')
w, W
workshops_materials/20160408_0900_Basic_Python_Packages_for_Science/Basic Python Packages for Science.ipynb
PyDataMadrid2016/Conference-Info
mit
By default, SymPy treats symbols as complex numbers. That can lead to unexpected results with certain operations, like logarithms. We can explicitly state that a symbol is real when we create it. We can also create several symbols at a time.
x, y, z, t = symbols('x y z t', real=True)
x.assumptions0
workshops_materials/20160408_0900_Basic_Python_Packages_for_Science/Basic Python Packages for Science.ipynb
PyDataMadrid2016/Conference-Info
mit
Expressions can be created from symbols:
expr = cos(x)**2 + sin(x)**2
expr
simplify(expr)

# We can substitute pieces of the expression:
expr.subs(x, y**2)

# We can particularize on a certain value:
(sin(x) + 3 * x).subs(x, pi)

# We can evaluate the numerical value with a certain precision:
(sin(x) + 3 * x).subs(x, pi).evalf(25)
workshops_materials/20160408_0900_Basic_Python_Packages_for_Science/Basic Python Packages for Science.ipynb
PyDataMadrid2016/Conference-Info
mit
We can manipulate the expression in several ways. For example:
expr1 = (x ** 3 + 3 * y + 2) ** 2
expr1
expr1.expand()
workshops_materials/20160408_0900_Basic_Python_Packages_for_Science/Basic Python Packages for Science.ipynb
PyDataMadrid2016/Conference-Info
mit
We can differentiate and integrate:
expr = cos(2*x)
expr.diff(x, x, x)

expr_xy = y ** 3 * sin(x) ** 2 + x ** 2 * cos(y)
expr_xy
diff(expr_xy, x, 2, y, 2)

int2 = 1 / sin(x)
integrate(int2)

x, a = symbols('x a', real=True)
int3 = 1 / (x**2 + a**2)**2
integrate(int3, x)
workshops_materials/20160408_0900_Basic_Python_Packages_for_Science/Basic Python Packages for Science.ipynb
PyDataMadrid2016/Conference-Info
mit
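Symbolic results like these are easy to sanity-check numerically with a central finite difference. A stdlib sketch (the helper name and step size are my own choices): SymPy gives d/dx cos(2x) = -2 sin(2x), which we can compare against the numerical derivative at an arbitrary point.

```python
import math

def central_diff(f, x, h=1e-6):
    # central difference approximates f'(x) with error O(h^2)
    return (f(x + h) - f(x - h)) / (2 * h)

# compare the numerical derivative of cos(2x) with -2*sin(2x) at x = 0.7
x0 = 0.7
numeric = central_diff(lambda t: math.cos(2 * t), x0)
symbolic = -2 * math.sin(2 * x0)
print(abs(numeric - symbolic) < 1e-8)  # True
```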
We also have equations and differential equations:
a, x, t, C = symbols('a, x, t, C', real=True)
ecuacion = Eq(a * exp(x/t), C)
ecuacion
solve(ecuacion, x)

x = symbols('x')
f = Function('y')
ecuacion_dif = Eq(f(x).diff(x, 2) + f(x).diff(x) + f(x), cos(x))
ecuacion_dif
dsolve(ecuacion_dif, f(x))
workshops_materials/20160408_0900_Basic_Python_Packages_for_Science/Basic Python Packages for Science.ipynb
PyDataMadrid2016/Conference-Info
mit
Data Analysis with pandas Pandas is a package that focuses on data structures and data analysis tools. We will not cover it because the next workshop, by Kiko Correoso, will develop it in depth. Machine Learning with scikit-learn Scikit-learn is a very complete Python package focusing on machine learning, data mining and analysis. We will not cover it in depth because it will be the focus of many more talks at the PyData. A world of possibilities... Thanks for your attention! Any Questions?
# Notebook style
from IPython.core.display import HTML
css_file = './static/style.css'
HTML(open(css_file, "r").read())
workshops_materials/20160408_0900_Basic_Python_Packages_for_Science/Basic Python Packages for Science.ipynb
PyDataMadrid2016/Conference-Info
mit
Visualize channel over epochs as an image This will produce what is sometimes called an event related potential / field (ERP/ERF) image. Two images are produced, one with a good channel and one with a channel that does not show any evoked field. It is also demonstrated how to reorder the epochs using a 1D spectral embedding as described in [1]_.
# Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>
#
# License: BSD (3-clause)

import numpy as np
import matplotlib.pyplot as plt

import mne
from mne import io
from mne.datasets import sample

print(__doc__)

data_path = sample.data_path()
0.19/_downloads/2677ee623a2aeff54fe63131444b1844/plot_channel_epochs_image.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Set parameters
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
event_id, tmin, tmax = 1, -0.2, 0.4

# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)

# Set up pick list: EEG + MEG - bad channels (modify to your needs)
raw.info['bads'] = ['MEG 2443', 'EEG 053']

# Create epochs, here for gradiometers + EOG only for simplicity
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,
                    picks=('grad', 'eog'), baseline=(None, 0), preload=True,
                    reject=dict(grad=4000e-13, eog=150e-6))
0.19/_downloads/2677ee623a2aeff54fe63131444b1844/plot_channel_epochs_image.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Show event-related fields images
# and order with spectral reordering
# If you don't have scikit-learn installed set order_func to None
from sklearn.cluster.spectral import spectral_embedding  # noqa
from sklearn.metrics.pairwise import rbf_kernel  # noqa


def order_func(times, data):
    this_data = data[:, (times > 0.0) & (times < 0.350)]
    this_data /= np.sqrt(np.sum(this_data ** 2, axis=1))[:, np.newaxis]
    return np.argsort(spectral_embedding(rbf_kernel(this_data, gamma=1.),
                                         n_components=1, random_state=0).ravel())


good_pick = 97  # channel with a clear evoked response
bad_pick = 98  # channel with no evoked response

# We'll also plot a sample time onset for each trial
plt_times = np.linspace(0, .2, len(epochs))

plt.close('all')
mne.viz.plot_epochs_image(epochs, [good_pick, bad_pick], sigma=.5,
                          order=order_func, vmin=-250, vmax=250,
                          overlay_times=plt_times, show=True)
0.19/_downloads/2677ee623a2aeff54fe63131444b1844/plot_channel_epochs_image.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
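The `order_func` above reduces each epoch to a single scalar (its 1D spectral embedding) and then sorts the epochs by that scalar. The sorting step itself is just an argsort over a per-epoch score; a pure-Python sketch with made-up "epochs" (the helper name and toy data are mine, and mean amplitude stands in for the spectral embedding here):

```python
def order_by_score(epochs, score):
    # indices of the epochs sorted by score(epoch), like np.argsort on the scores
    return sorted(range(len(epochs)), key=lambda i: score(epochs[i]))

# toy "epochs": lists of samples, ordered here by their mean amplitude
toy_epochs = [[3, 3, 3], [1, 1, 1], [2, 2, 2]]
order = order_by_score(toy_epochs, lambda e: sum(e) / len(e))
print(order)  # [1, 2, 0]
```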
Read the data from the USB port into a file: cat /dev/cu.usbmodem1421 > output3.bin The data will be in binary format; let's read it into a DataFrame and convert it to milliamps:
df = pd.DataFrame(np.fromfile("./output.bin", dtype=np.uint16).astype(np.float32) * (3300 / 2**12))
# df.describe()
firmware/arduino_due_1MHz/analyze_current.ipynb
yandex-load/volta
mpl-2.0
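The scale factor `3300 / 2**12` in the cell above maps raw 12-bit ADC counts to millivolts, assuming a 3.3 V (3300 mV) reference; the notebook then treats the result as milliamps, which holds only if the measurement shunt maps 1 mV to 1 mA — an assumption on my part. A minimal stdlib version of the conversion (the function name is mine):

```python
def counts_to_mv(counts, vref_mv=3300, bits=12):
    # one ADC count corresponds to vref / 2**bits millivolts
    return counts * vref_mv / 2 ** bits

print(counts_to_mv(0))      # 0.0
print(counts_to_mv(2048))   # 1650.0, half of the reference
print(counts_to_mv(4096))   # 3300.0 (full scale)
```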
There is a lot of data — a million samples per second, and we collected almost 70 million samples. If we try to plot them all at once, Python will think for a VERY long time, so we will plot chunks: 100 samples, i.e. 100 microseconds:
fig = sns.plt.figure(figsize=(16, 6))
ax = sns.plt.subplot()
df[20000:20100].plot(ax=ax)
firmware/arduino_due_1MHz/analyze_current.ipynb
yandex-load/volta
mpl-2.0
Let's zoom out to a coarser scale; to do that, we group the data into 10 µs bins and take the mean:
df_r = df.groupby(df.index//10).mean()
fig = sns.plt.figure(figsize=(16, 6))
ax = sns.plt.subplot()
df_r[:30000].plot(ax=ax)
firmware/arduino_due_1MHz/analyze_current.ipynb
yandex-load/volta
mpl-2.0
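`df.groupby(df.index // 10).mean()` is block-averaging: every run of 10 consecutive samples collapses to its mean (the last partial block included). The same operation in plain Python — the helper name and toy data are mine:

```python
def block_mean(samples, block):
    # average consecutive chunks of `block` samples; the last partial chunk counts too
    return [sum(samples[i:i + block]) / len(samples[i:i + block])
            for i in range(0, len(samples), block)]

print(block_mean([1, 2, 3, 4, 5, 6], 3))  # [2.0, 5.0]
print(block_mean([10, 20, 30, 40], 3))    # [20.0, 40.0]
```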
Let's look at the timestamps in logcat. We have three events from the Browser; the rest are flashlight on/off toggles. 05:05:51.540 05:05:52.010 05:05:52.502 05:05:52.857 05:05:53.317 05:05:53.660 05:05:54.118 05:05:54.504 05:05:54.966 05:05:55.270 05:06:01.916 14509 14509 I cr_Ya:DownloadTracking: PageLoadStarted, ElapsedRealtimeMillis: 1241509 05:06:03.453 14509 14509 I cr_Ya:DownloadTracking: DownloadStarted, ElapsedRealtimeMillis: 1243046 05:06:09.147 14509 14509 I cr_Ya:DownloadTracking: DownloadFinished, ElapsedRealtimeMillis: 1248740 05:06:13.336 05:06:13.691 05:06:14.051 05:06:14.377 05:06:14.783 05:06:15.089 05:10:32.190 05:10:34.015 05:10:37.349 05:10:37.491 Let's resample once more, so that each point holds one millisecond, and plot all the data:
df_r1000 = df.groupby(df.index//1000).mean()
fig = sns.plt.figure(figsize=(16, 6))
ax = sns.plt.subplot()
df_r1000.plot(ax=ax)
firmware/arduino_due_1MHz/analyze_current.ipynb
yandex-load/volta
mpl-2.0
The interesting consumption spikes start around millisecond 40000 (there are five of them in a row — we blinked the flashlight five times).
fig = sns.plt.figure(figsize=(16, 6))
ax = sns.plt.subplot()
df_r1000[40000:41000].plot(ax=ax)
firmware/arduino_due_1MHz/analyze_current.ipynb
yandex-load/volta
mpl-2.0
Suppose the first spike was at millisecond 40200. Now let's compute the relative times:
times = [ 51540, 52010, 52502, 52857, 53317, 53660, 54118, 54504, 54966, 55270, 60000 + 1916, # PageLoadStarted 60000 + 3453, # DownloadStarted 60000 + 9147, # DownloadFinished 60000 + 13336, 60000 + 13691, 60000 + 14051, 60000 + 14377, 60000 + 14783, 60000 + 15089, 60000 + 32190, 60000 + 34015, 60000 + 37349, 60000 + 37491, ]
firmware/arduino_due_1MHz/analyze_current.ipynb
yandex-load/volta
mpl-2.0
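Placing these logcat timestamps on the current trace boils down to shifting each one by the offset between the first event and the first flash seen on the chart — the `sync + t - times[0]` expression used below. As a plain function (the name and example values are mine, taken from the list above):

```python
def align(timestamps, sync_ms):
    # map device timestamps onto the plot axis: the first event lands exactly at sync_ms
    t0 = timestamps[0]
    return [sync_ms + t - t0 for t in timestamps]

print(align([51540, 52010, 52502], 40205))  # [40205, 40675, 41167]
```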
And plot them on our chart:
fig = sns.plt.figure(figsize=(16, 6))
ax = sns.plt.subplot()
sync = 40205
df_r1000[40000:43000].plot(ax=ax)
for t in times:
    sns.plt.axvline(sync + t - times[0])
firmware/arduino_due_1MHz/analyze_current.ipynb
yandex-load/volta
mpl-2.0
The second flash has a steeper edge, so let's try to synchronize more precisely on it (using the microsecond-resolution data):
fig = sns.plt.figure(figsize=(16, 6))
ax = sns.plt.subplot()
df[41100000:41250000].plot(ax=ax)
sns.plt.axvline(40200000 + 470000 + 498000 + 5000)
firmware/arduino_due_1MHz/analyze_current.ipynb
yandex-load/volta
mpl-2.0
The same for the first flash — you can see its edge is blurred:
fig = sns.plt.figure(figsize=(16, 6))
ax = sns.plt.subplot()
df[40100000:40250000].plot(ax=ax)
sns.plt.axvline(40200000 + 5000)
firmware/arduino_due_1MHz/analyze_current.ipynb
yandex-load/volta
mpl-2.0
Now let's plot the data for the whole test case, taking the refined synchronization into account:
fig = sns.plt.figure(figsize=(16, 6))
ax = sns.plt.subplot()
sync = 40205
df_r1000[40000:65000].plot(ax=ax)
for t in times:
    sns.plt.axvline(sync + t - times[0])
firmware/arduino_due_1MHz/analyze_current.ipynb
yandex-load/volta
mpl-2.0
And zoom in on the file download period:
fig = sns.plt.figure(figsize=(16, 6))
ax = sns.plt.subplot()
sync = 40205
df_r1000[52000:58000].plot(ax=ax)
for t in times:
    sns.plt.axvline(sync + t - times[0])
firmware/arduino_due_1MHz/analyze_current.ipynb
yandex-load/volta
mpl-2.0