First we need to create the lattice from the tight-binding model and define the translation symmetries.
lattice = model.to_kwant_lattice()
sym = kwant.TranslationalSymmetry(
    lattice.vec((1, 0, 0)),
    lattice.vec((0, 1, 0)),
    lattice.vec((0, 0, 1))
)
examples/kwant_interface/kwant_interface_demo.ipynb
Z2PackDev/TBmodels
apache-2.0
Now we define a Builder with these symmetries
kwant_sys = kwant.Builder(sym)
We give the system an "infinite" shape. This needs to be done before adding the hoppings, because on-site energies and hoppings are added only to existing sites.
kwant_sys[lattice.shape(lambda p: True, (0, 0, 0))] = 0
Now we can add the hoppings. This modifies the kwant system in-place.
model.add_hoppings_kwant(kwant_sys)
Finally, use wraparound to finalize the bulk system:
kwant_model = kwant.wraparound.wraparound(kwant_sys).finalized()
To see that the two models are the same, we plot the bands along some line. Note that the periodicity of the k-vector is $1$ in TBmodels, but $2\pi$ in kwant. The k-vector needs to be scaled accordingly.
k_list = [(kx, 0, 0) for kx in np.linspace(0, 1, 100)]
x = range(100)
eigs_tbmodels = [model.eigenval(k) for k in k_list]
eigs_kwant = [
    la.eigvalsh(kwant_model.hamiltonian_submatrix(
        params={key: val for key, val in zip(['k_x', 'k_y', 'k_z'], 2 * np.pi * np.array(k))}
    ))
    for k in k_list
]
Numerical and visual test for equivalence:
np.isclose(eigs_tbmodels, eigs_kwant).all()
fig, ax = plt.subplots()
for band in np.array(eigs_tbmodels).T:
    ax.plot(x, band, 'k')
for band in np.array(eigs_kwant).T:
    ax.plot(x, band, 'b')
Finite wire with leads

In the second example, we build a finite wire and attach two leads, one on either side. Since the finite wire doesn't have translational symmetry, we can just create a bare Builder.
wire = kwant.Builder()
Now we define a shape for the wire - for simplicity we use a wire with a square cross-section.
def shape(p):
    x, y, z = p
    return -20 < x < 20 and -5 < y < 5 and -5 < z < 5
Again, we explicitly create the lattice sites before populating the hoppings.
wire[lattice.shape(shape, (0, 0, 0))] = 0
model.add_hoppings_kwant(wire)
kwant.plot(wire);
Now we create and attach two leads, one on either side. The lead must be long enough such that the longest-range hopping stays within the lead.
sym_lead = kwant.TranslationalSymmetry(lattice.vec((-5, 0, 0)))
lead = kwant.Builder(sym_lead)

def lead_shape(p):
    x, y, z = p
    return -5 <= x <= 0 and -5 < y < 5 and -5 < z < 5

lead[lattice.shape(lead_shape, (0, 0, 0))] = 0
model.add_hoppings_kwant(lead)
wire.attach_lead(lead);
wire.attach_lead(lead.reversed())
Here's how to find the longest-range hopping in each direction:
for i, dir in enumerate(['x', 'y', 'z']):
    print(dir + ':', max(abs(R[i]) for R in model.hop.keys()))
<a id="ref0"></a>
<h2 align=center>Activation Functions</h2>
Just as in a neural network, you apply an activation function to the activation map, as shown in the following image:
<img src="https://ibm.box.com/shared/static/g3x3p1jaf2lv249gdvnjtnzez3p64nou.png" width=1000 align="center">
Create a kernel and image...
conv = nn.Conv2d(in_channels=1, out_channels=1, kernel_size=3)
Gx = torch.tensor([[1.0, 0, -1.0], [2.0, 0, -2.0], [1.0, 0, -1.0]])
conv.state_dict()['weight'][0][0] = Gx
conv.state_dict()['bias'][0] = 0.0
conv.state_dict()
image = torch.zeros(1, 1, 5, 5)
image[0, 0, :, 2] = 1
image
DL0110EN/6.1.3.Activation max pooling .ipynb
atlury/deep-opencl
lgpl-3.0
The following image shows the image and kernel:
<img src="https://ibm.box.com/shared/static/e0xc2oqtolg4p6nfsumcbpix1q5yq2kr.png" width=500 align="center">
Apply convolution to the image:
Z = conv(image)
Z
Apply the activation function to the activation map. The function is applied to each element of the activation map.
A = F.relu(Z)
A
The process is summarized in the following figure. The ReLU function is applied element-wise: all elements less than zero are mapped to zero, and the remaining elements are unchanged.
<img src="https://ibm.box.com/shared/static/b07y9oepudg45ur8383x11xv36ox6any.gif" width=1000 align="center">
<a id="ref1"></a>
image1 = torch.zeros(1, 1, 4, 4)
image1[0, 0, 0, :] = torch.tensor([1.0, 2.0, 3.0, -4.0])
image1[0, 0, 1, :] = torch.tensor([0.0, 2.0, -3.0, 0.0])
image1[0, 0, 2, :] = torch.tensor([0.0, 2.0, 3.0, 1.0])
image1
Max pooling simply takes the maximum value in each region. Consider the following image. For the first region, max pooling takes the largest element in the yellow region.
<img src="https://ibm.box.com/shared/static/gso58h37ov42cl6bx5wkvll11kx80jku.png" width=500 align="center">
The region shifts, and the...
max3 = torch.nn.MaxPool2d(2, stride=1)
max3(image1)
max1 = torch.nn.MaxPool2d(2)
max1(image1)
If the stride is set to None (its default setting), the process will simply take the maximum in a prescribed area and shift over accordingly, as shown in the following figure:
<img src="https://ibm.box.com/shared/static/cenhef82q5kxzvzdqmjyuvbxo6j3c2ej.gif" width=500 align="center">
Here's the code in PyTorch:
max1 = torch.nn.MaxPool2d(2)
max1(image1)
I mean that the command is written at the IPython command prompt (or in the IPython notebook if you prefer to use that). There are some lines above the command prompt; these show the current Python and IPython versions and tips on how to get help. But first, let us look at some of the basic operations, add...
(3.6*5 + (3 - 2)**3)/2
Chapter2.ipynb
rorimac/Tools-of-a-Math-Student
mit
These are the most basic operations, and there are several more, but we won't bother with them in this text.

Variables

Python, like most programming languages, relies on the fact that you can create something called a variable. This is similar to what in mathematics is called a variable and a constant. For example,...
a = 5
Python now knows that there is such a thing as $a$ which you can use to do further operations. For example, instead of writing
5 + 3
We can write
a + 3
and we get the same result. This might not seem very useful, but we can use variables the same way we use constants and variables in mathematics to shorten what we have to write. For example, we may want to calculate averages of averages.
b = (2.5 + 6.3)/2
c = (5.3 + 8.7)/2
(b + c)/2
Without variables this would get messy and, very soon, extremely difficult.

Objects

Just as there are different kinds of objects in mathematics, such as integers, reals and matrices, there are different kinds of objects in Python. The basic ones are integers, floats, lists, tuples and strings, which will be introduced...
a = 12
a / 5
Often, this is not what you wanted. This leads us to the next object.

Floats

A float, or a floating point number, works very much like a real number in mathematics. The set of floats is closed under all the operations we have introduced, just as the reals are. To declare a float we simply add a dot to the number...
a = 12.
and now $a$ is a float instead of an integer. If we do the same operation as before, but with floats, we get
a / 5.
Now it might seem as if we should only use floats and not use integers at all because of this property, but as we will see soon, integers play a central role in loops.

Lists and tuples

Just as integers and floats are similar, so are lists and tuples. We will begin with lists. A list is an ordered collection of object...
a = [1, 3.5, [3, 5, []], 'this is a string']
a
Because a list is ordered, the objects in the list can be accessed by stating at which place they are. To access an object in the list we use the square brackets again. Python (and most other programming languages) counts from $0$, which means that the first object in the list has index $0$, the second has index $1$...
a[2]         # Access the third element in the list. (This is a comment and is ignored by the interpreter.)
a[2][1]      # We can access an object in a list which is itself in a list
a[1] = 13.2  # We can change the values of objects in the list
a[0] + a[1] + a[2][1]
You can also access parts of a list like this:
a[0:2]           # Returns a list containing the first element up to, but not including, the third (index 2) element
a[0:2] + a[0:2]  # You can put two lists together with addition.
The length of a list is not fixed in Python, and objects can be added and removed. To do this we will use append and del.
a = []  # Create an empty list
a
a.append('bleen')
a.append([2, 4.1, 'grue'])
a.append(4.3)
a
del a[-1]  # We can also index from the end of the list; -1 indicates the last element
a
Tuples are initialized similarly to lists and can contain most objects a list can contain. The main difference is that a tuple does not support item assignment: once the tuple is created, its objects cannot be changed later. Tuples are initiated with matching parentheses ( and ).
a = (2, 'e', (3.4, 6.8))
a
a[0]
a[-1][-1]
a[1] = 0  # This raises a TypeError: tuples do not support item assignment
Because tuples do not support item assignment, you cannot use append or del with them. Tuples are good to use if you want to make sure that certain values stay unchanged in a program, for example a group of physical constants.

Strings

Strings are lines of text or symbols and are initiated with double or single quotes. ...
a = 'here is a string'
a
a = "Here's another"  # Notice that we included a single quote in the string.
a
a = """
This string spans
several lines.
"""
a        # \n means new line. Newlines can also be included manually with \n.
print a  # To see \n as an actual new line we need to use print.
Here you saw the first occurrence of the print statement. Its functionality is much greater than prettifying string output, as it can print text to the command or terminal window. One important functionality of the string is the format function. This function lets us create a string without knowing what it will contain...
a = [1, 2, 3, 4, 5]
s = "The sum of {} is {}".format(a, sum(a))  # avoid calling the variable 'str', which would shadow the built-in type
s
It uses the curly brackets { and } as placeholders for the objects in the format part. There are many other things you can do with strings; to find out, use the question mark, ?, in the interpreter after the variable you want more information about. Notice that this does not work in the regular Python interpreter; you have ...
a?
help(sum)
Structure of this module

There are two ways of using this module: instantiating textwrap.TextWrapper(**kwargs) directly, or using the convenience functions the module provides. Let's start with the convenience functions.
sample_text = '''
    The textwrap module can be used to format text for output in
    situations where pretty-printing is desired.  It offers
    programmatic functionality similar to the paragraph wrapping
    or filling features found in many text editors.
    '''
text/text_wrap.ipynb
scotthuang1989/Python-3-Module-of-the-Week
apache-2.0
Convenience functions

textwrap.wrap(text, width=70, **kwargs)

Wraps the single paragraph in text (a string) so every line is at most width characters long. Returns a list of output lines, without final newlines.
wrap_result = textwrap.wrap(sample_text, width=30)
wrap_result
textwrap.fill(text, width=70, **kwargs)

Wraps the single paragraph in text, and returns a single string containing the wrapped paragraph.
fill_result = textwrap.fill(sample_text, width=30)
fill_result
textwrap.dedent(text)

Remove any common leading whitespace from every line in text.
# before dedent
sample_text
dedent_result = textwrap.dedent(sample_text)
# after dedent
dedent_result
You probably noticed that the result of textwrap.fill has some unwanted spaces. We can work around this by dedenting the text first and then filling it.
dedent_result_wrap = textwrap.fill(dedent_result, width=30)
dedent_result_wrap
textwrap.indent(text, prefix, predicate=None)

Add prefix to the beginning of selected lines in text.
indent_result = textwrap.indent(sample_text, prefix="=A=")
indent_result
print(indent_result)
To control which lines receive the new prefix, pass a callable as the predicate argument to indent(). The callable will be invoked for each line of text in turn and the prefix will be added for lines where the return value is true.
def should_indent(line):
    print('Indent {!r}?'.format(line))
    return len(line.strip()) % 2 == 0

dedented_text = textwrap.dedent(sample_text)
wrapped = textwrap.fill(dedented_text, width=50)
final = textwrap.indent(wrapped, 'EVEN ', predicate=should_indent)
print('\nQuoted block:\n')
print(final)
This example adds the prefix EVEN to lines that contain an even number of characters.

textwrap.shorten(text, width, **kwargs)

Collapse and truncate the given text to fit in the given width. First the whitespace in text is collapsed (all whitespace is replaced by single spaces). If the result fits in the width, it is returned...
shorten_result = textwrap.shorten(sample_text, width=90)
shorten_result
# use a different placeholder
shorten_result_1 = textwrap.shorten(sample_text, width=90, placeholder='....')
shorten_result_1
textwrap.TextWrapper(**kwargs)

If you want a more convenient or efficient solution, you should use TextWrapper directly. You may notice that textwrap.wrap(text, width=70, **kwargs), textwrap.fill(text, width=70, **kwargs) and textwrap.shorten(text, width=70, **kwargs) all have optional kwargs arguments. These optional arguments ...
dedented_text = textwrap.dedent(sample_text).strip()
print(textwrap.fill(dedented_text,
                    initial_indent='',
                    subsequent_indent=' ' * 4,
                    width=50,
                    ))
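When the same options are reused across many calls, a TextWrapper instance can hold them once instead of passing kwargs each time. A minimal sketch (the sample text here is illustrative):

```python
import textwrap

# A TextWrapper instance stores the formatting options, so repeated
# fill()/wrap() calls reuse them without re-processing kwargs.
wrapper = textwrap.TextWrapper(width=50,
                               initial_indent='',
                               subsequent_indent=' ' * 4)
text = "one two three four five " * 8
print(wrapper.fill(text))
```

The instance can then be applied to any number of paragraphs with wrapper.fill or wrapper.wrap.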
1. I want to make sure my Plate ID is a string. Can't lose the leading zeroes!
plate_info = {'Plate ID': 'str'}
df = pd.read_csv("small-violations.csv", dtype=plate_info)
df
df.head()
df.head(10)
df.tail()
Homework 11 Soma.ipynb
skkandrach/foundations-homework
mit
2. I don't think anyone's car was built in 0AD. Discard the '0's as NaN.
plate_info = {'Plate ID': 'str'}
df = pd.read_csv("small-violations.csv",
                 dtype=plate_info,
                 na_values={'Vehicle Year': '0', 'Date First Observed': '0'})
df.head()
3. I want the dates to be dates! Read the read_csv documentation to find out how to make pandas automatically parse dates.
import dateutil

def date_to_date(date):
    date = str(date)
    parsed_date = dateutil.parser.parse(date)
    return parsed_date

df.columns
df['New Issue Date'] = df['Issue Date'].apply(date_to_date)

import datetime

def convert_to_time(time):
    try:
        str_time = str(time)
        return datetime....
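As a sketch of the built-in alternative the prompt hints at, read_csv can parse date columns itself via the parse_dates argument. An in-memory CSV stands in here for the homework's file, which isn't available:

```python
import io
import pandas as pd

# Sketch: parse_dates makes pandas convert the column to datetime64
# while dtype=str keeps the leading zero in the plate ID.
csv = io.StringIO("Plate ID,Issue Date\n01234,03/24/2014\n")
df2 = pd.read_csv(csv, dtype={'Plate ID': str}, parse_dates=['Issue Date'])
print(df2['Issue Date'].dtype)  # datetime64[ns]
```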
4. "Date first observed" is a pretty weird column, but it seems like it has a date hiding inside. Using a function with .apply, transform the string (e.g. "20140324") into a Python date. Make the 0's show up as NaN.
other_df.columns
other_df['Date First Observed'].dtypes
other_df['Date First Observed'].tail()
import dateutil
other_df['Date First Observed']
other_df['Violation Time'].head()
other_df['Violation Time'].tail()

def int_to_date(integer):
    if not pd.isnull(integer):
        date = str(int(integer))
        pars...
5. "Violation time" is... not a time. Make it a time.
def violation_time_to_time(time):
    try:
        hour = time[0:2]
        minutes = time[2:4]
        am_pm = time[4]
        regular_time = hour + ":" + minutes + " " + am_pm + 'm'
        violation_time_fixed = dateutil.parser.parse(regular_time)
        return violation_time_fixed.strftime("%H:%M%p")
    except:
        ...
6. There sure are a lot of colors of cars; too bad so many of them are the same. Combine "BLK" with "BLACK", "WT" with "WHITE", and any other combinations that you notice.
other_df['Vehicle Color'].value_counts()

def color_rename(color):
    if (color == 'BLACK') or (color == 'BLK') or (color == 'BK'):
        return 'BLACK'
    elif (color == 'WHITE') or (color == 'WHT') or (color == 'WH') or (color == 'W'):
        return 'WHITE'
    # note: any color not matched above falls through and becomes None

other_df['Vehicle Color'].apply(color_rename)
7. Join the data with the Parking Violations Code dataset from the NYC Open Data site.
parking_violations_df = pd.read_csv("DOF_Parking_Violation_Codes.csv",
                                    encoding="mac_roman",
                                    error_bad_lines=False)
parking_violations_df.head()
parking_violations_df['CODE'].describe()
other_df['Violation Code'].describe()

def convert_to_str(n):
    return str(n)

parking_violations_df['Code'] = parking_violations_...
8. How much money did NYC make off of parking violations?
diff_violations_df['Manhattan 96th & below'].describe()
diff_violations_df['All other areas'].describe()
diff_violations_df['Manhattan 96th & below'].apply(convert_to_str).head()
diff_violations_df['All other areas'].apply(convert_to_str).head()
diff_violations_df = new_violations_df[new_violations_df['Manhattan 96...
9. What's the most lucrative kind of parking violation? The most frequent?
manhattan_violations.sort_values(ascending=False)
violations_not_man.sort_values(ascending=False)
new_violations_df['Violation code'].value_counts()
10. New Jersey has bad drivers, but does it have bad parkers, too? How much money does NYC make off of all non-New York vehicles?
out_of_staters_df = diff_violations_df[diff_violations_df['Registration State'] != 'NY']
out_of_staters_df.head()
out_of_staters_other = out_of_staters_df.groupby('Violation code')['All Other Areas'].sum()
out_of_staters_other.sum()
out_of_staters_manhattan = out_of_staters_df.groupby('Violation code')['Manhattan 96th...
11. Make a chart of the top few.
%matplotlib inline
out_of_staters_other.sort_values(ascending=False).plot(kind='bar', x='Violation code')
out_of_staters_manhattan.sort_values(ascending=False).plot(kind='bar', x='Violation code')
12. What time of day do people usually get their tickets? You can break the day up into several blocks - for example 12am-6am, 6am-12pm, 12pm-6pm, 6pm-12am.

13. What's the average ticket cost in NYC?
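No code cell for question 12 survives here; a minimal sketch of one way to bucket a parsed hour into the four blocks (the helper name is hypothetical):

```python
def time_block(hour):
    """Map an hour in 0-23 to one of four six-hour blocks."""
    if hour < 6:
        return '12am-6am'
    elif hour < 12:
        return '6am-12pm'
    elif hour < 18:
        return '12pm-6pm'
    return '6pm-12am'

time_block(14)  # '12pm-6pm'
```

Applied with .apply to an hour column, value_counts() on the result gives the ticket count per block.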
average_tix_price = total_out_of_staters_violations / diff_violations_df['Violation code'].value_counts().sum()
average_tix_price
14. Make a graph of the number of tickets per day.
diff_violations_df['Issue Date'].value_counts().head(10).plot(kind='barh')
15. Make a graph of the amount of revenue collected per day.
daily_revenue = total_out_of_staters_violations / new_violations_df['New Issue Date'].value_counts()
daily_revenue.sort_values(ascending=False).head(20).plot(kind='bar')
16. Manually construct a dataframe out of https://dmv.ny.gov/statistic/2015licinforce-web.pdf (only NYC boroughs - Bronx, Queens, Manhattan, Staten Island, Brooklyn), having columns for borough name, abbreviation, and number of licensed drivers.
nyc_licenses = pd.read_excel("NYC.xlsx")
nyc_licenses
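The prompt asks for a manually constructed dataframe; a sketch of that structure, with placeholder driver counts (illustrative only, not the real 2015 DMV numbers):

```python
import pandas as pd

# Placeholder counts -- the real values come from the DMV PDF
# linked in the prompt. Only the shape of the frame matters here.
nyc_licenses_manual = pd.DataFrame({
    'Borough': ['Bronx', 'Brooklyn', 'Manhattan', 'Queens', 'Staten Island'],
    'Abbreviation': ['BX', 'K', 'NY', 'Q', 'R'],
    'Total': [500000, 1200000, 900000, 1100000, 300000],
})
nyc_licenses_manual
```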
17. What's the parking-ticket-$-per-licensed-driver in each borough of NYC? Do this with pandas and the dataframe you just made, not with your head!
diff_violations_df.columns
diff_violations_df['Violation County'].value_counts()
bronx_violations = diff_violations_df[diff_violations_df['Violation County'] == 'BX']
bronx_licenses = nyc_licenses['Total'][nyc_licenses['Abbreviation'] == 'BX']
bronx_tix = bronx_violations.groupby('Violation code')['All Other Areas']....
Data Visualization
pylab.plot(x, y, '*')
pylab.show()
exams/w261mt/Midterm MRjob code.ipynb
JasonSanchez/w261
mit
MrJob class code The solution of linear model $$ \textbf{Y} = \textbf{X}\theta $$ is: $$ \hat{\theta} = (\textbf{X}^T\textbf{X})^{-1}\textbf{X}^T\textbf{y} $$ If $\textbf{X}^T\textbf{X}$ is denoted by $A$, and $\textbf{X}^T\textbf{y}$ is denoted by $b$, then $$ \hat{\theta} = A^{-1}b $$ There are two MrJob classes to c...
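Before the MapReduce version, the normal-equation formula can be sanity-checked in plain NumPy on a tiny synthetic dataset:

```python
import numpy as np

# The points (0, 1), (1, 3), (2, 5) lie exactly on y = 1 + 2x,
# so the normal equations recover theta = [1, 2].
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])  # first column of 1s models the intercept
y = np.array([1.0, 3.0, 5.0])
A = X.T @ X
b = X.T @ y
theta = np.linalg.solve(A, b)  # [1.0, 2.0]
```

The MrJob classes below compute the same A and b, just distributed over mappers and reducers.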
%%writefile linearRegressionXSquare.py
# Version 1: One MapReduce Stage (join data at the first reducer)
from mrjob.job import MRJob

class MRMatrixX2(MRJob):
    # Emit all the data needed to calculate cell i,j in the result matrix
    def mapper(self, _, line):
        v = line.split(',')
        # add 1s to calculate interce...
Driver: the driver runs the two MrJob classes to get $\textbf{X}^T\textbf{X}$ and $\textbf{X}^T\textbf{y}$, then solves $\hat{\theta} = A^{-1}b$ with numpy.linalg.solve.
from numpy import linalg, array, empty
from linearRegressionXSquare import MRMatrixX2
from linearRegressionXy import MRMatrixXY

mr_job1 = MRMatrixX2(args=['LinearRegression.csv'])
mr_job2 = MRMatrixXY(args=['LinearRegression.csv'])
X_Square = []
X_Y = []
# Calculate XT*X Covariance Matrix
print "Matrix XT*X:"
with mr_jo...
Gradient descent - doesn't work
%%writefile MrJobBatchGDUpdate_LinearRegression.py
from mrjob.job import MRJob

# This MrJob calculates the gradient of the entire training set
# Mapper: calculate partial gradient for each example
class MrJobBatchGDUpdate_LinearRegression(MRJob):
    # run before the mapper processes any input
    def re...
Commit and Deploy New Tensorflow AI Model

Commit Model to Github
!ls -l /root/pipeline/prediction.ml/tensorflow/models/tensorflow_minimal/export
!ls -l /root/pipeline/prediction.ml/tensorflow/models/tensorflow_minimal/export/00000027
!git status
!git add --all /root/pipeline/prediction.ml/tensorflow/models/tensorflow_minimal/export/00000027/
!git status
!git commit -m "updated ...
jupyterhub.ml/notebooks/train_deploy/zz_under_construction/zz_old/talks/StartupML/Jan-20-2017/SparkMLTensorflowAI-HybridCloud-ContinuousDeployment.ipynb
fluxcapacitor/source.ml
apache-2.0
Airflow Workflow Deploys New Model, Triggered through Github Post-Commit Webhook
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
from IPython.display import clear_output, Image, display, HTML

html = '<iframe width=100% height=500px src="http://demo.pipeline.io:8080/admin">'
display(HTML(html))
Train and Deploy Spark ML Model (Airbnb Model, Mutable Deploy)

Scale Out Spark Training Cluster

Kubernetes CLI
!kubectl scale --context=awsdemo --replicas=2 rc spark-worker-2-0-1
!kubectl get pod --context=awsdemo
Weavescope Kubernetes AWS Cluster Visualization
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
from IPython.display import clear_output, Image, display, HTML

html = '<iframe width=100% height=500px src="http://kubernetes-aws.demo.pipeline.io">'
display(HTML(html))
Generate PMML from Spark ML Model
from pyspark.ml.linalg import Vectors
from pyspark.ml.feature import VectorAssembler, StandardScaler
from pyspark.ml.feature import OneHotEncoder, StringIndexer
from pyspark.ml import Pipeline, PipelineModel
from pyspark.ml.regression import LinearRegression

# You may need to Reconnect (more than Restart) the Kernel t...
Step 0: Load Libraries and Data
df = spark.read.format("csv") \
    .option("inferSchema", "true").option("header", "true") \
    .load("s3a://datapalooza/airbnb/airbnb.csv.bz2")
df.registerTempTable("df")
print(df.head())
print(df.count())
Step 1: Clean, Filter, and Summarize the Data
df_filtered = df.filter("price >= 50 AND price <= 750 AND bathrooms > 0.0 AND bedrooms is not null")
df_filtered.registerTempTable("df_filtered")
df_final = spark.sql("""
    select id, city,
           case when state in('NY', 'CA', 'London', 'Berlin', 'TX', 'IL', 'OR', 'DC', 'WA')
                then stat...
Step 2: Define Continuous and Categorical Features
continuous_features = ["bathrooms",
                       "bedrooms",
                       "security_deposit",
                       "cleaning_fee",
                       "extra_people",
                       "number_of_reviews",
                       "square_feet",
                       "review_s...
Step 3: Split Data into Training and Validation
[training_dataset, validation_dataset] = df_final.randomSplit([0.8, 0.2])
Step 4: Continuous Feature Pipeline
continuous_feature_assembler = VectorAssembler(inputCols=continuous_features,
                                               outputCol="unscaled_continuous_features")
continuous_feature_scaler = StandardScaler(inputCol="unscaled_continuous_features",
                                           outputCol="scaled_continuous_features",
                                           withStd=True, withMean=False)
Step 5: Categorical Feature Pipeline
categorical_feature_indexers = [StringIndexer(inputCol=x,
                                              outputCol="{}_index".format(x))
                                for x in categorical_features]
categorical_feature_one_hot_encoders = [OneHotEncoder(inputCol=x.getOutputCol(),
                                                      ...
Step 6: Assemble our Features and Feature Pipeline
feature_cols_lr = [x.getOutputCol()
                   for x in categorical_feature_one_hot_encoders]
feature_cols_lr.append("scaled_continuous_features")
feature_assembler_lr = VectorAssembler(inputCols=feature_cols_lr,
                                        outputCol="features_lr")
Step 7: Train a Linear Regression Model
linear_regression = LinearRegression(featuresCol="features_lr",
                                     labelCol="price",
                                     predictionCol="price_prediction",
                                     maxIter=10,
                                     regParam=0.3,
                                     ...
Step 8: Convert PipelineModel to PMML
from jpmml import toPMMLBytes

model_bytes = toPMMLBytes(spark, training_dataset, pipeline_model)
print(model_bytes.decode("utf-8"))
Push PMML to Live, Running Spark ML Model Server (Mutable)
import urllib.request

namespace = 'default'
model_name = 'airbnb'
version = '1'
update_url = 'http://prediction-pmml-aws.demo.pipeline.io/update-pmml-model/%s/%s/%s' % (namespace, model_name, version)
update_headers = {}
update_headers['Content-type'] = 'application/xml'
req = urllib.request.Request(update_url, \
    ...
Deploy Java-based Model (Simple Model, Mutable Deploy)
from urllib import request

sourceBytes = ' \n\
    private String str; \n\
    \n\
    public void initialize(Map<String, Object> args) { \n\
    ...
Deploy Java Model (HttpClient Model, Mutable Deploy)
from urllib import request

sourceBytes = ' \n\
    public Map<String, Object> data = new HashMap<String, Object>(); \n\
    \n\
    public void initialize(Map<String, Object> args) { ...
Load Test and Compare Cloud Providers (AWS and Google)

Monitor Performance Across Cloud Providers

NetflixOSS Services Dashboard (Hystrix)
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
from IPython.display import clear_output, Image, display, HTML

html = '<iframe width=100% height=500px src="http://hystrix.demo.pipeline.io/hystrix-dashboard/monitor/monitor.html?streams=%5B%7B%22name%...
jupyterhub.ml/notebooks/train_deploy/zz_under_construction/zz_old/talks/StartupML/Jan-20-2017/SparkMLTensorflowAI-HybridCloud-ContinuousDeployment.ipynb
fluxcapacitor/source.ml
apache-2.0
Start Load Tests Run JMeter Tests from Local Laptop (Limited by Laptop) Run Headless JMeter Tests from Training Clusters in Cloud
# Spark ML - PMML - Airbnb
!kubectl create --context=awsdemo -f /root/pipeline/loadtest.ml/loadtest-aws-airbnb-rc.yaml
!kubectl create --context=gcpdemo -f /root/pipeline/loadtest.ml/loadtest-aws-airbnb-rc.yaml

# Codegen - Java - Simple
!kubectl create --context=awsdemo -f /root/pipeline/loadtest.ml/loadtest-aws-equal...
jupyterhub.ml/notebooks/train_deploy/zz_under_construction/zz_old/talks/StartupML/Jan-20-2017/SparkMLTensorflowAI-HybridCloud-ContinuousDeployment.ipynb
fluxcapacitor/source.ml
apache-2.0
End Load Tests
!kubectl delete --context=awsdemo rc loadtest-aws-airbnb
!kubectl delete --context=gcpdemo rc loadtest-aws-airbnb

!kubectl delete --context=awsdemo rc loadtest-aws-equals
!kubectl delete --context=gcpdemo rc loadtest-aws-equals

!kubectl delete --context=awsdemo rc loadtest-aws-minimal
!kubectl delete --context=gcpdemo ...
jupyterhub.ml/notebooks/train_deploy/zz_under_construction/zz_old/talks/StartupML/Jan-20-2017/SparkMLTensorflowAI-HybridCloud-ContinuousDeployment.ipynb
fluxcapacitor/source.ml
apache-2.0
Rolling Deploy Tensorflow AI (Simple Model, Immutable Deploy) Kubernetes CLI
!kubectl rolling-update prediction-tensorflow --context=awsdemo --image-pull-policy=Always --image=fluxcapacitor/prediction-tensorflow
!kubectl get pod --context=awsdemo

!kubectl rolling-update prediction-tensorflow --context=gcpdemo --image-pull-policy=Always --image=fluxcapacitor/prediction-tensorflow
!kubectl ge...
jupyterhub.ml/notebooks/train_deploy/zz_under_construction/zz_old/talks/StartupML/Jan-20-2017/SparkMLTensorflowAI-HybridCloud-ContinuousDeployment.ipynb
fluxcapacitor/source.ml
apache-2.0
Tuples are immutable. Attempting to change the value at a position raises an error. Assigning a different tuple to the same variable produces a new object id.
x = (1, 2, 3)                 # a tuple (defined here so the snippet runs on its own)
id(x)                         # id of the original tuple object
x = (0, 'Cambio', (1, 2))     # rebinding x to a new tuple
id(x)                         # a different id: this is a new object
x
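To make the immutability concrete, here is a short self-contained sketch (variable names are illustrative) showing that item assignment raises TypeError while rebinding the name is always allowed:

```python
# Item assignment on a tuple fails; rebinding the name does not.
t = (1, 2, 3)
error_seen = False
try:
    t[0] = 99            # tuples do not support item assignment
except TypeError:
    error_seen = True
t = (99, 2, 3)           # rebinding: a brand-new tuple object
print(error_seen, t)     # True (99, 2, 3)
```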
Notas/Notas-Python/01_Estructuras.ipynb
avallarino-ar/MCDatos
mit
Lists Mutable sequences of elements.
x = [1, 2, 3]                    # declare a list
x.append('Nuevo valor')          # append new content
x                                # print the whole list
x.insert(2, 'Valor Intermedio')  # insert another value
x
Notas/Notas-Python/01_Estructuras.ipynb
avallarino-ar/MCDatos
mit
Which is faster: tuples or lists?
import timeit

timeit.timeit('x = (1,2,3,4,5,6)')   # measure the execution time for a tuple
timeit.timeit('x = [1,2,3,4,5,6]')   # measure the execution time for a list
Notas/Notas-Python/01_Estructuras.ipynb
avallarino-ar/MCDatos
mit
Reference / assignment:
x = [1, 2, 3]    # assignment
y = [0, x]       # y holds a reference to x
y
x[0] = -1        # mutate an element of x
y                # changing the value in x also changed y, because y points to x
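If the aliasing above is not wanted, storing a copy of the list decouples the two names; a minimal sketch:

```python
# Store a copy of x (here via list(x)) instead of a reference.
x = [1, 2, 3]
y = [0, list(x)]     # y holds its own copy of x's contents
x[0] = -1            # mutating x no longer affects y
print(y)             # [0, [1, 2, 3]]
```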
Notas/Notas-Python/01_Estructuras.ipynb
avallarino-ar/MCDatos
mit
Dictionaries A great many problems require storing keys and assigning a value to each key. One example of a dict could be a "phone directory": (name : phone_number)
dir_tel = {'juan':5512345, 'pedro':5554321, 'itam':'is fun'}   # define a dictionary
dir_tel['juan']       # get the value for the key 'juan'
dir_tel.keys()        # get the dict's keys
dir_tel.values()      # get the dict's values
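Two further dict operations that come up constantly are lookups with a default and iterating key/value pairs together; a small sketch reusing illustrative data:

```python
dir_tel = {'juan': 5512345, 'pedro': 5554321}
print(dir_tel.get('ana', 'not found'))   # .get avoids a KeyError for a missing key
pairs = sorted(dir_tel.items())          # iterate key/value pairs together
print(pairs)                             # [('juan', 5512345), ('pedro', 5554321)]
```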
Notas/Notas-Python/01_Estructuras.ipynb
avallarino-ar/MCDatos
mit
Sets Mathematical sets
A = set([1,2,3])   # define two sets
B = set([2,3,4])
A | B              # union
A & B              # intersection
A - B              # set difference
A ^ B              # symmetric difference
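Besides the binary operators, sets support membership and subset tests; a quick sketch with the same sets:

```python
A = set([1, 2, 3])
B = set([2, 3, 4])
print(2 in A)                  # membership test: True
print({2, 3} <= A)             # subset test: True
print(A | B == {1, 2, 3, 4})   # the union contains every element of both: True
```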
Notas/Notas-Python/01_Estructuras.ipynb
avallarino-ar/MCDatos
mit
Conditionals and loops: for, while, if, elif One option for writing loops in Python is the range function
range(1000)

for i in range(5):
    print(i)

for i in range(10):
    if i % 2 == 0:
        print(str(i) + ' Par')
    else:
        print(str(i) + ' Impar')

i = 0
while i < 10:
    print(i)
    i = i + 1
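range also accepts start, stop and step arguments, which often replaces a manual while counter; a minimal sketch:

```python
evens = list(range(0, 10, 2))      # start 0, stop before 10, step 2
print(evens)                       # [0, 2, 4, 6, 8]
countdown = list(range(5, 0, -1))  # a negative step counts down
print(countdown)                   # [5, 4, 3, 2, 1]
```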
Notas/Notas-Python/01_Estructuras.ipynb
avallarino-ar/MCDatos
mit
Classes
# Class definition:
class Person:
    def __init__(self, first, last):   # constructor
        self.first = first
        self.last = last

    def greet(self, add_msg = ''):     # method
        print('Hello ' + self.first + ' ' + add_msg)

juan = Person('juan', 'dominguez')     #...
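A natural next step is inheritance; the Student subclass below is a hypothetical example (greet returns the string here rather than printing it, so the result is easy to check):

```python
class Person:
    def __init__(self, first, last):
        self.first = first
        self.last = last

    def greet(self, add_msg=''):
        return 'Hello ' + self.first + ' ' + add_msg

class Student(Person):              # hypothetical subclass for illustration
    def greet(self, add_msg=''):    # overrides the inherited method
        return 'Hi ' + self.first + '! ' + add_msg

s = Student('ana', 'lopez')         # __init__ is reused from Person
print(s.greet('welcome'))           # Hi ana! welcome
```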
Notas/Notas-Python/01_Estructuras.ipynb
avallarino-ar/MCDatos
mit
Simulator needs a Model and World at the instantiation.
m = NetworkModel()
m.add_species_attribute(Species("A", "0.0025", "1"))
m.add_reaction_rule(create_degradation_reaction_rule(Species("A"), 0.693 / 1))

w = world_type(Real3(1, 1, 1))
w.bind_to(m)
w.add_molecules(Species("A"), 60)

sim = simulator_type(m, w)
sim.set_dt(0.01)  #XXX: Optional
ipynb/Tutorials/Simulator.ipynb
navoj/ecell4
gpl-2.0
A Simulator has getters for the simulation time, the step interval, and the next-event time. In principle, a Simulator returns the World's time as its simulation time, and the sum of the current time and the step interval as the next-event time.
print(sim.num_steps())
print(sim.t(), w.t())
print(sim.next_time(), sim.t() + sim.dt())
ipynb/Tutorials/Simulator.ipynb
navoj/ecell4
gpl-2.0
A Simulator can return the connected Model and World. They are not copies but the shared objects themselves.
print(sim.model(), sim.world())
ipynb/Tutorials/Simulator.ipynb
navoj/ecell4
gpl-2.0
If you change a World after connecting it to a Simulator, you have to call initialize() manually before step(). The call will update the internal state of the Simulator.
sim.world().add_molecules(Species("A"), 60)  # w.add_molecules(Species("A"), 60)

sim.initialize()

# w.save('test.h5')
ipynb/Tutorials/Simulator.ipynb
navoj/ecell4
gpl-2.0
Simulator has two types of step functions. First, with no argument, step() advances the time to next_time().
print("%.3e %.3e" % (sim.t(), sim.next_time()))
sim.step()
print("%.3e %.3e" % (sim.t(), sim.next_time()))
ipynb/Tutorials/Simulator.ipynb
navoj/ecell4
gpl-2.0
Second, with an argument upto: if upto is later than next_time(), step(upto) advances the time to next_time() and returns True. Otherwise, it advances the time to upto and returns False. (If upto is not later than the current time t(), it does nothing and returns False.)
print("%.3e %.3e" % (sim.t(), sim.next_time()))
print(sim.step(0.1))
print("%.3e %.3e" % (sim.t(), sim.next_time()))
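The return-value contract of step(upto) can be illustrated without ecell4; MockSimulator below is a hypothetical plain-Python stand-in, not the real Simulator class:

```python
class MockSimulator:
    """Hypothetical stand-in that mimics the step(upto) contract described above."""
    def __init__(self, t=0.0, dt=0.01):
        self._t, self._dt = t, dt

    def t(self):
        return self._t

    def next_time(self):
        return self._t + self._dt

    def step(self, upto):
        if upto <= self._t:
            return False             # upto is not later than t(): nothing to do
        if upto > self.next_time():
            self._t = self.next_time()
            return True              # stopped at the next-event time
        self._t = upto
        return False                 # reached upto before the next event

mock_sim = MockSimulator()
print(mock_sim.step(0.1), mock_sim.t())    # True: stopped at next_time()
print(mock_sim.step(0.005), mock_sim.t())  # False: upto already passed, no change
```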
ipynb/Tutorials/Simulator.ipynb
navoj/ecell4
gpl-2.0
For a discrete-step simulation, the main loop should be written like:
# w.load('test.h5')
sim.initialize()

next_time, dt = 0.0, 1e-2
for _ in range(5):
    while sim.step(next_time):
        pass
    next_time += dt
    print("%.3e %.3e %d %g" % (sim.t(), sim.dt(), sim.num_steps(), w.num_molecules(Species("A"))))
ipynb/Tutorials/Simulator.ipynb
navoj/ecell4
gpl-2.0
Reading epochs from a raw FIF file This script shows how to read the epochs from a raw file given a list of events. For illustration, we compute the evoked responses for both MEG and EEG data by averaging all the epochs.
# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#          Matti Hamalainen <msh@nmr.mgh.harvard.edu>
#
# License: BSD (3-clause)

import mne
from mne import io
from mne.datasets import sample

print(__doc__)

data_path = sample.data_path()
0.16/_downloads/plot_read_epochs.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause