<p style="text-align: right; direction: rtl; float: right; clear: both;">
In this example we used the fact that a parameter defined with two asterisks is necessarily a dictionary.<br>
We went over all of its keys and values using the <code>items</code> method, and printed the recipe, ingredient by ingredient.
</p>
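<p style="text-align: right; direction: rtl; float: right; clear: both;">
For reference, here is a minimal sketch of what such a function might look like (the ingredient names and weights are illustrative assumptions, not the original recipe):
</p>
|
def print_sushi_recipe(**ingredients):
    # 'ingredients' gathers all keyword arguments into a dictionary,
    # so we can iterate over it with items().
    for ingredient, grams in ingredients.items():
        print(f"{ingredient}: {grams} grams")


print_sushi_recipe(rice=300, nori=20, salmon=150, cucumber=50)
|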
<div class="align-center" style="display: flex; text-align: right; direction: rtl; clear: both;">
<div style="display: flex; width: 10%; float: right; clear: both;">
<img src="images/exercise.svg" style="height: 50px !important;" alt="Exercise">
</div>
<div style="width: 70%">
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Make the function <code>print_sushi_recipe</code> print the ingredients in order of their weight, from lowest to highest.
</p>
</div>
</div>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
A parameter defined with two asterisks must always appear at the end of the parameter list.
</p>
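<p style="text-align: right; direction: rtl; float: right; clear: both;">
A small illustrative sketch (the function name and parameters here are made up): the first header below is valid, while moving the double-starred parameter anywhere but last raises a <code>SyntaxError</code>.
</p>
|
# Valid: the double-starred parameter comes last.
def place_order(table, *dishes, urgent=False, **notes):
    print(table, dishes, urgent, notes)


# Invalid: a parameter after **notes is a SyntaxError.
# def place_order(table, **notes, urgent=False):
#     ...
|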
<span style="text-align: right; direction: rtl; float: right; clear: both;">Intermediate exercise: my_format</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Write a function named <code>my_format</code> that receives a string and an unlimited number of named parameters.<br>
The function will replace every occurrence of <code>{key}</code> in the string, if <code>key</code> was passed as a parameter to the function.<br>
The value that <code>{key}</code> is replaced with is the value passed for <code>key</code> when the arguments were handed to the function.<br>
The function must not use the string method <code>format</code> or any functions we have not learned so far.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Here are some example calls to the function and their return values:
</p>
<ul style="text-align: right; direction: rtl; float: right; clear: both;">
<li>
The call <code dir="ltr">my_format("I'm Mr. {name}, look at me!", name="Meeseeks")</code><br>
will return <samp dir="ltr">"I'm Mr. Meeseeks, look at me!"</samp>
</li>
<li>
The call <code dir="ltr">my_format("{a} {b} {c} {c}", a="wubba", b="lubba", c="dub")</code><br>
will return <samp dir="ltr">"wubba lubba dub dub"</samp>
</li>
<li>
The call <code dir="ltr">my_format("The universe is basically an animal", animal="Chicken")</code><br>
will return <samp dir="ltr">"The universe is basically an animal"</samp>
</li>
<li>
The call <code dir="ltr">my_format("The universe is basically an animal")</code><br>
will return <samp dir="ltr">"The universe is basically an animal"</samp>
</li>
</ul>
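<p style="text-align: right; direction: rtl; float: right; clear: both;">
One possible solution sketch, assuming the string method <code>replace</code> is allowed (this is not the only correct answer, and it is not the official solution):
</p>
|
def my_format(template, **replacements):
    # For each named argument, replace every occurrence of "{key}"
    # in the template with the string form of the matching value.
    for key, value in replacements.items():
        template = template.replace("{" + key + "}", str(value))
    return template


print(my_format("I'm Mr. {name}, look at me!", name="Meeseeks"))
print(my_format("{a} {b} {c} {c}", a="wubba", b="lubba", c="dub"))
|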
<span style="text-align: right; direction: rtl; float: right; clear: both;">Law and Order</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
We can combine all of the techniques we have learned so far in a single function.<br>
For example, we will create a function that calculates the cost of making a cake.<br>
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
The function will receive:
</p>
<ul style="text-align: right; direction: rtl; float: right; clear: both;">
<li>The list of ingredients available at the supermarket and their prices.</li>
<li>The list of ingredients needed to make a cake (assume that each ingredient name is a single word).</li>
<li>Whether the customer is entitled to a discount.</li>
<li>The discount rate, in percent. By default, if the customer is entitled to a discount, its rate is 10%.</li>
</ul>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
To keep the exercise simple, we will ignore ingredient quantities for now :)
</p>
|
def calculate_cake_price(apply_discount, *ingredients, discount_rate=10, **prices):
    if not apply_discount:
        discount_rate = 0
    final_price = 0
    for ingredient in ingredients:
        final_price = final_price + prices.get(ingredient)
    final_price = final_price - (final_price * discount_rate / 100)
    return final_price


calculate_cake_price(True, 'chocolate', 'cream', chocolate=30, cream=20, water=5)
|
week05/2_Functions_Part_2.ipynb
|
PythonFreeCourse/Notebooks
|
mit
|
<div class="align-center" style="display: flex; text-align: right; direction: rtl;">
<div style="display: flex; width: 10%; float: right; ">
<img src="images/warning.png" style="height: 50px !important;" alt="Warning!">
</div>
<div style="width: 90%">
<p style="text-align: right; direction: rtl;">
The function was written to demonstrate the technique, and it looks quite bad.<br>
Notice how hard it is to tell which argument belongs to which parameter in the call to the function.<br>
Use judgement before reaching for these techniques for accepting multiple parameters.
</p>
</div>
</div>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Note the order of the parameters in the function header:
</p>
<ul style="text-align: right; direction: rtl; float: right; clear: both;">
<li>The arguments whose position is fixed and whose identity we know in advance (<code>apply_discount</code>).</li>
<li>The arguments whose position is fixed and whose identity we do not know in advance (<code>ingredients</code>).</li>
<li>The arguments whose names are known and whose default value is set in the function header (<code>discount_rate</code>).</li>
<li>Additional values whose names are not known in advance (<code>prices</code>).</li>
</ul>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Try to think: why was this particular order chosen?
</p>
<div class="align-center" style="display: flex; text-align: right; direction: rtl; clear: both;">
<div style="display: flex; width: 10%; float: right; clear: both;">
<img src="images/exercise.svg" style="height: 50px !important;" alt="Exercise">
</div>
<div style="width: 70%">
<p style="text-align: right; direction: rtl; float: right; clear: both;">
How would you write exactly the same function without using the techniques we have learned?<br>
Using default values is allowed.
</p>
</div>
<div style="display: flex; width: 20%; border-right: 0.1rem solid #A5A5A5; padding: 1rem 2rem;">
<p style="text-align: center; direction: rtl; justify-content: center; align-items: center; clear: both;">
<strong>Important!</strong><br>
Solve this before you continue!
</p>
</div>
</div>
<span style="text-align: right; direction: rtl; float: right; clear: both;">Mutable default values</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
There is an edge case involving default values that makes Python behave a little strangely.<br>
It happens when the default value defined in the function header is mutable:
</p>
|
def append(item, l=[]):
    l.append(item)
    return l


print(append(4, [1, 2, 3]))
print(append('a'))
|
week05/2_Functions_Part_2.ipynb
|
PythonFreeCourse/Notebooks
|
mit
|
<p style="text-align: right; direction: rtl; float: right; clear: both;">
So far it looks as though the function behaves the way we would expect.<br>
The default value of the parameter <code>l</code> is an empty list, so the second call returns a list with a single item, <samp>['a']</samp>.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Let's call the function a few more times and discover something strange:
</p>
|
print(append('b'))
print(append('c'))
print(append('d'))
print(append(4, [1, 2, 3]))
print(append('e'))
|
week05/2_Functions_Part_2.ipynb
|
PythonFreeCourse/Notebooks
|
mit
|
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Strange and illogical! We expected to get the list <samp>['b']</samp>, then the list <samp>['c']</samp>, and so on.<br>
Instead, a new item joins the list each time. Why?
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
The reason is that Python reads the function header only once – when the function is defined.<br>
At that moment, when Python reads the function header, the default value of <code>l</code> points to an empty list.<br>
From then on, whenever we do not pass a value for <code>l</code>, <code>l</code> will be that same default list we created at the start.<br>
We can demonstrate this by printing the <code>id</code> of the list:
</p>
|
def view_memory_of_l(item, l=[]):
    l.append(item)
    print(f"{l} --> {id(l)}")
    return l


same_list1 = view_memory_of_l('a')
same_list2 = view_memory_of_l('b')
same_list3 = view_memory_of_l('c')
new_list1 = view_memory_of_l(4, [1, 2, 3])
new_list2 = view_memory_of_l(5, [1, 2, 3])
new_list3 = view_memory_of_l(6, [1, 2, 3])
|
week05/2_Functions_Part_2.ipynb
|
PythonFreeCourse/Notebooks
|
mit
|
<p style="text-align: right; direction: rtl; float: right; clear: both;">
How do we solve the problem?<br>
First of all – we should avoid defining mutable default values in the function header.<br>
If we still want the parameter to default to a list, we do it like this:
</p>
|
def append(item, l=None):
    if l is None:
        l = []
    l.append(item)
    return l


print(append(4, [1, 2, 3]))
print(append('a'))
|
week05/2_Functions_Part_2.ipynb
|
PythonFreeCourse/Notebooks
|
mit
|
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Note that this behaviour does not occur with immutable types, since – as their name implies – they cannot be changed:
</p>
|
def increment(i=0):
    i = i + 1
    return i


print(increment(100))
print(increment())
print(increment())
print(increment())
print(increment(100))
|
week05/2_Functions_Part_2.ipynb
|
PythonFreeCourse/Notebooks
|
mit
|
<span style="text-align: right; direction: rtl; float: right; clear: both;">More examples</span>
<span style="text-align: right; direction: rtl; float: right; clear: both;">An exact imitation of the dictionary <code>get</code> method:</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Let's refresh our memory about unpacking:
</p>
|
range_arguments = [1, 10, 3]
range_result = range(*range_arguments)
print(list(range_result))
|
week05/2_Functions_Part_2.ipynb
|
PythonFreeCourse/Notebooks
|
mit
|
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Or:
</p>
|
preformatted_message = "My name is {me}, and my sister is {sister}"
parameters = {'me': 'Mei', 'sister': 'Satsuki'}
message = preformatted_message.format(**parameters)
print(message)
|
week05/2_Functions_Part_2.ipynb
|
PythonFreeCourse/Notebooks
|
mit
|
<p style="text-align: right; direction: rtl; float: right; clear: both;">
If so, we can write:
</p>
|
def get(dictionary, *args, **kwargs):
    return dictionary.get(*args, **kwargs)
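

# A couple of hypothetical usage examples (the dictionary below is made up for illustration):
inventory = {'chocolate': 30, 'cream': 20}
print(get(inventory, 'chocolate'))   # 30 -- same as inventory.get('chocolate')
print(get(inventory, 'water', 0))    # 0  -- the default value is passed through *args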
|
week05/2_Functions_Part_2.ipynb
|
PythonFreeCourse/Notebooks
|
mit
|
Required Packages
If you installed Python through the Anaconda platform, then the packages below should already be installed on your hard drive and Python should be able to find them.
|
import os
import string
import numpy
import matplotlib
import pandas
import sklearn
import scipy
import nltk
print("Success!")
|
00-Introduction/Installation_Check.ipynb
|
lknelson/text-analysis-2017
|
bsd-3-clause
|
Visualization
The code in the cell below is not Python but a direct instruction to the Jupyter Notebook. Any visualizations that we produce will appear within the notebook itself (as opposed to in an external image viewer).
|
%pylab inline
|
00-Introduction/Installation_Check.ipynb
|
lknelson/text-analysis-2017
|
bsd-3-clause
|
As a quick, opening toy example to see Python in action, let's find all the present participles used in Jane Austen's Pride and Prejudice. There is a plain text file containing this book in this folder.
Part of the reason why people use Python to do work on human-language texts (natural language processing) is because it makes tasks like this relatively simple.
|
# every line that starts with a hash is a comment
# the computer ignores these lines; they are meant for a human reader
# here is some starter code to make sure everything is set up (don't worry about understanding everything here)
for line in open('../Data/Austen_PrideAndPrejudice.txt', encoding='utf-8'):
    for word in line.split():
        if word.endswith('ing'):
            print(word)
|
00-Introduction/Installation_Check.ipynb
|
lknelson/text-analysis-2017
|
bsd-3-clause
|
Tutorial 4 - Current induced domain wall motion
In this tutorial we show how spin transfer torque (STT) can be included in micromagnetic simulations. To illustrate that, we will try to move a domain wall pair using spin-polarised current.
Let us simulate a two-dimensional sample with length $L = 500 \,\text{nm}$, width $w = 20 \,\text{nm}$ and discretisation cell $(2.5 \,\text{nm}, 2.5 \,\text{nm}, 2.5 \,\text{nm})$. The material parameters are:
exchange energy constant $A = 15 \,\text{pJ}\,\text{m}^{-1}$,
Dzyaloshinskii-Moriya energy constant $D = 3 \,\text{mJ}\,\text{m}^{-2}$,
uniaxial anisotropy constant $K = 0.5 \,\text{MJ}\,\text{m}^{-3}$ with easy axis $\mathbf{u}$ in the out-of-plane direction $(0, 0, 1)$,
gyromagnetic ratio $\gamma = 2.211 \times 10^{5} \,\text{m}\,\text{A}^{-1}\,\text{s}^{-1}$, and
Gilbert damping $\alpha=0.3$.
|
# Definition of parameters
L = 500e-9 # sample length (m)
w = 20e-9 # sample width (m)
d = 2.5e-9 # discretisation cell size (m)
Ms = 5.8e5 # saturation magnetisation (A/m)
A = 15e-12 # exchange energy constant (J/m)
D = 3e-3 # Dzyaloshinskii-Moriya energy constant (J/m**2)
K = 0.5e6 # uniaxial anisotropy constant (J/m**3)
u = (0, 0, 1) # easy axis
gamma = 2.211e5 # gyromagnetic ratio (m/As)
alpha = 0.3 # Gilbert damping
# Mesh definition
p1 = (0, 0, 0)
p2 = (L, w, d)
cell = (d, d, d)
mesh = oc.Mesh(p1=p1, p2=p2, cell=cell)
# Micromagnetic system definition
system = oc.System(name="domain_wall_pair")
system.hamiltonian = oc.Exchange(A=A) + \
oc.DMI(D=D, kind="interfacial") + \
oc.UniaxialAnisotropy(K1=K, u=u)
system.dynamics = oc.Precession(gamma=gamma) + oc.Damping(alpha=alpha)
|
workshops/Durham/tutorial4_current_induced_dw_motion.ipynb
|
joommf/tutorial
|
bsd-3-clause
|
Because we want to move a DW pair, we need to initialise the magnetisation in an appropriate way before we relax the system.
|
def m_value(pos):
x, y, z = pos
if 20e-9 < x < 40e-9:
return (0, 0, -1)
else:
return (0, 0, 1)
system.m = df.Field(mesh, value=m_value, norm=Ms)
fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(111)
system.m.z.plot_plane("z", ax=ax);
|
workshops/Durham/tutorial4_current_induced_dw_motion.ipynb
|
joommf/tutorial
|
bsd-3-clause
|
Now, we can relax the magnetisation.
|
md = oc.MinDriver()
md.drive(system)
fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(111)
system.m.z.plot_plane("z", ax=ax)
|
workshops/Durham/tutorial4_current_induced_dw_motion.ipynb
|
joommf/tutorial
|
bsd-3-clause
|
And drive the system for half a nanosecond:
|
td = oc.TimeDriver()
td.drive(system, t=0.5e-9, n=100)
fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(111)
system.m.z.plot_plane("z", ax=ax);
|
workshops/Durham/tutorial4_current_induced_dw_motion.ipynb
|
joommf/tutorial
|
bsd-3-clause
|
We see that the DW pair has moved in the positive $x$ direction.
Exercise
Modify the code below (which is a copy of the example from above) to obtain one domain wall instead of a domain wall pair and move it using the same current.
|
# Definition of parameters
L = 500e-9 # sample length (m)
w = 20e-9 # sample width (m)
d = 2.5e-9 # discretisation cell size (m)
Ms = 5.8e5 # saturation magnetisation (A/m)
A = 15e-12 # exchange energy constant (J/m)
D = 3e-3 # Dzyaloshinskii-Moriya energy constant (J/m**2)
K = 0.5e6 # uniaxial anisotropy constant (J/m**3)
u = (0, 0, 1) # easy axis
gamma = 2.211e5 # gyromagnetic ratio (m/As)
alpha = 0.3 # Gilbert damping
# Mesh definition
p1 = (0, 0, 0)
p2 = (L, w, d)
cell = (d, d, d)
mesh = oc.Mesh(p1=p1, p2=p2, cell=cell)
# Micromagnetic system definition
system = oc.System(name="domain_wall")
system.hamiltonian = oc.Exchange(A=A) + \
oc.DMI(D=D, kind="interfacial") + \
oc.UniaxialAnisotropy(K1=K, u=u)
system.dynamics = oc.Precession(gamma=gamma) + oc.Damping(alpha=alpha)
def m_value(pos):
x, y, z = pos
if 20e-9 < x < 40e-9:
return (0, 0, -1)
else:
return (0, 0, 1)
# We have added the y-component of 1e-8 to the magnetisation to be able to
# plot the vector field. This will not be necessary in the long run.
system.m = df.Field(mesh, value=m_value, norm=Ms)
fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(111)
system.m.z.plot_plane("z", ax=ax);
md = oc.MinDriver()
md.drive(system)
system.m.z.plot_plane("z");
ux = 400 # velocity in x direction (m/s)
beta = 0.5 # non-adiabatic STT parameter
system.dynamics += oc.STT(u=(ux, 0, 0), beta=beta)
td = oc.TimeDriver()
td.drive(system, t=0.5e-9, n=100)
system.m.z.plot_plane("z")
|
workshops/Durham/tutorial4_current_induced_dw_motion.ipynb
|
joommf/tutorial
|
bsd-3-clause
|
For Leoni
|
fig = plt.figure(figsize=(8, 8))
ax = fig.add_subplot(111)
system.m.z.imshow("z", ax=ax, aspect=5)
system.m.quiver("z", ax=ax, n=(30, 5), scale=1e7)
ax.set_title("Plot slice")
ax.set_xlabel("x label")
ax.set_ylabel("y label")
|
workshops/Durham/tutorial4_current_induced_dw_motion.ipynb
|
joommf/tutorial
|
bsd-3-clause
|
The construction if __name__ == '__main__': is the standard way in Python of marking code that should be executed when the file is used as a script. It is best practice to use the above-mentioned template.
GRASS GIS parser
Every GRASS GIS module must use the GRASS parser mechanism. This very advanced parser helps to check the user input, format the help text and optionally create a graphical user interface for the new module.
In Python, this means calling the parser() function from the grass.script package. This function parses the special comments written at the beginning of the Python file (below the 'shebang'), processes the parameters provided on the command line when the module is used, and makes these values available within the module. These special comments start with #% and can be referred to as interface definition comments.
Minimal template
The interface definition comment should contain at least the description of the module and two keywords, as shown below. Existing GRASS GIS Python scripts may help to understand the best practice. These values are defined in the module section, which contains the description and keyword keys.
|
%%file r.viewshed.points.py
#!/usr/bin/env python
#%module
#% description: Compute and analyze viewsheds
#% keyword: raster
#% keyword: viewshed
#%end
import grass.script as gscript
def main():
gscript.parser()
if __name__ == '__main__':
main()
|
GSOC/notebooks/Projects/GRASS/python-grass-addons/04_script_to_grass_module.ipynb
|
OSGeo-live/CesiumWidget
|
apache-2.0
|
Note that the GRASS GIS parser requires a newline character at the end of the file. In this case, we use two empty lines because IPython Notebook will remove one.
To run the script, we need to set its permissions to 'executable'. We do so using the chmod command.
|
!chmod u+x r.viewshed.points.py
|
GSOC/notebooks/Projects/GRASS/python-grass-addons/04_script_to_grass_module.ipynb
|
OSGeo-live/CesiumWidget
|
apache-2.0
|
Now we can execute the script as a GRASS GIS module with --help to get the command line help output.
|
!./r.viewshed.points.py --help
|
GSOC/notebooks/Projects/GRASS/python-grass-addons/04_script_to_grass_module.ipynb
|
OSGeo-live/CesiumWidget
|
apache-2.0
|
Adding parameters as standard options
Now we will add parameters to our module. Let's say we want to pass the name of an elevation raster map and of a vector points map. In the interface definition comment we therefore add an option section for each of these maps, using the respective standard (predefined) options. For the elevation raster map we use G_OPT_R_INPUT and for the points vector map G_OPT_V_INPUT. Both of these standard options are named input by default, so we have to give them unique names. Here we will use elevation for the raster map and points for the vector map. The option's name is defined using a key named key.
|
%%file r.viewshed.points.py
#!/usr/bin/env python
#%module
#% description: Compute and analyze viewsheds
#% keyword: raster
#% keyword: viewshed
#%end
#%option G_OPT_R_INPUT
#% key: elevation
#%end
#%option G_OPT_V_INPUT
#% key: points
#%end
import grass.script as gscript
def main():
options, flags = gscript.parser()
elevation = options['elevation']
points = options['points']
print(elevation, points)
if __name__ == '__main__':
main()
!./r.viewshed.points.py --help
!./r.viewshed.points.py elevation=elevation points=bridges
|
GSOC/notebooks/Projects/GRASS/python-grass-addons/04_script_to_grass_module.ipynb
|
OSGeo-live/CesiumWidget
|
apache-2.0
|
Adding parameters as custom options
When using standard options, all items are predefined by the parser, but we can modify e.g. key or other items to our needs. However, in other cases we have to define the option completely from scratch. In our example, we introduce a non-standard option named max_distance whose type is set to double. We additionally use description to document the meaning of this option. If we needed a longer description, we could use the label key for a short, concise description and the description key for a longer, more detailed one. In addition to the description we also add key_desc, which is a short hint about the proper syntax. Again, the existing GRASS GIS Python scripts may help to quickly understand the best practice.
Our max_distance option has the default value -1, which stands for infinity in the case of viewsheds. The user is not required to provide this option, so we set required to no. Also, only one value can be provided for this option, so we set multiple to no as well.
|
%%file r.viewshed.points.py
#!/usr/bin/env python
#%module
#% description: Compute and analyze viewsheds
#% keyword: raster
#% keyword: viewshed
#%end
#%option G_OPT_R_INPUT
#% key: elevation
#%end
#%option G_OPT_V_INPUT
#% key: points
#%end
#%option
#% key: max_distance
#% key_desc: value
#% type: double
#% description: Maximum visibility radius. By default infinity (-1)
#% answer: -1
#% multiple: no
#% required: no
#%end
import grass.script as gscript
def main():
options, flags = gscript.parser()
elevation = options['elevation']
points = options['points']
max_distance = float(options['max_distance'])
print(elevation, points, max_distance)
if __name__ == '__main__':
main()
!./r.viewshed.points.py elevation=elevation points=bridges max_distance=50
|
GSOC/notebooks/Projects/GRASS/python-grass-addons/04_script_to_grass_module.ipynb
|
OSGeo-live/CesiumWidget
|
apache-2.0
|
Adding parameters as flags
|
%%file r.viewshed.points.py
#!/usr/bin/env python
#%module
#% description: Compute and analyze viewsheds
#% keyword: raster
#% keyword: viewshed
#%end
#%option G_OPT_R_INPUT
#% key: elevation
#%end
#%option G_OPT_V_INPUT
#% key: points
#%end
#%option
#% key: max_distance
#% key_desc: value
#% type: double
#% description: Maximum visibility radius. By default infinity (-1)
#% answer: -1
#% multiple: no
#% required: no
#%end
#%flag
#% key: c
#% description: Consider the curvature of the earth (current ellipsoid)
#%end
import grass.script as gscript
def main():
options, flags = gscript.parser()
elevation = options['elevation']
points = options['points']
max_distance = float(options['max_distance'])
use_curvature = flags['c']
print(elevation, points, max_distance)
if use_curvature:
print(use_curvature, "is true")
else:
print(use_curvature, "is false")
if __name__ == '__main__':
main()
!./r.viewshed.points.py -c elevation=elevation points=bridges max_distance=50
|
GSOC/notebooks/Projects/GRASS/python-grass-addons/04_script_to_grass_module.ipynb
|
OSGeo-live/CesiumWidget
|
apache-2.0
|
Compilation and formal requirements
As mentioned before, a GRASS GIS module must use the GRASS parser to handle its command line parameters.
A module reads and/or writes geospatial data as GRASS raster or vector maps.
A module should not overwrite existing data unless specified by the user using the --overwrite flag.
For raster and vector maps and other files, the GRASS parser automatically checks their existence and ends the module execution with a proper error message in case the output already exists.
Furthermore, a file with documentation which uses simple HTML syntax must be provided.
This documentation is then distributed with the addon and it is also automatically available online (GRASS addons docs).
A template with the main sections follows.
|
%%file r.viewshed.points.html
<h2>DESCRIPTION</h2>
A long description with details about the method, implementation, usage or whatever is appropriate.
<h2>EXAMPLES</h2>
Examples of how the module can be used alone or in combination with other modules.
Possibly using the GRASS North Carolina State Plane Metric sample Location.
At least one screenshot (PNG format) of the result should provided to show the user what to expect.
<h2>REFERENCES</h2>
Reference or references to paper related to the module or references which algorithm the module is based on.
<h2>SEE ALSO</h2>
List of related or similar GRASS GIS modules or modules used together with this module as well as any related websites, or
related pages at the GRASS GIS User wiki.
<h2>AUTHORS</h2>
List of author(s), their organizations and funding sources.
|
GSOC/notebooks/Projects/GRASS/python-grass-addons/04_script_to_grass_module.ipynb
|
OSGeo-live/CesiumWidget
|
apache-2.0
|
All GRASS GIS modules must be distributed under GNU GPL license (version 2 or later).
There is a prescribed way the first comment in a module's main file should look.
Here is how it can look for our module:
"""
MODULE: r.viewshed.points
AUTHOR(S): John Smith <email at some domain>
PURPOSE: Computes viewshed for multiple points
COPYRIGHT: (C) 2015 John Smith
This program is free software under the GNU General Public
License (>=v2). Read the file COPYING that comes with GRASS
for details.
"""
Although Python is not a compiled language like C, we also need to 'compile' a Python-based GRASS GIS module in order to include it in our GRASS installation and to create the HTML documentation and GUI. For this, a Makefile needs to be written, which follows a standard template as well. The included 'Script.make' takes care of processing everything, given that the Python script, the HTML documentation and optional screenshot(s) in PNG format are present in the same directory.
|
%%file Makefile
MODULE_TOPDIR = ../..
PGM = r.viewshed.points
include $(MODULE_TOPDIR)/include/Make/Script.make
default: script
|
GSOC/notebooks/Projects/GRASS/python-grass-addons/04_script_to_grass_module.ipynb
|
OSGeo-live/CesiumWidget
|
apache-2.0
|
Now we will compile the module, which will also add it to our GRASS GIS installation. To do this, we need administrator rights (on Linux, use sudo on the command line). First we need to create one directory required for the compilation:
|
!mkdir `grass70 --config path`/scriptstrings
!g.gisenv set="MAPSET=PERMANENT"
!g.gisenv
|
GSOC/notebooks/Projects/GRASS/python-grass-addons/04_script_to_grass_module.ipynb
|
OSGeo-live/CesiumWidget
|
apache-2.0
|
Then we can compile:
|
!make MODULE_TOPDIR=`grass70 --config path`
|
GSOC/notebooks/Projects/GRASS/python-grass-addons/04_script_to_grass_module.ipynb
|
OSGeo-live/CesiumWidget
|
apache-2.0
|
In these two commands, we are using the 'backticks' syntax to include the output of one command in another command.
Handling temporary maps and region
In scripts we often have to create temporary maps, which need to be removed when the script finishes or if there is an error.
Similarly, when we need to change the computational region in a script, we use a temporary region so that we don't affect the current region settings. This also allows running multiple scripts in parallel, each with its own region settings.
We will show you how to handle such cases.
Temporary maps
We will show how to deal with temporary maps using a simple example script. We will generate temporary random maps, each with a unique name. A unique name can be achieved by using the process ID in combination with some prefix:
|
%tb
import os
import grass.script as gscript
def main():
for i in range(5):
name = 'tmp_raster_' + str(os.getpid()) + str(i)
gscript.mapcalc("{name} = rand(0, 10)".format(name=name), seed=1)
if __name__ == '__main__':
main()
|
GSOC/notebooks/Projects/GRASS/python-grass-addons/04_script_to_grass_module.ipynb
|
OSGeo-live/CesiumWidget
|
apache-2.0
|
To avoid removing temporary maps manually with g.remove,
we write a function which removes those temporary maps at the end of the script.
|
%tb
import os
import grass.script as gscript
TMP_MAPS = []
def main():
for i in range(5):
name = 'tmp_raster_' + str(os.getpid()) + str(i)
TMP_MAPS.append(name)
gscript.mapcalc("{name} = rand(0, 10)".format(name=name), seed=1)
cleanup()
def cleanup():
gscript.run_command('g.remove', type='raster', name=','.join(TMP_MAPS), flags='f')
if __name__ == '__main__':
main()
|
GSOC/notebooks/Projects/GRASS/python-grass-addons/04_script_to_grass_module.ipynb
|
OSGeo-live/CesiumWidget
|
apache-2.0
|
However, in case of an error, the cleanup function won't be called:
|
import os
import grass.script as gscript
TMP_MAPS = []
def main():
for i in range(5):
name = 'tmp_raster_' + str(os.getpid()) + str(i)
TMP_MAPS.append(name)
gscript.mapcalc("{name} = rand(0, 10)".format(name=name), seed=1)
# we raise intentionally error here:
raise TypeError
cleanup()
def cleanup():
gscript.run_command('g.remove', type='raster', name=','.join(TMP_MAPS), flags='f')
if __name__ == '__main__':
main()
gscript.list_pairs(type='raster', pattern='tmp_raster_*')
|
GSOC/notebooks/Projects/GRASS/python-grass-addons/04_script_to_grass_module.ipynb
|
OSGeo-live/CesiumWidget
|
apache-2.0
|
Therefore we will use the Python atexit module to ensure the cleanup function is called every time, even when the script ends with an error.
|
import os
import atexit
import grass.script as gscript
TMP_MAPS = []
def main():
for i in range(5):
name = 'tmp_raster_' + str(os.getpid()) + str(i)
TMP_MAPS.append(name)
gscript.mapcalc("{name} = rand(0, 10)".format(name=name), seed=1)
# we raise intentionally error here:
raise TypeError
def cleanup():
gscript.run_command('g.remove', type='raster', name=','.join(TMP_MAPS), flags='f')
if __name__ == '__main__':
atexit.register(cleanup)
main()
|
GSOC/notebooks/Projects/GRASS/python-grass-addons/04_script_to_grass_module.ipynb
|
OSGeo-live/CesiumWidget
|
apache-2.0
|
Temporary region
Using a temporary region is simple: the library function use_temp_region is called at the beginning of the script, and we can then call g.region anywhere in the script. First, we set the region to match a raster map.
|
import grass.script as gscript
gscript.run_command('g.region', raster='elevation')
print(gscript.region())
|
GSOC/notebooks/Projects/GRASS/python-grass-addons/04_script_to_grass_module.ipynb
|
OSGeo-live/CesiumWidget
|
apache-2.0
|
As shown in this example,
the region set in the script doesn't affect the current region set in the previous step.
|
import grass.script as gscript


def main():
    gscript.run_command('g.region', n=100, s=0, w=0, e=100, res=1)
    print(gscript.region())


if __name__ == '__main__':
    gscript.use_temp_region()
    main()
    print(gscript.region())
|
GSOC/notebooks/Projects/GRASS/python-grass-addons/04_script_to_grass_module.ipynb
|
OSGeo-live/CesiumWidget
|
apache-2.0
|
Complete module example
Now we will convert the script from the previous section into a fully fledged module.
The inputs are a vector map with points and an elevation map. The output is a new vector map of points with viewshed characteristics saved in the attribute table, plus the raster viewsheds (basename specified by G_OPT_R_BASENAME_OUTPUT). Additionally, the module allows setting the maximum distance and the observer elevation, and considering the curvature of the earth when computing the viewshed.
Tip: use r.viewshed --script to copy the definition of options max_distance and observer_elevation.
|
%%file r.viewshed.points.py
#!/usr/bin/env python
#%module
#% description: Compute and analyze viewsheds
#% keyword: raster
#% keyword: viewshed
#%end
#%option G_OPT_R_INPUT
#% key: elevation
#%end
#%option G_OPT_V_INPUT
#% key: points
#%end
#%option G_OPT_V_OUTPUT
#% key: output_points
#%end
#%option G_OPT_R_BASENAME_OUTPUT
#% key: viewshed_basename
#%end
#%option
#% key: observer_elevation
#% type: double
#% required: no
#% multiple: no
#% key_desc: value
#% description: Viewing elevation above the ground
#% answer: 1.75
#%end
#%option
#% key: max_distance
#% key_desc: value
#% type: double
#% description: Maximum visibility radius. By default infinity (-1)
#% answer: -1
#% multiple: no
#% required: no
#%end
#%flag
#% key: c
#% description: Consider the curvature of the earth (current ellipsoid)
#%end
import os
import atexit
import grass.script as gscript
from grass.pygrass.vector import VectorTopo
TMP_MAPS = []
def main():
options, flags = gscript.parser()
elevation = options['elevation']
input_points = options['points']
basename = options['viewshed_basename']
output_points = options['output_points']
observer_elev = options['observer_elevation']
max_distance = float(options['max_distance'])
flag_curvature = 'c' if flags['c'] else ''
tmp_prefix = 'tmp_r_viewshed_points_' + str(os.getpid()) + '_'
tmp_viewshed_name = tmp_prefix + 'viewshed'
tmp_viewshed_vector_name = tmp_prefix + 'viewshed_vector'
tmp_visible_points = tmp_prefix + 'points'
tmp_point = tmp_prefix + 'current_point'
TMP_MAPS.extend([tmp_viewshed_name, tmp_viewshed_vector_name, tmp_visible_points, tmp_point])
columns = [('cat', 'INTEGER'),
('area', 'DOUBLE PRECISION'),
('n_points_visible', 'INTEGER'),
('distance_to_closest', 'DOUBLE PRECISION')]
with VectorTopo(input_points, mode='r') as points, \
VectorTopo(output_points, mode='w', tab_cols=columns) as output:
for point in points:
viewshed_id = str(point.cat)
viewshed_name = basename + '_' + viewshed_id
gscript.run_command('r.viewshed', input=elevation,
output=tmp_viewshed_name, coordinates=point.coords(),
max_distance=max_distance, flags=flag_curvature,
observer_elev=observer_elev, overwrite=True, quiet=True)
gscript.mapcalc(exp="{viewshed} = if({tmp}, {vid}, null())".format(viewshed=viewshed_name,
tmp=tmp_viewshed_name,
vid=viewshed_id))
# viewshed size
cells = gscript.parse_command('r.univar', map=viewshed_name,
flags='g')['n']
area = float(cells) * gscript.region()['nsres'] * gscript.region()['nsres']
# visible points
gscript.run_command('r.to.vect', input=viewshed_name, output=tmp_viewshed_vector_name,
type='area', overwrite=True, quiet=True)
gscript.run_command('v.select', ainput=input_points, atype='point',
binput=tmp_viewshed_vector_name, btype='area',
operator='overlap', flags='t',
output=tmp_visible_points, overwrite=True, quiet=True)
n_points_visible = gscript.vector_info_topo(tmp_visible_points)['points'] - 1
# distance to closest visible point
if float(n_points_visible) >= 1:
gscript.write_command('v.in.ascii', input='-', stdin='%s|%s' % (point.x, point.y),
output=tmp_point, overwrite=True, quiet=True)
distance = gscript.read_command('v.distance', from_=tmp_point, from_type='point', flags='p',
to=tmp_visible_points, to_type='point', upload='dist',
dmin=1, quiet=True).strip()
distance = float(distance.splitlines()[1].split('|')[1])
else:
distance = 0
#
# write each point with its attributes
#
output.write(point, (area, n_points_visible, distance))
output.table.conn.commit()
def cleanup():
gscript.run_command('g.remove', type='raster', name=','.join(TMP_MAPS), flags='f')
if __name__ == '__main__':
atexit.register(cleanup)
main()
# to build the module we need to remove the anaconda python from the PATH
# which has a newer version of GDAL in conflict with the system one used by GRASS
PATH='/usr/local/bin:/usr/local/grass-7.0.2svn/bin:/usr/local/grass-7.0.2svn/scripts:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
import os
os.environ['PATH']=PATH
!make MODULE_TOPDIR=`grass70 --config path`
!v.info input_points
!g.region vect=input_points res=100 -ap
import grass.script as gscript
gscript.run_command('r.viewshed.points', elevation='elevation',
points='input_points', viewshed_basename='viewshed',
output_points='new_points', flags='c', overwrite=True)
!v.info new_points
|
GSOC/notebooks/Projects/GRASS/python-grass-addons/04_script_to_grass_module.ipynb
|
OSGeo-live/CesiumWidget
|
apache-2.0
|
We prepare the controlled-Hadamard and controlled-u3 gates required in the encoding as shown below.
|
def ch(qProg, a, b):
""" Controlled-Hadamard gate """
qProg.h(b)
qProg.sdg(b)
qProg.cx(a, b)
qProg.h(b)
qProg.t(b)
qProg.cx(a, b)
qProg.t(b)
qProg.h(b)
qProg.s(b)
qProg.x(b)
qProg.s(a)
return qProg
def cu3(qProg, theta, phi, lambd, c, t):
""" Controlled-u3 gate """
qProg.u1((lambd-phi)/2, t)
qProg.cx(c, t)
qProg.u3(-theta/2, 0, -(phi+lambd)/2, t)
qProg.cx(c, t)
qProg.u3(theta/2, phi, 0, t)
return qProg
|
community/terra/qis_adv/two-qubit_state_quantum_random_access_coding.ipynb
|
antoniomezzacapo/qiskit-tutorial
|
apache-2.0
|
Encoding 7 bits into 2 qubits with $(7,2)$-QRAC
The sender prepares the state to be sent by working on 3 qubits and uses the first one to control the mixture. She needs controlled unitary gates of $(3,1)$-QRAC encoding for her first 6 bits. When the first qubit is zero, she encodes $x_1x_2x_3$ into the second qubit, and $x_4x_5x_6$ into the third qubit. This can be realized with NOT on the first qubit followed by controlled $(3,1)$-QRAC gates with the first qubit as control and the second and third qubits as targets.
To encode $x_7$, when $x_7 = 0$ the sender applies a controlled Hadamard on the second qubit (with the first qubit as control), and applies a Toffoli gate on the third qubit using the first and second qubits as controls. When $x_7 = 1$, the sender flips the second and third qubits before applying the same operations as when $x_7 = 0$.
The decoding operations are the same as those of the $(3,1)$-QRAC, and to decode $x_7$, the receiver simply measures the second and third qubits in the computational basis.
Below are the quantum circuits for encoding "0101010" and decoding any one bit with the $(7,2)$-QRAC.
|
#CHANGE THIS 7BIT 0-1 STRING TO PERFORM EXPERIMENT ON ENCODING 0000000, ..., 1111111
x1234567 = "0101010"
if len(x1234567) != 7 or any(c not in "01" for c in x1234567):
    raise Exception("x1234567 is a 7-bit 0-1 pattern. Please set it to the correct pattern")
#compute the value of rotation angle theta of (3,1)-QRAC
theta = acos(sqrt(0.5 + sqrt(3.0)/6.0))
#to record the u3 parameters for encoding 000, 010, 100, 110, 001, 011, 101, 111
rotationParams = {"000":(2*theta, pi/4, -pi/4), "010":(2*theta, 3*pi/4, -3*pi/4),
"100":(pi-2*theta, pi/4, -pi/4), "110":(pi-2*theta, 3*pi/4, -3*pi/4),
"001":(2*theta, -pi/4, pi/4), "011":(2*theta, -3*pi/4, 3*pi/4),
"101":(pi-2*theta, -pi/4, pi/4), "111":(pi-2*theta, -3*pi/4, 3*pi/4)}
# Creating registers
# qubits for encoding 7 bits of information with qr[0] kept by the sender
qr = QuantumRegister(3)
# bits for recording the measurement of the qubits qr[1] and qr[2]
cr = ClassicalRegister(2)
encodingName = "Encode"+x1234567
encodingCircuit = QuantumCircuit(qr, cr, name=encodingName)
#Prepare superposition of mixing QRACs of x1...x6 and x7
encodingCircuit.u3(1.187, 0, 0, qr[0])
#Encoding the seventh bit
seventhBit = x1234567[6]
if seventhBit == "1": #copy qr[0] into qr[1] and qr[2]
encodingCircuit.cx(qr[0], qr[1])
encodingCircuit.cx(qr[0], qr[2])
#perform controlled-Hadamard qr[0], qr[1], and toffoli qr[0], qr[1] , qr[2]
encodingCircuit = ch(encodingCircuit, qr[0], qr[1])
encodingCircuit.ccx(qr[0], qr[1], qr[2])
#End of encoding the seventh bit
#encode x1...x6 with two (3,1)-QRACS. To do that, we must flip q[0] so that the controlled encoding is executed
encodingCircuit.x(qr[0])
#Encoding the first 3 bits 000, ..., 111 into the second qubit, i.e., (3,1)-QRAC on the second qubit
firstThreeBits = x1234567[0:3]
#encodingCircuit.cu3(*rotationParams[firstThreeBits], qr[0], qr[1])
encodingCircuit = cu3(encodingCircuit, *rotationParams[firstThreeBits], qr[0], qr[1])
#Encoding the second 3 bits 000, ..., 111 into the third qubit, i.e., (3,1)-QRAC on the third qubit
secondThreeBits = x1234567[3:6]
#encodingCircuit.cu3(*rotationParams[secondTreeBits], qr[0], qr[2])
encodingCircuit = cu3(encodingCircuit, *rotationParams[secondThreeBits], qr[0], qr[2])
#end of encoding
encodingCircuit.barrier()
# dictionary for decoding circuits
decodingCircuits = {}
# Quantum circuits for decoding the 1st to 6th bits
for i, pos in enumerate(["First", "Second", "Third", "Fourth", "Fifth", "Sixth"]):
circuitName = "Decode"+pos
decodingCircuits[circuitName] = QuantumCircuit(qr, cr, name=circuitName)
if i < 3: #measure 1st, 2nd, 3rd bit
if pos == "Second": #if pos == "First" we can directly measure
decodingCircuits[circuitName].h(qr[1])
elif pos == "Third":
decodingCircuits[circuitName].u3(pi/2, -pi/2, pi/2, qr[1])
decodingCircuits[circuitName].measure(qr[1], cr[1])
else: #measure 4th, 5th, 6th bit
if pos == "Fifth": #if pos == "Fourth" we can directly measure
decodingCircuits[circuitName].h(qr[2])
elif pos == "Sixth":
decodingCircuits[circuitName].u3(pi/2, -pi/2, pi/2, qr[2])
decodingCircuits[circuitName].measure(qr[2], cr[1])
#Quantum circuits for decoding the 7th bit
decodingCircuits["DecodeSeventh"] = QuantumCircuit(qr, cr, name="DecodeSeventh")
decodingCircuits["DecodeSeventh"].measure(qr[1], cr[0])
decodingCircuits["DecodeSeventh"].measure(qr[2], cr[1])
#combine encoding and decoding of (7,2)-QRACs to get a list of complete circuits
circuitNames = []
circuits = []
k1 = encodingName
for k2 in decodingCircuits.keys():
circuitNames.append(k1+k2)
circuits.append(encodingCircuit+decodingCircuits[k2])
print("List of circuit names:", circuitNames) #list of circuit names
# for circuit in circuits: #list qasms codes
# print(circuit.qasm())
|
community/terra/qis_adv/two-qubit_state_quantum_random_access_coding.ipynb
|
antoniomezzacapo/qiskit-tutorial
|
apache-2.0
|
Below are plots of the experimental results of extracting the first to sixth bits, which show the $i$-th bit being observed correctly with probability at least $0.54$.
|
%%qiskit_job_status
# Use the qasm simulator
backend = Aer.get_backend("qasm_simulator")
# Use the IBM Quantum Experience
# backend = least_busy(IBMQ.backends(simulator=False))
shots = 1000
job = execute(circuits, backend=backend, shots=shots)
results = job.result()
for k in ["DecodeFirst", "DecodeSecond", "DecodeThird", "DecodeFourth", "DecodeFifth", "DecodeSixth"]:
print("Experimental Result of ", encodingName+k)
plot_histogram(results.get_counts(circuits[circuitNames.index(encodingName+k)]))
|
community/terra/qis_adv/two-qubit_state_quantum_random_access_coding.ipynb
|
antoniomezzacapo/qiskit-tutorial
|
apache-2.0
|
The seventh bit is obtained by looking at the contents of the classical registers. If they are the same, i.e., both are 1 or both are 0, we conclude that it is 0, and otherwise 1. For the encoding of 0101010, the seventh bit is 0, so the total probability of observing 00 and 11 must exceed that of observing 01 and 10.
|
print("Experimental result of ", encodingName+"DecodeSeventh")
plot_histogram(results.get_counts(circuits[circuitNames.index(encodingName+"DecodeSeventh")]))
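# A small illustrative check (added here; it assumes the counts dictionary uses the
# two-bit keys '00', '01', '10' and '11'): the seventh bit is decoded as 0 when
# P(00) + P(11) exceeds P(01) + P(10).
counts = results.get_counts(circuits[circuitNames.index(encodingName+"DecodeSeventh")])
same = counts.get('00', 0) + counts.get('11', 0)
different = counts.get('01', 0) + counts.get('10', 0)
print("Decoded seventh bit:", 0 if same > different else 1)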
|
community/terra/qis_adv/two-qubit_state_quantum_random_access_coding.ipynb
|
antoniomezzacapo/qiskit-tutorial
|
apache-2.0
|
Graded = 7/8
1) What books topped the Hardcover Fiction NYT best-sellers list on Mother's Day in 2009 and 2010? How about Father's Day?
|
#Mother's Day in 2009 was May 10, 2009
response = requests.get("http://api.nytimes.com/svc/books/v2/lists/2009-05-10/hardcover-fiction.json?api-key=2ca9e983dcfd4b1ba330521af1c9c2b2")
mom_09_data = response.json()
#print(mom_09_data)
#mom_09_data.keys()
#print(mom_09_data['results'])
for item in mom_09_data['results']:
for title in item['book_details']:
print(title['title'])
#Q: Is this the only way to get into a dictionary in a list in a dictionary in a list? To do another for loop?
#Mother's Day in 2010 was May 9, 2010
response = requests.get("http://api.nytimes.com/svc/books/v2/lists/2010-05-09/hardcover-fiction.json?api-key=2ca9e983dcfd4b1ba330521af1c9c2b2")
mom_10_data = response.json()
#print(mom_10_data)
for item in mom_10_data['results']:
for title in item['book_details']:
print(title['title'])
#Father's Day in 2009 was June 21, 2009
response = requests.get("http://api.nytimes.com/svc/books/v2/lists/2009-06-21/hardcover-fiction.json?api-key=2ca9e983dcfd4b1ba330521af1c9c2b2")
dad_09_data = response.json()
for item in dad_09_data['results']:
for title in item['book_details']:
print(title['title'])
#Father's Day in 2010 was June 20, 2010
response = requests.get("http://api.nytimes.com/svc/books/v2/lists/2010-06-20/hardcover-fiction.json?api-key=2ca9e983dcfd4b1ba330521af1c9c2b2")
dad_10_data = response.json()
for item in dad_10_data['results']:
for title in item['book_details']:
print(title['title'])
|
5/.ipynb_checkpoints/nyt-homework-hon-june6-checkpoint.ipynb
|
honjy/foundations-homework
|
mit
|
2) What are all the different book categories the NYT ranked in June 6, 2009? How about June 6, 2015?
Question: To specify a date, include it in the URI path. To specify a response-format, add it as an extension. The other parameters in this table are specified as name-value pairs in a query string.
(What is the difference between putting it in the URI path and putting it as a query?)
|
response = requests.get("http://api.nytimes.com/svc/books/v2/lists/names.json?date=2009-06-06&api-key=2ca9e983dcfd4b1ba330521af1c9c2b2")
june6_09_data = response.json()
#print(june6_09_data)
#Looks all right?
june6_09_data.keys()
#print(june6_09_data['results'])
#Looks like the categories are under display name and list name. I'll just go for list name.
#I hope I got the right list, I am actually not very sure.
for category in june6_09_data['results']:
print(category['list_name'])
response = requests.get("http://api.nytimes.com/svc/books/v2/lists/names.json?date=2015-06-06&api-key=2ca9e983dcfd4b1ba330521af1c9c2b2")
june6_15_data = response.json()
#print(june6_15_data)
for category in june6_15_data['results']:
print(category['list_name'])
|
5/.ipynb_checkpoints/nyt-homework-hon-june6-checkpoint.ipynb
|
honjy/foundations-homework
|
mit
|
3) Muammar Gaddafi's name can be transliterated many many ways. His last name is often a source of a million and one versions - Gadafi, Gaddafi, Kadafi, and Qaddafi to name a few. How many times has the New York Times referred to him by each of those names?
Tip: Add "Libya" to your search to make sure (-ish) you're talking about the right guy.
|
#Gadafi
response = requests.get("http://api.nytimes.com/svc/search/v2/articlesearch.json?fq=Gadafi&glocations=Libya&api-key=2ca9e983dcfd4b1ba330521af1c9c2b2")
gadafi_data = response.json()
#print(gadafi_data)
print("The NYT has referred to him by 'Gadafi' a total of", gadafi_data['response']['meta']['hits'], "times")
#Gaddafi
response = requests.get("http://api.nytimes.com/svc/search/v2/articlesearch.json?fq=Gaddafi&glocations=Libya&api-key=2ca9e983dcfd4b1ba330521af1c9c2b2")
gaddafi_data = response.json()
#print(gaddafi_data)
print("The NYT has referred to him by 'Gaddafi' a total of", gaddafi_data['response']['meta']['hits'], "times")
#Kadafi
response = requests.get("http://api.nytimes.com/svc/search/v2/articlesearch.json?fq=Kadafi&glocations=Libya&api-key=2ca9e983dcfd4b1ba330521af1c9c2b2")
kadafi_data = response.json()
#print(kadafi_data)
print("The NYT has referred to him by 'Kadafi' a total of", kadafi_data['response']['meta']['hits'], "times")
#Qaddafi
response = requests.get("http://api.nytimes.com/svc/search/v2/articlesearch.json?fq=qaddafi&glocations=Libya&api-key=2ca9e983dcfd4b1ba330521af1c9c2b2")
qaddafi_data = response.json()
#print(qaddafi_data)
print("The NYT has referred to him by 'Qaddafi' a total of", qaddafi_data['response']['meta']['hits'], "times")
|
5/.ipynb_checkpoints/nyt-homework-hon-june6-checkpoint.ipynb
|
honjy/foundations-homework
|
mit
|
4) What's the title of the first story to mention the word 'hipster' in 1995? What's the first paragraph?
Question: What is the difference between query and filter query?
|
#hipster
response = requests.get("http://api.nytimes.com/svc/search/v2/articlesearch.json?fq=hipster&begin_date=19950101&end_date=19951231&sort=oldest&api-key=2ca9e983dcfd4b1ba330521af1c9c2b2")
hipster_data = response.json()
#print(hipster_data)
hipster_data.keys()
#print(hipster_data['response']['docs'][0]) #The first story
hipster_data['response']['docs'][0].keys()
print("The title of the first story to mention the word 'hipster' is", hipster_data['response']['docs'][0]['headline']['main'])
|
5/.ipynb_checkpoints/nyt-homework-hon-june6-checkpoint.ipynb
|
honjy/foundations-homework
|
mit
|
TA-Stephan: Didn't print out first paragraph.
5) How many times was gay marriage mentioned in the NYT between 1950-1959, 1960-1969, 1970-1979, 1980-1989, 1990-1999, 2000-2009, and 2010-present?
Tip: You'll want to put quotes around the search term so it isn't just looking for "gay" and "marriage" in the same article.
Tip: Write code to find the number of mentions between Jan 1, 1950 and Dec 31, 1959.
|
#gaymarriage
response = requests.get("http://api.nytimes.com/svc/search/v2/articlesearch.json?q=%22gay%20marriage%22&begin_date=19500101&end_date=19591231&api-key=2ca9e983dcfd4b1ba330521af1c9c2b2")
gay_50_data = response.json()
#print(gay_50_data)
print("The number of times gay marriage is mentioned between 1950 and 1959 is", gay_50_data['response']['meta']['hits'], "times")
response = requests.get("http://api.nytimes.com/svc/search/v2/articlesearch.json?q=%22gay%20marriage%22&begin_date=19600101&end_date=19691231&api-key=2ca9e983dcfd4b1ba330521af1c9c2b2")
gay_60_data = response.json()
print("The number of times gay marriage is mentioned between 1960 and 1969 is", gay_60_data['response']['meta']['hits'], "times")
response = requests.get("http://api.nytimes.com/svc/search/v2/articlesearch.json?q=%22gay%20marriage%22&begin_date=19700101&end_date=19791231&api-key=2ca9e983dcfd4b1ba330521af1c9c2b2")
gay_70_data = response.json()
print("The number of times gay marriage is mentioned between 1970 and 1979 is", gay_70_data['response']['meta']['hits'], "times")
response = requests.get("http://api.nytimes.com/svc/search/v2/articlesearch.json?q=gaymarriage%22&begin_date=19800101&end_date=19891231&api-key=2ca9e983dcfd4b1ba330521af1c9c2b2")
gay_80_data = response.json()
print("The number of times gay marriage is mentioned between 1980 and 1989 is", gay_80_data['response']['meta']['hits'], "times")
response = requests.get("http://api.nytimes.com/svc/search/v2/articlesearch.json?q=%22gay%20marriage%22&begin_date=19900101&end_date=19991231&api-key=2ca9e983dcfd4b1ba330521af1c9c2b2")
gay_90_data = response.json()
print("The number of times gay marriage is mentioned between 1990 and 1999 is", gay_90_data['response']['meta']['hits'], "times")
response = requests.get("http://api.nytimes.com/svc/search/v2/articlesearch.json?q=%22gay%20marriage%22&begin_date=20000101&end_date=20091231&api-key=2ca9e983dcfd4b1ba330521af1c9c2b2")
gay_00_data = response.json()
print("The number of times gay marriage is mentioned between 2000 and 2001 is", gay_00_data['response']['meta']['hits'], "times")
response = requests.get("http://api.nytimes.com/svc/search/v2/articlesearch.json?q=%22gay%20marriage%22&begin_date=20100101&api-key=2ca9e983dcfd4b1ba330521af1c9c2b2")
gay_10_data = response.json()
print("The number of times gay marriage is mentioned between 2010 and now is", gay_10_data['response']['meta']['hits'], "times")
|
5/.ipynb_checkpoints/nyt-homework-hon-june6-checkpoint.ipynb
|
honjy/foundations-homework
|
mit
|
6) What section talks about motorcycles the most?
Tip: You'll be using facets
|
response = requests.get("http://api.nytimes.com/svc/search/v2/articlesearch.json?q=motorcycle&facet_field=section_name&api-key=2ca9e983dcfd4b1ba330521af1c9c2b2")
motor_data = response.json()
top_motor = motor_data['response']['facets']['section_name']['terms']
for item in top_motor:
print(item['term'], item['count'])
print("The section that mentions motorcycles the most is the World section with 1739 mentions")
#QUESTION: Why can't I do this?
#Question: Is there a way to automatically tell you highest value?
#top_motor = (motor_data['response']['facets']['section_name']['terms'][0])
#for item in top_motor:
#print("The section that mentions motorcycles the most is the", item['term'], "section with", item['count'], "mentions")
|
5/.ipynb_checkpoints/nyt-homework-hon-june6-checkpoint.ipynb
|
honjy/foundations-homework
|
mit
|
7) How many of the last 20 movies reviewed by the NYT were Critics' Picks? How about the last 40? The last 60?
Tip: You really don't want to do this 3 separate times (1-20, 21-40 and 41-60) and add them together. What if, perhaps, you were able to figure out how to combine two lists? Then you could have a 1-20 list, a 1-40 list, and a 1-60 list, and then just run similar code for each of them.
|
response = requests.get("http://api.nytimes.com/svc/movies/v2/reviews/search.json?resource-type=all&api-key=2ca9e983dcfd4b1ba330521af1c9c2b2")
movie20_data = response.json()
#print(movie20_data)
#movie20_data.keys()
#print(movie20_data['results'])
#I am assuming critics_pick = 0 means no and critics_pick = 1 means yes
movie_count = 0
for movie in movie20_data['results']:
#print(movie['display_title'], movie['critics_pick'])
if movie['critics_pick'] > 0:
movie_count = movie_count + 1
print("There are", movie_count, "Critics' Picks movies in the last 20 movies reviewed by the NYT")
response = requests.get("http://api.nytimes.com/svc/movies/v2/reviews/search.json?resource-type=all&offset=20&api-key=2ca9e983dcfd4b1ba330521af1c9c2b2")
movie40_data = response.json()
#print(movie40_data)
movie40 = movie40_data['results'] + movie20_data['results']
#print(movie40)
movie_count = 0
for pie in movie40:
if pie['critics_pick'] > 0:
movie_count = movie_count + 1
print("There are", movie_count, "Critics' Picks movies in the last 40 movies reviewed by the NYT")
response = requests.get("http://api.nytimes.com/svc/movies/v2/reviews/search.json?resource-type=all&offset=40&api-key=2ca9e983dcfd4b1ba330521af1c9c2b2")
movie60_data = response.json()
movie60 = movie60_data['results'] + movie40_data['results'] + movie20_data['results']
movie_count = 0
for berry in movie60:
if berry['critics_pick'] > 0:
movie_count = movie_count + 1
print("There are", movie_count, "Critics' Picks movies in the last 60 movies reviewed by the NYT")
|
5/.ipynb_checkpoints/nyt-homework-hon-june6-checkpoint.ipynb
|
honjy/foundations-homework
|
mit
|
8) Out of the last 40 movie reviews from the NYT, which critic has written the most reviews?
|
byline_count = []
from collections import Counter
#print(movie40_data['results'])
for stuff in movie40_data['results']:
byline = stuff['byline']
#print(byline)
byline_count.append(byline)
counts = Counter(byline_count)
counts.most_common(1)
|
5/.ipynb_checkpoints/nyt-homework-hon-june6-checkpoint.ipynb
|
honjy/foundations-homework
|
mit
|
Load up Dependencies
|
import tensorflow as tf
import tensorflow_hub as hub
import numpy as np
import pandas as pd
|
New-Second-Edition/Ch10 - The Promise of Deep Learning/Ch10a - Deep Transfer Learning for NLP - Text Classification with Universal Embeddings.ipynb
|
dipanjanS/text-analytics-with-python
|
apache-2.0
|
Check if GPU is available for use!
|
tf.test.is_gpu_available()
tf.test.gpu_device_name()
|
New-Second-Edition/Ch10 - The Promise of Deep Learning/Ch10a - Deep Transfer Learning for NLP - Text Classification with Universal Embeddings.ipynb
|
dipanjanS/text-analytics-with-python
|
apache-2.0
|
Load and View Dataset
|
dataset = pd.read_csv('movie_reviews.csv.bz2', compression='bz2')
dataset.info()
dataset['sentiment'] = [1 if sentiment == 'positive' else 0 for sentiment in dataset['sentiment'].values]
dataset.head()
|
New-Second-Edition/Ch10 - The Promise of Deep Learning/Ch10a - Deep Transfer Learning for NLP - Text Classification with Universal Embeddings.ipynb
|
dipanjanS/text-analytics-with-python
|
apache-2.0
|
Build train, validation and test datasets
|
reviews = dataset['review'].values
sentiments = dataset['sentiment'].values
train_reviews = reviews[:30000]
train_sentiments = sentiments[:30000]
val_reviews = reviews[30000:35000]
val_sentiments = sentiments[30000:35000]
test_reviews = reviews[35000:]
test_sentiments = sentiments[35000:]
train_reviews.shape, val_reviews.shape, test_reviews.shape
|
New-Second-Edition/Ch10 - The Promise of Deep Learning/Ch10a - Deep Transfer Learning for NLP - Text Classification with Universal Embeddings.ipynb
|
dipanjanS/text-analytics-with-python
|
apache-2.0
|
Basic Text Wrangling
|
!pip install contractions
!pip install beautifulsoup4
import contractions
from bs4 import BeautifulSoup
import unicodedata
import re
def strip_html_tags(text):
soup = BeautifulSoup(text, "html.parser")
[s.extract() for s in soup(['iframe', 'script'])]
stripped_text = soup.get_text()
stripped_text = re.sub(r'[\r|\n|\r\n]+', '\n', stripped_text)
return stripped_text
def remove_accented_chars(text):
text = unicodedata.normalize('NFKD', text).encode('ascii', 'ignore').decode('utf-8', 'ignore')
return text
def expand_contractions(text):
return contractions.fix(text)
def remove_special_characters(text, remove_digits=False):
pattern = r'[^a-zA-Z0-9\s]' if not remove_digits else r'[^a-zA-Z\s]'
text = re.sub(pattern, '', text)
return text
def pre_process_document(document):
# strip HTML
document = strip_html_tags(document)
# lower case
document = document.lower()
# remove extra newlines (often might be present in really noisy text)
    document = document.translate(document.maketrans("\n\t\r", "   "))
# remove accented characters
document = remove_accented_chars(document)
# expand contractions
document = expand_contractions(document)
# remove special characters and\or digits
# insert spaces between special characters to isolate them
special_char_pattern = re.compile(r'([{.(-)!}])')
document = special_char_pattern.sub(" \\1 ", document)
document = remove_special_characters(document, remove_digits=True)
# remove extra whitespace
document = re.sub(' +', ' ', document)
document = document.strip()
return document
pre_process_corpus = np.vectorize(pre_process_document)
train_reviews = pre_process_corpus(train_reviews)
val_reviews = pre_process_corpus(val_reviews)
test_reviews = pre_process_corpus(test_reviews)
|
New-Second-Edition/Ch10 - The Promise of Deep Learning/Ch10a - Deep Transfer Learning for NLP - Text Classification with Universal Embeddings.ipynb
|
dipanjanS/text-analytics-with-python
|
apache-2.0
|
Build Data Ingestion Functions
|
# Training input on the whole training set with no limit on training epochs.
train_input_fn = tf.estimator.inputs.numpy_input_fn(
{'sentence': train_reviews}, train_sentiments,
batch_size=256, num_epochs=None, shuffle=True)
# Prediction on the whole training set.
predict_train_input_fn = tf.estimator.inputs.numpy_input_fn(
{'sentence': train_reviews}, train_sentiments, shuffle=False)
# Prediction on the whole validation set.
predict_val_input_fn = tf.estimator.inputs.numpy_input_fn(
{'sentence': val_reviews}, val_sentiments, shuffle=False)
# Prediction on the test set.
predict_test_input_fn = tf.estimator.inputs.numpy_input_fn(
{'sentence': test_reviews}, test_sentiments, shuffle=False)
|
New-Second-Edition/Ch10 - The Promise of Deep Learning/Ch10a - Deep Transfer Learning for NLP - Text Classification with Universal Embeddings.ipynb
|
dipanjanS/text-analytics-with-python
|
apache-2.0
|
Build Deep Learning Model with Universal Sentence Encoder
|
embedding_feature = hub.text_embedding_column(
key='sentence',
module_spec="https://tfhub.dev/google/universal-sentence-encoder/2",
trainable=False)
dnn = tf.estimator.DNNClassifier(
hidden_units=[512, 128],
feature_columns=[embedding_feature],
n_classes=2,
activation_fn=tf.nn.relu,
dropout=0.1,
optimizer=tf.train.AdagradOptimizer(learning_rate=0.005))
|
New-Second-Edition/Ch10 - The Promise of Deep Learning/Ch10a - Deep Transfer Learning for NLP - Text Classification with Universal Embeddings.ipynb
|
dipanjanS/text-analytics-with-python
|
apache-2.0
|
Train for approx 12 epochs
|
# batch_size (256) * TOTAL_STEPS (1500) / number of training examples (30000) ~ 12.8 epochs
256*1500 / 30000
|
New-Second-Edition/Ch10 - The Promise of Deep Learning/Ch10a - Deep Transfer Learning for NLP - Text Classification with Universal Embeddings.ipynb
|
dipanjanS/text-analytics-with-python
|
apache-2.0
|
Model Training
|
tf.logging.set_verbosity(tf.logging.ERROR)
import time
TOTAL_STEPS = 1500
STEP_SIZE = 100
for step in range(0, TOTAL_STEPS+1, STEP_SIZE):
print()
print('-'*100)
print('Training for step =', step)
start_time = time.time()
dnn.train(input_fn=train_input_fn, steps=STEP_SIZE)
elapsed_time = time.time() - start_time
print('Train Time (s):', elapsed_time)
print('Eval Metrics (Train):', dnn.evaluate(input_fn=predict_train_input_fn))
print('Eval Metrics (Validation):', dnn.evaluate(input_fn=predict_val_input_fn))
|
New-Second-Edition/Ch10 - The Promise of Deep Learning/Ch10a - Deep Transfer Learning for NLP - Text Classification with Universal Embeddings.ipynb
|
dipanjanS/text-analytics-with-python
|
apache-2.0
|
Model Evaluation
|
dnn.evaluate(input_fn=predict_train_input_fn)
dnn.evaluate(input_fn=predict_test_input_fn)
|
New-Second-Edition/Ch10 - The Promise of Deep Learning/Ch10a - Deep Transfer Learning for NLP - Text Classification with Universal Embeddings.ipynb
|
dipanjanS/text-analytics-with-python
|
apache-2.0
|
Build a Generic Model Trainer on any Input Sentence Encoder
|
import time
TOTAL_STEPS = 1500
STEP_SIZE = 500
my_checkpointing_config = tf.estimator.RunConfig(
keep_checkpoint_max = 2, # Retain the 2 most recent checkpoints.
)
def train_and_evaluate_with_sentence_encoder(hub_module, train_module=False, path=''):
embedding_feature = hub.text_embedding_column(
key='sentence', module_spec=hub_module, trainable=train_module)
print()
print('='*100)
print('Training with', hub_module)
print('Trainable is:', train_module)
print('='*100)
dnn = tf.estimator.DNNClassifier(
hidden_units=[512, 128],
feature_columns=[embedding_feature],
n_classes=2,
activation_fn=tf.nn.relu,
dropout=0.1,
optimizer=tf.train.AdagradOptimizer(learning_rate=0.005),
model_dir=path,
config=my_checkpointing_config)
for step in range(0, TOTAL_STEPS+1, STEP_SIZE):
print('-'*100)
print('Training for step =', step)
start_time = time.time()
dnn.train(input_fn=train_input_fn, steps=STEP_SIZE)
elapsed_time = time.time() - start_time
print('Train Time (s):', elapsed_time)
print('Eval Metrics (Train):', dnn.evaluate(input_fn=predict_train_input_fn))
print('Eval Metrics (Validation):', dnn.evaluate(input_fn=predict_val_input_fn))
train_eval_result = dnn.evaluate(input_fn=predict_train_input_fn)
test_eval_result = dnn.evaluate(input_fn=predict_test_input_fn)
return {
"Model Dir": dnn.model_dir,
"Training Accuracy": train_eval_result["accuracy"],
"Test Accuracy": test_eval_result["accuracy"],
"Training AUC": train_eval_result["auc"],
"Test AUC": test_eval_result["auc"],
"Training Precision": train_eval_result["precision"],
"Test Precision": test_eval_result["precision"],
"Training Recall": train_eval_result["recall"],
"Test Recall": test_eval_result["recall"]
}
|
New-Second-Edition/Ch10 - The Promise of Deep Learning/Ch10a - Deep Transfer Learning for NLP - Text Classification with Universal Embeddings.ipynb
|
dipanjanS/text-analytics-with-python
|
apache-2.0
|
Train Deep Learning Models on different Sentence Encoders
NNLM - pre-trained and fine-tuning
USE - pre-trained and fine-tuning
|
tf.logging.set_verbosity(tf.logging.ERROR)
results = {}
results["nnlm-en-dim128"] = train_and_evaluate_with_sentence_encoder(
"https://tfhub.dev/google/nnlm-en-dim128/1", path='/storage/models/nnlm-en-dim128_f/')
results["nnlm-en-dim128-with-training"] = train_and_evaluate_with_sentence_encoder(
"https://tfhub.dev/google/nnlm-en-dim128/1", train_module=True, path='/storage/models/nnlm-en-dim128_t/')
results["use-512"] = train_and_evaluate_with_sentence_encoder(
"https://tfhub.dev/google/universal-sentence-encoder/2", path='/storage/models/use-512_f/')
results["use-512-with-training"] = train_and_evaluate_with_sentence_encoder(
"https://tfhub.dev/google/universal-sentence-encoder/2", train_module=True, path='/storage/models/use-512_t/')
|
New-Second-Edition/Ch10 - The Promise of Deep Learning/Ch10a - Deep Transfer Learning for NLP - Text Classification with Universal Embeddings.ipynb
|
dipanjanS/text-analytics-with-python
|
apache-2.0
|
Model Evaluations
|
results_df = pd.DataFrame.from_dict(results, orient="index")
results_df
best_model_dir = results_df[results_df['Test Accuracy'] == results_df['Test Accuracy'].max()]['Model Dir'].values[0]
best_model_dir
embedding_feature = hub.text_embedding_column(
key='sentence', module_spec="https://tfhub.dev/google/universal-sentence-encoder/2", trainable=True)
dnn = tf.estimator.DNNClassifier(
hidden_units=[512, 128],
feature_columns=[embedding_feature],
n_classes=2,
activation_fn=tf.nn.relu,
dropout=0.1,
optimizer=tf.train.AdagradOptimizer(learning_rate=0.005),
model_dir=best_model_dir)
dnn
def get_predictions(estimator, input_fn):
return [x["class_ids"][0] for x in estimator.predict(input_fn=input_fn)]
predictions = get_predictions(estimator=dnn, input_fn=predict_test_input_fn)
predictions[:10]
!pip install seaborn
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
with tf.Session() as session:
cm = tf.confusion_matrix(test_sentiments, predictions).eval()
LABELS = ['negative', 'positive']
sns.heatmap(cm, annot=True, xticklabels=LABELS, yticklabels=LABELS, fmt='g')
xl = plt.xlabel("Predicted")
yl = plt.ylabel("Actuals")
from sklearn.metrics import classification_report
print(classification_report(y_true=test_sentiments, y_pred=predictions, target_names=LABELS))
|
New-Second-Edition/Ch10 - The Promise of Deep Learning/Ch10a - Deep Transfer Learning for NLP - Text Classification with Universal Embeddings.ipynb
|
dipanjanS/text-analytics-with-python
|
apache-2.0
|
The Laplacian Matrix is a matrix that is extremely important in graph theory and numerical analysis. It is defined as $L=D-A$, where $D$ is the degree matrix and $A$ is the adjacency matrix. For the purpose of this problem you don't need to understand the details of these matrices, although their definitions are relatively simple.
The degree matrix for $K_n$ is an $n \times n$ diagonal matrix with the value $n-1$ along the diagonal and zeros everywhere else. Write a function to compute the degree matrix for $K_n$ using NumPy.
|
def complete_deg(n):
    """Return the integer valued degree matrix D for the complete graph K_n."""
    D = np.zeros((n, n), np.dtype(int))
    for i in range(n):
        D[i, i] = n - 1  # every vertex of K_n has degree n - 1
    return D
print(complete_deg(10))
D = complete_deg(5)
assert D.shape==(5,5)
assert D.dtype==np.dtype(int)
assert np.all(D.diagonal()==4*np.ones(5))
assert np.all(D-np.diag(D.diagonal())==np.zeros((5,5),dtype=int))
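# Added sketch (not part of the original solution), assuming numpy is imported as np:
# the same degree matrix can also be built without an explicit loop.
D_alt = (5 - 1) * np.eye(5, dtype=int)
assert np.array_equal(D_alt, complete_deg(5))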
|
assignments/assignment03/NumpyEx04.ipynb
|
rvperry/phys202-2015-work
|
mit
|
The adjacency matrix for $K_n$ is an $n \times n$ matrix with zeros along the diagonal and ones everywhere else. Write a function to compute the adjacency matrix for $K_n$ using NumPy.
|
def complete_adj(n):
    """Return the integer valued adjacency matrix A for the complete graph K_n."""
    A = np.ones((n, n), np.dtype(int))
    for i in range(n):
        A[i, i] = 0  # no self-loops: zero out the diagonal
    return A
A = complete_adj(5)
assert A.shape==(5,5)
assert A.dtype==np.dtype(int)
assert np.all(A+np.eye(5,dtype=int)==np.ones((5,5),dtype=int))
|
assignments/assignment03/NumpyEx04.ipynb
|
rvperry/phys202-2015-work
|
mit
|
Use NumPy to explore the eigenvalues or spectrum of the Laplacian L of $K_n$. What patterns do you notice as $n$ changes? Create a conjecture about the general Laplace spectrum of $K_n$.
|
def Laplacian(n):
    """Return the Laplacian L = D - A of the complete graph K_n."""
    L = complete_deg(n) - complete_adj(n)
    return L
def Eigen(n):
    """Return the eigenvalues of the Laplacian of K_n."""
    E = np.linalg.eigvals(Laplacian(n))
    return E
print(Eigen(24))
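# Added exploration sketch (not part of the original solution): printing the sorted
# spectrum for several n suggests the conjecture that the Laplacian of K_n has
# eigenvalue 0 once and eigenvalue n with multiplicity n - 1.
for n in (3, 5, 8):
    print(n, np.round(np.sort(Eigen(n)), 6))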
|
assignments/assignment03/NumpyEx04.ipynb
|
rvperry/phys202-2015-work
|
mit
|
The stress function for a disk of diameter $d$ with center in the origin, and radial inward and opposite forces $P$ placed at $(0, d/2)$ and $(0, -d/2)$ is given by
$$\phi = x\arctan\left[\frac{x}{d/2 - y}\right] + x\arctan\left[\frac{x}{d/2 + y}\right] + \frac{P}{\pi d}(x^2 + y^2)$$
We know that the stresses are given by
\begin{align}
\sigma_{xx} &= \frac{\partial^2 \phi}{\partial y^2}\\
\sigma_{yy} &= \frac{\partial^2 \phi}{\partial x^2}\\
\sigma_{xy} &= -\frac{\partial^2 \phi}{\partial x \partial y}
\end{align}
|
d, P = symbols("d P", positive=True)
phi = x*atan(x/(d/2 - y)) + x*atan(x/(d/2 + y)) + P/(pi*d)*(x**2 + y**2)
Sxx = phi.diff(y, 2)
Syy = phi.diff(x, 2)
Sxy = -phi.diff(x, 1, y, 1)
display(Sxx)
display(Syy)
display(Sxy)
Sxx_fun = lambdify((x, y), Sxx.subs({P:1, d:2}), numpy)
Syy_fun = lambdify((x, y), Syy.subs({P:1, d:2}), numpy)
Sxy_fun = lambdify((x, y), Sxy.subs({P:1, d:2}), numpy)
theta, r = np.mgrid[0:2*np.pi:101j, 1e-6:1:101j]
xx = r*np.cos(theta)
yy = r*np.sin(theta)
Sxx_vec = Sxx_fun(xx, yy)
Syy_vec = Syy_fun(xx, yy)
Sxy_vec = Sxy_fun(xx, yy)
plt.figure()
plt.contourf(xx, yy, Sxx_vec, cmap="RdYlGn", vmin=-12, vmax=12)
plt.colorbar()
plt.axis("square");
plt.figure()
plt.contourf(xx, yy, Syy_vec, cmap="RdYlGn", vmin=-210, vmax=210)
plt.colorbar()
plt.axis("square");
plt.figure()
plt.contourf(xx, yy, Sxy_vec, cmap="RdYlGn")
plt.colorbar()
plt.axis("square");
|
Stresses_disk_point_loads.ipynb
|
nicoguaro/notebooks_examples
|
mit
|
Strains
|
E, nu = symbols("E nu")
G = E/(2*(1 + nu))
Exx = simplify(1/E*(Sxx - nu*Syy))
Eyy = simplify(1/E*(Syy - nu*Sxx))
Exy = simplify(1/G*Sxy)
display(Exx)
display(Eyy)
display(Exy)
|
Stresses_disk_point_loads.ipynb
|
nicoguaro/notebooks_examples
|
mit
|
Displacements
|
C1, a, b = symbols("C1 a b")
ux, uy = symbols("ux uy", cls=Function)
eq1 = diff(ux(x), x) - Exx
eq1
from IPython.core.display import HTML
def css_styling():
styles = open('./styles/custom_barba.css', 'r').read()
return HTML(styles)
css_styling()
|
Stresses_disk_point_loads.ipynb
|
nicoguaro/notebooks_examples
|
mit
|
Parametric Equations
Example: Graph the curve $x = \sin{t}$, $y = \cos{t}$, $0 \le t \le \pi$.
Read Scipy Lecture Notes (http://www.scipy-lectures.org/) pg. 45 for creating arrays and pg. 83 for plotting.
|
# define array from 0 to pi with interval size 0.1
t = np.arange(0, np.pi, 0.1)
#define x and y
x = np.sin(t)
y = np.cos(t)
# plot the graph
plt.plot(x, y)
plt.show()
|
Lab1/MAT491-Lab1.ipynb
|
rizauddin/Lab-Assignment-MAT491
|
gpl-3.0
|
Q1. Graph the curve
$x= 26 \sin^3{t}$,
$y = 13\cos{t} - 5\cos{2t} - 2\cos{4t}$,
$0 \le t \le 2\pi.$
Q2. Graph the curve
$x=\mathrm{e}^t \cos{t} - \mathrm{e}^t \sin{t}$,
$y = \mathrm{e}^t \sin{t}$,
$0 \le t \le \pi.$
Hint: Use np.exp(t) for $\mathrm{e}^t$.
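One possible sketch for Q1 (Q2 follows the same pattern with np.exp(t)), assuming numpy and matplotlib are already imported as np and plt, as in the example above:
python
t = np.linspace(0, 2*np.pi, 400)                    # parameter values for 0 <= t <= 2*pi
x = 26 * np.sin(t)**3                               # x = 26 sin^3(t)
y = 13*np.cos(t) - 5*np.cos(2*t) - 2*np.cos(4*t)    # y = 13 cos(t) - 5 cos(2t) - 2 cos(4t)
plt.plot(x, y)
plt.show()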
Polar Coordinates
Example: Sketch the curve $r = 1 + \sin{\theta}$.
|
# define array from 0 to 2*pi with 100 elements
theta = np.linspace(0, 2*np.pi, 100)
# define $r = 1 + \sin{\theta}$
r = 1 + np.sin(theta)
# add subplot in polar coordinates
ax = plt.subplot(111, polar=True)
# plot the graph
ax.plot(theta, r)
plt.show()
|
Lab1/MAT491-Lab1.ipynb
|
rizauddin/Lab-Assignment-MAT491
|
gpl-3.0
|
Q3. Sketch the curve $r = 1 + \cos{\theta}$.
Q4. Sketch the curve $r = 1 - \cos{\theta}$.
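A possible sketch for Q3 (Q4 is analogous with a minus sign), assuming the same imports as the example above:
python
theta = np.linspace(0, 2*np.pi, 100)   # angle values
r = 1 + np.cos(theta)                  # cardioid r = 1 + cos(theta); use 1 - np.cos(theta) for Q4
ax = plt.subplot(111, polar=True)      # polar axes
ax.plot(theta, r)
plt.show()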
Quadratic Surfaces
Example: Sketch the graph of the surface $z = x^2$.
|
x = np.arange(-2, 2, 0.25) # points in the x axis
y = np.arange(-2, 2, 0.25) # points in the y axis
X, Y = np.meshgrid(x, y) # create the "base grid"
Z = X**2 # points in the z axis
fig = plt.figure()
ax = fig.gca(projection='3d') # 3d axes instance
surf = ax.plot_surface(X, Y, Z,
rstride=2, # row step size
cstride=2, # column step size
linewidth=1, # wireframe line width
cmap=cm.RdPu, # colour map
antialiased=True)
|
Lab1/MAT491-Lab1.ipynb
|
rizauddin/Lab-Assignment-MAT491
|
gpl-3.0
|
How much variance is there?
Plot 2 answers this question by adding error bars to the previous plot. The length of each error bar equals the standard deviation of the actual returns within the respective bucket.
Plots such as these would typically be used by a portfolio manager to assess the behavior of prospective signals and the signal levels at which an action should be taken. The simplest trading system utilizing this signal would buy the security when the predicted return is above some threshold (say, above 0.5%) and sell (or short-sell) the security when the signal is below a negative threshold (e.g. below -0.5%).
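A minimal sketch of such an error-bar plot, using made-up bucket statistics (the arrays below are illustrative placeholders, not values from the lab):
python
import numpy as np
import matplotlib.pyplot as plt
# hypothetical per-bucket statistics: predicted return vs. mean/std of actual returns
bucket_centers = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])   # predicted return (%)
mean_actual = np.array([-0.8, -0.3, 0.0, 0.4, 0.7])      # mean actual return per bucket (%)
std_actual = np.array([1.2, 1.0, 0.9, 1.0, 1.3])         # std dev of actual returns per bucket (%)
plt.errorbar(bucket_centers, mean_actual, yerr=std_actual, fmt='o-', capsize=3)
plt.xlabel('Predicted return (%)')
plt.ylabel('Actual return (%)')
plt.show()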
4. Next Steps
We recommend you to try the following steps after the lab.
Try other machine learning techniques such as random forest, ridge regression, or XGBoost, and compare their correlation with the LSTM-based predictor (a minimal sketch follows below).
Try using an autoencoder to extract fewer features than the original dataset provides, use those features as input to the deep learning model, and analyze the performance.
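A minimal sketch of the first suggestion, assuming feature/target arrays X_train, y_train, X_test, y_test already exist (these names are placeholders, not variables from the lab code):
python
from sklearn.ensemble import RandomForestRegressor
from scipy.stats import pearsonr
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X_train, y_train)                 # placeholder arrays, see note above
rf_pred = rf.predict(X_test)
corr, _ = pearsonr(rf_pred, y_test)      # compare against the LSTM predictor's correlation
print('Random forest Pearson correlation:', corr)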
5. Summary
In this lab, a step-by-step implementation of an LSTM-based deep neural network for predicting financial time series data was presented. The model's performance was evaluated with the Pearson correlation, and competitive performance was achieved. The code provided in this lab can be used in more complex trading strategies.
6. Post-Lab
Finally, don't forget to save your work from this lab before time runs out and the instance shuts down!!
You can download the data from this link.
To use the data, please set the "usePreparedData" variable to False before running the code on your environment.
Also, remove the code "model_saver.restore(sess, pre_trained_model)" to train the model on your data.
You can execute the following cell block to zip the files you've been working on, and download it from the link below.
|
! tar -cvf output3.zip --exclude="2sigma" --exclude="__*" --exclude="*.zip" *
!tar -cvf output.zip main.ipynb prepareData.py dnn.jpg data.jpg data_split.jpg rnn.jpg
! ls
|
ai_newsweek/keras_autoencoder_ai_newsweek_conf.ipynb
|
GuillaumeDec/machine-learning
|
gpl-3.0
|
TensorFlow
Let's create a DNNClassifier using TF's built-in classifier, and evaluate its accuracy.
|
import tensorflow as tf
# what should the classifier expect in terms of features
feature_columns = [tf.contrib.layers.real_valued_column("", dimension=D)]
# defining the actual classifier
dnn_spiral_classifier = tf.contrib.learn.DNNClassifier(feature_columns=feature_columns,
activation_fn = tf.nn.softmax, # softmax activation
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01), #GD with LR of 0.01
hidden_units = [10], # one hidden layer, containing 10 neurons
n_classes = K, # K target classes
model_dir="/tmp/spiral_model") # directory for saving model checkpoints
# turn data into tensors to feed into the computational graph
# honestly input_fn could also handle these as np.arrays but this is here to show you that the tf.constant operation can run on np.array input
def get_inputs():
X_tensor = tf.constant(X)
y_tensor = tf.constant(y)
return X_tensor, y_tensor
# fit the model
dnn_spiral_classifier.fit(input_fn=get_inputs, steps=200)
# interestingly, you can continue training the model by continuing to call fit
dnn_spiral_classifier.fit(input_fn=get_inputs, steps=300)
#evaluating the accuracy
accuracy_score = dnn_spiral_classifier.evaluate(input_fn=get_inputs,
steps=1)["accuracy"]
print("\n Accuracy: {0:f}\n".format(accuracy_score))
|
codelab_3_tensorflow_nn.ipynb
|
thinkingmachines/deeplearningworkshop
|
mit
|
Notice the following:
The higher level library vastly simplified the following mechanics:
tf.session management
training the model
running evaluation loops
feeding data into the model
generating predictions from the model
saving the model in a checkpoint file
For most use cases, it's likely that the many common models built into tf will be able to solve your problem. You'll have to do model tuning by figuring out the correct parameters. Building your computational graph node by node isn't likely needed unless you're doing academic research or working with very specialized datasets where default performance plateaus.
Look at the checkpoints
Poke inside the /tmp/spiral_model/ directory to see how the checkpoint data is stored. What's contained in these files?
|
%ls '/tmp/spiral_model/'
|
codelab_3_tensorflow_nn.ipynb
|
thinkingmachines/deeplearningworkshop
|
mit
|
Predicting on a new value
Let's classify a new point
|
def new_points():
    return np.array([[1.0, 1.0],
                     [-1.5, -1.0]], dtype=np.float32)  # float32: int32 would truncate -1.5 and mismatch the real-valued feature column
predictions = list(dnn_spiral_classifier.predict(input_fn=new_points))
print(
"New Samples, Class Predictions: {}\n"
.format(predictions))
|
codelab_3_tensorflow_nn.ipynb
|
thinkingmachines/deeplearningworkshop
|
mit
|
Digging into the DNNClassifier
The DNNClassifier is one of the [Estimators](https://www.tensorflow.org/api_guides/python/contrib.learn#Estimators) available in the tf.contrib.learn library. Other estimators include:
KMeansClustering
DNNRegressor
LinearClassifier
LinearRegressor
LogisticRegressor
Each one of these can perform various actions on the graph, including:
evaluate
infer
train
Each one of these can read in batched input data with types including:
pandas data
real valued columns from an input
real valued columns from an input function (what we used above)
batches
Keep an eye on the documentation and updates. TensorFlow is under constant development. Things change very quickly!
|
# watch out for this, tf.classifier.evaluate is going to be deprecated, so keep an eye out for a long-term solution to calculating accuracy
accuracy_score = dnn_spiral_classifier.evaluate(input_fn=get_inputs,
steps=1)["accuracy"]
|
codelab_3_tensorflow_nn.ipynb
|
thinkingmachines/deeplearningworkshop
|
mit
|
Exercises
Get into the [TensorFlow API docs](https://www.tensorflow.org/api_docs/python/tf). Try the following and see how each change impacts the final scores (a minimal sketch of the first two changes follows the list):
Change the activation function to a ReLU
Change the optimization function to stochastic gradient descent, then change it again to Adagrad
Add more steps to training
Add more layers
Increase the number of neurons in each hidden layer
Change the learning rate to huge and tiny values
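A minimal sketch of the first two changes (ReLU activation and an Adagrad optimizer), reusing feature_columns, get_inputs and K from the cells above; the other exercises only change the corresponding parameters:
python
relu_spiral_classifier = tf.contrib.learn.DNNClassifier(
    feature_columns=feature_columns,
    activation_fn=tf.nn.relu,                                  # exercise: ReLU instead of softmax
    optimizer=tf.train.AdagradOptimizer(learning_rate=0.01),   # exercise: Adagrad optimizer
    hidden_units=[20, 20],                                     # exercise: more layers / neurons
    n_classes=K,
    model_dir="/tmp/spiral_model_relu")
relu_spiral_classifier.fit(input_fn=get_inputs, steps=500)     # exercise: more training steps
print(relu_spiral_classifier.evaluate(input_fn=get_inputs, steps=1)["accuracy"])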
Gold Star Challenge
Reimplement the Spiral Classifier as a 2 Layer Neural Network in TensorFlow Core
|
# sample code to use for the gold star challenge from https://www.tensorflow.org/get_started/get_started
import numpy as np
import tensorflow as tf
# Model parameters
W = tf.Variable([.3], dtype=tf.float32)
b = tf.Variable([-.3], dtype=tf.float32)
# Model input and output
x = tf.placeholder(tf.float32)
linear_model = W * x + b
y = tf.placeholder(tf.float32)
# loss
loss = tf.reduce_sum(tf.square(linear_model - y)) # sum of the squares
# optimizer
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)
# training data
x_train = [1,2,3,4]
y_train = [0,-1,-2,-3]
# training loop
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init) # reset values to wrong
for i in range(1000):
sess.run(train, {x:x_train, y:y_train})
# evaluate training accuracy
curr_W, curr_b, curr_loss = sess.run([W, b, loss], {x:x_train, y:y_train})
print("W: %s b: %s loss: %s"%(curr_W, curr_b, curr_loss))
|
codelab_3_tensorflow_nn.ipynb
|
thinkingmachines/deeplearningworkshop
|
mit
|
Joining two tables using edit distance measure typically consists of three steps:
1. Loading the input tables
2. Profiling the tables
3. Performing the join
1. Loading the input tables
We begin by loading the two tables. For the purpose of this guide,
we use the sample dataset that comes with the package.
|
# construct the path of the tables to be loaded. Since we are loading a
# dataset from the package, we need to access the data from the path
# where the package is installed. If you need to load your own data, you can directly
# provide your table path to the read_csv command.
table_A_path = os.sep.join([ssj.get_install_path(), 'datasets', 'data', 'person_table_A.csv'])
table_B_path = os.sep.join([ssj.get_install_path(), 'datasets', 'data', 'person_table_B.csv'])
# Load csv files as dataframes.
A = pd.read_csv(table_A_path)
B = pd.read_csv(table_B_path)
print('Number of records in A: ' + str(len(A)))
print('Number of records in B: ' + str(len(B)))
A
B
|
notebooks/Joining two tables using edit distance measure.ipynb
|
anhaidgroup/py_stringsimjoin
|
bsd-3-clause
|
For the purpose of this guide, we will now join tables A and B on
'name' attribute using edit distance measure. Next, we need to decide on what
threshold to use for the join. For this guide, we will use a threshold of 5.
Specifically, the join will now find tuple pairs from A and B such that
the edit distance over the 'name' attributes is at most 5.
3. Performing the join
The next step is to perform the edit distance join using the following command:
|
# find all pairs from A and B such that the edit distance
# on 'name' is at most 5.
# l_out_attrs and r_out_attrs denote the attributes from the
# left table (A) and right table (B) that need to be included in the output.
output_pairs = ssj.edit_distance_join(A, B, 'A.id', 'B.id', 'A.name', 'B.name', 5,
l_out_attrs=['A.name'], r_out_attrs=['B.name'])
len(output_pairs)
# examine the output pairs
output_pairs
|
notebooks/Joining two tables using edit distance measure.ipynb
|
anhaidgroup/py_stringsimjoin
|
bsd-3-clause
|
Handling missing values
By default, pairs with missing values are not included
in the output. This is because a string with a missing value
can potentially match with all strings in the other table and
hence the number of output pairs can become huge. If you want
to include pairs with missing value in the output, you need to
set the allow_missing flag to True, as shown below:
|
output_pairs = ssj.edit_distance_join(A, B, 'A.id', 'B.id', 'A.name', 'B.name', 5, allow_missing=True,
l_out_attrs=['A.name'], r_out_attrs=['B.name'])
output_pairs
|
notebooks/Joining two tables using edit distance measure.ipynb
|
anhaidgroup/py_stringsimjoin
|
bsd-3-clause
|
Enabling parallel processing
If you have multiple cores which you want to exploit for performing the
join, you need to use the n_jobs option. If n_jobs is -1, all CPUs
are used. If 1 is given, no parallel computing code is used at all,
which is useful for debugging and is the default option. For n_jobs below
-1, (n_cpus + 1 + n_jobs) are used (where n_cpus is the total number of
CPUs in the machine). Thus for n_jobs = -2, all CPUs but one are used. If
(n_cpus + 1 + n_jobs) becomes less than 1, then no parallel computing code
will be used (i.e., equivalent to the default).
The following command exploits all the cores available to perform the join:
|
output_pairs = ssj.edit_distance_join(A, B, 'A.id', 'B.id', 'A.name', 'B.name', 5,
l_out_attrs=['A.name'], r_out_attrs=['B.name'], n_jobs=-1)
len(output_pairs)
|
notebooks/Joining two tables using edit distance measure.ipynb
|
anhaidgroup/py_stringsimjoin
|
bsd-3-clause
|
You need to set n_jobs to 1 when you are debugging or you do not want
to use any parallel computing code. If you want to execute the join as
fast as possible, you need to set n_jobs to -1 which will exploit all
the CPUs in your machine. In case there are other concurrent processes
running in your machine and you do not want to halt them, then you may
need to set n_jobs to a value below -1.
Performing join on numeric attributes
The join method expects the join attributes to be of string type.
If you need to perform the join over numeric attributes, then you need
to first convert the attributes to string type and then perform the join.
For example, if you need to join 'A.zipcode' in table A with 'B.zipcode' in
table B, you need to first convert the attributes to string type using
the following command:
|
ssj.dataframe_column_to_str(A, 'A.zipcode', inplace=True)
ssj.dataframe_column_to_str(B, 'B.zipcode', inplace=True)
|
notebooks/Joining two tables using edit distance measure.ipynb
|
anhaidgroup/py_stringsimjoin
|
bsd-3-clause
|
Note that the above command preserves the NaN values while converting the numeric column to string type. Next, you can perform the join as shown below:
|
output_pairs = ssj.edit_distance_join(A, B, 'A.id', 'B.id', 'A.zipcode', 'B.zipcode', 1,
l_out_attrs=['A.zipcode'], r_out_attrs=['B.zipcode'])
output_pairs
|
notebooks/Joining two tables using edit distance measure.ipynb
|
anhaidgroup/py_stringsimjoin
|
bsd-3-clause
|
Additional options
You can find all the options available for the edit distance
join function using the help command as shown below:
|
help(ssj.edit_distance_join)
|
notebooks/Joining two tables using edit distance measure.ipynb
|
anhaidgroup/py_stringsimjoin
|
bsd-3-clause
|
Generator network
Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
Variable Scope
Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.
We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use tf.variable_scope, you use a with statement:
python
with tf.variable_scope('scope_name', reuse=False):
# code here
Here's more from the TensorFlow documentation to get another look at using tf.variable_scope.
Leaky ReLU
TensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one. For this you can just take the outputs from a linear fully connected layer and pass them to tf.maximum. Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input (x) values is alpha*x, and the output for positive x is x:
$$
f(x) = max(\alpha * x, x)
$$
Tanh Output
The generator has been found to perform best with a $\tanh$ output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1.
Exercise: Implement the generator network in the function below. You'll need to return the tanh output. Make sure to wrap your code in a variable scope, with 'generator' as the scope name, and pass the reuse keyword argument from the function to tf.variable_scope.
|
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
''' Build the generator network.
Arguments
---------
z : Input tensor for the generator
out_dim : Shape of the generator output
n_units : Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
    out: the generator's tanh output
'''
with tf.variable_scope('generator', reuse=reuse):
# Hidden layer
h1 = tf.layers.dense(z, n_units, None)
# Leaky ReLU
h1 = tf.maximum(h1, alpha * h1)
# Logits and tanh output
logits = tf.layers.dense(h1, out_dim, None)
out = tf.tanh(logits)
return out
|
gan_mnist/Intro_to_GANs_Exercises.ipynb
|
ianhamilton117/deep-learning
|
mit
|
Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
Exercise: Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. Make sure to wrap your code in a variable scope, with 'discriminator' as the scope name, and pass the reuse keyword argument from the function arguments to tf.variable_scope.
|
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
''' Build the discriminator network.
Arguments
---------
x : Input tensor for the discriminator
n_units: Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out, logits:
'''
with tf.variable_scope('discriminator', reuse=reuse):
# Hidden layer
h1 = tf.layers.dense(x, n_units, None)
# Leaky ReLU
h1 = tf.maximum(h1, alpha * h1)
logits = tf.layers.dense(h1, 1, None)
out = tf.sigmoid(logits)
return out, logits
|
gan_mnist/Intro_to_GANs_Exercises.ipynb
|
ianhamilton117/deep-learning
|
mit
|
Build network
Now we're building the network from the functions defined above.
First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.
Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.
Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).
Exercise: Build the network from the functions you defined earlier.
|
tf.reset_default_graph()
# Create our input placeholders
input_real, input_z = model_inputs(input_size, z_size)
# Generator network here
g_model = generator(input_z, input_size, g_hidden_size, alpha=alpha)
# g_model is the generator output
# Disriminator network here
d_model_real, d_logits_real = discriminator(input_real, d_hidden_size, reuse=False, alpha=alpha)
d_model_fake, d_logits_fake = discriminator(g_model, d_hidden_size, reuse=True, alpha=alpha)
|
gan_mnist/Intro_to_GANs_Exercises.ipynb
|
ianhamilton117/deep-learning
|
mit
|
Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will be sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like
python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)
The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Finally, the generator losses are using d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants the discriminator to output ones for fake images.
Exercise: Calculate the losses for the discriminator and the generator. There are two discriminator losses, one for real images and one for fake images. For the real image loss, use the real logits and (smoothed) labels of ones. For the fake image loss, use the fake logits with labels of all zeros. The total discriminator loss is the sum of those two losses. Finally, the generator loss again uses the fake logits from the discriminator, but this time the labels are all ones because the generator wants to fool the discriminator.
|
# Calculate losses
d_labels_real = tf.ones_like(d_logits_real) * (1 - smooth)
d_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=d_labels_real))
d_labels_fake = tf.zeros_like(d_logits_fake)
d_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=d_labels_fake))
d_loss = d_loss_real + d_loss_fake
g_labels = tf.ones_like(d_logits_fake)
g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=g_labels))
|
gan_mnist/Intro_to_GANs_Exercises.ipynb
|
ianhamilton117/deep-learning
|
mit
|
Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.
For the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with discriminator.
Then, in the optimizer we pass the variable lists to the var_list keyword argument of the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.
Exercise: Below, implement the optimizers for the generator and discriminator. First you'll need to get a list of trainable variables, then split that list into two lists, one for the generator variables and another for the discriminator variables. Finally, using AdamOptimizer, create an optimizer for each network that updates the network variables separately.
|
# Optimizers
learning_rate = 0.002
# Get the trainable_variables, split into G and D parts
t_vars = tf.trainable_variables()
g_vars = [var for var in t_vars if var.name.startswith("generator")]
d_vars = [var for var in t_vars if var.name.startswith("discriminator")]
g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)
d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)
|
gan_mnist/Intro_to_GANs_Exercises.ipynb
|
ianhamilton117/deep-learning
|
mit
|
Let's see the number of rows in this dataset
|
df.shape
|
p2-introds/titanic/titanic.ipynb
|
napjon/ds-nd
|
mit
|
And the number of persons,
|
df.Name.nunique()
|
p2-introds/titanic/titanic.ipynb
|
napjon/ds-nd
|
mit
|
Okay, so we know that each person represents one observation in this dataset. Let's see the distribution of age in this dataset.
|
df.Age.describe()
df.Age.hist(bins=40)
plt.xlabel("Age")
plt.ylabel("Number of Person")
plt.title("Histogram of Passenger Age");
|
p2-introds/titanic/titanic.ipynb
|
napjon/ds-nd
|
mit
|
I choose a histogram since I am analyzing only one numerical variable. I choose 40 as the number of bins to let the histogram show a smooth distribution of the data. And since we see from the statistics that the maximum age is 80 years old, every 20 years falls exactly on a bin edge of the histogram.
I can see from this histogram that many children are below 5 years old. Some of them are babies, which we see as a peak around 1 year. This histogram would have an almost normal distribution if there weren't a peak around 1 year old. The earlier statistics show that the median is 28 years old and the mean is 29 years old. You can also tell that a distribution is roughly normal when the median and mean are similar.
Overall the plot tells us that the passengers' ages are distributed around the mid-to-late 20s. Let's see if the age distribution actually differs depending on whether or not the passengers survived.
|
p = sns.violinplot(data = df, x = 'Survived', y = 'Age')
p.set(title = 'Age Distribution by Survival',
xlabel = 'Survival',
ylabel = 'Age Distribution',
xticklabels = ['Died', 'Survived']);
|
p2-introds/titanic/titanic.ipynb
|
napjon/ds-nd
|
mit
|
Now this is interesting. We can see from the violin plot that the distributions are a little bit different. I use a violin plot because I can see the distribution of Age by Survived side by side.
The age distribution of the people who survived is bimodal. Many old people died in the tragedy, though we see that one 80-year-old man did survive. We can see that person below.
|
df[(df.Survived == 1) & (df.Age == 80)]
|
p2-introds/titanic/titanic.ipynb
|
napjon/ds-nd
|
mit
|
To support the plot, I also include relevant statistics between Age and Survived.
|
df.groupby('Survived').Age.describe().unstack(level=0)
|
p2-introds/titanic/titanic.ipynb
|
napjon/ds-nd
|
mit
|
Looking at the statistics, everything is similar. The youngest child who died was 1 year old, while the youngest baby who survived was 5 months old. Again, observing the plot, many children survived the accident, at least compared to the children who didn't.
If you remember the Titanic movie, women and children were prioritized for the lifeboats. It's interesting to know whether this is actually true.
Is the survival rate of women and children higher than that of the other passengers?
To get into this, I create a frequency table. Children are defined as passengers below 12 years old, and gender is already described by the Sex column.
|
df['WomenChildren'] = np.where((df.Age <= 12) | (df.Sex == 'female'),1,0)
|
p2-introds/titanic/titanic.ipynb
|
napjon/ds-nd
|
mit
|
I'm using the chi-square test from the SciPy library. This function takes the frequency table that I've created earlier, and outputs the chi-square statistic, the p-value, the degrees of freedom, and the expected frequency table if the two variables aren't related. Since Survived is categorical and some of the other variables are also categorical, I create a neat function that calculates the frequency table and computes the chi-square independence test for two categorical Pandas Series.
|
def compute_freq_chi2(x,y):
"""This function will compute frequency table of x an y
Pandas Series, and use the table to feed for the contigency table
Parameters:
-------
x,y : Pandas Series, must be same shape for frequency table
Return:
-------
None. But prints out frequency table, chi2 test statistic, and
p-value
"""
freqtab = pd.crosstab(x,y)
print("Frequency table")
print("============================")
print(freqtab)
print("============================")
chi2,pval,dof,expected = sp.chi2_contingency(freqtab)
print("ChiSquare test statistic: ",chi2)
print("p-value: ",pval)
return
compute_freq_chi2(df.Survived,df.WomenChildren)
|
p2-introds/titanic/titanic.ipynb
|
napjon/ds-nd
|
mit
|
From the frequency table, we can see a large difference between the women and children who survived and those who didn't. The number of women and children who survived is about 2.5 times the number who did not survive. On the contrary, for adult men the number who did not survive is about 5 times the number who did. To be fair, let's put this to a statistical test.
Since both the independent and dependent variables are categorical, I choose the chi-square independence test. For this test to be valid, let's check the conditions:
Each cell has at least 5 expected cases. Checked.
Each case only contributes to one cell in the table. Checked.
If sampled, it should be a random sample of less than 10% of the population. This dataset is already the population.
Since we have checked all the conditions, we can proceed to the test. As expected, the chi-square statistic is very high and the p-value is practically zero. Thus the data provide convincing evidence that whether a passenger is a woman or child and whether they survived are related. Just out of curiosity, what would the accuracy be if we used this as a predictive model?
|
(df['WomenChildren'] == df.Survived).mean()
|
p2-introds/titanic/titanic.ipynb
|
napjon/ds-nd
|
mit
|
There you go, I got 79.24% accuracy.
Titanic was a massive ship. Again, remembering the movie, both rich and poor people got onto the ship. I wonder how the Titanic data recorded the socio-economic status of passengers, which is represented by Pclass. We can see whether the fare varies across this variable.
|
df.groupby('Pclass').Fare.mean()
|
p2-introds/titanic/titanic.ipynb
|
napjon/ds-nd
|
mit
|
From this result, we see that there's a huge price jump between the upper class and the middle/lower classes! Although it's no surprise. I recall from the movie that an upper-class room is a family room, equipped with a lot of fancy stuff, while in lower class (DiCaprio's room) people had to share with other passengers.
Perhaps numbers alone won't satisfy you. Let's move to a visualization. And since in this analysis we want to know who survived, I also throw survival into the equation.
I will plot the visualization using a bar plot, since I want to see the difference in Fare across socio-economic status. And I want to split each status by Survived to see whether Fare depends on these two variables.
|
sns.barplot(x="Pclass",y="Fare",hue="Survived",data=df,estimator=np.mean)
plt.ylabel("")
plt.xlabel("Socio-Econmic Status")
plt.title("Average fare for different SES");
|
p2-introds/titanic/titanic.ipynb
|
napjon/ds-nd
|
mit
|
Looking at this plot, I see something expected and something unexpected. First the expected one: the average fare of the middle and lower classes is similar, but the difference is huge when compared to the upper class. This is, again, expected since we saw the numbers earlier.
The unexpected one is how survival varies within the upper class! The lower/middle classes have similar fares, but in the upper class there is a clear difference in average fare that corresponds to the difference between life and death. What's the cause of this? They all paid for the same upper class. What makes the price differ within one class? What I can think of is that they had different cabins. Is it because lifeboats were placed near particular cabins? Unfortunately it's hard to know which cabin is better than the others, except through Fare.
|
(df[(df.Pclass == 1)]
.groupby([df.Cabin.str[:1],'Survived'])
.Fare
.mean()
.unstack())
|
p2-introds/titanic/titanic.ipynb
|
napjon/ds-nd
|
mit
|
We see from the table above that cabins with prefix B are among the most expensive compared to the others; cabin C is probably the most expensive on average. But will this guarantee whether the passengers survive?
|
(df[(df.Pclass == 1)]
.groupby([df.Cabin.str[:1],'Survived'])
.PassengerId
.count()
.unstack())
|
p2-introds/titanic/titanic.ipynb
|
napjon/ds-nd
|
mit
|