9,000
<ASSISTANT_TASK:>
Python Code:
import re
phoneRegex = re.compile(r'\d\d\d-\d\d\d-\d\d\d\d')
#phoneRegex.search() # finds first match
#phoneRegex.findall() # finds all matches
import re
phoneRegex = re.compile(r'(\d\d\d)-(\d\d\d-\d\d\d\d)') # Two groups, so returns tuples
#phoneRegex.findall() # finds all matches in pairs; [('group1', 'group2'),...]
#digitRegex = re.compile(r'(0|1|2|3|...|9)') is equivalent to
#digitRegex = re.compile(r'\d')
# Example using lyrics from The Twelve Days of Christmas
lyrics = '''
12 Drummers Drumming
11 Pipers Piping
10 Lords a Leaping
9 Ladies Dancing
8 Maids a Milking
7 Swans a Swimming
6 Geese a Laying
5 Golden Rings
4 Calling Birds
3 French Hens
2 Turtle Doves
and 1 Partridge in a Pear Tree
'''
xmasRegex = re.compile(r'\d+\s\w+') # one or more digits, a whitespace character, one or more word characters
xmasRegex.findall(lyrics) # Returns each 'number gift' pair, but stops at the next space because \w+ does not match spaces
vowelRegex = re.compile(r'[aeiouAEIOU]') # RegEx for lowercase and uppercase vowels
alphabetRegex = re.compile(r'[a-zA-Z]') # RegEx for lowercase and uppercase alphabet using ranges
print(vowelRegex.findall('Robocop eats baby food.')) # Finds a list of all vowels in string
doublevowelRegex = re.compile(r'[aeiouAEIOU]{2}') # RegEx for two lowercase and uppercase vowels in a row; {2} repeats.
print(doublevowelRegex.findall('Robocop eats baby food.')) # Finds a list of all double vowels in the string
consonantsRegex = re.compile(r'[^aeiouAEIOU]') # RegEx for finding all characters that are NOT vowels
print(consonantsRegex.findall('Robocop eats baby food.')) # Output will include spaces and words.
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: findall() returns a list of strings.
Step2: To get the total string, just wrap the total regex in its own group, so you get [(totalstring, group1, group2),...].
Step3: Other regex characters are
Step4: It is possible to create your own character classes, outside of these shorthand classes, using []
Step5: A useful feature of custom character classes are negative character classes
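Step2's tip — wrapping the total regex in its own group so findall() returns the full match alongside the sub-groups — can be sketched like this (the phone numbers are purely illustrative):

```python
import re

# Wrapping the whole pattern in an outer group makes findall() return
# (totalstring, group1, group2) tuples, as described in Step2.
phoneRegex = re.compile(r'((\d\d\d)-(\d\d\d-\d\d\d\d))')
matches = phoneRegex.findall('Cell: 415-555-9999 Work: 212-555-0000')
```

Groups are numbered by the order of their opening parentheses, so the outermost group comes first in each tuple.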
9,001
<ASSISTANT_TASK:>
Python Code:
%%html
<style>
.example-container { background: #999999; padding: 2px; min-height: 100px; }
.example-container.sm { min-height: 50px; }
.example-box { background: #9999FF; width: 50px; height: 50px; text-align: center; vertical-align: middle; color: white; font-weight: bold; margin: 2px;}
.example-box.med { width: 65px; height: 65px; }
.example-box.lrg { width: 80px; height: 80px; }
</style>
from IPython.html import widgets  # deprecated import path; on current installs use: import ipywidgets as widgets
from IPython.display import display
button = widgets.Button(
description='Hello World!',
width=100, # Integers are interpreted as pixel measurements.
height='2em', # em is valid HTML unit of measurement.
color='lime', # Colors can be set by name,
background_color='#0022FF', # and also by color code.
border_color='red')
display(button)
from IPython.display import display
float_range = widgets.FloatSlider()
string = widgets.Text(value='hi')
container = widgets.Box(children=[float_range, string])
container.border_color = 'red'
container.border_style = 'dotted'
container.border_width = 3
display(container) # Displays the `container` and all of its children.
container = widgets.Box()
container.border_color = 'red'
container.border_style = 'dotted'
container.border_width = 3
display(container)
int_range = widgets.IntSlider()
container.children=[int_range]
name1 = widgets.Text(description='Location:')
zip1 = widgets.BoundedIntText(description='Zip:', min=0, max=99999)
page1 = widgets.Box(children=[name1, zip1])
name2 = widgets.Text(description='Location:')
zip2 = widgets.BoundedIntText(description='Zip:', min=0, max=99999)
page2 = widgets.Box(children=[name2, zip2])
accord = widgets.Accordion(children=[page1, page2], width=400)
display(accord)
accord.set_title(0, 'From')
accord.set_title(1, 'To')
name = widgets.Text(description='Name:', padding=4)
color = widgets.Dropdown(description='Color:', padding=4, options=['red', 'orange', 'yellow', 'green', 'blue', 'indigo', 'violet'])
page1 = widgets.Box(children=[name, color], padding=4)
age = widgets.IntSlider(description='Age:', padding=4, min=0, max=120, value=50)
gender = widgets.RadioButtons(description='Gender:', padding=4, options=['male', 'female'])
page2 = widgets.Box(children=[age, gender], padding=4)
tabs = widgets.Tab(children=[page1, page2])
display(tabs)
tabs.set_title(0, 'Name')
tabs.set_title(1, 'Details')
display(widgets.Text(description="a:"))
display(widgets.Text(description="aa:"))
display(widgets.Text(description="aaa:"))
display(widgets.Text(description="a:"))
display(widgets.Text(description="aa:"))
display(widgets.Text(description="aaa:"))
display(widgets.Text(description="aaaaaaaaaaaaaaaaaa:"))
display(widgets.Text(description="a:"))
display(widgets.Text(description="aa:"))
display(widgets.Text(description="aaa:"))
display(widgets.Text())
buttons = [widgets.Button(description=str(i)) for i in range(3)]
display(*buttons)
container = widgets.HBox(children=buttons)
display(container)
container.width = '100%'
container.pack = 'center'
w1 = widgets.Latex(value="First line")
w2 = widgets.Latex(value="Second line")
w3 = widgets.Latex(value="Third line")
display(w1, w2, w3)
w2.visible=None
w2.visible=False
w2.visible=True
form = widgets.VBox()
first = widgets.Text(description="First Name:")
last = widgets.Text(description="Last Name:")
student = widgets.Checkbox(description="Student:", value=False)
school_info = widgets.VBox(visible=False, children=[
widgets.Text(description="School:"),
widgets.IntText(description="Grade:", min=0, max=12)
])
pet = widgets.Text(description="Pet's Name:")
form.children = [first, last, student, school_info, pet]
display(form)
def on_student_toggle(name, value):
if value:
school_info.visible = True
else:
school_info.visible = False
student.on_trait_change(on_student_toggle, 'value')
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Widget Styling
Step2: Parent/child relationships
Step3: After the parent is displayed
Step4: Fancy boxes
Step5: TabWidget
Step6: Alignment
Step7: If a label is longer than the minimum width, the widget is shifted to the right
Step8: If a description is not set for the widget, the label is not displayed
Step9: Flex boxes
Step10: Using hbox
Step11: By setting the width of the container to 100% and its pack to center, you can center the buttons.
Step12: Visibility
Step13: Another example
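The visibility toggle in the last example is just an observer pattern. A minimal, ipywidgets-free sketch of the same wiring — the `Checkbox` and `Box` classes here are stand-ins, not the real widgets:

```python
class Checkbox:
    """Stand-in for widgets.Checkbox: stores a value and notifies observers on change."""
    def __init__(self, value=False):
        self._value = value
        self._observers = []

    def observe(self, handler):
        self._observers.append(handler)

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new):
        self._value = new
        for handler in self._observers:
            handler(new)


class Box:
    """Stand-in for a container whose visibility we toggle."""
    def __init__(self, visible=True):
        self.visible = visible


school_info = Box(visible=False)
student = Checkbox(value=False)
# Mirrors on_student_toggle: show the school fields only while checked.
student.observe(lambda value: setattr(school_info, 'visible', value))
student.value = True
```

In current ipywidgets releases, `on_trait_change` has been superseded by `observe`, which follows the same callback-registration idea.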
9,002
<ASSISTANT_TASK:>
Python Code:
from IPython.display import Image
Image(filename="ImagAnillosNewton.jpg")
from IPython.display import Image
Image(filename="PaperAnillosNewton.JPG")
from IPython.display import Image
Image(filename="esquemaAnillosNewton1.jpg")
from IPython.display import Image
Image(filename="esquemaAnillosNewton2.jpg")
import numpy as np
import matplotlib.pyplot as plt
import ipywidgets as widgets
from IPython.display import display,HTML
%matplotlib inline
#plt.style.use('fivethirtyeight')
#Lambda = 5.5e-7;h0 = 0;R = 10.5
n = 1
I1 = 1; I2 = 1
x,y = np.mgrid[-100:100,-100:100]*5e-5
xcut = np.arange(-100*5e-5,100*5e-5,5e-5)
def anillos(h,R,Color):
fig, ax = plt.subplots(1,2,figsize=(16,8))
if(Color=='Azul (400nm)'):
Lambda = 4e-7
elif(Color=='Verde (550 nm)'):
Lambda = 5.5e-7
elif(Color=='Rojo (650 nm)'):
Lambda = 6.5e-7
espesor = (x**2 + y**2)/(2*R) + h*1e-6
delta = (2*np.pi/Lambda)*2*n*espesor + np.pi
Itot = I1 + I2 + 2*np.sqrt(I1*I2)*np.cos(delta)
#x = np.sqrt(m*Lambda*r - 2*h*r)
ax[0].imshow(Itot,cmap='copper')
ax[1].plot(xcut*1e3,Itot[:,np.shape(Itot)[1]//2]) # integer division: array indices must be int in Python 3
ax[1].set_xlabel('Distancia al centro (mm)')
ax[1].set_ylabel('Intensidad')
display(fig)
return
lambdawidgets = widgets.Dropdown(options=['Azul (400nm)','Verde (550 nm)','Rojo (650 nm)'])
lambdawidgets.height = 30
widgets.interact(anillos,h=(0,0.5,0.01),R=(10.0,20.0,0.5),Color=lambdawidgets)#widgets.fixed(5.5e-7))
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The interference pattern that appears when light reflects between two transparent surfaces, one curved and one flat, is known as Newton's rings.
Step2: Theory
Step3: We will calculate at which distances from the center the light minima occur, that is, the positions of the dark rings.
Step4: We see in the figure that the total thickness of the air film is the previous thickness $e = x^2/(2R)$ plus the vertical displacement $h$. Therefore the phase difference between the two waves is
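The dark-ring condition hinted at by the commented line `x = np.sqrt(m*Lambda*r - 2*h*r)` in the code can be evaluated directly. A sketch using the notebook's green-light defaults (values here are illustrative, with the lens resting on the plate, h = 0):

```python
import numpy as np

# With the extra pi phase shift on reflection, destructive interference
# occurs where 2*e = m*Lambda, giving dark rings at x_m = sqrt(m*Lambda*R - 2*h*R).
Lambda = 5.5e-7   # wavelength of green light (m)
R = 10.5          # lens radius of curvature (m)
h = 0.0           # vertical offset of the lens (m)
m = np.arange(1, 6)
x_m = np.sqrt(m * Lambda * R - 2 * h * R)   # radii of the first five dark rings (m)
```

The ring radii grow like the square root of the order m, which is why the rings crowd together away from the center.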
9,003
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
df = pd.read_csv('example')
df
df.to_csv('example',index=False)
pd.read_excel('Excel_Sample.xlsx',sheet_name='Sheet1') # older pandas used 'sheetname'; current releases expect 'sheet_name'
df.to_excel('Excel_Sample.xlsx',sheet_name='Sheet1')
df = pd.read_html('http://www.fdic.gov/bank/individual/failed/banklist.html')
df[0]
from sqlalchemy import create_engine
engine = create_engine('sqlite:///:memory:')
df.to_sql('data', engine)
sql_df = pd.read_sql('data',con=engine)
sql_df
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: CSV
Step2: CSV Output
Step3: Excel
Step4: Excel Output
Step5: HTML
Step6: SQL (Optional)
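The CSV round trip from Steps 1-2 can be sketched without touching the filesystem by writing to an in-memory buffer — a sketch, not the notebook's actual files:

```python
import io
import pandas as pd

# A StringIO buffer stands in for the 'example' file on disk.
df = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})
buf = io.StringIO()
df.to_csv(buf, index=False)   # index=False keeps the 'Unnamed: 0' index column out
buf.seek(0)
df2 = pd.read_csv(buf)        # round-trips to an identical frame
```

Omitting `index=False` is what produces the spurious `Unnamed: 0` column on re-read.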
9,004
<ASSISTANT_TASK:>
Python Code:
import numpy as np
X = np.array([[1, 2, 3], [4, 5, 6]])
X
X + 2*X
np.matmul(X.transpose(), X) #X^t * X
X[1,1]
X[1, :] #1. Fila entera
X[:, 1] #2. Columna entera
X[0:2, 0:2] #3. Slice de n:m, n,n+1,...,m-1
X.shape #Dimensión de arrays
vec = np.array([1, 2, 3])
print(vec)
class Array:
"Clase mínima para Álgebra Lineal"
def __init__(self, list_of_rows):
"Constructor"
self.data = list_of_rows
self.shape = (len(list_of_rows), len(list_of_rows[0]))
A = Array([[1, 2, 3], [4, 5, 6]])
A.__dict__ # the hidden *__dict__* attribute shows the internal properties of the class
A.data # accessing the class's *data* property
A.shape # accessing the class's *shape* property
Array([[1,2,3], [4,5,6]])
print(Array([[1,2,3], [4,5,6]]))
# Prueba de clase
class JorgeClass:
def __init__(self):
pass # para no hacer nada
def say_hi(self):
print("¡Método de impresión simple!")
def __repr__(self):
return "Representación sin imprimir nada"
def __str__(self):
return "Método explícito para *print* como objeto"
x = JorgeClass()
x.say_hi()
x
print(x)
import re
class Array:
"Clase mínima para Álgebra Lineal"
data = list()
def __init__(self, list_of_rows):
"Constructor"
self.data = list_of_rows
nrow = len(list_of_rows)
# ___caso vector: redimensionar correctamente
if not isinstance(list_of_rows[0], list):
nrow = 1
self.data = [[x] for x in list_of_rows]
# ahora las columnas deben estar bien aunque sea un vector
ncol = len(self.data[0])
self.shape = (nrow, ncol)
# validar tamano correcto de filas
if any([len(r) != ncol for r in self.data]):
raise Exception("Ejercicio 1: Las filas deben ser del mismo tamano. ¡Validador funcionando!")
def say_hi(self):
print("Ejercicio 1: say_hi() method")
def __repr__(self):
retval = "["
for row in self.data:
retval += "["
for x in row:
retval += str(x) + ", "
retval += "], "
retval = retval + "]"
return retval.replace(", ]", "]")
def __str__(self):
return self.__repr__()
#return "Ejercicio 1: print"
def __getitem__(self, index):
return(self.data[index[0]][index[1]])
def __setitem__(self, index, newval):
self.data[index[0]][index[1]] = newval
return
Array([[1,2,3],[4,5]])
vec = Array([1,2,3])
vec.data
X = Array([[1,2],[3,4]])
X[1,0]
X[0,0] = 0
X[0,0]
X
np.zeros((3,6))
import re
import numpy as np
class zeros:
"Clase para llenar una matriz en Ceros"
data = list()
shape = None
def __init__(self, shape):
"Constructor"
self.data = list()
print(shape)
for i in range(0, shape[1]):
self.data.append(list())
for j in range(0, shape[0]):
self.data[i].append(list())
self.data[i][j] = 0.
#self.data[i].append(list())
# self.data[i][j] = 0
self.shape = shape
#self.data = np.zeros(shape)
def eye(self):
if self.shape[0] != self.shape[1]:
raise Exception("Ejercicio 3: las columnas y las filas deben ser de igual número n = m")
for i in range(0,self.shape[1]):
for j in range(0, self.shape[0]):
if(i == j):
self.data[i][j] = 1.0
def __repr__(self):
retval = "[\n"
for row in self.data:
retval += "["
for x in row:
retval += str(x) + ", "
retval += "], \n"
retval = retval + "]"
return retval.replace(", ]", "]")
def __str__(self):
return self.__repr__()
#return "Ejercicio 1: print"
def __getitem__(self, index):
return(self.data[index[0]][index[1]])
def __setitem__(self, index, newval):
self.data[index[0]][index[1]] = newval
return
X = zeros([3,4])
X
X.eye()
X
X = zeros((4, 4))
X.eye()
X
np.array([[1,2,3],[4,5,6],[7,8,9]]).transpose()
import re
class transpose:
"Clase para llenar una matriz en Ceros"
data = list()
shape = None
def __init__(self, list_of_rows):
"Constructor"
# obtener dimensiones
self.data = list_of_rows
nrow = len(list_of_rows)
# ___caso vector: redimensionar correctamente
if not isinstance(list_of_rows[0], list):
nrow = 1
self.data = [[x] for x in list_of_rows]
# ahora las columnas deben estar bien aunque sea un vector
ncol = len(self.data[0])
self.shape = (nrow, ncol)
# validar tamano correcto de filas
if any([len(r) != ncol for r in self.data]):
raise Exception("Las filas deben ser del mismo tamano")
def transpose(self):
transposed = list()
for i in range(0, len(self.data[0])): # columns of the original = rows of the transpose
transposed.append(list())
for j in range(0, len(self.data[0])):
transposed[i].append(list())
transposed[i][j] = self.data[j][i]
print(transposed)
def eye(self):
if self.shape[0] != self.shape[1]:
raise Exception("Ejercicio 3: las columnas y las filas deben ser de igual número n = m")
for i in range(0,self.shape[1]):
for j in range(0, self.shape[0]):
if(i == j):
self.data[i][j] = 1.0
def __repr__(self):
retval = "[\n"
for row in self.data:
retval += "["
for x in row:
retval += str(x) + ", "
retval += "], \n"
retval = retval + "]"
return retval.replace(", ]", "]")
def __str__(self):
return self.__repr__()
#return "Ejercicio 1: print"
def __getitem__(self, index):
return(self.data[index[0]][index[1]])
def __setitem__(self, index, newval):
self.data[index[0]][index[1]] = newval
return
def __add__(self, other):
"Hora de sumar"
if isinstance(other, transpose):
if self.shape != other.shape:
raise Exception("Las dimensiones son distintas!")
rows, cols = self.shape
suma = transpose([[0. for c in range(cols)] for r in range(rows)])
for r in range(rows):
for c in range(cols):
suma.data[r][c] = self.data[r][c] + other.data[r][c]
return suma
elif isinstance(other, (int, float, complex)): # in case the right-hand side is just a number
rows, cols = self.shape
newArray = Array([[0. for c in range(cols)] for r in range(rows)])
for r in range(rows):
for c in range(cols):
newArray.data[r][c] = self.data[r][c] + other
return newArray
else:
return NotImplemented # es un tipo de error particular usado en estos metodos
def __mul__(self, other):
"Método de multiplicación"
if isinstance(other, transpose):
if self.shape[1] != other.shape[0]:
raise Exception("Las dimensiones son distintas!")
rows, cols = self.shape[0], other.shape[1]
producto = transpose([[0. for c in range(cols)] for r in range(rows)])
for r in range(rows):
for c in range(cols):
for i in range(self.shape[1]):
producto.data[r][c] += self.data[r][i] * other.data[i][c]
return producto
elif isinstance(other, (int, float, complex)): # in case the right-hand side is just a number
rows, cols = self.shape
newArray = transpose([[0. for c in range(cols)] for r in range(rows)])
for r in range(rows):
for c in range(cols):
newArray.data[r][c] = self.data[r][c] * other
return newArray
else:
return NotImplemented # special value used by Python's arithmetic protocol
X = transpose([[1,2,3],[4,5,6],[7,8,9]])
X.transpose()
np.array([[1,2,3],[4,5,6],[7,8,9]]) + np.array([[10,11,12],[13,14,15],[16,17,18]])
Y = transpose([[10,11,12],[13,14,15],[16,17,18]])
X + Y
Z = X + 10
Z
Z.data
transpose([[1,1],[1,1]]) + transpose([[2,2],[2,2]])
import re
class transpose:
"Clase para llenar una matriz en Ceros"
data = list()
shape = None
def __init__(self, list_of_rows):
"Constructor"
# obtener dimensiones
self.data = list_of_rows
nrow = len(list_of_rows)
# ___caso vector: redimensionar correctamente
if not isinstance(list_of_rows[0], list):
nrow = 1
self.data = [[x] for x in list_of_rows]
# ahora las columnas deben estar bien aunque sea un vector
ncol = len(self.data[0])
self.shape = (nrow, ncol)
# validar tamano correcto de filas
if any([len(r) != ncol for r in self.data]):
raise Exception("Las filas deben ser del mismo tamano")
def transpose(self):
transposed = list()
for i in range(0, len(self.data[0])): # columns of the original = rows of the transpose
transposed.append(list())
for j in range(0, len(self.data[0])):
transposed[i].append(list())
transposed[i][j] = self.data[j][i]
print(transposed)
def eye(self):
if self.shape[0] != self.shape[1]:
raise Exception("Ejercicio 3: las columnas y las filas deben ser de igual número n = m")
for i in range(0,self.shape[1]):
for j in range(0, self.shape[0]):
if(i == j):
self.data[i][j] = 1.0
def __repr__(self):
retval = "[\n"
for row in self.data:
retval += "["
for x in row:
retval += str(x) + ", "
retval += "], \n"
retval = retval + "]"
return retval.replace(", ]", "]")
def __str__(self):
return self.__repr__()
#return "Ejercicio 1: print"
def __getitem__(self, index):
return(self.data[index[0]][index[1]])
def __setitem__(self, index, newval):
self.data[index[0]][index[1]] = newval
return
def __add__(self, other):
"Hora de sumar"
if isinstance(other, transpose):
if self.shape != other.shape:
raise Exception("Las dimensiones son distintas!")
rows, cols = self.shape
suma = transpose([[0. for c in range(cols)] for r in range(rows)])
for r in range(rows):
for c in range(cols):
suma.data[r][c] = self.data[r][c] + other.data[r][c]
return suma
elif isinstance(other, (int, float, complex)): # in case the right-hand side is just a number
rows, cols = self.shape
newArray = Array([[0. for c in range(cols)] for r in range(rows)])
for r in range(rows):
for c in range(cols):
newArray.data[r][c] = self.data[r][c] + other
return newArray
else:
return NotImplemented # es un tipo de error particular usado en estos metodos
def __rmul__(self, other):
"Multiplicación escalar"
if isinstance(other, (int, float, complex)): # in case the right-hand side is just a number
rows, cols = self.shape
newArray = transpose([[0. for c in range(cols)] for r in range(rows)])
for r in range(rows):
for c in range(cols):
newArray.data[r][c] = self.data[r][c] * other
return newArray
else:
return NotImplemented # es un tipo de error particular usado en estos metodos
def __mul__(self, other):
"Multiplicación vectorial"
if isinstance(other, transpose):
if self.shape[1] != other.shape[0]:
raise Exception("Las dimensiones son distintas!")
rows, cols = self.shape[0], other.shape[1]
retval = transpose([[0. for c in range(cols)] for r in range(rows)])
for r in range(rows):
for c in range(cols):
for i in range(self.shape[1]):
retval.data[r][c] += self.data[r][i] * other.data[i][c]
return retval
else:
return NotImplemented # special value used by Python's arithmetic protocol
transpose([[1,1,1],[1,1,1]]) * transpose([[2,2],[2,2],[2,2]])
np.matmul(np.array([[1,1,1],[1,1,1]]), np.array([[2,2,2],[2,2,2],[2,2,2]]))
class Vector(transpose):
"clase de Vectores Array"
def __init__(self, list_of_numbers):
self.vdata = list_of_numbers
list_of_rows = [[x] for x in list_of_numbers]
return transpose.__init__(self, list_of_rows)
def __repr__(self):
return "Vector(" + str(self.vdata) + ")"
def __str__(self):
return str(self.vdata)
def __add__(self, other):
if type(other) is Vector:
new_arr = transpose.__add__(self, other)
else:
"Suma escalar"
rows, cols = self.shape
new_arr = transpose([0 for r in range(rows)])
for r in range(rows):
new_arr.data[r][0] = self.data[r][0] + other
print(new_arr.__dict__)
return Vector([x[0] for x in new_arr.data])
Vector([1,2,3]).__dict__
Vector([1,2,3])
Vector([1,2,3]) + Vector([4,5,6])
Vector([1,2,3]) + 10
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Operations with arrays
Step2: Matrix multiplication with numpy.matmul()
Step3: Getting specific data
Step4: Selection and multi-selection of elements...
Step5: Vectors
Step6: Creating classes in Python
Step7: ... implementing methods that make this class easier to use for our Linear Algebra purposes
Step8: <span style="color
Step9: Validator test
Step10: Test with vectors
Step11: Index & Item Assignment
Step12: <span style="color
Step13: Initializing a zero matrix with numpy
Step14: <span style="color
Step15: Transposition
Step16: <span style="color
Step17: Addition
Step18: <span style = "color
Step19: <span style = "color
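The matrix product that the `__mul__` exercise above aims at reduces to the classic triple loop. A sketch on plain lists of lists, independent of the exercise class (the `matmul` name here is just for illustration):

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows, C[r][c] = sum_i A[r][i] * B[i][c]."""
    rows, inner, cols = len(A), len(B), len(B[0])
    assert len(A[0]) == inner, "inner dimensions must agree"
    C = [[0 for _ in range(cols)] for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            for i in range(inner):
                C[r][c] += A[r][i] * B[i][c]
    return C

result = matmul([[1, 1, 1], [1, 1, 1]], [[2, 2], [2, 2], [2, 2]])
```

Each entry of the (2, 2) result sums three products of ones and twos, matching what `np.matmul` returns for the same inputs.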
9,005
<ASSISTANT_TASK:>
Python Code:
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import numpy as np
import tensorflow as tf
from six.moves import cPickle as pickle
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
save = pickle.load(f)
train_dataset = save['train_dataset']
train_labels = save['train_labels']
valid_dataset = save['valid_dataset']
valid_labels = save['valid_labels']
test_dataset = save['test_dataset']
test_labels = save['test_labels']
del save # hint to help gc free up memory
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
image_size = 28
num_labels = 10
def reformat(dataset, labels):
dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)
# Map 1 to [0.0, 1.0, 0.0 ...], 2 to [0.0, 0.0, 1.0 ...]
labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
return dataset, labels
train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
def accuracy(predictions, labels):
return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
/ predictions.shape[0])
batch_size = 128
hidden_nodes = 1024
learning_rate = 0.5
beta = 0.005
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
weights_1 = tf.Variable(
tf.truncated_normal([image_size * image_size, hidden_nodes]))
biases_1 = tf.Variable(tf.zeros([hidden_nodes]))
weights_2 = tf.Variable(
tf.truncated_normal([hidden_nodes, num_labels]))
biases_2 = tf.Variable(tf.zeros([num_labels]))
# Training computation.
def forward_prop(input):
h1 = tf.nn.relu(tf.matmul(input, weights_1) + biases_1)
return tf.matmul(h1, weights_2) + biases_2
logits = forward_prop(tf_train_dataset)
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits))
# Add the regularization term to the loss.
loss += beta * (tf.nn.l2_loss(weights_1) + tf.nn.l2_loss(weights_2))
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(forward_prop(tf_valid_dataset))
test_prediction = tf.nn.softmax(forward_prop(tf_test_dataset))
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.global_variables_initializer().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run([optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
train_dataset_restricted = train_dataset[:130, :]
train_labels_restricted = train_labels[:130, :]
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.global_variables_initializer().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels_restricted.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset_restricted[offset:(offset + batch_size), :]
batch_labels = train_labels_restricted[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run([optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
batch_size = 128
hidden_nodes_1 = 1024
hidden_nodes_2 = 1024
learning_rate = 0.0001
beta = 0.005
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Placeholder to control dropout probability.
keep_prob = tf.placeholder(tf.float32)
# Variables.
weights_1 = tf.Variable(tf.random_normal([image_size * image_size, hidden_nodes_1]))
biases_1 = tf.Variable(tf.zeros([hidden_nodes_1]))
weights_2 = tf.Variable(tf.random_normal([hidden_nodes_1, hidden_nodes_2]))
biases_2 = tf.Variable(tf.zeros([hidden_nodes_2]))
weights_out = tf.Variable(tf.random_normal([hidden_nodes_2, num_labels]))
biases_out = tf.Variable(tf.zeros([num_labels]))
# Training computation.
def forward_prop(input):
h1 = tf.nn.dropout(tf.nn.relu(tf.matmul(input, weights_1) + biases_1), keep_prob)
h2 = tf.nn.dropout(tf.nn.relu(tf.matmul( h1, weights_2) + biases_2), keep_prob)
return tf.matmul(h2, weights_out) + biases_out
logits = forward_prop(tf_train_dataset)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits))
# Add the regularization term to the loss.
loss += beta * (tf.nn.l2_loss(weights_1) + tf.nn.l2_loss(weights_2) + tf.nn.l2_loss(weights_out))
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
# optimizer = tf.train.AdamOptimizer(0.001).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(forward_prop(tf_valid_dataset))
test_prediction = tf.nn.softmax(forward_prop(tf_test_dataset))
num_steps = 5001
with tf.Session(graph=graph) as session:
tf.global_variables_initializer().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels, keep_prob: 1.0}
feed_dict_w_drop = {tf_train_dataset : batch_data, tf_train_labels : batch_labels, keep_prob: 0.5}
_, l, predictions = session.run([optimizer, loss, train_prediction], feed_dict=feed_dict_w_drop)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(valid_prediction.eval(feed_dict=feed_dict), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(feed_dict=feed_dict), test_labels))
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First reload the data we generated in 1_notmnist.ipynb.
Step2: Reformat into a shape that's more adapted to the models we're going to train
Step3: Problem 1
Step4: Problem 2
Step5: Problem 3
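The regularized loss used in the graphs above — cross-entropy plus `beta` times an L2 penalty, where `tf.nn.l2_loss(w)` is `sum(w**2)/2` — can be sketched with numpy alone. All arrays and values below are toy examples, not the notebook's data:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def regularized_loss(logits, labels, weights, beta):
    """Mean cross-entropy plus beta * sum_i l2_loss(W_i), mirroring the TF graph."""
    probs = softmax(logits)
    ce = -np.mean(np.sum(labels * np.log(probs), axis=1))
    l2 = sum((w ** 2).sum() / 2.0 for w in weights)   # tf.nn.l2_loss convention
    return ce + beta * l2

logits = np.array([[2.0, 0.5], [0.1, 1.5]])
labels = np.array([[1.0, 0.0], [0.0, 1.0]])   # one-hot targets
w = [np.ones((2, 2))]
loss = regularized_loss(logits, labels, w, beta=0.005)
```

Setting `beta=0` recovers the plain cross-entropy; any positive `beta` adds a penalty that grows with the squared weights, which is what discourages overfitting in Problems 1-3.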
9,006
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
# if using a jupyter notebook: include %matplotlib inline. If constructing a .py-file: comment out
%matplotlib inline
# if high-resolution images are desired: include %config InlineBackend.figure_format = 'svg'
%config InlineBackend.figure_format = 'svg'
import SchemDraw as schem
import SchemDraw.elements as e
d = schem.Drawing(unit=2.5)
R7 = d.add(e.RES, d='right', botlabel='$R_7$')
R6 = d.add(e.RES, d='right', botlabel='$R_6$')
d.add(e.LINE, d='right', l=2)
d.add(e.LINE, d='right', l=2)
R5 = d.add(e.RES, d='up' , botlabel='$R_5$')
R4 = d.add(e.RES, d='up', botlabel='$R_4$')
d.add(e.LINE, d='left', l=2)
d.push()
R3 = d.add(e.RES, d='down', toy=R6.end, botlabel='$R_3$')
d.pop()
d.add(e.LINE, d='left', l=2)
d.push()
R2 = d.add(e.RES, d='down', toy=R6.end, botlabel='$R_2$')
d.pop()
R1 = d.add(e.RES, d='left', tox=R7.start, label='$R_1$')
Vt = d.add(e.BATTERY, d='up', xy=R7.start, toy=R1.end, label='$V_t$', lblofst=0.3)
d.labelI(Vt, arrowlen=1.5, arrowofst=0.5)
d.draw()
d.save('7_resistors_3_loops.png')
#d.save('7_resistors_3_loops.pdf')
Vt = 5.2
R1 = 0.0132
R2 = 0.021
R3 = 0.00360
R4 = 0.0152
R5 = 0.0119
R6 = 0.0022
R7 = 0.00740
R45 = R4 + R5
R67 = R6 + R7
print(f'R45 = {round(R45,7)} Ohm, R67 = {round(R67,5)} Ohm')
d = schem.Drawing(unit=2.5)
R67 = d.add(e.RES, d='right', botlabel='$R_{67}$')
d.add(e.LINE, d='right', l=2)
d.add(e.LINE, d='right', l=2)
R45 = d.add(e.RES, d='up', botlabel='$R_{45}$')
d.add(e.LINE, d='left', l=2)
d.push()
R3 = d.add(e.RES, d='down', toy=R67.end, botlabel='$R_3$')
d.pop()
d.add(e.LINE, d='left', l=2)
d.push()
R2 = d.add(e.RES, d='down', toy=R67.end, botlabel='$R_2$')
d.pop()
R1 = d.add(e.RES, d='left', tox=R67.start, label='$R_1$')
Vt = d.add(e.BATTERY, d='up', xy=R67.start, toy=R1.end, label='$V_t$', lblofst=0.3)
d.labelI(Vt, arrowlen=1.5, arrowofst=0.5)
d.draw()
d.save('5_resistors_3_loops.png')
#d.save('5_resistors_3_loops.pdf')
Vt = 5.2
R1 = 0.0132
R2 = 0.021
R3 = 0.00360
R4 = 0.0152
R5 = 0.0119
R6 = 0.0022
R7 = 0.00740
R45 = R4 + R5
R67 = R6 + R7
R2345 = ((1/R2)+(1/R3)+(1/R45))**(-1)
print(f'R2345 = {round(R2345,7)} Ohm')
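The reciprocal-sum expression above is the general rule for combining parallel resistors. As a quick standalone sketch (a hypothetical helper, not part of the original notebook):

```python
def parallel(*resistances):
    # Equivalent resistance of parallel resistors: 1/Req = sum(1/Ri).
    return 1 / sum(1 / r for r in resistances)

# Two equal resistors in parallel give half the resistance.
print(parallel(2.0, 2.0))  # 1.0
```

Calling it with R2, R3 and R45 reproduces the R2345 value computed above.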
d = schem.Drawing(unit=2.5)
R67 = d.add(e.RES, d='right', botlabel='$R_{67}$')
R2345 = d.add(e.RES, d='up', botlabel='$R_{2345}$')
R1 = d.add(e.RES, d='left', tox=R67.start, label='$R_1$')
Vt = d.add(e.BATTERY, d='up', xy=R67.start, toy=R1.end, label='$V_t$', lblofst=0.3)
d.labelI(Vt, arrowlen=1.5, arrowofst=0.5)
d.draw()
d.save('3_resistors_1_loop.png')
#d.save('3_resistors_1_loop.pdf')
Vt = 5.2
R1 = 0.0132
R2 = 0.021
R3 = 0.00360
R4 = 0.0152
R5 = 0.0119
R6 = 0.0022
R7 = 0.00740
R45 = R4 + R5
R67 = R6 + R7
R2345 = ((1/R2)+(1/R3)+(1/R45))**(-1)
Rt = R1 + R2345 + R67
print(f'Rt = {round(Rt,7)} Ohm')
d = schem.Drawing(unit=2.5)
L2 = d.add(e.LINE, d='right')
Rt = d.add(e.RES, d='up' , botlabel='$R_{t}$')
L1 = d.add(e.LINE, d='left', tox=L2.start)
Vt = d.add(e.BATTERY, d='up', xy=L2.start, toy=L1.end, label='$V_t$', lblofst=0.3)
d.labelI(Vt, arrowlen=1.5, arrowofst=0.5)
d.draw()
d.save('1_resistor_no_loops.png')
#d.save('1_resistor_no_loops.pdf')
Vt = 5.2
R1 = 0.0132
R2 = 0.021
R3 = 0.00360
R4 = 0.0152
R5 = 0.0119
R6 = 0.0022
R7 = 0.00740
R45 = R4 + R5
R67 = R6 + R7
R2345 = ((1/R2)+(1/R3)+(1/R45))**(-1)
Rt = R1 + R2345 + R67
It = Vt/Rt
print(f'It = {round(It,2)} A')
I6 = It
I7 = It
V6 = I6 * R6
V7 = I7 * R7
print(f'V6 = {round(V6,5)} V, V7 = {round(V7,5)} V')
I2345 = It
V2345 = I2345 * R2345
print(f'V2345 = {round(V2345,5)} V')
V3 = V2345
I3 = V3 / R3
I6 = It
print(f'I3 = {round(I3,2)} A, I6 = {round(I6,2)} A')
I7 = It
P7 = R7 * I7**2
print(f'P7 = {round(P7,2)} W')
V45 = V2345
I45 = V45/R45
print(f'I45 = {round(I45,3)} A')
I4 = I45
P4 = R4 * I4**2
print(f'P4 = {round(P4,4)} W')
print(f'V6 = {round(V6,3)} V')
print(f'V7 = {round(V7,2)} V')
print(f'I3 = {round(I3,0)} A')
print(f'I6 = {round(I6,0)} A')
print(f'P4 = {round(P4,2)} W')
print(f'P7 = {round(P7,0)} W')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now we'll build the circuit diagram by creating a SchemDraw Drawing object and adding elements to it.
Step2: Find R<sub>t</sub>
Step3: Find R<sub>45</sub> and R<sub>67</sub>
Step4: Let's redraw our circuit diagram to show the combined resistors.
Step5: Find R<sub>2345</sub>
Step6: OK, now let's construct a new SchemDraw diagram of the simplified circuit. In this diagram, we'll combine $R_2$, $R_3$ and $R_{45}$ into one big resistor, $R_{2345}$.
Step7: Find R<sub>t</sub>
Step8: Last circuit diagram. The simplest one. This SchemDraw diagram just includes $V_t$ and $R_t$.
Step9: Find V<sub>6</sub> and V<sub>7</sub>
Step10: The total current of the circuit, $I_t$, is the same as the current running through resistor $R_6$ and resistor $R_7$.
Step11: Find I<sub>3</sub> and I<sub>6</sub>
Step12: The voltage drop across resistor $R_3$ is the same as the voltage drop across resistor $R_{2345}$.
Step13: Find P<sub>7</sub> and P<sub>4</sub>
Step14: Current $I_{45}$ is equal to current $I_4$. Voltage $V_{45} = V_{2345}$. Using Ohm's Law again
Step15: One more time using the power law
Step16: Final Answer
|
9,007
|
<ASSISTANT_TASK:>
Python Code:
!pip install nnabla-ext-cuda100
!git clone https://github.com/sony/nnabla-examples.git
%cd nnabla-examples
import nnabla as nn
import nnabla.functions as F
import nnabla.parametric_functions as PF
import nnabla.solver as S
from nnabla.logger import logger
import nnabla.utils.save as save
from nnabla.utils.data_iterator import data_iterator_simple
from utils.neu.save_nnp import save_nnp
import numpy as np
import time
import os
import struct
import zlib
from nnabla.logger import logger
from nnabla.utils.data_iterator import data_iterator
from nnabla.utils.data_source import DataSource
from nnabla.utils.data_source_loader import download
def load_mnist(train=True):
'''
Load MNIST dataset images and labels from the original page by Yann LeCun or the cache file.
Args:
train (bool): The testing dataset will be returned if False. Training data has 60000 images, while testing has 10000 images.
Returns:
numpy.ndarray: A shape of (#images, 1, 28, 28). Values in [0.0, 1.0].
numpy.ndarray: A shape of (#images, 1). Values in {0, 1, ..., 9}.
'''
if train:
image_uri = 'http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz'
label_uri = 'http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz'
else:
image_uri = 'http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz'
label_uri = 'http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz'
logger.info('Getting label data from {}.'.format(label_uri))
r = download(label_uri)
data = zlib.decompress(r.read(), zlib.MAX_WBITS | 32)
_, size = struct.unpack('>II', data[0:8])
labels = np.frombuffer(data[8:], np.uint8).reshape(-1, 1)
r.close()
logger.info('Getting label data done.')
logger.info('Getting image data from {}.'.format(image_uri))
r = download(image_uri)
data = zlib.decompress(r.read(), zlib.MAX_WBITS | 32)
_, size, height, width = struct.unpack('>IIII', data[0:16])
images = np.frombuffer(data[16:], np.uint8).reshape(
size, 1, height, width)
r.close()
logger.info('Getting image data done.')
return images, labels
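The header parsing above relies on `struct.unpack` with a big-endian format string; a minimal self-contained illustration of the same call on hand-built bytes (not real MNIST data):

```python
import struct

# An IDX1 label file starts with the magic number 2049 followed by
# the item count, both as big-endian unsigned 32-bit integers.
header = struct.pack('>II', 2049, 5)
magic, count = struct.unpack('>II', header)
print(magic, count)  # 2049 5
```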
class MnistDataSource(DataSource):
'''
Get data directly from the MNIST dataset on the Internet (yann.lecun.com).
'''
def _get_data(self, position):
image = self._images[self._indexes[position]]
label = self._labels[self._indexes[position]]
return (image, label)
def __init__(self, train=True, shuffle=False, rng=None):
super(MnistDataSource, self).__init__(shuffle=shuffle)
self._train = train
self._images, self._labels = load_mnist(train)
self._size = self._labels.size
self._variables = ('x', 'y')
if rng is None:
rng = np.random.RandomState(313)
self.rng = rng
self.reset()
def reset(self):
if self._shuffle:
self._indexes = self.rng.permutation(self._size)
else:
self._indexes = np.arange(self._size)
super(MnistDataSource, self).reset()
@property
def images(self):
'''Get copy of whole data with a shape of (N, 1, H, W).'''
return self._images.copy()
@property
def labels(self):
'''Get copy of whole label with a shape of (N, 1).'''
return self._labels.copy()
def data_iterator_mnist(batch_size,
train=True,
rng=None,
shuffle=True,
with_memory_cache=False,
with_file_cache=False):
'''
Provide DataIterator with :py:class:`MnistDataSource`
with_memory_cache and with_file_cache option's default value is all False,
because :py:class:`MnistDataSource` is able to store all data into memory.
For example,
.. code-block:: python
with data_iterator_mnist(True, batch_size) as di:
for data in di:
SOME CODE TO USE data.
'''
return data_iterator(MnistDataSource(train=train, shuffle=shuffle, rng=rng),
batch_size,
rng,
with_memory_cache,
with_file_cache)
def mlp_net(x, n_h, n_y, test=False):
'''
Args:
x(`~nnabla.Variable`): N-D array
n_h(int): number of units in an intermediate layer
n_y(int): number of classes
test: operation type train=True, test=False
Returns:
~nnabla.Variable: h
'''
h = x
with nn.parameter_scope("fc1"):
h = F.relu(PF.batch_normalization(
PF.affine(h, n_h), batch_stat=not test), inplace=True)
with nn.parameter_scope("fc2"):
h = F.relu(PF.batch_normalization(
PF.affine(h, n_h), batch_stat=not test), inplace=True)
with nn.parameter_scope("fc3"):
h = PF.affine(h, n_y)
return h
def distance(y0, y1):
'''Distance function is Kullback-Leibler Divergence for categorical distribution'''
return F.kl_multinomial(F.softmax(y0), F.softmax(y1))
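`F.kl_multinomial` computes the categorical KL divergence, KL(p||q) = sum_i p_i * log(p_i / q_i); a plain-Python sketch of the same quantity (not the NNabla implementation):

```python
import math

def kl_categorical(p, q):
    # KL(p || q) for categorical distributions given as probability
    # lists; entries with p_i == 0 contribute nothing.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# The divergence of a distribution from itself is zero.
print(kl_categorical([0.5, 0.5], [0.5, 0.5]))  # 0.0
```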
def calc_validation_error(di_v, xv, tv, err, val_iter):
'''
Calculate validation error rate
Args:
di_v: validation dataset
xv: variable for input
tv: variable for label
err: variable for error estimation
val_iter: number of iterations
Returns:
error rate
'''
ve = 0.0
for j in range(val_iter):
xv.d, tv.d = di_v.next()
xv.d = xv.d / 255
err.forward(clear_buffer=True)
ve += err.d
return ve / val_iter
# Get context.
from nnabla.ext_utils import get_extension_context
ctx = get_extension_context('cudnn')
nn.set_default_context(ctx)
# Load MNIST Dataset
images, labels = load_mnist(train=True)
rng = np.random.RandomState(706)
inds = rng.permutation(len(images))
def feed_labeled(i):
j = inds[i]
return images[j], labels[j]
def feed_unlabeled(i):
j = inds[i]
return images[j], labels[j]
shape_x = (1, 28, 28)
n_h = 1200 #number of units
n_y = 10 #number of classes
n_labeled = 100
n_train = 60000
batchsize_l = 100
batchsize_u = 250
batchsize_v = 100
di_l = data_iterator_simple(feed_labeled, n_labeled,
batchsize_l, shuffle=True, rng=rng, with_file_cache=False)
di_u = data_iterator_simple(feed_unlabeled, n_train,
batchsize_u, shuffle=True, rng=rng, with_file_cache=False)
di_v = data_iterator_mnist(batchsize_v, train=False)
# Create networks
# feed-forward-net building function
def forward(x, test=False):
return mlp_net(x, n_h, n_y, test)
# Net for learning labeled data
xl = nn.Variable((batchsize_l,) + shape_x, need_grad=False)
yl = forward(xl, test=False)
tl = nn.Variable((batchsize_l, 1), need_grad=False)
loss_l = F.mean(F.softmax_cross_entropy(yl, tl))
# Net for learning unlabeled data
xu = nn.Variable((batchsize_u,) + shape_x, need_grad=False)
yu = forward(xu, test=False)
y1 = yu.get_unlinked_variable()
y1.need_grad = False
xi_for_vat = 10.0
eps_for_vat = 1.5
noise = nn.Variable((batchsize_u,) + shape_x, need_grad=True)
r = noise / (F.sum(noise ** 2, [1, 2, 3], keepdims=True)) ** 0.5
r.persistent = True
y2 = forward(xu + xi_for_vat * r, test=False)
y3 = forward(xu + eps_for_vat * r, test=False)
loss_k = F.mean(distance(y1, y2))
loss_u = F.mean(distance(y1, y3))
# Net for evaluating validation data
xv = nn.Variable((batchsize_v,) + shape_x, need_grad=False)
hv = forward(xv, test=True)
tv = nn.Variable((batchsize_v, 1), need_grad=False)
err = F.mean(F.top_n_error(hv, tv, n=1))
# Create solver
solver = S.Adam(2e-3)
solver.set_parameters(nn.get_parameters())
# Monitor training and validation stats.
model_save_path = 'tmp.monitor.vat'
import nnabla.monitor as M
monitor = M.Monitor(model_save_path)
monitor_verr = M.MonitorSeries("Test error", monitor, interval=240)
monitor_time = M.MonitorTimeElapsed("Elapsed time", monitor, interval=240)
# Training Loop.
t0 = time.time()
max_iter = 24000
val_interval = 240
val_iter = 100
weight_decay = 0
n_iter_for_power_method = 1
iter_per_epoch = 240
learning_rate_decay = 0.9
for i in range(max_iter):
# Validation Test
if i % val_interval == 0:
valid_error = calc_validation_error(
di_v, xv, tv, err, val_iter)
monitor_verr.add(i, valid_error)
#################################
## Training by Labeled Data #####
#################################
# forward, backward and update
xl.d, tl.d = di_l.next()
xl.d = xl.d / 255
solver.zero_grad()
loss_l.forward(clear_no_need_grad=True)
loss_l.backward(clear_buffer=True)
solver.weight_decay(weight_decay)
solver.update()
#################################
## Training by Unlabeled Data ###
#################################
# Calculate y without noise, only once.
xu.d, _ = di_u.next()
xu.d = xu.d / 255
yu.forward(clear_buffer=True)
##### Calculate Adversarial Noise #####
# Do power method iteration
noise.d = np.random.normal(size=xu.shape).astype(np.float32)
for k in range(n_iter_for_power_method):
r.grad.zero()
loss_k.forward(clear_no_need_grad=True)
loss_k.backward(clear_buffer=True)
noise.data.copy_from(r.grad)
##### Calculate loss for unlabeled data #####
# forward, backward and update
solver.zero_grad()
loss_u.forward(clear_no_need_grad=True)
loss_u.backward(clear_buffer=True)
solver.weight_decay(weight_decay)
solver.update()
##### Learning rate update #####
if i % iter_per_epoch == 0:
solver.set_learning_rate(
solver.learning_rate() * learning_rate_decay)
monitor_time.add(i)
# Evaluate the final model by the error rate with validation dataset
valid_error = calc_validation_error(di_v, xv, tv, err, val_iter)
print(valid_error)
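The learning-rate schedule used in the training loop above is a plain per-epoch exponential decay; in isolation (illustrative values matching the script):

```python
# lr_k = lr_0 * decay**k after k epochs of multiplicative decay.
lr = 2e-3
learning_rate_decay = 0.9
for epoch in range(3):
    lr *= learning_rate_decay
print(lr)  # approximately 1.458e-3
```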
# If you need to save the model, please comment out the following lines:
# parameter_file = os.path.join(
# model_save_path, 'params_%06d.h5' % max_iter)
# nn.save_parameters(parameter_file)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As always, let's start by importing dependencies.
Step4: Let's also define a data iterator for MNIST. You can disregard the details for now.
Step6: We now define a multi-layer perceptron (MLP) network to be used later. Our MLP consists of 3 fully-connected layers, two of which are followed by batch normalization and a non-linear activation.
Step8: Let's also define a function to measure the distance between two distributions. In this example, we use a function called multinomial Kullback-Leibler divergence, commonly known as KL-divergence.
Step10: Before we get into the main computational graph, let's also define a function to evaluate the network. This function simply returns the error rate during validation, averaged over the number of iterations.
Step11: Now we get into the main computational graph. We start by setting context to use cuDNN, and loading data iterator for MNIST.
Step12: Let's define two functions for loading data for the labeled and unlabeled settings respectively. Although the feed_unlabeled function also returns labels, we will later see that the labels are disregarded in the graph.
Step13: We first define a simple forward function that calls the multi-layer perceptron network that we defined above.
Step14: We now define a noise variable, which is added to the input variable xu and fed to the MLP. The KL-divergence between the MLP outputs for the noisy and noise-free inputs is used to compute the loss. Of the two losses, one is used to perform the power-method iteration, and the other is the loss for the unlabeled data.
Step15: We define our solver and monitor variables. We will use Adam as our solver.
Step16: Now we get into our training loop. We will have separate training stages for labeled and unlabeled data. We first start with the labeled data, which is pretty much the same as a usual training graph.
Step17: Finally, we evaluate our model on the validation dataset. If the model was trained correctly, we should get an error rate of around 1.5%.
|
9,008
|
<ASSISTANT_TASK:>
Python Code:
from effect_demo_setup import *
from concise.models import single_layer_pos_effect as concise_model
import numpy as np
# Generate training data for the model, use a 1000bp sequence
param, X_feat, X_seq, y, id_vec = load_example_data(trim_seq_len = 1000)
# Generate the model
dc = concise_model(pooling_layer="sum",
init_motifs=["TGCGAT", "TATTTAT"],
n_splines=10,
n_covariates=0,
seq_length=X_seq.shape[1],
**param)
# Train the model
dc.fit([X_seq], y, epochs=1,
validation_data=([X_seq], y))
# In order to select the right output of a potential multitask model we have to generate a list of output labels, which will be used alongside the model itself.
model_output_annotation = np.array(["output_1"])
import h5py
dataset_path = "%s/data/sample_hqtl_res.hdf5"%concise_demo_data_path
dataset = {}
with h5py.File(dataset_path, "r") as ifh:
ref = ifh["test_in_ref"].value
alt = ifh["test_in_alt"].value
dirs = ifh["test_out"]["seq_direction"].value
# This dataset is stored with forward and reverse-complement sequences in an interlaced manner
assert(dirs[0] == b"fwd")
dataset["ref"] = ref[::2,...]
dataset["alt"] = alt[::2,...]
dataset["ref_rc"] = ref[1::2,...]
dataset["alt_rc"] = alt[1::2,...]
dataset["y"] = ifh["test_out"]["type"].value[::2]
# The sequence is centered around the mutation, with the mutation occurring at position 500 when looking at the forward sequences
dataset["mutation_position"] = np.array([500]*dataset["ref"].shape[0])
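The `[::2]` / `[1::2]` indexing above de-interlaces the alternating forward and reverse-complement rows; the same slicing pattern on a toy list:

```python
# Rows alternate forward / reverse-complement, as in the HDF5 file.
rows = ['fwd0', 'rc0', 'fwd1', 'rc1', 'fwd2', 'rc2']
fwd = rows[::2]   # even-indexed rows -> forward sequences
rc = rows[1::2]   # odd-indexed rows  -> reverse complements
print(fwd)  # ['fwd0', 'fwd1', 'fwd2']
print(rc)   # ['rc0', 'rc1', 'rc2']
```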
from concise.effects.ism import ism
from concise.effects.gradient import gradient_pred
from concise.effects.dropout import dropout_pred
ism_result = ism(model = dc,
ref = dataset["ref"],
ref_rc = dataset["ref_rc"],
alt = dataset["alt"],
alt_rc = dataset["alt_rc"],
mutation_positions = dataset["mutation_position"],
out_annotation_all_outputs = model_output_annotation, diff_type = "diff")
gradient_result = gradient_pred(model = dc,
ref = dataset["ref"],
ref_rc = dataset["ref_rc"],
alt = dataset["alt"],
alt_rc = dataset["alt_rc"],
mutation_positions = dataset["mutation_position"],
out_annotation_all_outputs = model_output_annotation)
dropout_result = dropout_pred(model = dc,
ref = dataset["ref"],
ref_rc = dataset["ref_rc"],
alt = dataset["alt"],
alt_rc = dataset["alt_rc"],
mutation_positions = dataset["mutation_position"],
out_annotation_all_outputs = model_output_annotation)
gradient_result
from concise.effects.snp_effects import effect_from_model
# Define the parameters:
params = {"methods": [gradient_pred, dropout_pred, ism],
"model": dc,
"ref": dataset["ref"],
"ref_rc": dataset["ref_rc"],
"alt": dataset["alt"],
"alt_rc": dataset["alt_rc"],
"mutation_positions": dataset["mutation_position"],
"extra_args": [None, {"dropout_iterations": 60},
{"rc_handling" : "maximum", "diff_type":"diff"}],
"out_annotation_all_outputs": model_output_annotation,
}
results = effect_from_model(**params)
print(results.keys())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As with any prediction you want to make with a model, the input sequences have to fit the model's input dimensions; in this case the reference and alternative sequences, in their forward and reverse-complement states, have to have the shape [?, 1000, 4].
Step2: All prediction functions have the same general set of required input values. Before going into more detail on the individual prediction functions, we will look into how to run them. The following input arguments are available for all functions
Step3: The output of all functions is a dictionary; please refer to the individual chapters further on for an explanation of the individual values. Every dictionary contains pandas dataframes as values. Every column of a dataframe is named according to the values given in the out_annotation_all_outputs labels and contains the respective predicted scores.
Step4: Again, the returned value is a dictionary containing the results of the individual calculations; the keys are the names of the executed functions
|
9,009
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mohc', 'hadgem3-gc31-lm', 'ocean')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables
Step9: 2. Key Properties --> Seawater Properties
Step10: 2.2. Eos Functional Temp
Step11: 2.3. Eos Functional Salt
Step12: 2.4. Eos Functional Depth
Step13: 2.5. Ocean Freezing Point
Step14: 2.6. Ocean Specific Heat
Step15: 2.7. Ocean Reference Density
Step16: 3. Key Properties --> Bathymetry
Step17: 3.2. Type
Step18: 3.3. Ocean Smoothing
Step19: 3.4. Source
Step20: 4. Key Properties --> Nonoceanic Waters
Step21: 4.2. River Mouth
Step22: 5. Key Properties --> Software Properties
Step23: 5.2. Code Version
Step24: 5.3. Code Languages
Step25: 6. Key Properties --> Resolution
Step26: 6.2. Canonical Horizontal Resolution
Step27: 6.3. Range Horizontal Resolution
Step28: 6.4. Number Of Horizontal Gridpoints
Step29: 6.5. Number Of Vertical Levels
Step30: 6.6. Is Adaptive Grid
Step31: 6.7. Thickness Level 1
Step32: 7. Key Properties --> Tuning Applied
Step33: 7.2. Global Mean Metrics Used
Step34: 7.3. Regional Metrics Used
Step35: 7.4. Trend Metrics Used
Step36: 8. Key Properties --> Conservation
Step37: 8.2. Scheme
Step38: 8.3. Consistency Properties
Step39: 8.4. Corrected Conserved Prognostic Variables
Step40: 8.5. Was Flux Correction Used
Step41: 9. Grid
Step42: 10. Grid --> Discretisation --> Vertical
Step43: 10.2. Partial Steps
Step44: 11. Grid --> Discretisation --> Horizontal
Step45: 11.2. Staggering
Step46: 11.3. Scheme
Step47: 12. Timestepping Framework
Step48: 12.2. Diurnal Cycle
Step49: 13. Timestepping Framework --> Tracers
Step50: 13.2. Time Step
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Step52: 14.2. Scheme
Step53: 14.3. Time Step
Step54: 15. Timestepping Framework --> Barotropic
Step55: 15.2. Time Step
Step56: 16. Timestepping Framework --> Vertical Physics
Step57: 17. Advection
Step58: 18. Advection --> Momentum
Step59: 18.2. Scheme Name
Step60: 18.3. ALE
Step61: 19. Advection --> Lateral Tracers
Step62: 19.2. Flux Limiter
Step63: 19.3. Effective Order
Step64: 19.4. Name
Step65: 19.5. Passive Tracers
Step66: 19.6. Passive Tracers Advection
Step67: 20. Advection --> Vertical Tracers
Step68: 20.2. Flux Limiter
Step69: 21. Lateral Physics
Step70: 21.2. Scheme
Step71: 22. Lateral Physics --> Momentum --> Operator
Step72: 22.2. Order
Step73: 22.3. Discretisation
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Step75: 23.2. Constant Coefficient
Step76: 23.3. Variable Coefficient
Step77: 23.4. Coeff Background
Step78: 23.5. Coeff Backscatter
Step79: 24. Lateral Physics --> Tracers
Step80: 24.2. Submesoscale Mixing
Step81: 25. Lateral Physics --> Tracers --> Operator
Step82: 25.2. Order
Step83: 25.3. Discretisation
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Step85: 26.2. Constant Coefficient
Step86: 26.3. Variable Coefficient
Step87: 26.4. Coeff Background
Step88: 26.5. Coeff Backscatter
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Step90: 27.2. Constant Val
Step91: 27.3. Flux Type
Step92: 27.4. Added Diffusivity
Step93: 28. Vertical Physics
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
Step96: 30.2. Closure Order
Step97: 30.3. Constant
Step98: 30.4. Background
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
Step100: 31.2. Closure Order
Step101: 31.3. Constant
Step102: 31.4. Background
Step103: 32. Vertical Physics --> Interior Mixing --> Details
Step104: 32.2. Tide Induced Mixing
Step105: 32.3. Double Diffusion
Step106: 32.4. Shear Mixing
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
Step108: 33.2. Constant
Step109: 33.3. Profile
Step110: 33.4. Background
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
Step112: 34.2. Constant
Step113: 34.3. Profile
Step114: 34.4. Background
Step115: 35. Uplow Boundaries --> Free Surface
Step116: 35.2. Scheme
Step117: 35.3. Embeded Seaice
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Step119: 36.2. Type Of Bbl
Step120: 36.3. Lateral Mixing Coef
Step121: 36.4. Sill Overflow
Step122: 37. Boundary Forcing
Step123: 37.2. Surface Pressure
Step124: 37.3. Momentum Flux Correction
Step125: 37.4. Tracers Flux Correction
Step126: 37.5. Wave Effects
Step127: 37.6. River Runoff Budget
Step128: 37.7. Geothermal Heating
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Step132: 40.2. Ocean Colour
Step133: 40.3. Extinction Depth
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Step135: 41.2. From Sea Ice
Step136: 41.3. Forced Mode Restoring
|
9,010
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
# redefining the example DataFrame
data = {'country': ['Belgium', 'France', 'Germany', 'Netherlands', 'United Kingdom'],
'population': [11.3, 64.3, 81.3, 16.9, 64.9],
'area': [30510, 671308, 357050, 41526, 244820],
'capital': ['Brussels', 'Paris', 'Berlin', 'Amsterdam', 'London']}
countries = pd.DataFrame(data)
countries
countries['area'] # single []
countries[['area', 'population']] # double [[]]
countries[0:4]
countries['area'] > 100000
countries[countries['area'] > 100000]
countries[countries['population'] > 50]
s = countries['capital']
s.isin?
s.isin(['Berlin', 'London'])
countries[countries['capital'].isin(['Berlin', 'London'])]
string = 'Berlin'
string.startswith('B')
countries['capital'].str.startswith('B')
df = pd.read_csv("data/titanic.csv")
df.head()
males = df[df['Sex'] == 'male']
males['Age'].mean()
df[df['Sex'] == 'female']['Age'].mean()
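The two boolean-indexed averages above can be computed in one pass with `groupby` (a minimal sketch on a tiny made-up frame, since `data/titanic.csv` may not be available here):

```python
import pandas as pd

# Small stand-in for the Titanic frame used in the notebook
demo = pd.DataFrame({'Sex': ['male', 'female', 'male', 'female'],
                     'Age': [30.0, 40.0, 50.0, 20.0]})

# One grouped aggregation instead of two separate boolean selections
means = demo.groupby('Sex')['Age'].mean()
print(means)
```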
len(df[df['Age'] > 70])
(df['Age'] > 70).sum()
df[(df['Age'] > 30) & (df['Age'] <= 40)]
name = 'Braund, Mr. Owen Harris'
name.split(",")[0]
df['Surname'] = df['Name'].str.split(",").str.get(0)
df['Surname']
df[df['Surname'].str.startswith('Williams')]
df[df['Surname'].str.len() > 15]
cast = pd.read_csv('data/cast.csv')
cast.head()
titles = pd.read_csv('data/titles.csv')
titles.head()
len(titles)
titles.sort_values('year').head(2)
len(titles[titles['title'] == 'Hamlet'])
titles[titles['title'] == 'Treasure Island'].sort_values('year')
len(titles[(titles['year'] >= 1950) & (titles['year'] <= 1959)])
len(titles[titles['year'] // 10 == 195])
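The decade trick above relies on integer division: every year in the 1950s floor-divides by 10 to 195. A plain-Python sketch of the same idea:

```python
# Integer division buckets years into decades: 1953 // 10 == 195
years = [1950, 1955, 1959, 1949, 1960]
fifties = [y for y in years if y // 10 == 195]
print(fifties)  # only the 1950s years remain
```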
inception = cast[cast['title'] == 'Inception']
len(inception[inception['n'].isna()])
inception['n'].isna().sum()
len(inception[inception['n'].notna()])
titanic = cast[(cast['title'] == 'Titanic') & (cast['year'] == 1997)]
titanic = titanic[titanic['n'].notna()]
titanic.sort_values('n')
brad = cast[cast['name'] == 'Brad Pitt']
brad = brad[brad['year'] // 10 == 199]
brad = brad[brad['n'] == 2]
brad.sort_values('year')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Subsetting data
Step2: Remember that the same syntax can also be used to add a new column
Step3: Subset observations (rows)
Step4: Boolean indexing (filtering)
Step5: An overview of the possible comparison operations
Step6: This can then be used to filter the dataframe with boolean indexing
Step7: Let's say we want to select all data for which the capital starts with a 'B'. In Python, given a plain string, we could use the startswith method
Step8: In pandas, these are available on a Series through the str namespace
Step9: For an overview of all string methods, see
Step10: <div class="alert alert-success">
Step11: We will later see an easier way to calculate both averages at the same time with groupby.
Step12: <div class="alert alert-success">
Step13: <div class="alert alert-success">
Step14: <div class="alert alert-success">
Step15: <div class="alert alert-success">
Step16: <div class="alert alert-success">
Step17: [OPTIONAL] more exercises
Step18: <div class="alert alert-success">
Step19: <div class="alert alert-success">
Step20: <div class="alert alert-success">
Step21: <div class="alert alert-success">
Step22: <div class="alert alert-success">
Step23: <div class="alert alert-success">
Step24: <div class="alert alert-success">
Step25: <div class="alert alert-success">
Step26: <div class="alert alert-success">
|
9,011
|
<ASSISTANT_TASK:>
Python Code:
import cv2
import numpy as np
import sys
import pandas as pd
# typeData is "train" or "test"
# labelsInfo contains the ID of every image
# images are stored in the trainResized and testResized folders
def read_data(typeData, labelsInfo, imageSize):
labelsIndex = labelsInfo["ID"]
x = np.zeros((np.size(labelsIndex), imageSize))
for idx, idImage in enumerate(labelsIndex):
        # Build the image file name and read the image
nameFile = typeData + "Resized/" + str(idImage) + ".Bmp"
img = cv2.imread(nameFile)
        # Convert to grayscale by averaging the RGB channels
temp = np.mean(img, 2)
        # Flatten the image into a row vector
x[idx, :] = np.reshape(temp, (1, imageSize))
return x
imageSize = 400
trainlabels = pd.read_csv("trainLabels.csv")
testlabels = pd.read_csv("sampleSubmission.csv")
# Extract the features of the training set
xTrain = read_data('train', trainlabels, imageSize)
# Extract the features of the test set
xTest = read_data("test", testlabels, imageSize)
print(trainlabels.head(2))
print(testlabels.head(2))
yTrain = trainlabels["Class"]
yTrain = [ord(x) for x in yTrain]
from sklearn.ensemble import RandomForestClassifier
%time rfc = RandomForestClassifier(n_estimators = 500, max_features = 50, max_depth=None)
rfc.fit(xTrain, yTrain)
predTest = rfc.predict(xTest)
predResult = [chr(x) for x in predTest]
testlabels["Class"] = predResult
testlabels.to_csv("rf_500_50_result.csv",index = None)
from sklearn.naive_bayes import GaussianNB as GNB
model_GNB = GNB()
model_GNB.fit(xTrain, yTrain)
predTest = model_GNB.predict(xTest)
predResult = [chr(x) for x in predTest]
testlabels["Class"] = predResult
testlabels.to_csv("gnb_result.csv",index = None)
from sklearn.ensemble import GradientBoostingClassifier
%time GBDT = GradientBoostingClassifier(loss='deviance', learning_rate=0.1, n_estimators=100, subsample=1.0, \
min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_depth=3, init=None, \
random_state=None, max_features=None, verbose=0, max_leaf_nodes=None, warm_start=False, presort='auto')
%time GBDT.fit(xTrain, yTrain)
%time predTest = GBDT.predict(xTest)
predResult = [chr(x) for x in predTest]
testlabels["Class"] = predResult
testlabels.to_csv("gbdt_result.csv",index = None)
import os
from skimage.io import imread
from lasagne import layers
from lasagne.nonlinearities import softmax
from nolearn.lasagne import NeuralNet, BatchIterator
# Define functions
def read_datax(typeData, labelsInfo, imageSize, path):
x = np.zeros((labelsInfo.shape[0], imageSize))
for (index, idImage) in enumerate(labelsInfo['ID']):
# use specially created 32 x 32 images
nameFile = '{0}/{1}Resized32/{2}.Bmp'.format(path,
typeData, idImage)
img = imread(nameFile, as_grey = True)
x[index, :] = np.reshape(img, (1, imageSize))
return x
def fit_model(reshaped_train_x, y, image_width,
image_height, reshaped_test_x):
net = NeuralNet(
layers = [
('input', layers.InputLayer),
('conv1', layers.Conv2DLayer),
('pool1', layers.MaxPool2DLayer),
('dropout1', layers.DropoutLayer),
('conv2', layers.Conv2DLayer),
('pool2', layers.MaxPool2DLayer),
('dropout2', layers.DropoutLayer),
('conv3', layers.Conv2DLayer),
('hidden4', layers.DenseLayer),
('output', layers.DenseLayer),
],
input_shape = (None, 1, 32, 32),
conv1_num_filters=32, conv1_filter_size=(5, 5),
pool1_pool_size=(2, 2),
dropout1_p=0.2,
conv2_num_filters=64, conv2_filter_size=(5, 5),
pool2_pool_size=(2, 2),
dropout2_p=0.2,
conv3_num_filters = 128, conv3_filter_size = (5, 5),
hidden4_num_units=500,
output_num_units = 62, output_nonlinearity = softmax,
update_learning_rate = 0.01,
update_momentum = 0.9,
batch_iterator_train = BatchIterator(batch_size = 100),
batch_iterator_test = BatchIterator(batch_size = 100),
use_label_encoder = True,
regression = False,
max_epochs = 100,
verbose = 1,
)
net.fit(reshaped_train_x, y)
prediction = net.predict(reshaped_test_x)
return prediction
# Preprocess the data: the images have first been saved as 32x32 thumbnails
imageSize = 1024 # 32 x 32
image_width = image_height = int(imageSize ** 0.5)
labelsInfoTrain = pd.read_csv('trainLabels.csv')
labelsInfoTest = pd.read_csv('sampleSubmission.csv')
# Load dataset
nnxTrain = read_datax('train', labelsInfoTrain, imageSize, '.')
nnxTest = read_datax('test', labelsInfoTest, imageSize, '.')
nnyTrain = list(map(ord, labelsInfoTrain['Class']))
nnyTrain = np.array(nnyTrain)
# Normalize the data
nnxTrain /= nnxTrain.std(axis = None)
nnxTrain -= nnxTrain.mean()
nnxTest /= nnxTest.std(axis = None)
nnxTest -= nnxTest.mean()
# Reshape data
train_x_reshaped = nnxTrain.reshape(nnxTrain.shape[0], 1,
image_height, image_width).astype('float32')
test_x_reshaped = nnxTest.reshape(nnxTest.shape[0], 1,
image_height, image_width).astype('float32')
# Train and predict
predict = fit_model(train_x_reshaped, nnyTrain, image_width, image_height, test_x_reshaped)
# Save the results
yTest = list(map(chr, predict))
labelsInfoTest['Class'] = yTest
labelsInfoTest.to_csv('nnresult.csv'.format(path), index = False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We want all images stored in a single NumPy matrix, one row of pixel values per image. To obtain a uniform representation, we average the values of the three RGB channels and use the resulting grayscale image to represent each picture
Step2: Preprocess the training and test sets
Step3: Preview the data:
Step4: Model training
Step5: Prediction
Step6: Results
Step7: Naive Bayes trains very quickly; after submitting the results to Kaggle we obtained an accuracy of 0.02389, clearly lower than the random forest
Step8: GBDT achieved only 0.31937 accuracy, possibly because I did not tune the default parameters well; the main issue is that GBDT training takes too long, which also makes tuning expensive
|
9,012
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'inm', 'inm-cm5-0', 'seaice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Variables
Step7: 3. Key Properties --> Seawater Properties
Step8: 3.2. Ocean Freezing Point Value
Step9: 4. Key Properties --> Resolution
Step10: 4.2. Canonical Horizontal Resolution
Step11: 4.3. Number Of Horizontal Gridpoints
Step12: 5. Key Properties --> Tuning Applied
Step13: 5.2. Target
Step14: 5.3. Simulations
Step15: 5.4. Metrics Used
Step16: 5.5. Variables
Step17: 6. Key Properties --> Key Parameter Values
Step18: 6.2. Additional Parameters
Step19: 7. Key Properties --> Assumptions
Step20: 7.2. On Diagnostic Variables
Step21: 7.3. Missing Processes
Step22: 8. Key Properties --> Conservation
Step23: 8.2. Properties
Step24: 8.3. Budget
Step25: 8.4. Was Flux Correction Used
Step26: 8.5. Corrected Conserved Prognostic Variables
Step27: 9. Grid --> Discretisation --> Horizontal
Step28: 9.2. Grid Type
Step29: 9.3. Scheme
Step30: 9.4. Thermodynamics Time Step
Step31: 9.5. Dynamics Time Step
Step32: 9.6. Additional Details
Step33: 10. Grid --> Discretisation --> Vertical
Step34: 10.2. Number Of Layers
Step35: 10.3. Additional Details
Step36: 11. Grid --> Seaice Categories
Step37: 11.2. Number Of Categories
Step38: 11.3. Category Limits
Step39: 11.4. Ice Thickness Distribution Scheme
Step40: 11.5. Other
Step41: 12. Grid --> Snow On Seaice
Step42: 12.2. Number Of Snow Levels
Step43: 12.3. Snow Fraction
Step44: 12.4. Additional Details
Step45: 13. Dynamics
Step46: 13.2. Transport In Thickness Space
Step47: 13.3. Ice Strength Formulation
Step48: 13.4. Redistribution
Step49: 13.5. Rheology
Step50: 14. Thermodynamics --> Energy
Step51: 14.2. Thermal Conductivity
Step52: 14.3. Heat Diffusion
Step53: 14.4. Basal Heat Flux
Step54: 14.5. Fixed Salinity Value
Step55: 14.6. Heat Content Of Precipitation
Step56: 14.7. Precipitation Effects On Salinity
Step57: 15. Thermodynamics --> Mass
Step58: 15.2. Ice Vertical Growth And Melt
Step59: 15.3. Ice Lateral Melting
Step60: 15.4. Ice Surface Sublimation
Step61: 15.5. Frazil Ice
Step62: 16. Thermodynamics --> Salt
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Step65: 17.2. Constant Salinity Value
Step66: 17.3. Additional Details
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Step68: 18.2. Constant Salinity Value
Step69: 18.3. Additional Details
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Step72: 20.2. Additional Details
Step73: 21. Thermodynamics --> Melt Ponds
Step74: 21.2. Formulation
Step75: 21.3. Impacts
Step76: 22. Thermodynamics --> Snow Processes
Step77: 22.2. Snow Aging Scheme
Step78: 22.3. Has Snow Ice Formation
Step79: 22.4. Snow Ice Formation Scheme
Step80: 22.5. Redistribution
Step81: 22.6. Heat Diffusion
Step82: 23. Radiative Processes
Step83: 23.2. Ice Radiation Transmission
|
9,013
|
<ASSISTANT_TASK:>
Python Code:
# Import relevant modules
%matplotlib inline
%load_ext autoreload
%autoreload 2
import numpy as np
import corner
import matplotlib.pyplot as plt
from matplotlib import rcParams
from NPTFit import nptfit # module for performing scan
from NPTFit import create_mask as cm # module for creating the mask
from NPTFit import psf_correction as pc # module for determining the PSF correction
from NPTFit import dnds_analysis # module for analysing the output
n = nptfit.NPTF(tag='non-Poissonian_Example')
fermi_data = np.load('fermi_data/fermidata_counts.npy')
fermi_exposure = np.load('fermi_data/fermidata_exposure.npy')
n.load_data(fermi_data, fermi_exposure)
analysis_mask = cm.make_mask_total(mask_ring = True, inner = 0, outer = 5, ring_b = 90, ring_l = 0)
n.load_mask(analysis_mask)
iso = np.load('fermi_data/template_iso.npy')
n.add_template(iso, 'iso')
n.add_poiss_model('iso','$A_\mathrm{iso}$', False, fixed=True, fixed_norm=1.47)
n.add_non_poiss_model('iso',
['$A^\mathrm{ps}_\mathrm{iso}$','$n_1$','$n_2$','$S_b$'],
[[-6,1],[2.05,30],[-2,1.95]],
[True,False,False],
fixed_params = [[3,22.]])
pc_inst = pc.PSFCorrection(psf_sigma_deg=0.1812)
f_ary = pc_inst.f_ary
df_rho_div_f_ary = pc_inst.df_rho_div_f_ary
n.configure_for_scan(f_ary=f_ary, df_rho_div_f_ary=df_rho_div_f_ary, nexp=1)
n.perform_scan(nlive=500)
n.load_scan()
cs=dnds_analysis.Analysis(n)
cs.make_triangle()
cs.plot_source_count_median('iso',smin=0.01,smax=10000,nsteps=1000,spow=2,color='forestgreen')
cs.plot_source_count_band('iso',smin=0.01,smax=10000,nsteps=1000,qs=[0.16,0.5,0.84],spow=2,color='forestgreen',alpha=0.3)
plt.yscale('log')
plt.xscale('log')
plt.xlim([1e-11,1e-7])
plt.ylim([1e-16,1e-10])
plt.tick_params(axis='x', length=5,width=2,labelsize=18)
plt.tick_params(axis='y',length=5,width=2,labelsize=18)
plt.ylabel('$F^2 dN/dF$ [counts cm$^{-2}$s$^{-1}$deg$^{-2}$]', fontsize=18)
plt.xlabel('$F$ [counts cm$^{-2}$ s$^{-1}$]', fontsize=18)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Step 1
Step2: Step 2
Step3: Step 3
Step4: Step 4
Step5: Step 5
Step6: We also show a plot of the source count function, although a careful explanation of the details is deferred until Example 8.
|
9,014
|
<ASSISTANT_TASK:>
Python Code::
def skipIndices(N, T, arr):
    sum = 0
    count = {}
    for i in range(N):
        d = sum + arr[i] - T
        k = 0
        if (d > 0):
            for u in list(count.keys())[::-1]:
                j = u
                x = j * count[j]
                if (d <= x):
                    k += (d + j - 1) // j
                    break
                k += count[j]
                d -= x
        sum += arr[i]
        count[arr[i]] = count.get(arr[i], 0) + 1
        print(k, end=" ")

if __name__ == '__main__':
    N = 7
    T = 15
    arr = [1, 2, 3, 4, 5, 6, 7]
    skipIndices(N, T, arr)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
9,015
|
<ASSISTANT_TASK:>
Python Code:
list_with_for_loop = [x for x in range(10)]
print list_with_for_loop
list_with_for_loop_conditional = [x for x in range(10) if x%2 == 1]
print list_with_for_loop_conditional
list_with_nested_loops = [ [x, y] for x in range(3) for y in range(3) ]
print list_with_nested_loops
list_with_nested_loops_2 = [ x for x in range(y) for y in range(3)]
print list_with_nested_loops_2
matrix = [[11,12],[21,22]]
row = [1,2]
wrong_flatten_of_matrix = [x for x in row for row in matrix]
print "matrix is", matrix
print "flattened matrix is", wrong_flatten_of_matrix
right_flatten_of_matrix = [x for row in matrix for x in row]
print "matrix is", matrix
print "flattened matrix is", right_flatten_of_matrix
right_flatten_of_matrix_line_breaking = [
x
for row in matrix
for x in row
]
print "matrix is", matrix
print "flattened matrix is", right_flatten_of_matrix_line_breaking
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Even with conditions in the for loop
Step2: Nested loops in a list
Step3: Another example of nested loops
Step4: The article gives an example of how to flatten a matrix using this trick. Semantically, one would using
Step5: which is obviously WRONG. The correct code is given by the author as
Step6: The key is to write the nested loops in a list as the normal nested loops.
|
9,016
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import os
import gdal, osr
import matplotlib.pyplot as plt
import sys
import matplotlib.pyplot as plt
from scipy import ndimage as ndi
%matplotlib inline
#Import biomass specific libraries
from skimage.morphology import watershed
from skimage.feature import peak_local_max
from skimage.measure import regionprops
from sklearn.ensemble import RandomForestRegressor
#Define plot band array function
def plot_band_array(band_array,image_extent,title,cmap_title,colormap,colormap_limits):
plt.imshow(band_array,extent=image_extent)
cbar = plt.colorbar(); plt.set_cmap(colormap); plt.clim(colormap_limits)
cbar.set_label(cmap_title,rotation=270,labelpad=20)
plt.title(title); ax = plt.gca()
ax.ticklabel_format(useOffset=False, style='plain')
rotatexlabels = plt.setp(ax.get_xticklabels(),rotation=90)
def array2raster(newRasterfn,rasterOrigin,pixelWidth,pixelHeight,array,epsg):
cols = array.shape[1]
rows = array.shape[0]
originX = rasterOrigin[0]
originY = rasterOrigin[1]
driver = gdal.GetDriverByName('GTiff')
outRaster = driver.Create(newRasterfn, cols, rows, 1, gdal.GDT_Float32)
outRaster.SetGeoTransform((originX, pixelWidth, 0, originY, 0, pixelHeight))
outband = outRaster.GetRasterBand(1)
outband.WriteArray(array)
outRasterSRS = osr.SpatialReference()
outRasterSRS.ImportFromEPSG(epsg)
outRaster.SetProjection(outRasterSRS.ExportToWkt())
outband.FlushCache()
chm_file = 'C:/RSDI_2017/Day4/Biomass/NEON_D17_SJER_DP3_256000_4106000_CHM.tif'
#Get info from chm file for outputting results
just_chm_file = os.path.basename(chm_file)
just_chm_file_split = just_chm_file.split(sep="_")
#Open the CHM file with GDAL
chm_dataset = gdal.Open(chm_file)
#Get the raster band object
chm_raster = chm_dataset.GetRasterBand(1)
#Get the NO DATA value
noDataVal_chm = chm_raster.GetNoDataValue()
#Get required metadata from CHM file
cols_chm = chm_dataset.RasterXSize
rows_chm = chm_dataset.RasterYSize
bands_chm = chm_dataset.RasterCount
mapinfo_chm =chm_dataset.GetGeoTransform()
xMin = mapinfo_chm[0]
yMax = mapinfo_chm[3]
xMax = xMin + chm_dataset.RasterXSize/mapinfo_chm[1]
yMin = yMax + chm_dataset.RasterYSize/mapinfo_chm[5]
image_extent = (xMin,xMax,yMin,yMax)
#Plot the original CHM
plt.figure(1)
chm_array = chm_raster.ReadAsArray(0,0,cols_chm,rows_chm).astype(np.float)
#Plot the CHM figure
plot_band_array(chm_array,image_extent,'Canopy height Model','Canopy height (m)','Greens',[0, 9])
plt.savefig(just_chm_file_split[0]+'_'+just_chm_file_split[1]+'_'+just_chm_file_split[2]+'_'+just_chm_file_split[3]+'_'+just_chm_file_split[4]+'_'+just_chm_file_split[5]+'_'+'CHM.png',dpi=300,orientation='landscape',bbox_inches='tight',pad_inches=0.1)
#Smooth the CHM using a gaussian filter to remove spurious points
chm_array_smooth = ndi.gaussian_filter(chm_array,2,mode='constant',cval=0,truncate=2.0)
chm_array_smooth[chm_array==0] = 0
#Save the smoothed CHM
array2raster('C:/RSDI_2017/Day4/Biomass/chm_filter.tif',(xMin,yMax),1,-1,np.array(chm_array_smooth/10000,dtype=float),32611)
#Calculate local maximum points in the smoothed CHM
local_maxi = peak_local_max(chm_array_smooth,indices=False, footprint=np.ones((5, 5)))
#Plot the local maximums
plt.figure(2)
plot_band_array(local_maxi,image_extent,'Maximum','Maxi','Greys',[0, 1])
plt.savefig(just_chm_file_split[0]+'_'+just_chm_file_split[1]+'_'+just_chm_file_split[2]+'_'+just_chm_file_split[3]+'_'+just_chm_file_split[4]+'_'+just_chm_file_split[5]+'_'+'Maximums.png',dpi=300,orientation='landscape',bbox_inches='tight',pad_inches=0.1)
#Identify all the maximum points
markers = ndi.label(local_maxi)[0]
#Create a CHM mask so the segmentation will only occur on the trees
chm_mask = chm_array_smooth.copy()
chm_mask[chm_mask != 0] = 1
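# Hedged aside (illustrative only; not from the original tutorial): the two
# lines above turn every non-zero pixel into 1 via boolean indexing, e.g.:
import numpy as np
toy = np.array([0.0, 2.5, 0.0, 7.1])
toy_mask = toy.copy()          # copy so the source array is untouched
toy_mask[toy != 0] = 1         # all non-zero entries become 1
print(toy_mask)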
#Perform watershed segmentation
labels = watershed(chm_array_smooth, markers, mask=chm_mask)
#Get the properties of each segment
tree_properties = regionprops(labels,chm_array, ['Area','BoundingBox','Centroid','Orientation','MajorAxisLength','MinorAxisLength','MaxIntensity','MinIntensity'])
#Determine how many individual trees were identified
max_labels = labels.max()
segment_labels = np.zeros(max_labels+1)
segment_id = np.zeros(max_labels+1)
for counter in range (1,max_labels+1):
segment_labels[counter] = len(labels[labels==counter])
segment_id[counter]=counter
#Remove the non-zero elements
segment_id = segment_id[np.nonzero(segment_labels)]
#Change the labels to float, plot them and save as a raster
labels = np.array((labels),dtype=float)
plt.figure(3)
array2raster('C:/RSDI_2017/Day4/Biomass/SegmentedData.tif',(xMin,yMax),1,-1,labels,32611)
#Change the zero labels to nans so they won't show up in the plot
labels[labels==0] = np.nan
#Plot the segments
plot_band_array(labels,image_extent,'Crown Segmentation','Tree Crown Number','Spectral',[0, max_labels])
plt.savefig(just_chm_file_split[0]+'_'+just_chm_file_split[1]+'_'+just_chm_file_split[2]+'_'+just_chm_file_split[3]+'_'+just_chm_file_split[4]+'_'+just_chm_file_split[5]+'_'+'Segmentation.png',dpi=300,orientation='landscape',bbox_inches='tight',pad_inches=0.1)
#Define several of the predictor variables
area=np.zeros(len(tree_properties))
diameter=np.zeros(len(tree_properties))
max_tree_height=np.zeros(len(tree_properties))
min_tree_height=np.zeros(len(tree_properties))
#Retrieve the predictor variables from the region properties
for counter in range(0,len(tree_properties)):
area[counter] = tree_properties[counter]['Area']
diameter[counter] = tree_properties[counter]['MajorAxisLength']
max_tree_height[counter] = tree_properties[counter]['MaxIntensity']
min_tree_height[counter] = tree_properties[counter]['MinIntensity']
#Define the remaining predictor variables
crown_geometric_volume_full=np.zeros(len(segment_id))
crown_geometric_volume_50th_percentile=np.zeros(len(segment_id))
crown_geometric_volume_60th_percentile=np.zeros(len(segment_id))
crown_geometric_volume_70th_percentile=np.zeros(len(segment_id))
percentile_50th=np.zeros(len(segment_id))
percentile_60th=np.zeros(len(segment_id))
percentile_70th=np.zeros(len(segment_id))
#Cycle through all of the tree segments
counter=0
for segment in segment_id:
#Pull out the tree of interest
indexes_of_tree = np.asarray(np.where(labels==segment)).T
tree_data = chm_array[indexes_of_tree[:,0],indexes_of_tree[:,1]]
#Calculate the geometric volume
crown_geometric_volume_full[counter]=np.sum([tree_data-np.min(tree_data)])
#Pull out 50th percentile stats
percentile_50th[counter]=np.percentile(tree_data,50)
tree_data_50th = chm_array[indexes_of_tree[:,0],indexes_of_tree[:,1]]
tree_data_50th[tree_data_50th>percentile_50th[counter]] = percentile_50th[counter]
crown_geometric_volume_50th_percentile[counter]=np.sum([tree_data_50th-min_tree_height[counter]])
#Pull out 60th percentile stats
percentile_60th[counter]=np.percentile(tree_data,60)
tree_data_60th = chm_array[indexes_of_tree[:,0],indexes_of_tree[:,1]]
tree_data_60th[tree_data_60th>percentile_60th[counter]] = percentile_60th[counter]
crown_geometric_volume_60th_percentile[counter]=np.sum([tree_data_60th-min_tree_height[counter]])
    #Pull out 70th percentile stats
percentile_70th[counter]=np.percentile(tree_data,70)
tree_data_70th = chm_array[indexes_of_tree[:,0],indexes_of_tree[:,1]]
tree_data_70th[tree_data_70th>percentile_70th[counter]] = percentile_70th[counter]
crown_geometric_volume_70th_percentile[counter]=np.sum([tree_data_70th-min_tree_height[counter]])
counter=counter+1
#Define the file of training data
training_data_file = 'C:/RSDI_2017/Day4/Biomass/training/SJER_Biomass_Training.csv'
#Read in the training data from a CSV file
training_data = np.genfromtxt(training_data_file,delimiter=',')
#Grab the biomass (y) from the first column
biomass = training_data[:,0]
#Grab the biomass predictors from the remaining columns
biomass_predictors = training_data[:,1:12]
#Define parameters for the Random Forest regressor
max_depth = 30
#Define regressor rules
regr_rf = RandomForestRegressor(max_depth=max_depth, random_state=2)
#Fit the biomass to regressor variables
regr_rf.fit(biomass_predictors,biomass)
#Stack the predictor variables for all the individual trees
all_training_data = np.stack([area,diameter,max_tree_height,min_tree_height,percentile_50th,percentile_60th,percentile_70th,crown_geometric_volume_full,crown_geometric_volume_50th_percentile,crown_geometric_volume_60th_percentile,crown_geometric_volume_70th_percentile],axis=-1)
#Apply the model to the tree segments to predict biomass
pred_biomass = regr_rf.predict(all_training_data)
#Set an output raster with the same size as the labels
biomass_out = labels.copy()  #copy so the label raster is not overwritten
#Set counter to zero
counter = 0
#Assign each tree by the associated biomass
for segment in segment_id:
biomass_out[biomass_out==segment] = pred_biomass[counter]
counter = counter+1
#Get biomass stats for plotting
mean_biomass = np.mean(pred_biomass)
std_biomass = np.std(pred_biomass)
min_biomass = np.min(pred_biomass)
sum_biomass = np.sum(pred_biomass)
print('Sum of biomass is ',sum_biomass,' kg')
#Plot the biomass!
plt.figure(5)
plot_band_array(biomass_out,image_extent,'Biomass (kg)','Biomass (kg)','winter',[min_biomass+std_biomass, mean_biomass+std_biomass*3])
plt.savefig(just_chm_file_split[0]+'_'+just_chm_file_split[1]+'_'+just_chm_file_split[2]+'_'+just_chm_file_split[3]+'_'+just_chm_file_split[4]+'_'+just_chm_file_split[5]+'_'+'Biomass.png',dpi=300,orientation='landscape',bbox_inches='tight',pad_inches=0.1)
array2raster('biomass.tif',(xMin,yMax),1,-1,np.array(biomass_out,dtype=float),32611)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Next we will add libraries from scikit-learn which will help with the watershed delineation, determination of predictor variables and the random forest algorithm
Step2: Define a function that will allow us to plot our spatial data
Step3: Define a function that will allow us to output geotiff files
Step4: Now we will define the file path to our CHM file
Step5: We will want to output the results with the same file information as the input, so we will gather the file name information
Step6: Now we will get the CHM data, plot it and save the figure
Step7: Now we will run a Gaussian smoothing kernel (convolution) across the data set to remove spurious high vegetation points. This will help ensure we are finding the treetops properly before running the watershed segmentation algorithm. For different forest types it may be necessary to change the input parameters. Information on the function can be found at (https
Step8: Now save a copy of filtered CHM
Step9: Now we will run an algorithm to determine local maximums within the image. Setting indices to 'False' returns a raster of the maximum points, as opposed to a list of coordinates. The footprint parameter is an area where only a single peak can be found. This should be approximately the size of the smallest tree. Information on more sophisticated methods to define the window can be found in Chen (2006).
Step10: Plot the raster of local maximums. The following figure shows the difference in finding local maximums for a filtered vs. non-filtered CHM.
Step11: Apply labels to all of the local maximum points
Step12: Next we will create a mask layer of all of the vegetation points so that the watershed segmentation will only occur on the trees and not extend into the surrounding ground points. Since 0 represents ground points in the CHM, setting the mask to 1 where the CHM is not zero will define the mask
Step13: Next we will perform the watershed segmentation, which produces a raster of labels
Step14: [Figure: local maxima for the filtered vs. non-filtered CHM (Max_filtered_non_filtered.JPG)]
Step15: It was found that occasionally the segmenting skipped an integer number. We want to be able to match our segments to the trees in later steps, so we will create an array with only the segment numbers used.
Step16: Next we will save the segments as a geotiff and plot them
Step17: Now we will define the predictor variables and begin to fill out their values
Step18: Now we will define the remaining predictor variables
Step19: We will now run through a loop of all tree segments and gather the remaining predictor variables which include height percentiles and crown geometric volume percentiles. Inside the loop, we use logical indexing to retrieve each individual tree. We then calculate our predictor variables of interest.
Step20: We now bring in the training data file, which is a simple CSV file with no header. The first column is biomass, and the remaining columns are the same predictor variables defined above. The tree diameter and max height were defined in the TOS data along with the DBH. The field-validated values are used for training, while the others were determined from the CHM and camera images by manually delineating the tree crowns and pulling out the relevant information from the CHM. Biomass was calculated from DBH according to the formulas in Jenkins et al. (2003).
Step21: We then define parameters of the Random Forest classifier and fit the predictor variables from the training data to the biomass estimates.
Step22: Now we will gather the predictor variables from all the segmented trees into a single array
Step23: We now apply the Random Forest model to the predictor variables to retrieve biomass
Step24: For outputting a raster, copy the labels raster to a biomass raster, then cycle through the segments and assign the biomass estimate to each individual tree segment.
Step25: Collect some of the biomass statistics and then plot the results and save an output geotiff
|
9,017
|
<ASSISTANT_TASK:>
Python Code:
# variable assignment
# https://www.digitalocean.com/community/tutorials/how-to-use-variables-in-python-3
# strings -- enclose in single or double quotes, just make sure they match
my_name = 'Cody'
# numbers
int_num = 6
float_num = 6.4
# the print function
print(8)
print('Hello!')
print(my_name)
print(int_num)
print(float_num)
# booleans
print(True)
print(False)
print(4 > 6)
print(6 == 6)
print('ell' in 'Hello')
# addition
add_eq = 4 + 2
# subtraction
sub_eq = 4 - 2
# multiplication
mult_eq = 4 * 2
# division
div_eq = 4 / 2
# etc.
# create a list: name, hometown, age
# an item's position in the list is the key thing
cody = ['Cody', 'Midvale, WY', 32]
# create another list of mixed data
my_list = [1, 2, 3, 'hello', True, ['a', 'b', 'c']]
# use len() to get the number of items in the list
my_list_count = len(my_list)
print('There are', my_list_count, 'items in my list.')
# use square brackets [] to access items in a list
# (counting starts at zero in Python)
# get the first item
first_item = my_list[0]
print(first_item)
# you can do negative indexing to get items from the end of your list
# get the last item
last_item = my_list[-1]
print(last_item)
# Use colons to get a range of items in a list
# get the first two items
# the last number in a list slice is the first list item that's ~not~ included in the result
my_range = my_list[0:2]
print(my_range)
# if you leave the last number off, it takes the item at the first number's index and everything afterward
# get everything from the third item onward
my_open_range = my_list[2:]
print(my_open_range)
# Use append() to add things to a list
my_list.append(5)
print(my_list)
# Use pop() to remove items from the end of a list
my_list.pop()
print(my_list)
# use join() to join items from a list into a string with a delimiter of your choosing
letter_list = ['a', 'b', 'c']
joined_list = '-'.join(letter_list)
print(joined_list)
my_dict = {'name': 'Cody', 'title': 'Training director', 'organization': 'IRE'}
# Access items in a dictionary using square brackets and the key (typically a string)
my_name = my_dict['name']
print(my_name)
# You can also use the `get()` method to retrieve values
# you can optionally provide a second argument as the default value
# if the key doesn't exist (otherwise defaults to `None`)
my_name = my_dict.get('name', 'Jefferson Humperdink')
print(my_name)
# Use the .keys() method to get the keys of a dictionary
print(my_dict.keys())
# Use the .values() method to get the values
print(my_dict.values())
# add items to a dictionary using square brackets, the name of the key (typically a string)
# and set the value like you'd set a variable, with =
my_dict['my_age'] = 32
print(my_dict)
# delete an item from a dictionary with `del`
del my_dict['my_age']
print(my_dict)
# this is a one-line comment
'''
This is a
multi-line comment
'''
4 > 6
'Hello!' == 'Hello!'
(2 + 2) != (4 * 2)
100.2 >= 100
whitespace_str = ' hello! '
print(whitespace_str.strip())
my_name = 'Cody'
my_name_upper = my_name.upper()
print(my_name_upper)
my_name_lower = my_name.lower()
print(my_name_lower)
company = 'Bausch & Lomb'
company_no_ampersand = company.replace('&', 'and')
print(company_no_ampersand)
date = '6/4/2011'
date_split = date.split('/')
print(date_split)
mangled_zip = '2301'
fixed_zip = mangled_zip.zfill(5)
print(fixed_zip)
num_zip = 2301
fixed_num_zip = str(num_zip).zfill(5)
print(fixed_num_zip)
my_string = 'supercalifragilisticexpialidocious'
chunk = my_string[9:20]
print(chunk)
str_to_test = 'hello'
print(str_to_test.startswith('hel'))
print(str_to_test.endswith('lo'))
print('el' in str_to_test)
print(str_to_test in ['hi', 'whatsup', 'salutations', 'hello'])
# date in m/d/yyyy format
in_date = '8/17/1982'
# split out individual pieces of the date
# using a shortcut method to assign variables to the resulting list
month, day, year = in_date.split('/')
# reshuffle as yyyy-mm-dd using .format()
# use a formatting option (:0>2) to left-pad month/day numbers with a zero
out_date = '{}-{:0>2}-{:0>2}'.format(year, month, day)
print(out_date)
# construct a greeting template
greeting = 'Hello, {}! My name is {}.'
your_name = 'Pat'
my_name = 'Cody'
print(greeting.format(your_name, my_name))
# two strings of numbers
num_1 = '100'
num_2 = '200'
# what happens when you add them without coercing?
concat = num_1 + num_2
print(concat)
# coerce to integer, then add them
added = int(num_1) + int(num_2)
print(added)
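Coercion raises a ValueError when the string is not numeric; a common defensive pattern (a sketch extending the lesson, with an invented helper name) is to fall back to a default with try/except:

```python
# Coercion can fail on non-numeric strings; fall back to a default value
def to_int(value, default=0):
    try:
        return int(value)
    except (TypeError, ValueError):
        return default

print(to_int('100') + to_int('200'))  # 300
print(to_int('n/a'))                  # 0
```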
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Basic math
Step2: Lists
Step3: Dictionaries
Step5: Commenting your code
Step7: Comparison operators
Step8: String functions
Step9: upper() and lower()
Step10: replace()
Step11: split()
Step12: zfill()
Step13: slicing
Step14: startswith(), endswith() and in
Step15: String formatting
Step16: Type coercion
|
9,018
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
from sklearn import preprocessing
from sklearn.model_selection import KFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

df = pd.read_csv('../data/wine.data')
X_train = df[df.columns[1:]]
y_train = df[df.columns[0]]
kf = KFold(n_splits=5, random_state=42, shuffle=True)
def test_accuracy(kf, X, y):
means = list()
means_range = range(1, 51)
for r in means_range:
knn = KNeighborsClassifier(n_neighbors=r)
score = cross_val_score(knn, X, y, cv=kf, scoring='accuracy')
score_mean = np.mean(score)
means.append(score_mean)
return pd.DataFrame(means, means_range).mean(axis=1).sort_values(ascending=False)
accuracy_df = test_accuracy(kf, X_train, y_train)
top_accuracy = accuracy_df.head(1)
print(top_accuracy.index[0])
print("{:0.2f}".format(top_accuracy.values[0]))
X_train_scaled = preprocessing.scale(X_train)
accuracy_df = test_accuracy(kf, X_train_scaled, y_train)
top_accuracy = accuracy_df.head(1)
print(top_accuracy.index[0])
print("{:0.2f}".format(top_accuracy.values[0]))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Extract the features and classes from the data. The class is recorded in the first column (three variants), and the features in columns two through last. More details about the meaning of the features can be found at https
Step2: 3. Quality must be assessed with 5-fold cross-validation. Create a split generator that shuffles the sample before forming the folds (shuffle=True). For reproducibility, create the KFold generator with a fixed random_state=42. Use the share of correct answers (accuracy) as the quality measure.
Step3: 4. Find the cross-validation classification accuracy for the k-nearest-neighbors method (sklearn.neighbors.KNeighborsClassifier) for k from 1 to 50. At which k is the quality optimal? What is it (a number between 0 and 1)? These results are the answers to questions 1 and 2.
Step4: 5. Scale the features with sklearn.preprocessing.scale. Again find the optimal k with cross-validation.
Step5: 6. Which value of k turned out to be optimal after bringing the features to the same scale? Give the answers to questions 3 and 4. Did feature scaling help?
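The shuffled K-fold splitting that `KFold(n_splits=5, shuffle=True, random_state=42)` performs can be sketched in plain Python (a simplified illustration of the idea, not sklearn's exact implementation):

```python
import random

def kfold_indices(n_samples, n_splits=5, seed=42):
    # Shuffle the sample indices once, then cut them into contiguous folds
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    fold_sizes = [n_samples // n_splits + (1 if i < n_samples % n_splits else 0)
                  for i in range(n_splits)]
    folds, start = [], 0
    for size in fold_sizes:
        test_idx = idx[start:start + size]
        train_idx = idx[:start] + idx[start + size:]
        folds.append((train_idx, test_idx))
        start += size
    return folds

folds = kfold_indices(10, n_splits=5)
for train_idx, test_idx in folds:
    print(len(train_idx), len(test_idx))  # 8 2 on every fold
```

Every sample appears in exactly one test fold, so averaging accuracy over folds uses each observation once for validation.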
|
9,019
|
<ASSISTANT_TASK:>
Python Code:
# Load Module
import numpy as np
from sklearn import datasets
from sklearn import metrics
from sklearn import model_selection
import tensorflow as tf
# Load dataset.
iris = datasets.load_iris() # load the 150 iris samples and their classes
x_train, x_test, y_train, y_test = model_selection.train_test_split(iris.data, iris.target, test_size=0.2, random_state=42)
print('train and test ready')
x_train[:10]
# each column is sepal length, sepal width, petal length, petal width
y_train[:10] # 0, 1, 2 denote the three iris species
# build a 3-layer DNN with 10, 20 and 10 hidden units
feature_columns = tf.contrib.learn.infer_real_valued_columns_from_input(x_train) # list feature column
classifier = tf.contrib.learn.DNNClassifier(feature_columns=feature_columns, hidden_units=[10, 20, 10], n_classes=3)
# Train.
classifier.fit(x_train, y_train, steps=200)
predictions = list(classifier.predict(x_test, as_iterable=True))
# Score with sklearn.
score = metrics.accuracy_score(y_test, predictions)
print('Accuracy: {0:f}'.format(score))
new_samples = np.array(
[[6.4, 3.2, 4.5, 1.5], [5.8, 3.1, 5.0, 1.7]], dtype=float)
y = list(classifier.predict(new_samples, as_iterable=True))
print('Predictions: {}'.format(str(y)))
input = ...
with tf.name_scope('conv1_1') as scope:
kernel = tf.Variable(tf.truncated_normal([3, 3, 64, 128], dtype=tf.float32,
stddev=1e-1), name='weights')
conv = tf.nn.conv2d(input, kernel, [1, 1, 1, 1], padding='SAME')
biases = tf.Variable(tf.constant(0.0, shape=[128], dtype=tf.float32),
trainable=True, name='biases')
bias = tf.nn.bias_add(conv, biases)
conv1 = tf.nn.relu(bias, name=scope)
input = ...
net = slim.conv2d(input, 128, [3, 3], scope='conv1_1')
import tensorflow as tf
a = tf.add(2, 3)
a = tf.add(3, 5)
print (a)
sess = tf.Session()
sess.run(a)
a = tf.add(3, 5)
with tf.Session() as sess:
print (sess.run(a))
x = 2
y = 3
op1 = tf.add(x, y)
op2 = tf.multiply(x, y)
useless = tf.multiply(x, op1)
op3 = tf.pow(op2, op1)
with tf.Session() as sess:
op3 = sess.run(op3)
x = 2
y = 3
op1 = tf.add(x, y)
op2 = tf.multiply(x, y)
useless = tf.multiply(x, op1)
op3 = tf.pow(op2, op1)
with tf.Session() as sess:
op3, not_useless = sess.run([op3, useless])
# Creates a graph.
with tf.device("/cpu:0"): # the compute device can be selected explicitly
a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape = [2,3], name='a')
b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape = [3,2], name='b')
c = tf.matmul(a, b)
# Creates a session with log_device_placement set to True.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True)) # log_device_placement=True logs which device runs each op; when both CPU and GPU are available, the GPU is chosen
# Runs the op.
print (sess.run(a))
print (sess.run(b))
print (sess.run(c))
a = tf.constant(2)
b = tf.constant(3)
x = tf.add(a, b)
with tf.Session() as sess:
    writer = tf.summary.FileWriter('./graphs', sess.graph) # save the graph so it can be viewed in TensorBoard
print (sess.run(x))
# close the writer when you’re done using it
writer.close()
# constant of 1d tensor (vector)
a = tf.constant([2, 2], name="vector")
# constant of 2x2 tensor (matrix)
b = tf.constant([[0, 1], [2, 3]], name="b")
with tf.Session() as sess:
print(sess.run(a))
print(sess.run(b))
with tf.Session() as sess:
print (sess.run(tf.zeros([2, 3], tf.int32))) # [[0, 0, 0], [0, 0, 0]]
import numpy as np
np.zeros((2,3), dtype=np.int32)
input_tensor = [[0, 1], [2, 3], [4, 5]]
with tf.Session() as sess:
print (sess.run(tf.zeros_like(input_tensor))) # [[0, 0], [0, 0], [0, 0]]
np.zeros_like(input_tensor)
with tf.Session() as sess:
print(sess.run(tf.ones([2, 3], tf.int32))) # [[1, 1, 1], [1, 1, 1]]
np.ones([2,3], dtype=np.int32)
input_tensor = [[0, 1], [2, 3], [4, 5]]
with tf.Session() as sess:
print(sess.run(tf.ones_like(input_tensor))) # [[1, 1], [1, 1], [1, 1]]
np.ones_like(input_tensor)
with tf.Session() as sess:
print(sess.run(tf.fill([2, 3], 8))) # [[8, 8, 8], [8, 8, 8]]
with tf.Session() as sess:
print(sess.run(tf.linspace(10.0, 13.0, 4, name="linspace"))) # [10.0 11.0 12.0 13.0]
with tf.Session() as sess:
print(sess.run(tf.range(3, 18, 3))) # [3, 6, 9, 12, 15]
for _ in range(4):# OK
a
for _ in tf.range(4): # TypeError("'Tensor' object is not iterable.")
a
with tf.Session() as sess:
print(sess.run(tf.random_normal(shape = [2,3])))
with tf.Session() as sess:
print(sess.run(tf.truncated_normal(shape = [2,3])))
with tf.Session() as sess:
print(sess.run(tf.multinomial(tf.random_normal(shape = [2,3]),5)))
with tf.Session() as sess:
print(sess.run(tf.random_gamma(shape = [2,3], alpha = 1)))
a = tf.constant([[2,1], [3,2], [7,3]])
with tf.Session() as sess:
print(sess.run(tf.random_shuffle(a)))
with tf.Session() as sess:
print(sess.run(tf.random_crop(a, [2,1])))
a = tf.constant([3, 6])
b = tf.constant([2, 2])
with tf.Session() as sess:
    print(sess.run(tf.add(a, b))) # >> [5 8], elementwise addition of the two inputs
with tf.Session() as sess:
    print(sess.run(tf.add_n([a, b, b]))) # >> [7 10], sums all inputs in the list
with tf.Session() as sess:
print(sess.run(tf.multiply(a, b))) # >> [6 12] because mul is element wise
# matmul: product of tensors with rank >= 2
with tf.Session() as sess:
print(sess.run(tf.matmul(tf.reshape(a, shape=[1, 2]),
tf.reshape(b, shape=[2, 1]))))
with tf.Session() as sess:
    print(sess.run(tf.div(a, b))) # >> [1 3], division
with tf.Session() as sess:
    print(sess.run(tf.mod(a, b))) # >> [1 0], returns the remainder
# 0-d constant tensor - a scalar
t_0 = 19
with tf.Session() as sess:
print(sess.run(tf.zeros_like(t_0))) # ==> 0
print(sess.run(tf.ones_like(t_0))) # ==> 1
# 1-d tensor - a vector
t_1 = [b"apple", b"peach", b"grape"]
with tf.Session() as sess:
print(sess.run(tf.zeros_like(t_1))) # ==> ['' '' '']
print(sess.run(tf.ones_like(t_1))) # ==> TypeError: Expected string, got 1 of type 'int' instead.
# 2-d tensor - a matrix
t_2 = [[True, False, False],
[False, False, True],
[False, True, False]]
with tf.Session() as sess:
    print(sess.run(tf.zeros_like(t_2))) # ==> 3x3 tensor, all elements False
    print(sess.run(tf.ones_like(t_2))) # ==> 3x3 tensor, all elements True
my_const = tf.constant([1.0, 2.0], name="my_const")
print (tf.get_default_graph().as_graph_def())
# create a as a scalar
a = tf.Variable(2, name="scalar")
# create b as a vector
b = tf.Variable([2, 3], name="vector")
# create c as a 2x2 matrix
c = tf.Variable([[0, 1], [2, 3]], name="matrix")
# create W as a 784x10 tensor filled with zeros
W = tf.Variable(tf.zeros([784,10]))
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
init_ab = tf.variables_initializer([a, b], name = "init_ab")
with tf.Session() as sess:
sess.run(init)
W = tf.Variable(tf.truncated_normal([700, 10]))
with tf.Session() as sess:
sess.run(W.initializer)
print (W)
with tf.Session() as sess:
sess.run(W.initializer)
print (W.eval())
W = tf.Variable(10)
W.assign(100) # 100 is never actually assigned to W (the assign op is not run)
with tf.Session() as sess:
sess.run(W.initializer)
print (W.eval()) # >> 10
W = tf.Variable(10)
assign_op = W.assign(100) # running the assign op also initializes W
with tf.Session() as sess:
sess.run(assign_op)
print (W.eval()) # >> 100
# create variable a with value 2
a = tf.Variable(2, name="scalar")
# assign a * 2 to a_times_two
a_times_two = a.assign(a * 2)
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init) # a must be initialized, because a_times_two depends on a
sess.run(a_times_two) # >> 4
# sess.run(a_times_two) # >> 8
# sess.run(a_times_two) # >> 16
print (a_times_two.eval())
W = tf.Variable(10)
with tf.Session() as sess:
    sess.run(W.initializer) # unlike assign, assign_add and assign_sub do not initialize the variable
print(sess.run(W.assign_add(10)))
print(sess.run(W.assign_sub(2)))
W = tf.Variable(10)
sess1 = tf.Session()
sess2 = tf.Session()
sess1.run(W.initializer)
sess2.run(W.initializer)
print(sess1.run(W.assign_add(10))) # ==> 20
print(sess2.run(W.assign_sub(2))) # ==> 8
print(sess1.run(W.assign_add(100))) # ==> 120
print(sess2.run(W.assign_sub(50))) # ==> -42
sess1.close()
sess2.close()
W = tf.Variable(tf.truncated_normal([700, 10]))
U = tf.Variable(W * 2)
sess = tf.InteractiveSession()
a = tf.constant(5.0)
b = tf.constant(6.0)
c = a * b
print(c.eval())
sess.close()
# your graph g have 5 ops: a, b, c, d, e
with g.control_dependencies([a, b, c]):
d = ....
e = ....
# create a placeholder of type float 32-bit, shape is a vector of 3 elements
a = tf.placeholder(tf.float32, shape=[3])
# create a constant of type float 32-bit, shape is a vector of 3 elements
b = tf.constant([5, 5, 5], tf.float32)
# use the placeholder as you would a constant or a variable
c = a + b # Short for tf.add(a, b)
with tf.Session() as sess:
# feed [1, 2, 3] to placeholder a via the dict {a: [1, 2, 3]}
# fetch value of c
writer = tf.summary.FileWriter('./my_graph', sess.graph)
# print(sess.run(c)) # ==> Error
print(sess.run(c, {a: [1, 2, 3]}))
# create Operations, Tensors, etc (using the default graph)
a = tf.add(2, 5)
b = tf.multiply(a, 3)
# start up a `Session` using the default graph
sess = tf.Session()
# define a dictionary that says to replace the value of `a` with 15
replace_dict = {a: 15}
# Run the session, passing in `replace_dict` as the value to `feed_dict`
sess.run(b, feed_dict=replace_dict) # returns 45
# Normal loading
x = tf.Variable(10, name='x')
y = tf.Variable(20, name='y')
z = tf.add(x, y)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
writer = tf.summary.FileWriter('./my_graph/l2', sess.graph)
for _ in range(10):
sess.run(z)
writer.close()
# Lazy loading
x = tf.Variable(10, name='x')
y = tf.Variable(20, name='y')
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
writer = tf.summary.FileWriter('./my_graph/l2', sess.graph)
for _ in range(10):
sess.run(tf.add(x, y)) # someone decides to be clever to save one line of code
writer.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. TF-slim(tf.contrib.slim)
Step2: with TF-Slim
Step3: Data Flow Graph
Step4: How to get the value of a?
Step5: More graphs
Step6: Why graphs
Step7: How to run TensorBoard
Step8: Tensors can be created filled with specific values
Step9: tf.zeros_like(input, dtype=None, name=None, optimize=True)
Step10: tf.one(shape, dtype=tf.float32, name=None)
Step11: tf.ones_like(input_tensor, dtype=None, name=None, optimize=True)
Step12: tf.fill(dims, value, name=None)
Step13: tf.linspace(start, stop, num, name=None)
Step14: tf.range(start, limit=None, delta=1, dtype=None, name='range')
Step15: Tensors cannot be iterated over in Python loops
Step16: Random numbers can be generated from specific distributions
Step17: 3. Math Operations
Step18: 4. Data Types
Step19: TensorFlow Native Types
Step20: Declare variables
Step21: Variables must always be initialized before use
Step22: To initialize only a subset of variables
Step23: Evaluate values of variables
Step24: Using eval() you can also inspect a variable's value
Step25: Assign values to variables
Step26: Each TensorFlow session keeps its own copy of variable values
Step27: A variable can be created from another variable
Step28: 6. InteractiveSession
Step29: 7. Control Dependencies
Step30: 8. Placeholders and feed_dict
Step31: Values can be fed to any feedable tensor, not just placeholders
Step32: 9. The trap of lazy loading
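The lazy-loading trap can be illustrated without TensorFlow by a plain-Python analogy in which every op-construction call registers a graph node (the `add_op` and `graph_nodes` names are invented for this sketch):

```python
# Pure-Python analogy of the lazy-loading trap: each call to add_op
# registers a node, like tf.add adding a node to the default graph.
graph_nodes = []

def add_op(x, y):
    graph_nodes.append(('add', x, y))  # node registration
    return x + y

# Lazy loading: the op is re-created on every iteration, bloating the graph
graph_nodes.clear()
for _ in range(10):
    add_op(10, 20)
lazy_count = len(graph_nodes)

# Normal loading: the op is created once and reused inside the loop
graph_nodes.clear()
z = add_op(10, 20)
for _ in range(10):
    _ = z
normal_count = len(graph_nodes)

print(lazy_count, normal_count)  # 10 1
```

The same effect shows up in the TensorFlow snippets above: the lazy version adds a fresh `Add` node to the graph on every `sess.run(tf.add(x, y))`.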
|
9,020
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
# Required imports
from wikitools import wiki
from wikitools import category
import nltk
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
import gensim
import numpy as np
import lda
import lda.datasets
from time import time
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
import matplotlib.pyplot as plt
import pylab
from test_helper import Test
site = wiki.Wiki("https://en.wikipedia.org/w/api.php")
# Select a category with a reasonable number of articles (>100)
# cat = "Economics"
cat = "Pseudoscience"
print cat
# Loading category data. This may take a while
print "Loading category data. This may take a while..."
cat_data = category.Category(site, cat)
corpus_titles = []
corpus_text = []
for n, page in enumerate(cat_data.getAllMembersGen()):
print "\r Loading article {0}".format(n + 1),
corpus_titles.append(page.title)
corpus_text.append(page.getWikiText())
n_art = len(corpus_titles)
print "\nLoaded " + str(n_art) + " articles from category " + cat
# n = 5
# print corpus_titles[n]
# print corpus_text[n]
# You can comment this if the package is already available.
# Select option "d) Download", and identifier "punkt"
# Select option "d) Download", and identifier "stopwords"
# nltk.download()
stopwords_en = stopwords.words('english')
corpus_clean = []
for n, art in enumerate(corpus_text):
print "\rProcessing article {0} out of {1}".format(n + 1, n_art),
# This is to make sure that all characters have the appropriate encoding.
art = art.decode('utf-8')
# Tokenize each text entry.
# scode: tokens = <FILL IN>
    # Convert all tokens in token_list to lowercase, remove non-alphanumeric tokens and stem.
# Store the result in a new token list, clean_tokens.
# scode: filtered_tokens = <FILL IN>
# Remove all tokens in the stopwords list and append the result to corpus_clean
# scode: clean_tokens = <FILL IN>
# scode: <FILL IN>
print "\nLet's check the first tokens from document 0 after processing:"
print corpus_clean[0][0:30]
Test.assertTrue(len(corpus_clean) == n_art, 'List corpus_clean does not contain the expected number of articles')
Test.assertTrue(len([c for c in corpus_clean[0] if c in stopwords_en])==0, 'Stopwords have not been removed')
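One possible shape of the cleaning pipeline, sketched with plain whitespace tokenization instead of `nltk.word_tokenize` and a toy stopword set so it stays self-contained (your lab solution should use the NLTK tools loaded above):

```python
# Minimal cleaning sketch: lowercase, keep alphanumeric tokens, drop stopwords
stopwords_en = {'the', 'is', 'of', 'a'}  # toy stopword list, not NLTK's

def clean(text):
    tokens = text.lower().split()                     # crude tokenization
    filtered = [t for t in tokens if t.isalnum()]     # drop punctuation tokens
    return [t for t in filtered if t not in stopwords_en]

print(clean('The Flat Earth is a model of the world'))
# ['flat', 'earth', 'model', 'world']
```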
# Select stemmer.
stemmer = nltk.stem.SnowballStemmer('english')
corpus_stemmed = []
for n, token_list in enumerate(corpus_clean):
print "\rStemming article {0} out of {1}".format(n + 1, n_art),
    # Convert all tokens in token_list to lowercase, remove non-alphanumeric tokens and stem.
# Store the result in a new token list, clean_tokens.
# scode: stemmed_tokens = <FILL IN>
# Add art to the stemmed corpus
# scode: <FILL IN>
print "\nLet's check the first tokens from document 0 after stemming:"
print corpus_stemmed[0][0:30]
Test.assertTrue((len([c for c in corpus_stemmed[0] if c!=stemmer.stem(c)]) < 0.1*len(corpus_stemmed[0])),
'It seems that stemming has not been applied properly')
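To make the idea behind suffix stripping concrete, here is a toy rule-based stemmer (a deliberately crude sketch; the Snowball stemmer applies many more ordered, context-sensitive rules):

```python
# Toy stemmer: strip the first matching suffix, keeping a stem of >= 3 chars
SUFFIXES = ['ization', 'ational', 'fulness', 'ousness', 'iveness',
            'ing', 'ed', 'es', 's']

def toy_stem(token):
    for suf in SUFFIXES:
        if token.endswith(suf) and len(token) - len(suf) >= 3:
            return token[:-len(suf)]
    return token

print([toy_stem(t) for t in ['running', 'jumped', 'cats', 'organization']])
# ['runn', 'jump', 'cat', 'organ']
```

Note the over-stripped stem `runn`: real stemmers add repair rules for exactly this kind of case.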
# You can comment this if the package is already available.
# Select option "d) Download", and identifier "wordnet"
# nltk.download()
wnl = WordNetLemmatizer()
# Select stemmer.
corpus_lemmat = []
for n, token_list in enumerate(corpus_clean):
print "\rLemmatizing article {0} out of {1}".format(n + 1, n_art),
# scode: lemmat_tokens = <FILL IN>
# Add art to the stemmed corpus
# scode: <FILL IN>
print "\nLet's check the first tokens from document 0 after stemming:"
print corpus_lemmat[0][0:30]
# Create dictionary of tokens
D = gensim.corpora.Dictionary(corpus_clean)
n_tokens = len(D)
print "The dictionary contains {0} tokens".format(n_tokens)
print "First tokens in the dictionary: "
for n in range(10):
print str(n) + ": " + D[n]
# Transform token lists into sparse vectors on the D-space
corpus_bow = [D.doc2bow(doc) for doc in corpus_clean]
Test.assertTrue(len(corpus_bow)==n_art, 'corpus_bow has not the appropriate size')
print "Original article (after cleaning): "
print corpus_clean[0][0:30]
print "Sparse vector representation (first 30 components):"
print corpus_bow[0][0:30]
print "The first component, {0} from document 0, states that token 0 ({1}) appears {2} times".format(
corpus_bow[0][0], D[0], corpus_bow[0][0][1])
print "{0} tokens".format(len(D))
print "{0} Wikipedia articles".format(len(corpus_bow))
# SORTED TOKEN FREQUENCIES (I):
# Create a "flat" corpus with all tuples in a single list
corpus_bow_flat = [item for sublist in corpus_bow for item in sublist]
# Initialize a numpy array that we will use to count tokens.
# token_count[n] should store the number of occurrences of the n-th token, D[n]
token_count = np.zeros(n_tokens)
# Count the number of occurrences of each token.
for x in corpus_bow_flat:
# Update the proper element in token_count
# scode: <FILL IN>
# Sort by decreasing number of occurrences
ids_sorted = np.argsort(- token_count)
tf_sorted = token_count[ids_sorted]
print D[ids_sorted[0]]
print "{0} times in the whole corpus".format(tf_sorted[0])
# SORTED TOKEN FREQUENCIES (II):
plt.rcdefaults()
# Example data
n_bins = 25
hot_tokens = [D[i] for i in ids_sorted[n_bins-1::-1]]
y_pos = np.arange(len(hot_tokens))
z = tf_sorted[n_bins-1::-1]/n_art
plt.barh(y_pos, z, align='center', alpha=0.4)
plt.yticks(y_pos, hot_tokens)
plt.xlabel('Average number of occurrences per article')
plt.title('Token distribution')
plt.show()
# SORTED TOKEN FREQUENCIES:
# Example data
plt.semilogy(tf_sorted)
plt.xlabel('Token rank')
plt.title('Token distribution')
plt.show()
# scode: <WRITE YOUR CODE HERE>
# Example data
cold_tokens = [D[i] for i, count in zip(ids_sorted, tf_sorted) if count == 1]
print "There are {0} cold tokens, which represent {1}% of the total number of tokens in the dictionary".format(
len(cold_tokens), float(len(cold_tokens))/n_tokens*100)
# scode: <WRITE YOUR CODE HERE>
# SORTED TOKEN FREQUENCIES (I):
# Count the number of occurrences of each token.
token_count2 = np.zeros(n_tokens)
for x in corpus_bow_flat:
token_count2[x[0]] += (x[1]>0)
# Sort by decreasing number of occurrences
ids_sorted2 = np.argsort(- token_count2)
tf_sorted2 = token_count2[ids_sorted2]
# SORTED TOKEN FREQUENCIES (II):
# Example data
n_bins = 25
hot_tokens2 = [D[i] for i in ids_sorted2[n_bins-1::-1]]
y_pos2 = np.arange(len(hot_tokens2))
z2 = tf_sorted2[n_bins-1::-1]/n_art
plt.barh(y_pos2, z2, align='center', alpha=0.4)
plt.yticks(y_pos2, hot_tokens2)
plt.xlabel('Average number of occurrences per article')
plt.title('Token distribution')
plt.show()
tfidf = gensim.models.TfidfModel(corpus_bow)
doc_bow = [(0, 1), (1, 1)]
tfidf[doc_bow]
corpus_tfidf = tfidf[corpus_bow]
print corpus_tfidf[0][0:5]
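As a rough sketch of the weighting TF-IDF applies, the raw weight of token t in document d is tf(t, d) * log2(N / df(t)), where N is the number of documents and df(t) the number of documents containing t. (gensim's default additionally L2-normalizes each document vector, so the exact numbers it prints differ.) A pure-Python version over a hypothetical toy corpus:

```python
import math

def tfidf_weights(corpus_bow):
    # corpus_bow: list of documents, each a list of (token_id, count) tuples.
    n_docs = len(corpus_bow)
    df = {}  # document frequency of each token
    for doc in corpus_bow:
        for token_id, _ in doc:
            df[token_id] = df.get(token_id, 0) + 1
    # Weight each raw count by the (base-2) inverse document frequency.
    return [[(t, c * math.log(float(n_docs) / df[t], 2)) for t, c in doc]
            for doc in corpus_bow]

toy_corpus = [[(0, 2), (1, 1)], [(0, 1)]]  # hypothetical 2-document corpus
print(tfidf_weights(toy_corpus))
# token 0 appears in both documents, so its idf is log2(2/2) = 0
```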
# Initialize an LSI transformation
n_topics = 5
# scode: lsi = <FILL IN>
lsi.show_topics(num_topics=-1, num_words=10, log=False, formatted=True)
# SORTED TOKEN FREQUENCIES (II):
plt.rcdefaults()
n_bins = 25
# Example data
y_pos = range(n_bins-1, -1, -1)
pylab.rcParams['figure.figsize'] = 16, 8 # Set figure size
for i in range(n_topics):
### Plot top 25 tokens for topic i
# Read i-th topic
# scode: <FILL IN>
tokens = [t[0] for t in topic_i]
weights = [t[1] for t in topic_i]
# Plot
# scode: <FILL IN>
plt.barh(y_pos, weights, align='center', alpha=0.4)
plt.yticks(y_pos, tokens)
plt.xlabel('Top {0} topic weights'.format(n_bins))
plt.title('Topic {0}'.format(i))
plt.show()
# On real corpora, target dimensionality of
# 200–500 is recommended as a “golden standard”
# Create a double wrapper over the original
# corpus bow tfidf fold-in-lsi
corpus_lsi = lsi[corpus_tfidf]
print corpus_lsi[0]
# Extract weights from corpus_lsi
# scode weight0 = <FILL IN>
# Locate the maximum positive weight
nmax = np.argmax(weight0)
print nmax
print weight0[nmax]
print corpus_lsi[nmax]
# Get topic 0
# scode: topic_0 = <FILL IN>
# Compute a list of tuples (token, wordcount) for all tokens in topic_0, where wordcount is the number of
# occurrences of the token in the article.
# scode: token_counts = <FILL IN>
print "Topic 0 is:"
print topic_0
print "Token counts:"
print token_counts
ldag = gensim.models.ldamodel.LdaModel(
corpus=corpus_tfidf, id2word=D, num_topics=10, update_every=1, passes=10)
ldag.print_topics()
# For testing LDA, you can use the reuters dataset
# X = lda.datasets.load_reuters()
# vocab = lda.datasets.load_reuters_vocab()
# titles = lda.datasets.load_reuters_titles()
X = np.int32(np.zeros((n_art, n_tokens)))
for n, art in enumerate(corpus_bow):
for t in art:
X[n, t[0]] = t[1]
print X.shape
print X.sum()
vocab = D.values()
titles = corpus_titles
# Default parameters:
# model = lda.LDA(n_topics, n_iter=2000, alpha=0.1, eta=0.01, random_state=None, refresh=10)
model = lda.LDA(n_topics=10, n_iter=1500, random_state=1)
model.fit(X) # model.fit_transform(X) is also available
topic_word = model.topic_word_ # model.components_ also works
# Show topics...
n_top_words = 8
for i, topic_dist in enumerate(topic_word):
topic_words = np.array(vocab)[np.argsort(topic_dist)][:-(n_top_words+1):-1]
print('Topic {}: {}'.format(i, ' '.join(topic_words)))
doc_topic = model.doc_topic_
for i in range(10):
print("{} (top topic: {})".format(titles[i], doc_topic[i].argmax()))
# This is to apply the model to a new doc(s)
# doc_topic_test = model.transform(X_test)
# for title, topics in zip(titles_test, doc_topic_test):
# print("{} (top topic: {})".format(title, topics.argmax()))
# Adapted from an example in sklearn site
# http://scikit-learn.org/dev/auto_examples/applications/topics_extraction_with_nmf_lda.html
# You can try also with the dataset provided by sklearn in
# from sklearn.datasets import fetch_20newsgroups
# dataset = fetch_20newsgroups(shuffle=True, random_state=1,
# remove=('headers', 'footers', 'quotes'))
def print_top_words(model, feature_names, n_top_words):
for topic_idx, topic in enumerate(model.components_):
print("Topic #%d:" % topic_idx)
print(" ".join([feature_names[i]
for i in topic.argsort()[:-n_top_words - 1:-1]]))
print()
print("Loading dataset...")
# scode: data_samples = <FILL IN>
data_samples = [" ".join(c) for c in corpus_clean]
print 'Document 0:'
print data_samples[0][0:200], '...'
# Use tf (raw term count) features for LDA.
print("Extracting tf features for LDA...")
n_features = 1000
n_samples = 2000
tf_vectorizer = CountVectorizer(max_df=0.95, min_df=2,
max_features=n_features,
stop_words='english')
t0 = time()
tf = tf_vectorizer.fit_transform(data_samples)
print("done in %0.3fs." % (time() - t0))
print tf[0][0][0]
print("Fitting LDA models with tf features, "
"n_samples=%d and n_features=%d..."
% (n_samples, n_features))
# scode: lda = <FILL IN>
#              learning_method='online', learning_offset=50., random_state=0)
# doc_topic_prior= 1.0/n_topics, topic_word_prior= 1.0/n_topics)
t0 = time()
corpus_lda = lda.fit_transform(tf)
print corpus_lda[10]/np.sum(corpus_lda[10])
print("done in %0.3fs." % (time() - t0))
print corpus_titles[10]
# print corpus_text[10]
print("\nTopics in LDA model:")
tf_feature_names = tf_vectorizer.get_feature_names()
print_top_words(lda, tf_feature_names, 20)
topics = lda.components_
topic_probs = [t/np.sum(t) for t in topics]
# print topic_probs[0]
print -np.sort(-topic_probs[0])
# SORTED TOKEN FREQUENCIES (II):
plt.rcdefaults()
n_bins = 50
# Example data
y_pos = range(n_bins-1, -1, -1)
pylab.rcParams['figure.figsize'] = 16, 8 # Set figure size
for i in range(n_topics):
### Plot top 25 tokens for topic i
# Read i-th topic
# scode: <FILL IN>
rank = np.argsort(- topic_i)[0:n_bins]
tokens = [tf_feature_names[r] for r in rank]
weights = [topic_i[r] for r in rank]
# Plot
# scode: <FILL IN>
plt.barh(y_pos, weights, align='center', alpha=0.4)
plt.yticks(y_pos, tokens)
plt.xlabel('Top {0} topic weights'.format(n_bins))
plt.title('Topic {0}'.format(i))
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Corpus acquisition.
Step2: You can try with any other categories. Take into account that the behavior of topic modelling algorithms may depend on the amount of documents available for the analysis. Select a category with at least 100 articles. You can browse the wikipedia category tree here, https
Step3: Now, we have stored the whole text collection in two lists
Step4: 2. Corpus Processing
Step5: 2.2. Stemming vs Lemmatization
Step6: Alternatively, we can apply lemmatization. For English texts, we can use the lemmatizer from NLTK, which is based on WordNet. If you have not used WordNet before, you will likely need to download it from nltk
Step7: Task
Step8: One of the advantages of the lemmatizer method is that the result of lemmatization is still a true word, which makes it more suitable for presenting text processing results.
Step9: In the second step, let us create a numerical version of our corpus using the doc2bow method. In general, D.doc2bow(token_list) transforms any list of tokens into a list of tuples (token_id, n), one per each token in token_list, where token_id is the token identifier (according to dictionary D) and n is the number of occurrences of such token in token_list.
Step10: At this point, it is good to make sure you understand what has happened. In corpus_clean we had a list of token lists. With it, we have constructed a Dictionary, D, which assigns an integer identifier to each token in the corpus.
Step11: Note that we can interpret each element of corpus_bow as a sparse_vector. For example, a list of tuples
Step12: and a bow representation of a corpus with
Step13: Before starting with the semantic analysis, it is interesting to observe the token distribution for the given corpus.
Step14: ids_sorted is a list of all token ids, sorted by decreasing number of occurrences in the whole corpus. For instance, the most frequent term is
Step15: which appears
Step16: In the following we plot the most frequent terms in the corpus.
Step17: Exercise
Step18: Exercise
Step19: 3. Semantic Analysis
Step20: From now on, tfidf can be used to convert any vector from the old representation (bow integer counts) to the new one (TfIdf real-valued weights)
Step21: Or to apply a transformation to a whole corpus
Step22: 3.1. Latent Semantic Indexing (LSI)
Step23: From LSI, we can check both the topic-tokens matrix and the document-topics matrix.
Step24: However, a more useful representation of topics is as a list of tuples (token, value). This is provided by the show_topic method.
Step25: LSI approximates any document as a linear combination of the topic vectors. We can compute the topic weights for any input corpus entered as input to the lsi model.
Step26: Task
Step27: 3.2. Latent Dirichlet Allocation (LDA)
Step28: 3.2.2. LDA using python lda library
Step29: Document-topic distribution
Step30: It allows incremental updates
Step31: Now, we need a dataset to feed the Count_Vectorizer object, by joining all tokens in corpus_clean in a single string, using a space ' ' as separator.
Step32: Now we are ready to compute the token counts.
Step33: Now we can apply the LDA algorithm.
Step34: Task
Step35: Exercise
|
9,021
|
<ASSISTANT_TASK:>
Python Code:
# code for loading the format for the notebook
import os
# path : store the current path to convert back to it later
path = os.getcwd()
os.chdir(os.path.join('..', 'notebook_format'))
from formats import load_style
load_style(plot_style=False)
os.chdir(path)
# 1. magic to print version
# 2. magic so that the notebook will reload external python modules
%load_ext watermark
%load_ext autoreload
%autoreload 2
%watermark -a 'Ethen' -d -t -v
class A:
def __init__(self):
self.__priv = "I am private"
self._prot = "I am protected"
self.pub = "I am public"
x = A()
print(x.pub)
# Whenever we assign or retrieve any object attribute
# Python searches it in the object's __dict__ dictionary
print(x.__dict__)
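The `__dict__` above makes name mangling visible: a double-underscore attribute is stored under `_ClassName__name`. A minimal check with a throwaway class:

```python
class B:
    def __init__(self):
        self.__secret = 42  # stored as _B__secret via name mangling

b = B()
print('_B__secret' in b.__dict__)  # True: the mangled name is what is stored
print(b._B__secret)                # still reachable, just inconvenient
```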
class Celsius:
def __init__(self, temperature = 0):
self.set_temperature(temperature)
def to_fahrenheit(self):
return (self.get_temperature() * 1.8) + 32
def get_temperature(self):
return self._temperature
def set_temperature(self, value):
if value < -273:
raise ValueError('Temperature below -273 is not possible')
self._temperature = value
# c = Celsius(-277) # this returns an error
c = Celsius(37)
c.get_temperature()
class Celsius:
def __init__(self, temperature = 0):
self._temperature = temperature
def to_fahrenheit(self):
return (self.temperature * 1.8) + 32
# have access to the value like it is an attribute instead of a method
@property
def temperature(self):
return self._temperature
# like accessing the attribute with an extra layer of error checking
@temperature.setter
def temperature(self, value):
if value < -273:
raise ValueError('Temperature below -273 is not possible')
print('Setting value')
self._temperature = value
c = Celsius(37)
# much easier to access then the getter, setter way
print(c.temperature)
# note that you can still access the private attribute
# and violate the temperature checking,
# but then it's the users fault not yours
c._temperature = -300
print(c._temperature)
# accessing the attribute will return the ValueError error
# c.temperature = -300
print(dict.fromkeys(['raymond', 'rachel', 'mathew']))
import time
class Date:
# Primary constructor
def __init__(self, year, month, day):
self.year = year
self.month = month
self.day = day
# Alternate constructor
@classmethod
def today(cls):
t = time.localtime()
return cls(t.tm_year, t.tm_mon, t.tm_mday)
# Primary
a = Date(2012, 12, 21)
print(a.__dict__)
# Alternate
b = Date.today()
print(b.__dict__)
class NewDate(Date):
pass
# Creates an instance of Date (cls=Date)
c = Date.today()
print(c.__dict__)
# Creates an instance of NewDate (cls=NewDate)
d = NewDate.today()
print(d.__dict__)
class Date:
# Primary constructor
def __init__(self, year, month, day):
self.year = year
self.month = month
self.day = day
# Alternate constructor
@classmethod
def today(cls):
t = time.localtime()
return cls(t.tm_year, t.tm_mon, t.tm_mday)
# the logic belongs with the date class
@staticmethod
def show_tomorrow_date():
t = time.localtime()
return t.tm_year, t.tm_mon, t.tm_mday + 1
Date.show_tomorrow_date()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Working with Python Classes
Step2: When the Python compiler sees a private attribute, it transforms the name to _[Class name]__[private attribute name]. However, this still does not prevent the end-user from accessing the attribute. Thus in Python land, it is more common to use public and protected attributes, write proper docstrings and assume that everyone is a consenting adult, i.e. won't touch a protected attribute unless they know what they are doing.
Step3: Instead of that, now the property way. Where we define the @property and the @[attribute name].setter.
Step4: @classmethod and @staticmethod
Step5: The cls is critical, as it is an object that holds the class itself. This makes them work with inheritance.
Step6: The purpose of @staticmethod is to attach functions to classes. We do this to improve the findability of the function and to make sure that people are using the function in the appropriate context.
|
9,022
|
<ASSISTANT_TASK:>
Python Code:
%%capture
!pip install git+https://github.com/jamesvuc/jax-bayes
!pip install SGMCMCJax
!pip install distrax
import jax.numpy as jnp
from jax.experimental import optimizers
import jax
import jax_bayes
import sys, os, math, time
import numpy as np
from functools import partial
from matplotlib import pyplot as plt
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"
import tensorflow_datasets as tfds
import sgmcmcjax
from jax import jit, vmap
from jax.random import split, PRNGKey
import distrax
from tqdm.auto import tqdm
import tensorflow_probability.substrates.jax.distributions as tfd
def load_dataset(split, is_training, batch_size):
if batch_size == -1:
ds = tfds.load("mnist:3.*.*", split=split, batch_size=-1)
else:
ds = tfds.load("mnist:3.*.*", split=split).cache().repeat()
if is_training and batch_size > 0:
ds = ds.shuffle(10 * batch_size, seed=0)
if batch_size > 0:
ds = ds.batch(batch_size)
return iter(tfds.as_numpy(ds)) if batch_size > 0 else tfds.as_numpy(ds)
# load the data into memory and create batch iterators
train_batches = load_dataset("train", is_training=True, batch_size=1_000)
val_batches = load_dataset("train", is_training=False, batch_size=10_000)
test_batches = load_dataset("test", is_training=False, batch_size=10_000)
# ==========
# Functions to initialise parameters
# initialise params: list of tuples (W, b) for each layer
def random_layer(key, m, n, scale=1e-2):
key, subkey = jax.random.split(key)
return (scale * jax.random.truncated_normal(key, -2, 2, (n, m)), jnp.zeros((n,)))
def init_network(key, sizes):
keys = jax.random.split(key, len(sizes))
return [random_layer(k, m, n) for k, m, n in zip(keys, sizes[:-1], sizes[1:])]
# ===========
# predict and accuracy functions
@partial(jit, static_argnames=("activation_fn"))
def predict(params, x, activation_fn):
# per-example predictions
activations = x
for w, b in params[:-1]:
outputs = activations @ w.T + b
activations = activation_fn(outputs)
final_w, final_b = params[-1]
logits = activations @ final_w.T + final_b
return logits
# =================
# Log-posterior
@partial(jit, static_argnames=("activation_fn"))
def loglikelihood(params, X, y, activation_fn):
return jnp.sum(y * jax.nn.log_softmax(predict(params, X, activation_fn)))
def logprior(params):
logP = 0.0
dist = distrax.Normal(0, 1)
for w, b in params:
logP += jnp.sum(dist.log_prob(w))
logP += jnp.sum(dist.log_prob(b))
return logP
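`logprior` above sums standard-normal log-densities over every weight and bias; for a single scalar the density is log N(x | 0, 1) = -x^2/2 - (1/2) log(2*pi). A quick pure-Python check of that formula (independent of distrax):

```python
import math

def std_normal_logpdf(x):
    # log of the standard normal density at x
    return -0.5 * x * x - 0.5 * math.log(2.0 * math.pi)

print(std_normal_logpdf(0.0))  # -0.5 * log(2*pi), roughly -0.9189
```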
# Accuracy for a single sample
batch_predict = vmap(predict, in_axes=(None, 0, None))
@partial(jit, static_argnames=("activation_fn"))
def accuracy(params, batch, activation_fn):
X, target_class = batch["image"].reshape((-1, D)), batch["label"]
predicted_class = jnp.argmax(batch_predict(params, X, activation_fn), axis=1)
return jnp.mean(predicted_class == target_class)
batch = next(train_batches)
nclasses = 10
x = batch["image"]
D = np.prod(x.shape[1:]) # 784
sizes = [D, 300, 100, nclasses]
def loss(params, batch, activation_fn):
logits = predict(params, batch["image"].reshape((-1, D)), activation_fn)
labels = jax.nn.one_hot(batch["label"], nclasses)
l2_loss = 0.5 * sum(jnp.sum(jnp.square(p)) for p in jax.tree_leaves(params))
softmax_crossent = -jnp.mean(labels * jax.nn.log_softmax(logits))
return softmax_crossent + reg * l2_loss
@partial(jit, static_argnames=("activation_fn"))
def train_step(i, opt_state, batch, activation_fn):
params = opt_get_params(opt_state)
dx = jax.grad(loss)(params, batch, activation_fn)
opt_state = opt_update(i, dx, opt_state)
return opt_state
reg = 1e-3
lr = 1e-3
opt_init, opt_update, opt_get_params = optimizers.rmsprop(lr)
initial_params = init_network(PRNGKey(0), sizes)
opt_state = opt_init(initial_params)
activation_fn = jax.nn.relu
%%time
accuracy_list_train, accuracy_list_test = [], []
nsteps = 2000
print_every = 100
for step in tqdm(range(nsteps + 1)):
opt_state = train_step(step, opt_state, next(train_batches), activation_fn)
params_sgd = opt_get_params(opt_state)
if step % print_every == 0:
# Periodically evaluate classification accuracy on train & test sets.
train_accuracy = accuracy(params_sgd, next(val_batches), activation_fn)
test_accuracy = accuracy(params_sgd, next(test_batches), activation_fn)
accuracy_list_train.append(train_accuracy)
accuracy_list_test.append(test_accuracy)
fig, axes = plt.subplots(nrows=1, ncols=2, sharex=True, sharey=True, figsize=(20, 5))
for ls, ax in zip([accuracy_list_train, accuracy_list_test], axes.flatten()):
ax.plot(ls[:])
ax.set_title(f"Final accuracy: {100*ls[-1]:.1f}%")
from sgmcmcjax.kernels import build_sgld_kernel
from sgmcmcjax.util import progress_bar_scan
lr = 5e-5
activation_fn = jax.nn.softmax
data = load_dataset("train", is_training=True, batch_size=-1)
data = (jnp.array(data["image"].reshape((-1, D)) / 255.0), jax.nn.one_hot(jnp.array(data["label"]), nclasses))
batch_size = int(0.01 * len(data[0]))
init_fn, my_kernel, get_params = build_sgld_kernel(
lr, partial(loglikelihood, activation_fn=activation_fn), logprior, data, batch_size
)
my_kernel = jit(my_kernel)
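Under the hood, the SGLD kernel built above iterates the Langevin update theta <- theta + (eps/2) * grad log p(theta) + sqrt(eps) * xi, with xi ~ N(0, I) and the gradient estimated on a minibatch. A minimal pure-Python sketch on a 1-D standard-normal target (not SGMCMCJax's actual implementation, just the update rule with a full-batch gradient):

```python
import math
import random

def sgld_1d(grad_logp, theta0, step_size, n_steps, seed=0):
    # Langevin dynamics: gradient step plus Gaussian noise of matched scale.
    rng = random.Random(seed)
    theta, samples = theta0, []
    for _ in range(n_steps):
        noise = rng.gauss(0.0, math.sqrt(step_size))
        theta = theta + 0.5 * step_size * grad_logp(theta) + noise
        samples.append(theta)
    return samples

# Standard normal target: log p(x) = -x^2/2, so grad log p(x) = -x.
samples = sgld_1d(lambda x: -x, theta0=5.0, step_size=0.1, n_steps=5000)
```

After a short burn-in the chain forgets the starting point 5.0 and fluctuates around the target mean 0 with roughly unit variance.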
# define the inital state
key = jax.random.PRNGKey(10)
key, subkey = jax.random.split(key, 2)
params_IC = init_network(subkey, sizes)
%%time
# iterate the the Markov chain
nsteps = 2000
Nsamples = 10
@partial(jit, static_argnums=(1,))
def sampler(key, Nsamples, params):
def body(carry, i):
key, state = carry
key, subkey = jax.random.split(key)
state = my_kernel(i, subkey, state)
return (key, state), get_params(state)
key, subkey = jax.random.split(key)
state = init_fn(subkey, params)
(_, state), samples = jax.lax.scan(body, (key, state), jnp.arange(Nsamples))
return samples, state
accuracy_list_test, accuracy_list_val = [], []
params = params_IC
for step in tqdm(range(nsteps)):
key, sample_key = jax.random.split(key, 2)
samples, state = sampler(sample_key, Nsamples, params)
params = get_params(state)
if step % print_every == 0:
test_acc, val_acc = accuracy(params, next(test_batches), activation_fn), accuracy(
params, next(val_batches), activation_fn
)
accuracy_list_test.append(test_acc)
accuracy_list_val.append(val_acc)
fig, axes = plt.subplots(nrows=1, ncols=2, sharex=True, sharey=True, figsize=(20, 5))
for ls, ax in zip([accuracy_list_test, accuracy_list_val], axes.flatten()):
ax.plot(ls[:])
ax.set_title(f"Final accuracy: {100*ls[-1]:.2f}%")
def certainty_acc(pp, targets, cert_threshold=0.5):
    '''Calculates the accuracy-at-certainty from the predictive probabilities pp
    on the targets.
    Args:
        pp: (batch_size, n_classes) array of probabilities
        targets: (batch_size, n_classes) array of label class indices
        cert_threshold: (float) minimum probability for making a prediction
    Returns:
        accuracy at certainty, indices of those prediction instances for which
        the model is certain.
    '''
preds = jnp.argmax(pp, axis=1)
pred_probs = jnp.max(pp, axis=1)
certain_idxs = pred_probs >= cert_threshold
acc_at_certainty = jnp.mean(targets[certain_idxs] == preds[certain_idxs])
return acc_at_certainty, certain_idxs
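The logic of `certainty_acc` can be checked with a tiny pure-Python version on hypothetical toy predictions: a prediction counts only if its top probability clears the threshold, and accuracy is computed over those confident predictions alone.

```python
def toy_certainty_acc(probs, targets, threshold=0.5):
    # probs: list of per-example class-probability lists; targets: true labels.
    confident = [(p.index(max(p)), t) for p, t in zip(probs, targets)
                 if max(p) >= threshold]
    if not confident:
        return None, 0
    correct = sum(1 for pred, t in confident if pred == t)
    return float(correct) / len(confident), len(confident)

probs = [[0.9, 0.1], [0.55, 0.45], [0.5, 0.5], [0.2, 0.8]]
targets = [0, 1, 0, 1]
print(toy_certainty_acc(probs, targets, threshold=0.6))
# only examples 0 and 3 clear the threshold, and both are correct -> (1.0, 2)
```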
@jit
@vmap
def entropy(p):
    '''computes discrete Shannon entropy.
    p: (n_classes,) array of probabilities corresponding to each class
    '''
p += 1e-12 # tolerance to avoid nans while ensuring 0log(0) = 0
return -jnp.sum(p * jnp.log(p))
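As a quick pure-Python sanity check of the entropy formula (independent of the JAX version above): a uniform distribution over n classes has entropy ln(n), and a one-hot distribution has entropy 0.

```python
import math

def shannon_entropy(p):
    # H(p) = -sum_i p_i * ln(p_i), with the convention 0 * ln(0) = 0.
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # ln(4), about 1.386
print(shannon_entropy([1.0, 0.0, 0.0, 0.0]))      # 0.0
```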
test_batch = next(test_batches)
def plot_acc_vs_confidence(predict_fn, test_batch):
# plot how accuracy changes as we increase the required level of certainty
preds = predict_fn(test_batch) # (batch_size, n_classes) array of probabilities
acc, mask = certainty_acc(preds, test_batch["label"], cert_threshold=0)
thresholds = [0.1 * i for i in range(11)]
cert_accs, pct_certs = [], []
for t in thresholds:
cert_acc, cert_mask = certainty_acc(preds, test_batch["label"], cert_threshold=t)
cert_accs.append(cert_acc)
pct_certs.append(cert_mask.mean())
fig, ax = plt.subplots(1)
line1 = ax.plot(thresholds, cert_accs, label="accuracy at certainty", marker="x")
line2 = ax.axhline(y=acc, label="regular accuracy", color="black")
ax.set_ylabel("accuracy")
ax.set_xlabel("certainty threshold")
axb = ax.twinx()
line3 = axb.plot(thresholds, pct_certs, label="pct of certain preds", color="green", marker="x")
axb.set_ylabel("pct certain")
lines = line1 + [line2] + line3
labels = [l.get_label() for l in lines]
ax.legend(lines, labels, loc=6)
return fig, ax
# plugin approximation to posterior predictive
@partial(jit, static_argnames=("activation_fn"))
def posterior_predictive_plugin(params, X, activation_fn):
logit_pp = predict(params, X, activation_fn)
return jax.nn.softmax(logit_pp, axis=-1)
def pred_fn_sgd(batch):
X = batch["image"].reshape((-1, D))
return posterior_predictive_plugin(params_sgd, X, jax.nn.relu)
fig, ax = plot_acc_vs_confidence(pred_fn_sgd, test_batch)
plt.savefig("acc-vs-conf-sgd.pdf")
plt.show()
def posterior_predictive_bayes(params_sampled, batch, activation_fn):
    '''computes the posterior predictive P(class = c | inputs, params) using a histogram'''
X = batch["image"].reshape((-1, D))
y = batch["label"]
pred_fn = lambda p: predict(p, X, activation_fn)
pred_fn = jax.vmap(pred_fn)
logit_samples = pred_fn(params_sampled) # n_samples x batch_size x n_classes
pred_samples = jnp.argmax(logit_samples, axis=-1) # n_samples x batch_size
n_classes = logit_samples.shape[-1]
batch_size = logit_samples.shape[1]
probs = np.zeros((batch_size, n_classes))
for c in range(n_classes):
idxs = pred_samples == c
probs[:, c] = idxs.sum(axis=0)
return probs / probs.sum(axis=1, keepdims=True)
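The histogram estimate above is just a vote count over posterior samples. A pure-Python sketch with hypothetical per-sample class predictions:

```python
def vote_probs(pred_samples, n_classes):
    # pred_samples: list over posterior samples, each a list of predicted
    # class indices per example; returns per-example class probabilities.
    n_examples = len(pred_samples[0])
    probs = [[0.0] * n_classes for _ in range(n_examples)]
    for preds in pred_samples:
        for i, c in enumerate(preds):
            probs[i][c] += 1.0
    n_samples = float(len(pred_samples))
    return [[v / n_samples for v in row] for row in probs]

# 4 posterior samples, 2 examples, 3 classes (hypothetical predictions)
print(vote_probs([[0, 2], [0, 2], [1, 2], [0, 1]], n_classes=3))
# -> [[0.75, 0.25, 0.0], [0.0, 0.25, 0.75]]
```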
def pred_fn_sgld(batch):
return posterior_predictive_bayes(samples, batch, jax.nn.softmax)
fig, ax = plot_acc_vs_confidence(pred_fn_sgld, test_batch)
plt.savefig("acc-vs-conf-sgld.pdf")
plt.show()
fashion_ds = tfds.load("fashion_mnist:3.*.*", split="test").cache().repeat()
fashion_test_batches = tfds.as_numpy(fashion_ds.batch(10_000))
fashion_test_batches = iter(fashion_test_batches)
fashion_batch = next(fashion_test_batches)
fig, ax = plot_acc_vs_confidence(pred_fn_sgd, fashion_batch)
plt.savefig("acc-vs-conf-sgd-fashion.pdf")
plt.show()
fig, ax = plot_acc_vs_confidence(pred_fn_sgld, fashion_batch)
plt.savefig("acc-vs-conf-sgld-fashion.pdf")
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data
Step2: The Bayesian NN is taken from SGMCMCJax. However, a couple of changes have been made; they can be listed as follows
Step3: Model
Step4: SGLD
Step7: Uncertainty analysis
Step8: SGD
Step10: SGLD
Step11: Distribution shift
Step12: SGD
Step13: SGLD
|
9,023
|
<ASSISTANT_TASK:>
Python Code:
import packages.initialization
import pioneer3dx as p3dx
p3dx.init()
def forward():
# copy and paste your code here
...
def turn():
# copy and paste your code here
...
print('Pose of the robot at the start')
p3dx.pose()
for _ in range(4):
forward()
turn()
print('Pose of the robot at the end')
p3dx.pose()
%matplotlib inline
import matplotlib.pyplot as plt
x, y = p3dx.trajectory()
plt.plot(x,y)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 3. Program
Step2: The trajectory can also be displayed
|
9,024
|
<ASSISTANT_TASK:>
Python Code:
import IPython.display as IPdisplay
import matplotlib.cm as cm
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pynamical
from pynamical import simulate, bifurcation_plot, save_fig
%matplotlib inline
title_font = pynamical.get_title_font()
label_font = pynamical.get_label_font()
# run the logistic model for 20 generations for 7 growth rates between 0.5 and 3.5 then view the output
pops = simulate(num_gens=20, rate_min=0.5, rate_max=3.5, num_rates=7)
pops.applymap(lambda x: '{:03.3f}'.format(x))
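The recurrence simulated above is simply x_{t+1} = r * x_t * (1 - x_t). A pure-Python sketch of the iteration (independent of pynamical):

```python
def logistic_map(r, x0, num_gens):
    # Iterate x_{t+1} = r * x_t * (1 - x_t) and return the trajectory.
    xs = [x0]
    for _ in range(num_gens):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# For r between 1 and 3 the population settles to the fixed point 1 - 1/r.
print(logistic_map(2.0, 0.5, 5))  # 0.5 is already the fixed point for r = 2
```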
def get_colors(cmap, n, start=0., stop=1., alpha=1., reverse=False):
'''return n-length list of rgba colors from the passed colormap name and alpha,
limit extent by start/stop values and reverse list order if flag is true'''
colors = [cm.get_cmap(cmap)(x) for x in np.linspace(start, stop, n)]
colors = [(r, g, b, alpha) for r, g, b, _ in colors]
return list(reversed(colors)) if reverse else colors
# plot the results of the logistic map run for these 7 different growth rates
#color_list = ['#cc00cc', '#4B0082', '#0066cc', '#33cc00', '#cccc33', '#ff9900', '#ff0000']
color_list = get_colors('viridis', n=len(pops.columns), start=0., stop=1)
for color, rate in reversed(list(zip(color_list, pops.columns))):
ax = pops[rate].plot(kind='line', figsize=[10, 6], linewidth=2.5, alpha=0.95, c=color)
ax.grid(True)
ax.set_ylim([0, 1])
ax.legend(title='Growth Rate', loc=3, bbox_to_anchor=(1, 0.525))
ax.set_title('Logistic Model Results by Growth Rate', fontproperties=title_font)
ax.set_xlabel('Generation', fontproperties=label_font)
ax.set_ylabel('Population', fontproperties=label_font)
save_fig('logistic-map-growth-rates')
plt.show()
# run the model for 100 generations across 1000 growth rate steps from 0 to 4 then plot the bifurcation diagram
pops = simulate(num_gens=100, rate_min=0, rate_max=4, num_rates=1000, num_discard=1)
bifurcation_plot(pops, filename='logistic-map-bifurcation-0')
# plot the bifurcation diagram for 200 generations, but this time throw out the first 100 rows
# 200-100=100, so we still have 100 generations in the plot, just like in the previous cell
# this will show us only the attractors (aka, the values that each growth rate settles on over time)
pops = simulate(num_gens=100, rate_min=0, rate_max=4, num_rates=1000, num_discard=100)
bifurcation_plot(pops, filename='logistic-map-bifurcation-1')
# run the model for 300 generations across 1,000 growth rate steps from 2.8 to 4, and plot the bifurcation diagram
# this plot is a zoomed-in look at the first plot and shows the period-doubling path to chaos
pops = simulate(num_gens=100, rate_min=2.8, rate_max=4, num_rates=1000, num_discard=200, initial_pop=0.1)
bifurcation_plot(pops, xmin=2.8, xmax=4, filename='logistic-map-bifurcation-2')
# run the model for 200 generations across 1,000 growth rate steps from 3.7 to 3.9, and plot the bifurcation diagram
# this plot is a zoomed-in look at the first plot and shows more detail in the chaotic regimes
pops = simulate(num_gens=100, rate_min=3.7, rate_max=3.9, num_rates=1000, num_discard=100)
bifurcation_plot(pops, xmin=3.7, xmax=3.9, filename='logistic-map-bifurcation-3')
# run the model for 500 generations across 1,000 growth rate steps from 3.84 to 3.856, and plot the bifurcation diagram
# throw out the first 300 generations, so we end up with 200 generations in the plot
# this plot is a zoomed-in look at the first plot and shows the same structure we saw at the macro-level
pops = simulate(num_gens=200, rate_min=3.84, rate_max=3.856, num_rates=1000, num_discard=300)
bifurcation_plot(pops, xmin=3.84, xmax=3.856, ymin=0.445, ymax=0.552, filename='logistic-map-bifurcation-4')
# plot the numeric output of the logistic model for growth rates of 3.9 and 3.90001
# this demonstrates sensitive dependence on the parameter
rate1 = 3.9
rate2 = rate1 + 0.00001
pops = simulate(num_gens=40, rate_min=rate1, rate_max=rate2, num_rates=2)
ax = pops.plot(kind='line', figsize=[10, 6], linewidth=3, alpha=0.6, style=['#003399','#cc0000'])
ax.grid(True)
ax.set_title('Logistic Model Results by Growth Rate', fontproperties=title_font)
ax.set_xlabel('Generation', fontproperties=label_font)
ax.set_ylabel('Population', fontproperties=label_font)
ax.legend(title='Growth Rate', loc=3)
save_fig('logistic-map-parameter-sensitivity')
plt.show()
# plot the numeric output of the logistic model at growth rate 3.9 for 2 similar starting population values
# this demonstrates sensitive dependence on initial conditions, as they diverge through chaos
r = 3.9
pops1 = simulate(num_gens=55, rate_min=r, rate_max=4.0, num_rates=1, initial_pop=0.5)
pops2 = simulate(num_gens=55, rate_min=r, rate_max=4.0, num_rates=1, initial_pop=0.50001)
pops = pd.concat([pops1, pops2], axis=1)
pops.columns = ['0.5', '0.50001']
ax = pops.plot(kind='line', figsize=[10, 6], linewidth=3, alpha=0.6, style=['#003399','#cc0000'])
ax.grid(True)
ax.set_title('Logistic Model Results by Initial Conditions, r={}'.format(r), fontproperties=title_font)
ax.set_xlabel('Generation', fontproperties=label_font)
ax.set_ylabel('Population', fontproperties=label_font)
ax.legend(title='Initial Population', loc=3)
save_fig('logistic-map-initial-conditions')
plt.show()
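The divergence plotted above can also be quantified numerically (a pure-Python sketch): in the chaotic regime nearby trajectories separate roughly exponentially, so a 1e-5 gap in initial conditions grows to order one within a few dozen generations.

```python
def logistic_gap(r, x0, dx0, num_gens):
    # Track the gap between two logistic-map trajectories started dx0 apart.
    a, b, gaps = x0, x0 + dx0, []
    for _ in range(num_gens):
        a, b = r * a * (1 - a), r * b * (1 - b)
        gaps.append(abs(a - b))
    return gaps

gaps = logistic_gap(3.9, 0.5, 1e-5, 100)
print(max(gaps))  # the tiny initial gap blows up toward order 0.1-1
```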
# plot the numeric output of the logistic model at growth rate 3.65 for 2 similar starting population values
# this demonstrates how very similar conditions do not diverge when the rate is not chaotic
r = 3.65
pops1 = simulate(num_gens=55, rate_min=r, num_rates=1, initial_pop=0.5)
pops2 = simulate(num_gens=55, rate_min=r, num_rates=1, initial_pop=0.50001)
pops = pd.concat([pops1, pops2], axis=1)
pops.columns = ['0.5', '0.50001']
ax = pops.plot(kind='line', figsize=[10, 6], linewidth=3, alpha=0.6, style=['#003399','#cc0000'])
ax.grid(True)
ax.set_title('Logistic Model Results by Initial Conditions, r={}'.format(r), fontproperties=title_font)
ax.set_xlabel('Generation', fontproperties=label_font)
ax.set_ylabel('Population', fontproperties=label_font)
ax.legend(title='Initial Population', loc=3)
save_fig('logistic-map-initial-conditions-stable')
plt.show()
# here's an example of the phase diagrams that I create in pynamical-demo-phase-diagrams.ipynb
IPdisplay.Image(url='images/3d-logistic-map-attractor-1.png', width=500)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First, let's see the population values the logistic map produces for a range of growth rate parameters
Step2: Now let's visualize the system attractors for a large range of growth rate parameters, using bifurcation diagrams
Step3: In the chaotic regime (r=3.6 to r=4.0), the system has a strange attractor with fractal structure
Step4: Now let's visualize the system's sensitive dependence on initial conditions
Step5: In part 2, I look at phase diagrams that let us visualize our strange attractors and disambiguate chaos from random noise
|
9,025
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib notebook
import os
import numpy as np
import tempfile
import matplotlib.pyplot as pyplot
import logging
logging.basicConfig(level=logging.INFO)
import minimask.mask as mask
import minimask.healpix_projection as hp
import minimask.io.mosaic as mosaic
filename = "masks/mosaic.txt"
try:
os.mkdir(os.path.dirname(filename))
except:
pass
tile = np.array([[[-0.5, -0.5],[0.5, -0.5],[0.5,0.5],[-0.5,0.5]]])*8
grid = hp.HealpixProjector(nside=4)
lon, lat = grid.pix2ang(np.arange(grid.npix))
centers = np.transpose([lon, lat])
mosaic.Mosaic(tile, centers).write(filename)
M = mask.Mask(filename)
print("The number of polygons in the mask is {}.".format(len(M)))
import healpy
healpy.mollview(title="")
for x,y in M.render(1):
healpy.projplot(x,y,lonlat=True)
map = M.pixelize(nside=64, n=10, weight=False)
healpy.mollview(map, title="")
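The `Mosaic` call above essentially stamps the same square tile at each grid center. A rough pure-Python sketch of that idea (illustrative only — it ignores the spherical geometry that minimask handles for real masks, and the centers below are made up):

```python
def translate_tile(tile, center):
    """Shift a polygon (list of (lon, lat) corners) to a new center point."""
    cx, cy = center
    return [(x + cx, y + cy) for x, y in tile]

half = 4.0  # half-width in degrees, matching the 8-degree tile above
tile = [(-half, -half), (half, -half), (half, half), (-half, half)]
centers = [(0.0, 0.0), (45.0, 30.0)]  # hypothetical grid centers
polygons = [translate_tile(tile, c) for c in centers]
```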
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Specify the location of the mask file to write
Step2: Construct a mask using a tile pattern with centers specified by the healpix grid.
Step3: Load the file as a mask object
Step4: Plot the mask on a mollweide projection using healpy.
Step5: Pixelize the mask onto the healpix grid
|
9,026
|
<ASSISTANT_TASK:>
Python Code:
%pylab notebook
imax = 1 # Normalize imax to 1
freq = 50 # [Hz]
w = 2*pi*freq # [rad/s] angluar velocity
t = linspace(0, 1./50, 100) # 100 values for one period
wt = w*t # we are going to use this quite often
# amplitudes (change them to see effect of non-symmetry):
I_amp = [[1.0], [1.0], [1.0]]
# time variants
I_time = array([sin(wt),
sin(wt-2*pi/3),
sin(wt+2*pi/3)])
# vectorial shifts
I_shift = [[cos(0) + 1j*sin(0)],
[cos(2*pi/3) + 1j*sin(2*pi/3)],
[cos(-2*pi/3) + 1j*sin(-2*pi/3)]]
# all combined
I_ph = I_amp * I_time
I = I_ph * I_shift
Itot = I[0] + I[1] + I[2]
# Its amplitude
In_amp = (I_ph[0] + I_ph[1] + I_ph[2])
# Its angle:
In_ang = angle(Itot)
In = In_amp * exp(1j*In_ang) # combine to a complex In
circle = 1.5 * (cos(wt) + 1j*sin(wt))
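As a quick numeric sanity check on the neutral current: with equal amplitudes, three phasors spaced 120° apart cancel exactly, which is why `In` stays at zero in the symmetric case. A stdlib-only sketch:

```python
import cmath
import math

# unit phasors at 0, +120 and -120 degrees
phasors = [cmath.exp(1j * math.radians(deg)) for deg in (0, 120, -120)]
total = sum(phasors)
print(abs(total))  # ~0: the balanced phases cancel
```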
# First set up the figure, the axis, and the plot element we want to animate
from matplotlib import animation
fig = figure()
ax1 = fig.add_subplot(1, 2, 1)
ax2 = fig.add_subplot(1, 2, 2)
ax1.set_title('Space vectors in motion')
ax1.set_xlabel('Real')
ax1.set_ylabel('Imag')
ax1.set_xlim(-1.6, 1.6)
ax1.set_ylim(-1.6, 1.6)
ax1.set_aspect('equal')
ax2.set_title('Sinusoidal three-phase')
ax2.set_xlabel('wt [rad]')
ax2.set_xlim(0, 2*pi)
ax2.set_ylim(-1.6, 1.6)
# set up the different line colours
la, lb, lc, ltot, ctot, ln, cn = ax1.plot([], [], 'red',
[], [], 'green',
[], [], 'blue',
[], [], 'magenta',
[], [], 'magenta',
[], [], 'y',
[], [], 'y',
lw=2)
# set up the moving dots
da, db, dc, dn = ax2.plot([], [], 'ro',
[], [], 'go',
[], [], 'bo',
[], [], 'yo',
lw=2)
tight_layout() # sometimes useful when sub-plots get a bit crowded
# initialization function: plot the background of each frame
def init():
ax1.plot(real(circle), imag(circle), 'black');
ax2.plot(wt, I_ph[0,], 'red',
wt, I_ph[1,], 'green',
wt, I_ph[2,], 'blue',
wt, In_amp, 'y',
lw=1);
return
# animation function. This is called sequentially
def animate(simData):
i = simData - 1 # python index starts at 0
re = [real(I[0,i]), real(I[1,i]), real(I[2,i])]
im = [imag(I[0,i]), imag(I[1,i]), imag(I[2,i])]
ren = real(In[i])
imn = imag(In[i])
la.set_data([0, re[0]],
[0, im[0]])
lb.set_data([0, re[1]],
[0, im[1]])
lc.set_data([0, re[2]],
[0, im[2]])
ltot.set_data([0, real(Itot[i])], [0, imag(Itot[i])])
ctot.set_data(real(Itot[:i+1]),imag(Itot[:i+1]))
ln.set_data([0, real(In[i])], [0, imag(In[i])])
cn.set_data(real(In[:i]), imag(In[:i]))
da.set_data(wt[i], I_ph[0,i])
db.set_data(wt[i], I_ph[1,i])
dc.set_data(wt[i], I_ph[2,i])
dn.set_data(wt[i], In_amp[i])
return la, lb, lc, ltot, da, db, dc
# call the animator:
anim = animation.FuncAnimation(fig, animate, init_func=init, frames=100, interval=50)
from IPython.display import HTML
HTML(anim.to_html5_video())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set up the basic conditions
Step2: First, generate the three component magnetic fields
Step3: Calculate the combined current vector
Step4: Calculate neutral current $I_n$
Step5: Calculate a circle representing the expected maximum value of Itot
Step6: Generating the animation
Step7: The animation above might be a bit "skippy" due to the browser performance trying to cope with the inline animation.
|
9,027
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
df = pd.DataFrame([[1,2,3,1],[0,0,0,0],[1,0,0,1],[0,1,2,0],[1,1,0,1]],columns=['A','B','C','D'])
def g(df):
return df.loc[(df.max(axis=1) != 2), (df.max(axis=0) != 2)]
result = g(df.copy())
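The boolean masks in `g` drop every row and column whose maximum equals 2. A plain-Python equivalent (no pandas) may make the two-axis filter easier to follow; the row labels here are hypothetical:

```python
rows = {'r0': [1, 2, 3, 1], 'r1': [0, 0, 0, 0], 'r2': [1, 0, 0, 1],
        'r3': [0, 1, 2, 0], 'r4': [1, 1, 0, 1]}
cols = ['A', 'B', 'C', 'D']

# keep rows whose max != 2, and columns whose max (over the ORIGINAL rows) != 2
keep_rows = [name for name, vals in rows.items() if max(vals) != 2]
keep_cols = [j for j in range(len(cols))
             if max(rows[name][j] for name in rows) != 2]

filtered = {name: [rows[name][j] for j in keep_cols] for name in keep_rows}
```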
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
9,028
|
<ASSISTANT_TASK:>
Python Code:
import jax.numpy as jnp
import jax
import matplotlib.pyplot as plt
plt.rcParams.update({'font.size': 18})
import ott
def create_points(rng, n, m, d):
rngs = jax.random.split(rng, 4)
x = jax.random.normal(rngs[0], (n,d)) + 1
y = jax.random.uniform(rngs[1], (m,d))
a = jax.random.uniform(rngs[2], (n,))
b = jax.random.uniform(rngs[3], (m,))
a = a / jnp.sum(a)
b = b / jnp.sum(b)
return x, y, a, b
rng = jax.random.PRNGKey(0)
n, m, d = 19, 35, 2
x, y, a, b = create_points(rng, n=n, m=m, d=d)
geom = ott.geometry.pointcloud.PointCloud(x, y, epsilon=0.1)
ot_prob = ott.core.problems.LinearProblem(geom, a, b)
solver = ott.core.sinkhorn.Sinkhorn()
ot_sink = solver(ot_prob)
transp_cost = jnp.sum(ot_sink.matrix * geom.cost_matrix)
plt.imshow(ot_sink.matrix, cmap='Purples')
plt.title('Sinkhorn, Cost: ' + str(transp_cost))
plt.colorbar()
plt.show()
plott = ott.tools.plot.Plot()
_ = plott(ot_sink)
solver = ott.core.sinkhorn_lr.LRSinkhorn(rank=int(min(n,m)/2))
ot_lr = solver(ot_prob)
transp_cost = ot_lr.compute_reg_ot_cost(ot_prob)
plt.imshow(ot_lr.matrix, cmap='Purples')
plt.colorbar()
plt.title('LR, Cost: ' + str(transp_cost))
plt.show()
plott = ott.tools.plot.Plot()
_ = plott(ot_lr)
n, m, d = 10**6, 10**6 + 1, 7  # note: ** for exponent (^ is bitwise XOR in Python)
x, y, a, b = create_points(rng, n=n, m=m, d=d)
geom = ott.geometry.pointcloud.PointCloud(x, y, epsilon=0.1)
ot_prob = ott.core.problems.LinearProblem(geom, a, b)
costs = []
ranks = [1, 5, 10, 15, 20, 35, 50, 100, 500, 1000]
for rank in ranks:
solver = ott.core.sinkhorn_lr.LRSinkhorn(rank=rank)
ot_lr = solver(ot_prob)
costs.append(ot_lr.compute_reg_ot_cost(ot_prob))
plt.plot(ranks, costs)
plt.xscale('log')
plt.xlabel('rank')
plt.ylabel('cost')
plt.title('Transport cost as a function of rank')
plt.show()
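Under the hood, `Sinkhorn` alternates scaling updates until the coupling's marginals match `a` and `b`. A minimal NumPy sketch of those iterations (a naive, non-stabilized version on a tiny made-up problem — OTT's implementation works in log space and is far more robust):

```python
import numpy as np

def sinkhorn_plan(a, b, cost, epsilon=0.1, n_iter=500):
    """Return the entropy-regularized transport plan diag(u) @ K @ diag(v)."""
    K = np.exp(-cost / epsilon)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)   # match column marginals
        u = a / (K @ v)     # match row marginals
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(0)
x, y = rng.normal(size=(5, 2)), rng.normal(size=(7, 2))
cost = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
a, b = np.full(5, 1 / 5), np.full(7, 1 / 7)
plan = sinkhorn_plan(a, b, cost)
```

After convergence the row sums of `plan` equal `a` and the column sums equal `b` (up to tolerance).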
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create an OT problem comparing two point clouds
Step2: Solve it with Sinkhorn and plot plan/map
Step3: Experimentations with the Low-Rank approach
Step4: Play with larger scales
Step5: We compute plans satisfy a rank constraint $r$, for various values of $r$,
Step6: As expected, the optimal cost decreases with rank, as shown in the plot below. Recall that, because of the non-convexity of the original problem, there may be small bumps along the way.
|
9,029
|
<ASSISTANT_TASK:>
Python Code:
# Here's a string representing a three-line SAM file. I'm temporarily
# ignoring the fact that SAM files usually have several header lines at
# the beginning.
samStr = '''\
r1 0 gi|9626243|ref|NC_001416.1| 18401 42 122M * 0 0 TGAATGCGAACTCCGGGACGCTCAGTAATGTGACGATAGCTGAAAACTGTACGATAAACNGTACGCTGAGGGCAGAAAAAATCGTCGGGGACATTNTAAAGGCGGCGAGCGCGGCTTTTCCG +"@6<:27(F&5)9)"B:%B+A-%5A?2$HCB0B+0=D<7E/<.03#!.F77@6B==?C"7>;))%;,3-$.A06+<-1/@@?,26">=?*@'0;$:;??G+:#+(A?9+10!8!?()?7C> AS:i:-5 XN:i:0 XM:i:3 XO:i:0 XG:i:0 NM:i:3 MD:Z:59G13G21G26 YT:Z:UU
r2 0 gi|9626243|ref|NC_001416.1| 8886 42 275M * 0 0 NTTNTGATGCGGGCTTGTGGAGTTCAGCCGATCTGACTTATGTCATTACCTATGAAATGTGAGGACGCTATGCCTGTACCAAATCCTACAATGCCGGTGAAAGGTGCCGGGATCACCCTGTGGGTTTATAAGGGGATCGGTGACCCCTACGCGAATCCGCTTTCAGACGTTGACTGGTCGCGTCTGGCAAAAGTTAAAGACCTGACGCCCGGCGAACTGACCGCTGAGNCCTATGACGACAGCTATCTCGATGATGAAGATGCAGACTGGACTGC (#!!'+!$""%+(+)'%)%!+!(&++)''"#"#&#"!'!("%'""("+&%$%*%%#$%#%#!)*'(#")(($&$'&%+&#%*)*#*%*')(%+!%%*"$%"#+)$&&+)&)*+!"*)!*!("&&"*#+"&"'(%)*("'!$*!!%$&&&$!!&&"(*"$&"#&!$%'%"#)$#+%*+)!&*)+(""#!)!%*#"*)*')&")($+*%%)!*)!('(%""+%"$##"#+(('!*(($*'!"*('"+)&%#&$+('**$$&+*&!#%)')'(+(!%+ AS:i:-14 XN:i:0 XM:i:8 XO:i:0 XG:i:0 NM:i:8 MD:Z:0A0C0G0A108C23G9T81T46 YT:Z:UU
r3 16 gi|9626243|ref|NC_001416.1| 11599 42 338M * 0 0 GGGCGCGTTACTGGGATGATCGTGAAAAGGCCCGTCTTGCGCTTGAAGCCGCCCGAAAGAAGGCTGAGCAGCAGACTCAAGAGGAGAAAAATGCGCAGCAGCGGAGCGATACCGAAGCGTCACGGCTGAAATATACCGAAGAGGCGCAGAAGGCTNACGAACGGCTGCAGACGCCGCTGCAGAAATATACCGCCCGTCAGGAAGAACTGANCAAGGCACNGAAAGACGGGAAAATCCTGCAGGCGGATTACAACACGCTGATGGCGGCGGCGAAAAAGGATTATGAAGCGACGCTGTAAAAGCCGAAACAGTCCAGCGTGAAGGTGTCTGCGGGCGAT 7F$%6=$:9B@/F'>=?!D?@0(:A*)7/>9C>6#1<6:C(.CC;#.;>;2'$4D:?&B!>689?(0(G7+0=@37F)GG=>?958.D2E04C<E,*AD%G0.%$+A:'H;?8<72:88?E6((CF)6DF#.)=>B>D-="C'B080E'5BH"77':"@70#4%A5=6.2/1>;9"&-H6)=$/0;5E:<8G!@::1?2DC7C*;@*#.1C0.D>H/20,!"C-#,6@%<+<D(AG-).?�.00'@)/F8?B!&"170,)>:?<A7#1(A@0E#&A.*DC.E")AH"+.,5,2>5"2?:G,F"D0B8D-6$65D<D!A/38860.*4;4B<*31?6 AS:i:-22 XN:i:0 XM:i:8 XO:i:0 XG:i:0 NM:i:8 MD:Z:80C4C16A52T23G30A8T76A41 YT:Z:UU'''
# I'll read this string in line-by-line as though it were a file.
# I'll (lightly) parse the alignment records as I go.
import string
from io import StringIO # reading from string rather than file
for ln in StringIO(samStr):
qname, flag, rname, pos, mapq, cigar, rnext, \
pnext, tlen, seq, qual, extras = str.split(ln, '\t', 11)
print(qname, len(seq)) # print read name, length of read sequence
def cigarToList(cigar):
''' Parse CIGAR string into a list of CIGAR operations. For more
info on CIGAR operations, see SAM spec:
http://samtools.sourceforge.net/SAMv1.pdf '''
ret, i = [], 0
op_map = {'M':0, # match or mismatch
'=':0, # match
'X':0, # mismatch
'I':1, # insertion in read w/r/t reference
'D':2, # deletion in read w/r/t reference
'N':3, # long gap due e.g. to splice junction
'S':4, # soft clipping due e.g. to local alignment
'H':5, # hard clipping
'P':6} # padding
# Seems like = and X together are strictly more expressive than M.
# Why not just have = and X and get rid of M? Space efficiency,
# mainly. The titans discuss: http://www.biostars.org/p/17043/
while i < len(cigar):
run = 0
while i < len(cigar) and cigar[i].isdigit():
# parse one more digit of run length
run *= 10
run += int(cigar[i])
i += 1
assert i < len(cigar)
# parse cigar operation
op = cigar[i]
i += 1
assert op in op_map
# append to result
ret.append([op_map[op], run])
return ret
cigarToList('10=1X10=')
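The hand-rolled digit loop above can also be expressed with a regular expression, since a CIGAR string is just a sequence of (run-length, op-code) pairs. A compact stdlib sketch:

```python
import re

def cigar_pairs(cigar):
    """Split a CIGAR string into (run_length, op) tuples."""
    return [(int(run), op) for run, op in re.findall(r'(\d+)([MIDNSHP=X])', cigar)]

print(cigar_pairs('10=1X10='))  # [(10, '='), (1, 'X'), (10, '=')]
```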
def mdzToList(md):
''' Parse MD:Z string into a list of operations, where 0=match,
1=read gap, 2=mismatch. '''
i = 0;
ret = [] # list of (op, run, str) tuples
while i < len(md):
if md[i].isdigit(): # stretch of matches
run = 0
while i < len(md) and md[i].isdigit():
run *= 10
run += int(md[i])
i += 1 # skip over digit
if run > 0:
ret.append([0, run, ""])
elif md[i].isalpha(): # stretch of mismatches
mmstr = ""
while i < len(md) and md[i].isalpha():
mmstr += md[i]
i += 1
assert len(mmstr) > 0
ret.append([1, len(mmstr), mmstr])
elif md[i] == "^": # read gap
i += 1 # skip over ^
refstr = ""
while i < len(md) and md[i].isalpha():
refstr += md[i]
i += 1 # skip over inserted character
assert len(refstr) > 0
ret.append([2, len(refstr), refstr])
else:
raise RuntimeError('Unexpected character in MD:Z: "%s"' % md[i])
return ret
# Each element in the list returned by this call is itself a list w/ 3
# elements. Element 1 is the MD:Z operation (0=match, 1=mismatch,
# 2=deletion). Element 2 is the length and element 3 is the relevant
# sequence of nucleotides from the reference.
mdzToList('10A5^AC6')
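An MD:Z value can likewise be tokenized with one regex — runs of digits (match lengths), a `^` followed by letters (bases deleted from the read), or bare letters (mismatched reference bases):

```python
import re

def mdz_tokens(md):
    """Tokenize an MD:Z value into match-run / deletion / mismatch pieces."""
    return re.findall(r'\d+|\^[A-Z]+|[A-Z]+', md)

print(mdz_tokens('10A5^AC6'))  # ['10', 'A', '5', '^AC', '6']
```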
def cigarMdzToStacked(seq, cgp, mdp_orig):
''' Takes parsed CIGAR and parsed MD:Z, generates a stacked alignment:
a pair of strings with gap characters inserted (possibly) and where
characters at at the same offsets are opposite each other in the
alignment. Only knows how to handle CIGAR ops M=XDINSH right now.
'''
mdp = mdp_orig[:]
rds, rfs = [], []
mdo, rdoff = 0, 0
for c in cgp:
op, run = c
skipping = (op == 4 or op == 5)
assert skipping or mdo < len(mdp)
if op == 0: # CIGAR op M, = or X
# Look for block matches and mismatches in MD:Z string
mdrun = 0
runleft = run
while runleft > 0 and mdo < len(mdp):
op_m, run_m, st_m = mdp[mdo]
run_comb = min(runleft, run_m)
runleft -= run_comb
assert op_m == 0 or op_m == 1
rds.append(seq[rdoff:rdoff + run_comb])
if op_m == 0: # match from MD:Z string
rfs.append(seq[rdoff:rdoff + run_comb])
else: # mismatch from MD:Z string
assert len(st_m) == run_comb
rfs.append(st_m)
mdrun += run_comb
rdoff += run_comb
# Stretch of matches in MD:Z could span M and I CIGAR ops
if run_comb < run_m:
assert op_m == 0
mdp[mdo][1] -= run_comb
else:
mdo += 1
elif op == 1: # CIGAR op I
rds.append(seq[rdoff:rdoff + run])
rfs.append("-" * run)
rdoff += run
elif op == 2: # D
op_m, run_m, st_m = mdp[mdo]
assert op_m == 2
assert run == run_m
assert len(st_m) == run
mdo += 1
rds.append("-" * run)
rfs.append(st_m)
elif op == 3: # N
rds.append("-" * run)
rfs.append("-" * run)
elif op == 4: # S
rds.append(seq[rdoff:rdoff + run].lower())
rfs.append(' ' * run)
rdoff += run
elif op == 5: # H
rds.append('!' * run)
rfs.append(' ' * run)
elif op == 6: # P
raise RuntimeError("Don't know how to handle P in CIGAR")
else:
raise RuntimeError('Unexpected CIGAR op: %d' % op)
assert mdo == len(mdp)
return ''.join(rds), ''.join(rfs)
# Following example includes gaps and mismatches
cigarMdzToStacked('GGACGCTCAGTAGTGACGATAGCTGAAAACCCTGTACGATAAACC', cigarToList('12M2D17M2I14M'), mdzToList('12^AT30G0'))
# Following example also includes soft clipping (CIGAR: S)
# SAM spec: Soft clipping: "clipped sequences present in SEQ"
# We print them in lowercase to emphasize their clippedness
cigarMdzToStacked('GGACGCTCAGTAGTGACGATAGCTGAAAACCCTGTACGAGAAGCC', cigarToList('12M2D17M2I8M6S'), mdzToList('12^AT25'))
# Following example also includes hard clipping (CIGAR: H)
# SAM spec: Hard clipping: "clipped sequences NOT present in SEQ"
cigarMdzToStacked('GGACGCTCAGTAGTGACGATAGCTGAAAACCCTGTACGAGAAGCC', cigarToList('12M2D17M2I8M6S3H'), mdzToList('12^AT25'))
# Note: don't see hard clipping in practice much
# Following example also includes skipping (CIGAR: N), as seen in
# TopHat alignments
cigarMdzToStacked('GGACGCTCAGTAGTGACGATAGCTGAAAACCCTGTACGAGAAGCC',
cigarToList('12M2D10M10N7M2I8M6S3H'),
mdzToList('12^AT25'))
def cigarize(rds, rfs):
off = 0
oplist = []
lastc, cnt = '', 0
for i in range(len(rds)):
c = None
if rfs[i] == ' ':
c = 'S'
elif rds[i] == '-' and rfs[i] == '-':
c = 'N'
elif rds[i] == '-':
c = 'D'
elif rfs[i] == '-':
c = 'I'
elif rds[i] != rfs[i]:
c = 'X'
else:
c = '='
if c == lastc:
cnt += 1
else:
if len(lastc) > 0:
oplist.append((lastc, cnt))
lastc, cnt = c, 1
if len(lastc) > 0:
oplist.append((lastc, cnt))
return ''.join(map(lambda x: str(x[1]) + x[0], oplist))
x, y = cigarMdzToStacked('ACGTACGT', cigarToList('8M'), mdzToList('4G3'))
cigarize(x, y)
x, y = cigarMdzToStacked('GGACGCTCAGTAGTGACGATAGCTGAAAACCCTGTACGAGAAGCC',
cigarToList('12M2D10M10N7M2I8M6S3H'),
mdzToList('12^AT25'))
cigarize(x, y)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: SAM fields
Step2: Next we construct a function to parse the MD
Step3: Now we can write a fucntion that takes a read sequennce, a parsed CIGAR string, and a parse MD
Step4: From the stacked alignment, it's easy to do other things. E.g. we can turn a stacked alignment into a new CIGAR string that uses the = and X operations instead of the less specific M operation
|
9,030
|
<ASSISTANT_TASK:>
Python Code:
import argparse
import logging
import joblib
import sys
import pandas as pd
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.impute import SimpleImputer
from xgboost import XGBClassifier
logging.basicConfig(format='%(message)s')
logging.getLogger().setLevel(logging.INFO)
import os
import fairing
# Setting up google container repositories (GCR) for storing output containers
# You can use any docker container registry istead of GCR
# For local notebook, GCP_PROJECT should be set explicitly
GCP_PROJECT = fairing.cloud.gcp.guess_project_name()
GCP_Bucket = os.environ['GCP_BUCKET'] # e.g., 'gs://kubeflow-demo-g/'
# This is for local notebook instead of that in kubeflow cluster
# os.environ['GOOGLE_APPLICATION_CREDENTIALS']=
def gcs_copy(src_path, dst_path):
import subprocess
print(subprocess.run(['gsutil', 'cp', src_path, dst_path], stdout=subprocess.PIPE).stdout[:-1].decode('utf-8'))
def gcs_download(src_path, file_name):
import subprocess
print(subprocess.run(['gsutil', 'cp', src_path, file_name], stdout=subprocess.PIPE).stdout[:-1].decode('utf-8'))
def read_input(source_path, test_size=0.25):
"""Read input data and split it into train and test."""
file_name = source_path.split('/')[-1]
gcs_download(source_path, file_name)
data = pd.read_csv(file_name)
data.dropna(axis=0, inplace=True)
y = data.Class
X = data.drop(['Class', 'Amount', 'Time'], axis=1).select_dtypes(exclude=['object'])
train_X, test_X, train_y, test_y = train_test_split(X.values,
y.values,
test_size=test_size,
shuffle=True)
imputer = SimpleImputer()
train_X = imputer.fit_transform(train_X)
test_X = imputer.transform(test_X)
return (train_X, train_y), (test_X, test_y)
def train_model(train_X,
train_y,
test_X,
test_y,
n_estimators,
learning_rate):
"""Train the model using XGBClassifier."""
model = XGBClassifier(n_estimators=n_estimators, learning_rate=learning_rate)
model.fit(train_X,
train_y,
early_stopping_rounds=40,
eval_set=[(test_X, test_y)])
print("Best loss on eval: %.2f with %d rounds",
model.best_score,
model.best_iteration+1)
return model
def eval_model(model, test_X, test_y):
"""Evaluate the model performance."""
predictions = model.predict_proba(test_X)
logging.info("auc=%.2f", roc_auc_score(test_y, predictions[:,1]))
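The `roc_auc_score` used here equals the probability that a randomly chosen positive example outscores a randomly chosen negative one. A tiny pure-Python sketch of that rank interpretation (O(n²), for illustration only):

```python
def auc_score(labels, scores):
    """Fraction of (positive, negative) pairs ranked correctly; ties count half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auc_score([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```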
def save_model(model, model_file):
Save XGBoost model for serving.
joblib.dump(model, model_file)
gcs_copy(model_file, GCP_Bucket + model_file)
logging.info("Model export success: %s", model_file)
class FraudServe(object):
def __init__(self):
self.train_input = GCP_Bucket + "train_fraud.csv"
self.n_estimators = 50
self.learning_rate = 0.1
self.model_file = "trained_fraud_model.joblib"
self.model = None
def train(self):
(train_X, train_y), (test_X, test_y) = read_input(self.train_input)
model = train_model(train_X,
train_y,
test_X,
test_y,
self.n_estimators,
self.learning_rate)
eval_model(model, test_X, test_y)
save_model(model, self.model_file)
def predict(self, X, feature_names):
"""Predict using the model for given ndarray."""
if not self.model:
self.model = joblib.load(self.model_file)
# Do any preprocessing
prediction = self.model.predict(data=X)
# Do any postprocessing
return [[prediction.item(0), prediction.item(0)]]
FraudServe().train()
# In this demo, I use gsutil, therefore i compile a special image to install GoogleCloudSDK as based image
base_image = 'gcr.io/{}/fairing-predict-example:latest'.format(GCP_PROJECT)
!docker build --build-arg PY_VERSION=3.6.4 . -t {base_image}
!docker push {base_image}
DOCKER_REGISTRY = 'gcr.io/{}/fairing-job-xgboost'.format(GCP_PROJECT)
BASE_IMAGE = base_image
from fairing import TrainJob
from fairing.backends import GKEBackend
train_job = TrainJob(FraudServe, BASE_IMAGE, input_files=["requirements.txt"],
docker_registry=DOCKER_REGISTRY, backend=GKEBackend())
train_job.submit()
from fairing import TrainJob
from fairing.backends import GCPManagedBackend
train_job = TrainJob(FraudServe, BASE_IMAGE, input_files=["requirements.txt"],
docker_registry=DOCKER_REGISTRY, backend=GCPManagedBackend())
train_job.submit()
from fairing import PredictionEndpoint
from fairing.backends import KubeflowGKEBackend
# The trained_fraud_model.joblib is exported during the above local training
endpoint = PredictionEndpoint(FraudServe, BASE_IMAGE, input_files=['trained_fraud_model.joblib', "requirements.txt"],
docker_registry=DOCKER_REGISTRY, backend=KubeflowGKEBackend())
endpoint.create()
# Deploy model to gcp
# from fairing.deployers.gcp.gcpserving import GCPServingDeployer
# deployer = GCPServingDeployer()
# deployer.deploy(VERSION_DIR, MODEL_NAME, VERSION_NAME)
(train_X, train_y), (test_X, test_y) = read_input(GCP_Bucket + "train_fraud.csv")
endpoint.predict_nparray(test_X)
endpoint.delete()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Define the model logic
Step6: Define functions to train, evaluate, and save the trained model.
Step8: Define a class for your model, with methods for training and prediction.
Step9: Train an XGBoost model in a notebook
Step10: Make Use of Fairing
Step11: Train an XGBoost model remotely on Kubeflow
Step12: Train an XGBoost model remotely on Cloud ML Engine
Step13: Deploy the trained model to Kubeflow for predictions
Step14: Deploy to GCP
Step15: Call the prediction endpoint
Step16: Clean up the prediction endpoint
|
9,031
|
<ASSISTANT_TASK:>
Python Code:
import torch
import numpy as np
from torchvision import datasets
import torchvision.transforms as transforms
# convert data to torch.FloatTensor
transform = transforms.ToTensor()
# load the training and test datasets
train_data = datasets.MNIST(root='data', train=True,
download=True, transform=transform)
test_data = datasets.MNIST(root='data', train=False,
download=True, transform=transform)
# Create training and test dataloaders
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# prepare data loaders
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size, num_workers=num_workers)
import matplotlib.pyplot as plt
%matplotlib inline
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy()
# get one image from the batch
img = np.squeeze(images[0])
fig = plt.figure(figsize = (5,5))
ax = fig.add_subplot(111)
ax.imshow(img, cmap='gray')
import torch.nn as nn
import torch.nn.functional as F
# define the NN architecture
class Autoencoder(nn.Module):
def __init__(self, encoding_dim):
super(Autoencoder, self).__init__()
## encoder ##
# linear layer (784 -> encoding_dim)
self.fc1 = nn.Linear(28 * 28, encoding_dim)
## decoder ##
# linear layer (encoding_dim -> input size)
self.fc2 = nn.Linear(encoding_dim, 28*28)
def forward(self, x):
# add layer, with relu activation function
x = F.relu(self.fc1(x))
# output layer (sigmoid for scaling from 0 to 1)
x = F.sigmoid(self.fc2(x))
return x
# initialize the NN
encoding_dim = 32
model = Autoencoder(encoding_dim)
print(model)
# specify loss function
criterion = nn.MSELoss()
# specify loss function
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
# number of epochs to train the model
n_epochs = 20
for epoch in range(1, n_epochs+1):
# monitor training loss
train_loss = 0.0
###################
# train the model #
###################
for data in train_loader:
# _ stands in for labels, here
images, _ = data
# flatten images
images = images.view(images.size(0), -1)
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
outputs = model(images)
# calculate the loss
loss = criterion(outputs, images)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update running training loss
train_loss += loss.item()*images.size(0)
# print avg training statistics
train_loss = train_loss/len(train_loader.dataset)
print('Epoch: {} \tTraining Loss: {:.6f}'.format(
epoch,
train_loss
))
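The forward pass of the model above is just two dense layers. A NumPy sketch with hypothetical random weights, showing why the sigmoid output lands in the same [0, 1] range as the normalized pixel inputs:

```python
import numpy as np

rng = np.random.default_rng(0)
w_enc = rng.normal(scale=0.1, size=(784, 32))   # hypothetical encoder weights
w_dec = rng.normal(scale=0.1, size=(32, 784))   # hypothetical decoder weights

def forward(batch):
    """ReLU encoding to 32 dims, sigmoid decoding back to 784 pixels."""
    hidden = np.maximum(0.0, batch @ w_enc)
    return 1.0 / (1.0 + np.exp(-(hidden @ w_dec)))

batch = rng.random((20, 784))   # stand-in for a flattened MNIST batch
recon = forward(batch)
```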
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = dataiter.next()
images_flatten = images.view(images.size(0), -1)
# get sample outputs
output = model(images_flatten)
# prep images for display
images = images.numpy()
# output is resized into a batch of images
output = output.view(batch_size, 1, 28, 28)
# use detach when it's an output that requires_grad
output = output.detach().numpy()
# plot the first ten input images and then reconstructed images
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(25,4))
# input images on top row, reconstructions on bottom
for images, row in zip([images, output], axes):
for img, ax in zip(images, row):
ax.imshow(np.squeeze(img), cmap='gray')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Visualize the Data
Step2: Linear Autoencoder
Step3: Training
Step4: Checking out the results
|
9,032
|
<ASSISTANT_TASK:>
Python Code:
#Check that you are using the correct version of Python (should be 3.4+, otherwise gdal won't work)
import sys
sys.version
import numpy as np
import h5py
import gdal, osr
import matplotlib.pyplot as plt
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
f = h5py.File('/Users/olearyd/Git/data/NEON_D02_SERC_DP1_20160807_160559_reflectance.h5','r')
#list_dataset lists the names of datasets in an hdf5 file
def list_dataset(name,node):
if isinstance(node, h5py.Dataset):
print(name)
f.visititems(list_dataset)
#ls_dataset displays the name, shape, and type of datasets in hdf5 file
def ls_dataset(name,node):
if isinstance(node, h5py.Dataset):
print(node)
f.visititems(ls_dataset)
serc_refl = f['SERC']['Reflectance']
print(serc_refl)
serc_reflArray = serc_refl['Reflectance_Data']
print(serc_reflArray)
refl_shape = serc_reflArray.shape
print('SERC Reflectance Data Dimensions:',refl_shape)
#View wavelength information and values
wavelengths = serc_refl['Metadata']['Spectral_Data']['Wavelength']
print(wavelengths)
# print(wavelengths.value)
# Display min & max wavelengths
print('min wavelength:', np.amin(wavelengths),'nm')
print('max wavelength:', np.amax(wavelengths),'nm')
#show the band width
print('band width =',(wavelengths.value[1]-wavelengths.value[0]),'nm')
print('band width =',(wavelengths.value[-1]-wavelengths.value[-2]),'nm')
serc_mapInfo = serc_refl['Metadata']['Coordinate_System']['Map_Info']
print('SERC Map Info:\n',serc_mapInfo.value)
#First convert mapInfo to a string, and divide into separate strings using a comma separator
mapInfo_string = str(serc_mapInfo.value) #convert to string
mapInfo_split = mapInfo_string.split(",") #split the strings using the separator ","
print(mapInfo_split)
#Extract the resolution & convert to floating decimal number
res = float(mapInfo_split[5]),float(mapInfo_split[6])
print('Resolution:',res)
#Extract the upper left-hand corner coordinates from mapInfo
xMin = float(mapInfo_split[3])
yMax = float(mapInfo_split[4])
#Calculate the xMax and yMin values from the dimensions
#xMax = left corner + (# of columns * resolution)
xMax = xMin + (refl_shape[1]*res[0])
yMin = yMax - (refl_shape[0]*res[1])
# print('xMin:',xMin) ; print('xMax:',xMax)
# print('yMin:',yMin) ; print('yMax:',yMax)
serc_ext = (xMin, xMax, yMin, yMax)
print('serc_ext:',serc_ext)
#Can also create a dictionary of extent:
serc_extDict = {}
serc_extDict['xMin'] = xMin
serc_extDict['xMax'] = xMax
serc_extDict['yMin'] = yMin
serc_extDict['yMax'] = yMax
print('serc_extDict:',serc_extDict)
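The extent arithmetic above generalizes to any map-info string of this layout. A compact sketch with made-up numbers (the coordinates and raster shape below are hypothetical, not SERC's actual values):

```python
map_info = 'UTM, 1.000, 1.000, 367167.0, 4310980.0, 1.0, 1.0, 18, N, WGS-84'
parts = [p.strip() for p in map_info.split(',')]

x_min, y_max = float(parts[3]), float(parts[4])   # upper-left corner
x_res, y_res = float(parts[5]), float(parts[6])   # pixel size in map units
n_rows, n_cols = 10000, 1000                      # hypothetical array shape

x_max = x_min + n_cols * x_res
y_min = y_max - n_rows * y_res
extent = (x_min, x_max, y_min, y_max)
```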
print('b56 wavelngth:',wavelengths[56],"nanometers")
b56 = serc_reflArray[:,:,55].astype(np.float)
print('b56 type:',type(b56))
print('b56 shape:',b56.shape)
print('Band 56 Reflectance:\n',b56)
# plt.hist(b56.flatten())
#View and apply scale factor and data ignore value
scaleFactor = serc_reflArray.attrs['Scale_Factor']
noDataValue = serc_reflArray.attrs['Data_Ignore_Value']
print('Scale Factor:',scaleFactor)
print('Data Ignore Value:',noDataValue)
b56[b56==int(noDataValue)]=np.nan
b56 = b56/scaleFactor
print('Cleaned Band 56 Reflectance:\n',b56)
plt.hist(b56[~np.isnan(b56)],50);
plt.title('Histogram of SERC Band 56 Reflectance')
plt.xlabel('Reflectance'); plt.ylabel('Frequency')
serc_fig = plt.figure(figsize=(20,10))
ax1 = serc_fig.add_subplot(1,2,1)
# serc_plot = ax1.imshow(b56,extent=serc_ext,cmap='jet',clim=(0,0.1))
serc_plot = ax1.imshow(b56,extent=serc_ext,cmap='jet')
cbar = plt.colorbar(serc_plot,aspect=50); cbar.set_label('Reflectance')
plt.title('SERC Band 56 Reflectance'); #ax = plt.gca();
ax1.ticklabel_format(useOffset=False, style='plain') #do not use scientific notation #
rotatexlabels = plt.setp(ax1.get_xticklabels(),rotation=270) #rotate x tick labels 90 degree
# plot histogram of reflectance values (with 50 bins)
ax2 = serc_fig.add_subplot(2,2,2)
ax2.hist(b56[~np.isnan(b56)],50);
plt.title('Histogram of SERC Reflectance')
plt.xlabel('Reflectance'); plt.ylabel('Frequency')
# plot histogram, zooming in on values < 0.5
ax3 = serc_fig.add_subplot(2,2,4)
ax3.hist(b56[~np.isnan(b56)],50);
plt.title('Histogram of SERC Reflectance, 0-0.5')
plt.xlabel('Reflectance'); plt.ylabel('Frequency')
ax3.set_xlim([0,0.5])
# Plot in grayscale with different color limits
# Higher reflectance is lighter/brighter, lower reflectance is darker
serc_fig2 = plt.figure(figsize=(15,15))
ax1 = serc_fig2.add_subplot(1,3,1)
serc_plot = ax1.imshow(b56,extent=serc_ext,cmap='gray',clim=(0,0.3))
cbar = plt.colorbar(serc_plot,aspect=50); cbar.set_label('Reflectance')
plt.title('clim = 0-0.3'); #ax = plt.gca();
ax1.ticklabel_format(useOffset=False, style='plain') #do not use scientific notation #
rotatexlabels = plt.setp(ax1.get_xticklabels(),rotation=270) #rotate x tick labels 90 degree
ax2 = serc_fig2.add_subplot(1,3,2)
serc_plot = ax2.imshow(b56,extent=serc_ext,cmap='gray',clim=(0,0.2))
cbar = plt.colorbar(serc_plot,aspect=50); cbar.set_label('Reflectance')
plt.title('clim = 0-0.2'); #ax = plt.gca();
ax1.ticklabel_format(useOffset=False, style='plain') #do not use scientific notation #
rotatexlabels = plt.setp(ax2.get_xticklabels(),rotation=270) #rotate x tick labels 90 degree
ax3 = serc_fig2.add_subplot(1,3,3)
serc_plot = ax3.imshow(b56,extent=serc_ext,cmap='gray',clim=(0,0.1))
cbar = plt.colorbar(serc_plot,aspect=50); cbar.set_label('Reflectance')
plt.title('clim = 0-0.1'); #ax = plt.gca();
ax1.ticklabel_format(useOffset=False, style='plain') #do not use scientific notation #
rotatexlabels = plt.setp(ax3.get_xticklabels(),rotation=270) #rotate x tick labels 90 degree
def calc_clip_index(clipExtent, fullExtent, xscale=1, yscale=1):
h5rows = fullExtent['yMax'] - fullExtent['yMin']
h5cols = fullExtent['xMax'] - fullExtent['xMin']
indExtent = {}
indExtent['xMin'] = round((clipExtent['xMin']-fullExtent['xMin'])/xscale)
indExtent['xMax'] = round((clipExtent['xMax']-fullExtent['xMin'])/xscale)
indExtent['yMax'] = round(h5rows - (clipExtent['yMin']-fullExtent['yMin'])/yscale)
indExtent['yMin'] = round(h5rows - (clipExtent['yMax']-fullExtent['yMin'])/yscale)
return indExtent
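Because row 0 of the raster sits at the maximum northing, the y indices are counted down from the top. A tiny self-contained check of that coordinate-to-index arithmetic, using invented extents at 1 m resolution (not the real SERC coordinates), makes the inversion concrete:

```python
# Made-up extents for illustration only: a 10 m x 8 m scene at 1 m resolution.
full = {'xMin': 0, 'xMax': 10, 'yMin': 0, 'yMax': 8}
clip = {'xMin': 2, 'xMax': 5, 'yMin': 1, 'yMax': 6}
rows = full['yMax'] - full['yMin']
ind = {
    'xMin': round(clip['xMin'] - full['xMin']),
    'xMax': round(clip['xMax'] - full['xMin']),
    'yMin': round(rows - (clip['yMax'] - full['yMin'])),  # row 0 = top (max northing)
    'yMax': round(rows - (clip['yMin'] - full['yMin'])),
}
print(ind)
```

Note that the larger northing (`clip['yMax']`) maps to the *smaller* row index.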
#Define clip extent
clipExtent = {}
clipExtent['xMin'] = 367400
clipExtent['xMax'] = 368100
clipExtent['yMin'] = 4305750
clipExtent['yMax'] = 4306350
serc_subInd = calc_clip_index(clipExtent,serc_extDict)
print('SERC Subset Index:',serc_subInd)
serc_subArray = serc_reflArray[serc_subInd['yMin']:serc_subInd['yMax'],serc_subInd['xMin']:serc_subInd['xMax'],:]
serc_subExt = (clipExtent['xMin'],clipExtent['xMax'],clipExtent['yMin'],clipExtent['yMax'])
print('SERC Reflectance Subset Dimensions:',serc_subArray.shape)
serc_b56_subset = serc_subArray[:,:,55].astype(float) #np.float was removed in NumPy 1.24
serc_b56_subset[serc_b56_subset==int(noDataValue)]=np.nan
serc_b56_subset = serc_b56_subset/scaleFactor
#print(serc_b56_subset)
print('SERC Subsetted Band 56 Reflectance Stats:')
print('min reflectance:',np.nanmin(serc_b56_subset))
print('mean reflectance:',round(np.nanmean(serc_b56_subset),2))
print('max reflectance:',round(np.nanmax(serc_b56_subset),2))
fig = plt.figure(figsize=(15,5))
ax1 = fig.add_subplot(1,2,1)
serc_subset_plot = plt.imshow(serc_b56_subset,extent=serc_subExt,cmap='gist_earth')
cbar = plt.colorbar(serc_subset_plot); cbar.set_label('Reflectance')
plt.title('SERC Subset Band 56 Reflectance');
ax1.ticklabel_format(useOffset=False, style='plain') #do not use scientific notation #
rotatexlabels = plt.setp(ax1.get_xticklabels(),rotation=90) #rotate x tick labels 90 degree
ax2 = fig.add_subplot(1,2,2)
plt.hist(serc_b56_subset[~np.isnan(serc_b56_subset)],50);
plt.title('Histogram of SERC Subset Band 56 Reflectance')
plt.xlabel('Reflectance'); plt.ylabel('Frequency')
from skimage import exposure
# Contrast stretching
p2, p98 = np.percentile(serc_b56_subset[~np.isnan(serc_b56_subset)], (2, 98))
img_rescale2pct = exposure.rescale_intensity(serc_b56_subset, in_range=(p2, p98))
fig = plt.figure(figsize=(15,5))
ax1 = fig.add_subplot(1,2,1)
plt.imshow(img_rescale2pct,extent=serc_subExt,cmap='gist_earth')
cbar = plt.colorbar(); cbar.set_label('Reflectance')
plt.title('SERC Band 56 Subset \n Linear 2% Contrast Stretch');
rotatexlabels = plt.setp(ax1.get_xticklabels(),rotation=90) #rotate x tick labels 90 degree
p8, p92 = np.percentile(serc_b56_subset[~np.isnan(serc_b56_subset)], (8, 92))
img_rescale8pct = exposure.rescale_intensity(serc_b56_subset, in_range=(p8, p92))
ax2 = fig.add_subplot(1,2,2)
plt.imshow(img_rescale8pct,extent=serc_subExt,cmap='gist_earth')
cbar = plt.colorbar(); cbar.set_label('Reflectance')
plt.title('SERC Band 56 Subset \n Linear 8% Contrast Stretch');
rotatexlabels = plt.setp(ax2.get_xticklabels(),rotation=90) #rotate x tick labels 90 degree
from ipywidgets import interact #IPython.html.widgets was deprecated in favor of ipywidgets
def linearStretch(percent):
pLow, pHigh = np.percentile(serc_b56_subset[~np.isnan(serc_b56_subset)], (percent,100-percent))
img_rescale = exposure.rescale_intensity(serc_b56_subset, in_range=(pLow,pHigh))
plt.imshow(img_rescale,extent=serc_subExt,cmap='gist_earth')
cbar = plt.colorbar(); cbar.set_label('Reflectance')
plt.title('SERC Band 56 Subset \n Linear ' + str(percent) + '% Contrast Stretch');
ax = plt.gca()
ax.ticklabel_format(useOffset=False, style='plain') #do not use scientific notation #
rotatexlabels = plt.setp(ax.get_xticklabels(),rotation=90) #rotate x tick labels 90 degree
interact(linearStretch,percent=(0,100,1))
#Adaptive Equalized Histogram
img_nonan = np.ma.masked_invalid(serc_b56_subset) #first mask the image
img_adapteq = exposure.equalize_adapthist(img_nonan, clip_limit=.05)
print('img_adapteq min:',np.min(img_adapteq))
print('img_adapteq max:',np.max(img_adapteq))
# Display Adaptively Equalized Image
fig = plt.figure(figsize=(15,6))
ax1 = fig.add_subplot(1,2,1)
ax1.imshow(img_adapteq,extent=serc_subExt,cmap='gist_earth')
rotatexlabels = plt.setp(ax1.get_xticklabels(),rotation=90) #rotate x tick labels 90 degree
plt.title('SERC Band 56 Subset \n Adaptive Equalized Histogram');
# Display histogram
bins=100
ax_hist = fig.add_subplot(1,2,2)
ax_hist.hist(img_adapteq.ravel(),bins); #np.ravel flattens an array into one dimension
plt.title('SERC Band 56 Subset \n Adaptive Equalized Histogram');
ax_hist.set_xlabel('Pixel Intensity'); ax_hist.set_ylabel('# of Pixels')
# Display cumulative distribution
ax_cdf = ax_hist.twinx()
img_cdf, bins = exposure.cumulative_distribution(img_adapteq,bins)
ax_cdf.plot(bins, img_cdf, 'r')
ax_cdf.set_ylabel('Fraction of Total Intensity')
fig = plt.figure(figsize=(15,12))
#spectral Colormap, 0-0.08
ax1 = fig.add_subplot(2,2,1)
serc_subset_plot = plt.imshow(serc_b56_subset,extent=serc_subExt,cmap='Spectral',clim=(0,0.08))
cbar = plt.colorbar(serc_subset_plot); cbar.set_label('Reflectance')
plt.title('Subset SERC Band 56 Reflectance\n spectral colormap, 0-0.08');
ax1.ticklabel_format(useOffset=False, style='plain') #do not use scientific notation #
rotatexlabels = plt.setp(ax1.get_xticklabels(),rotation=90) #rotate x tick labels 90 degree
#gist_earth colormap, 0-0.10
ax2 = fig.add_subplot(2,2,2)
serc_subset_plot = plt.imshow(serc_b56_subset,extent=serc_subExt,cmap='gist_earth',clim=(0,0.1))
cbar = plt.colorbar(serc_subset_plot); cbar.set_label('Reflectance')
plt.title('Subset SERC Band 56 Reflectance\n gist_earth colormap, 0-0.10');
ax2.ticklabel_format(useOffset=False, style='plain') #do not use scientific notation #
rotatexlabels = plt.setp(ax2.get_xticklabels(),rotation=90) #rotate x tick labels 90 degree
#YlGn_r colormap, 0-0.08
ax3 = fig.add_subplot(2,2,3)
serc_subset_plot = plt.imshow(serc_b56_subset,extent=serc_subExt,cmap='YlGn_r',clim=(0,0.08))
cbar = plt.colorbar(serc_subset_plot); cbar.set_label('Reflectance')
plt.title('Subset SERC Band 56 Reflectance\n YlGn_r colormap, 0-0.08');
ax3.ticklabel_format(useOffset=False, style='plain') #do not use scientific notation #
rotatexlabels = plt.setp(ax3.get_xticklabels(),rotation=90) #rotate x tick labels 90 degree
#For the last example, take the logarithm of the reflectance data to stretch the values:
serc_b56_subset_log = np.log(serc_b56_subset);
ax4 = fig.add_subplot(2,2,4)
serc_subset_plot = plt.imshow(serc_b56_subset_log,extent=serc_subExt,cmap='jet',clim=(-5,-3))
cbar = plt.colorbar(serc_subset_plot); cbar.set_label('Log(Reflectance)')
plt.title('Subset SERC log(Band 56 Reflectance)\n jet colormap');
ax4.ticklabel_format(useOffset=False, style='plain') #do not use scientific notation #
rotatexlabels = plt.setp(ax4.get_xticklabels(),rotation=90) #rotate x tick labels 90 degree
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First let's import the required packages and set our display preferences so that plots are inline and plot warnings are off
Step2: Read hdf5 file into Python
Step3: Explore the Files
Step4: We can display the name, shape, and type of each of these datasets using the ls_dataset function defined below, which is also called with visititems
Step5: Now that we see the general structure of the hdf5 file, let's take a look at some of the information that is stored inside. Let's start by extracting the reflectance data, which is nested under SERC/Reflectance/Reflectance_Data.
Step6: The two members of the HDF5 group /SERC/Reflectance are Metadata and Reflectance_Data.
Step7: We can extract the shape as follows
Step8: This corresponds to (y,x, # of bands), where (x,y) are the dimensions of the
Step9: The wavelengths recorded range from 383.66 - 2511.94 nm, and each band covers a range of ~5 nm. Now let's extract spatial information, which is stored under SERC/Reflectance/Metadata/Coordinate_System/Map_Info
Step10: Understanding the output
Step11: Now we can extract the spatial information we need from the map info values, convert them to the appropriate data types (eg. float) and store it in a way that will enable us to access and apply it later
Step12: Extract a Single Band from Array
Step13: Scale factor and No Data Value
Step14: Plot histogram
Step15: Plot Single Band
Step16: Note from both the plot and histogram of the reflectance values that almost all of the reflectance values range from 0.0-0.35. In order to see more contrast in the plot, we try out a couple things
Step17: Extension
Step18: Use this function to find the indices corresponding to the clip extent that we specified above for SERC
Step19: We can now use these indices to create a subsetted array, with dimensions 600 x 700 x 426.
Step20: Extract band 56 from this subset, and clean by applying the no data value and scale factor
Step21: Take a quick look at the minimum, maximum, and mean reflectance values in this subsetted area
Step22: Lastly, plot the data and a histogram of the reflectance values to see what the distribution looks like.
Step23: Note that most of the reflectance values are < 0.5, but the colorbar scale
Step24: Notice that the 8% stretch image (right) washes out some of the objects with higher reflectance (eg. the dock & buildings), but does a better job showing contrast of the vegetation (eg. grass, trees, shadows).
Step25: Apply Adaptive Histogram Equalization to Improve Image Contrast
Step26: With contrast-limited adaptive histogram equalization, you can see more detail in the image, and the highly reflective objects are not washed out, as they were in the linearly-stretched images.
|
9,033
|
<ASSISTANT_TASK:>
Python Code:
# general imports
import pandas as pd
import numpy as np
from datetime import datetime
from collections import defaultdict
import pickle
# imports for webscraping and text manipulation
import requests
import re
import io
import urllib
# imports to convert pdf to text
from pdfminer.pdfinterp import PDFResourceManager, PDFPageInterpreter
from pdfminer.converter import TextConverter
from pdfminer.layout import LAParams
from pdfminer.pdfpage import PDFPage
# text cleaning imports
import nltk
nltk.download('punkt')
from nltk.tokenize import word_tokenize
# imports for cosine similarity with NMF
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import normalize
from sklearn.feature_extraction import text
from collections import namedtuple
# imports for data visualization
import matplotlib.pyplot as plt
import seaborn as sns
# special matplotlib argument for in notebook improved plots
from matplotlib import rcParams
sns.set_style("whitegrid")
sns.set_context("poster")
# import for data exploration
nltk.download('stopwords')
from nltk.corpus import stopwords
import itertools
from collections import Counter
import seaborn as sns
plt.style.use('ggplot')
# load json twitter data
twitter_json = r'data/twitter_01_20_17_to_3-2-18.json'
# Convert to pandas dataframe
tweet_data = pd.read_json(twitter_json)
# helper functions
# identify hash tags
def hash_tag(text):
return re.findall(r'(#[^\s]+)', text)
# identify @mentions
def at_tag(text):
    return re.findall(r'(@[A-Za-z_]+)', text) #stray [^s] removed: it truncated/skipped some mentions
# set column 'created_at' to the index
tweet_data.set_index('created_at', drop=True, inplace= True)
# convert timestamp index to a datetime index
tweet_data.index = pd.to_datetime(tweet_data.index) #to_datetime returns a new index; assign it back
# tokenize all the tweet's text
tweet_data['text_tokenized'] = tweet_data['text'].apply(lambda x: word_tokenize(x.lower()))
# apply hash tag function to text column
tweet_data['hash_tags'] = tweet_data['text'].apply(lambda x: hash_tag(x))
# apply at_tag function to text column
tweet_data['@_tags'] = tweet_data['text'].apply(lambda x: at_tag(x))
tweet_data.head()
# pickle data
tweet_pickle_path = r'data/twitter_01_20_17_to_3-2-18.pickle'
tweet_data.to_pickle(tweet_pickle_path)
# Define the 2017 url that contains all of the Executive Office of the President's published documents
executive_office_url_2017 = r'https://www.federalregister.gov/index/2017/executive-office-of-the-president'
executive_office_url_2018 = r'https://www.federalregister.gov/index/2018/executive-office-of-the-president'
# scrape all urls for pdf documents published in 2017 and 2018 by the U.S.A. Executive Office
pdf_urls= []
for url in [executive_office_url_2017,executive_office_url_2018]:
response = requests.get(url)
    pattern = re.compile(r'https:.*?\.pdf') #non-greedy, so multiple URLs on one line don't merge
pdfs = re.findall(pattern, response.text)
pdf_urls.append(pdfs)
# writes all of the pdfs to the data folder
start = 'data/'
end = '.pdf'
num = 0
for i in range(0,(len(pdf_urls))):
for url in pdf_urls[i]:
ver = str(num)
pdf_path = start + ver + end
r = requests.get(url)
file = open(pdf_path, 'wb')
file.write(r.content)
file.close()
num = num + 1
# helper functions
# function to convert pdf to text from stack overflow (https://stackoverflow.com/questions/26494211/extracting-text-from-a-pdf-file-using-pdfminer-in-python/44476759#44476759)
def convert_pdf_to_txt(path):
rsrcmgr = PDFResourceManager()
retstr = io.StringIO()
codec = 'utf-8'
laparams = LAParams()
device = TextConverter(rsrcmgr, retstr, codec=codec, laparams=laparams)
fp = open(path, 'rb')
interpreter = PDFPageInterpreter(rsrcmgr, device)
password = ""
maxpages = 0
caching = True
pagenos = set()
for page in PDFPage.get_pages(fp, pagenos, maxpages=maxpages,
password=password,
caching=caching,
check_extractable=True):
interpreter.process_page(page)
text = retstr.getvalue()
fp.close()
device.close()
retstr.close()
return text
# finds the first time the name of a day appears in the txt, and returns that name
def find_day(word_generator):
day_list = ['Monday,', 'Tuesday,', 'Wednesday,', 'Thursday,', 'Friday,', 'Saturday,', 'Sunday,']
day_name_dict = {'Mon':'Monday,', 'Tue':'Tuesday,','Wed':'Wednesday,','Thu':'Thursday,','Fri':'Friday,','Sat':'Saturday,','Sun':'Sunday,'}
day_name = []
for val in word_generator:
if val in day_list:
num_position = txt.index(val)
day_name.append(txt[num_position] + txt[num_position + 1] + txt[num_position +2])
break
return day_name_dict[day_name[0]]
# takes text and returns the first date in the document
def extract_date(txt):
word_generator = (word for word in txt.split())
day_name = find_day(word_generator)
txt_start = int(txt.index(day_name))
txt_end = txt_start + 40
date_txt = txt[txt_start:txt_end].replace('\n','')
cleaned_txt = re.findall('.* \d{4}', date_txt)
date_list = cleaned_txt[0].split()
clean_date_list = map(lambda x:x.strip(","), date_list)
clean_date_string = ", ".join(clean_date_list)
date_obj = datetime.strptime(clean_date_string, '%A, %B, %d, %Y')
return date_obj
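The core idea in `extract_date` is normalizing a "DayName, Month DD, YYYY" fragment so `strptime` can parse it. A standalone sketch of that idea on a made-up sentence (the sample text and the `first_date` helper are illustrations, not part of the scraping pipeline):

```python
from datetime import datetime
import re

def first_date(text):
    """Return the first 'DayName, Month DD, YYYY' date found in text, or None."""
    match = re.search(
        r'(Monday|Tuesday|Wednesday|Thursday|Friday|Saturday|Sunday),\s+'
        r'(January|February|March|April|May|June|July|August|September|'
        r'October|November|December)\s+(\d{1,2}),\s+(\d{4})', text)
    if match is None:
        return None
    return datetime.strptime(match.group(0), '%A, %B %d, %Y')

sample = 'Filed on Friday, January 20, 2017, by the Office of the Federal Register.'
print(first_date(sample))
```

Anchoring the pattern on an explicit day name, as the pipeline above does, avoids matching stray four-digit numbers elsewhere in the document.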
# create dictionary where: publication date = key, text = value
start_path = r'data/'
end_path = '.pdf'
data_dict = defaultdict(list)
for i in range(0,312):
file_path = start_path + str(i) + end_path
txt = convert_pdf_to_txt(file_path)
date_obj = extract_date(txt)
data_dict[date_obj].append(txt)
# create list of tuples where: (date, text)
tuple_lst = []
for k, v in data_dict.items():
if v != None:
for text in v:
tuple_lst.append((k, text))
else:
print(k)
# create dataframe from list of tuples
fed_reg_dataframe = pd.DataFrame.from_records(tuple_lst, columns=['date','str_text'], index = 'date')
# tokenize all the pdf text
fed_reg_dataframe['token_text'] = fed_reg_dataframe['str_text'].apply(lambda x: word_tokenize(x.lower()))
# final dataframe
final_df = fed_reg_dataframe[fed_reg_dataframe.index > '2017-01-20']
# pickle final data
fed_reg_data = r'data/fed_reg_data.pickle'
final_df.to_pickle(fed_reg_data)
# load federal document data from pickle file
fed_reg_data = r'data/fed_reg_data.pickle'
fed_data = pd.read_pickle(fed_reg_data)
# load twitter data from csv
twitter_file_path = r'data/twitter_01_20_17_to_3-2-18.pickle'
twitter_data = pd.read_pickle(twitter_file_path)
# find the most used hashtags
hashtag_freq = Counter(list(itertools.chain(*(twitter_data.hash_tags))))
hashtag_top20 = hashtag_freq.most_common(20)
# find the most used @ tags
at_tag_freq = Counter(list(itertools.chain(*(twitter_data['@_tags']))))
at_tags_top20 = at_tag_freq.most_common(20)
# frequency plot for the most used hashtags
df = pd.DataFrame(hashtag_top20, columns=['Hashtag', 'frequency'])
ax = df.plot(kind='bar', x='Hashtag',legend=None,figsize=(14,10))
# change tick markes
plt.tick_params(top='off', bottom = 'on', left = 'on', right = 'off', labelleft = 'on', labelbottom = 'on')
#plt.xticks(rotation=45)
plt.ylabel('Frequency', fontsize=25)
plt.xlabel('Hashtag', fontsize=25)
plt.tight_layout()
ax.set_ylim([0,27.5])
plt.show()
# frequency plot for the most used @ tags
df = pd.DataFrame(at_tags_top20, columns=['@ Tag', 'frequency'])
ax = df.plot(kind='bar', x='@ Tag',legend=None, figsize=(14,10))
# change tick markes
plt.tick_params(top='off', bottom = 'on', left = 'on', right = 'off', labelleft = 'on', labelbottom = 'on')
#plt.xticks(rotation=45)
plt.ylabel('Frequency', fontsize=25)
plt.xlabel('@Tag', fontsize=25)
plt.tight_layout()
plt.show()
# use nltk's list of stopwords
stop_words = set(stopwords.words('english'))
# add puncuation to stopwords
stop_words.update(['.', ',','get','going','one', 'amp','like', '"','...',"''", "'","n't", '?', '!', ':', ';', '#','@', '(', ')', 'https', '``',"'s", 'rt' ]) #comma added between 'like' and '"' so they stay separate tokens
# combine the hashtags and @ tags, flatten the list of lists, keep the unique items
stop_twitter = set(list(itertools.chain(*(twitter_data.hash_tags + twitter_data['@_tags']))))
# stop words for federal documents
stop_fed_docs = ['united', 'states', '1','2','3','4','5','6','7','8','9','10', '11','12','13','14','15',
'16','17','18','19','20','21','22','23','24','25','26','27','28','29','30','31','2016', '2015','2014',
'federal','shall','1.','2.','3.', '4790', 'national', '2017', 'order','president', 'presidential', 'sep',
'register','po','verdate', 'jkt','00000','frm','fmt','sfmt','vol','section','donald',
'act','america', 'executive','secretary', 'law', 'proclamation','81','day','including', 'code',
'4705','authority', 'agencies', '241001', 'americans','238001','year', 'amp',
'government','agency','hereby','people','public','person','state','american','two',
'nation', '82', 'sec', 'laws', 'policy','set','fr','appropriate','doc','new','filed',
'u.s.c','department','ii','also','office','country','within','memorandum', 'director', 'us',
'sunday','monday', 'tuesday','wednesday','thursday', 'friday', 'saturday', 'title','upon',
'constitution','support', 'vested','part', 'month', 'subheading','foreign','general','january',
'february', 'march', 'april','may','june','july','august', 'september', 'october',
'november', 'december', 'council','provide','consistent','pursuant','thereof','00001','documents',
'11:15', 'area','management','following','house','white','week','therefore',
'amended', 'continue', 'chapter','must','years', '00002', 'use','make','date','one',
'many','12', 'commission','provisions', 'every','u.s.','functions','made','hand','necessary',
'witness','time','otherwise', 'proclaim', 'follows','thousand', 'efforts','jan', 'trump','j.',
'applicable', '4717','whereof','hereunto', 'subject', 'report','3—', '3295–f7–p']
# helper functions
def remove_from_fed_data(token_lst):
# remove stopwords and one letter words
filtered_lst = [word for word in token_lst if word.lower() not in stop_fed_docs and len(word) > 1 and word.lower() not in stop_words]
return filtered_lst
def remove_from_twitter_data(token_lst):
# remove stopwords and one letter words
filtered_lst = [word for word in token_lst if word.lower() not in stop_words and len(word) > 1 and word.lower() not in stop_twitter]
return filtered_lst
# apply the remove_stopwords function to all of the tokenized twitter text
twitter_words = twitter_data.text_tokenized.apply(lambda x: remove_from_twitter_data(x))
# apply the remove_stopwords function to all of the tokenized document text
document_words = fed_data.token_text.apply(lambda x: remove_from_fed_data(x))
# flatten each the word lists into one list
all_twitter_words = list(itertools.chain(*twitter_words))
all_document_words =list(itertools.chain(*document_words))
# create a dictionary using the Counter method, where the key is a word and the value is the number of time it was used
twitter_freq = Counter(all_twitter_words)
doc_freq = Counter(all_document_words)
# determine the top 30 words used in the twitter data
top_30_tweet = twitter_freq.most_common(30)
top_30_fed = doc_freq.most_common(30)
# frequency plot for the most used Federal Data
df = pd.DataFrame(top_30_fed, columns=['Federal Data', 'frequency'])
ax = df.plot(kind='bar', x='Federal Data',legend=None, figsize=(14,10))
# change tick markes
plt.tick_params(top='off', bottom = 'on', left = 'on', right = 'off', labelleft = 'on', labelbottom = 'on')
plt.ylabel('Frequency', fontsize=20)
plt.xlabel('Top Federal Data Words', fontsize=20)
plt.tight_layout()
plt.show()
# frequency plot for the most used words in the twitter data
df = pd.DataFrame(top_30_tweet, columns=['Twitter Data', 'frequency'])
ax = df.plot(kind='bar', x='Twitter Data',legend=None, figsize=(14,10))
plt.ylabel('Frequency')
# change tick markes
plt.tick_params(top='off', bottom = 'on', left = 'on', right = 'off', labelleft = 'on', labelbottom = 'on')
plt.ylabel('Frequency', fontsize=20)
plt.xlabel('Top Twitter Words', fontsize=20)
plt.tight_layout()
plt.show()
# find the unique words in each dataset
joint_words = list((set(all_document_words)).intersection(all_twitter_words))
# make array of zeros
values = np.zeros(len(joint_words))
# create dictionary
joint_words_dict = dict(zip(joint_words, values))
# create a dictionary with a word as key, and a value = number of documents that contain the word for Twitter
twitter_document_freq = joint_words_dict.copy()
for word in joint_words:
for lst in twitter_data.text_tokenized:
if word in lst:
twitter_document_freq[word]= twitter_document_freq[word] + 1
# create a dictionary with a word as key, and a value = number of documents that contain the word for Fed Data
fed_document_freq = joint_words_dict.copy()
for word in joint_words:
for lst in fed_data.token_text:
if word in lst:
fed_document_freq[word]= fed_document_freq[word] + 1
df = pd.DataFrame([fed_document_freq, twitter_document_freq]).T
df.columns = ['Fed', 'Tweet']
df['% Fed'] = (df.Fed/len(df.Fed))*100
df['% Tweet'] = (df.Tweet/len(df.Tweet))*100
top_joint_fed = df[['% Fed','% Tweet']].sort_values(by='% Fed', ascending=False)[0:30]
top_joint_tweet = df[['% Fed','% Tweet']].sort_values(by='% Tweet', ascending=False)[0:30]
# plot the top words used in the fedaral data that are also in tweets
ax = top_joint_fed.plot.bar(figsize=(14,9))
# change tick markes
plt.tick_params(top='off', bottom = 'on', left = 'on', right = 'off', labelleft = 'on', labelbottom = 'on')
plt.ylabel('Word Document Frequency of Documents(Tweets)', fontsize=20)
plt.xlabel('Top Words that Occur in Both Tweets and Federal Documents', fontsize=20)
plt.tight_layout()
plt.show()
# plot the top words used in tweets that are also in federal data
ax = top_joint_tweet.plot.bar(figsize=(14,9))
# change tick markes
plt.tick_params(top='off', bottom = 'on', left = 'on', right = 'off', labelleft = 'on', labelbottom = 'on')
plt.ylabel('Word Document Frequency of Tweets(Documents)', fontsize=20)
plt.xlabel('Top Words that Occur in Both Tweets and Federal Documents', fontsize=20)
plt.tight_layout()
plt.show()
# plot the words that are used with the same frequency in both the twitter and federal data
df['diff %'] = df['% Fed'] - df['% Tweet']
top_same = df[df['diff %'] == 0].sort_values(by='% Fed', ascending=False)[0:50]
ax = top_same[['% Fed', '% Tweet']].plot.bar(figsize=(14,8))
# change tick markes
plt.tick_params(top='off', bottom = 'on', left = 'on', right = 'off', labelleft = 'on', labelbottom = 'on')
plt.ylabel('Document Frequency', fontsize=20)
plt.xlabel('Top Words that Occur in Both Tweets and Federal Documents', fontsize=20)
plt.tight_layout()
plt.show()
# load federal document data from pickle file
fed_reg_data = r'data/fed_reg_data.pickle'
fed_data = pd.read_pickle(fed_reg_data)
# load twitter data from csv
twitter_file_path = r'data/twitter_01_20_17_to_3-2-18.pickle'
twitter_data = pd.read_pickle(twitter_file_path)
# Change the index (date), to a column
fed_data['date'] = fed_data.index
twitter_data['date'] = twitter_data.index
# keep text strings and rename columns
fed = fed_data[['str_text', 'date']].rename({'str_text': 'texts'}, axis = 'columns')
tweet = twitter_data[['text', 'date']].rename({'text': 'texts'}, axis = 'columns')
# Add a label for the type of document (Tweet = 0, Fed = 1)
tweet['label'] = 0
fed['label'] = 1
# concatinate the dataframes
comb_text = pd.concat([fed,tweet])
# Re_index so that each doc has a unique id_number
comb_text = comb_text.reset_index()
comb_text['ID'] = range(0,len(comb_text))
# Look at the dataframe to make sure it works
comb_text = comb_text[['texts','date','label', 'ID']]
comb_text.head(3)
# nonsense words, and standard words like proclimation and dates
more_stop = set(['presidential', 'documents', 'therfore','i','donald', 'j', 'trump', 'president', 'order',
'authority', 'vested', 'articles','january','february','march','april','may','june','july','august','september','october',
'november','december','jan','feb','mar','apr','jun','jul','aug','sep','oct','nov','dec',
'2017','2018','act','agencies','agency','wh','rtlwanjjiq','pmgil08opp','blkgzkqemw','qcdljff3wn','erycjgj23r ','fzep1e9mo7','m0hmpbuz6c','rdo6jt2pip','kyv866prde','aql4jlvndh',
'tx5snacaas','t0eigo6lp8','jntoth0mol','8b8aya7v1s', 'x25t9tqani','q7air0bum2','ypfvhtq8te','ejxevz3a1r','1zo6zc2pxt',
'strciewuws','lhos4naagl','djlzvlq6tj', 'theplumlinegs', '3eyf3nir4b','cbewjsq1a3','lvmjz9ax0u',
'dw0zkytyft','sybl47cszn','6sdcyiw4kt','¼ï','yqf6exhm7x','cored8rfl2','6xjxeg1gss','dbvwkddesd',
'ncmsf4fqpr','twunktgbnb','ur0eetseno','ghqbca7yii','cbqrst4ln4','c3zikdtowc','6snvq0dzxn','ekfrktnvuy',
'k2jakipfji','œthe ','p1fh8jmmfa','vhmv7qoutk','mkuhbegzqs','ajic3flnki','mvjbs44atr',
'wakqmkdpxa','e0bup1k83z','ðÿ','ºðÿ','µðÿ','eqmwv1xbim','hlz48rlkif','td0rycwn8c','vs4mnwxtei','75wozgjqop',
'e1q36nkt8g','u8inojtf6d','rmq1a5bdon','5cvnmhnmuh','pdg7vqqv6m','s0s6xqrjsc','5cvnmhnmuh','wlxkoisstg',
'tmndnpbj3m','dnzrzikxhd','4qckkpbtcr','x8psdeb2ur','fejgjt4xp9','evxfqavnfs','aty8r3kns2','pdg7vqqv6m','nqhi7xopmw',
'lhos4naagl','32tfova4ov','zkyoioor62','np7kyhglsv','km0zoaulyh','kwvmqvelri','pirhr7layt',
'v3aoj9ruh4','https','cg4dzhhbrv','qojom54gy8','75wozgjqop','aty8r3kns2','nxrwer1gez','rvxcpafi2a','vb0ao3s18d',
'qggwewuvek','ddi1ywi7yz','r5nxc9ooa4','6lt9mlaj86','1jb53segv4','vhmv7qoutk','i7h4ryin3h',
'aql4jlvndh','yfv0wijgby','nonhjywp4j','zomixteljq','iqum1rfqso','2nl6slwnmh','qejlzzgjdk',
'p3crvve0cy','s0s6xqrjsc','gkockgndtc','2nl6slwnmh','zkyoioor62','clolxte3d4','iqum1rfqso',
'msala9poat','p1f12i9gvt','mit2lj7q90','qejlzzgjdk','pjldxy3hd9','vjzkgtyqb9','b2nqzj53ft',
'tpz7eqjluh','enyxyeqgcp','avlrroxmm4','2kuqfkqbsx','kwvmqvelri','œi','9lxx1iqo7m','vdtiyl0ua7',
'dmhl7xieqv','3jbddn8ymj','gysxxqazbl','ðÿž','tx5snacaas','4igwdl4kia','kqdbvxpekk','1avysamed4',
'cr4i8dvunc','bsp5f3pgbz','rlwst30gud','rlwst30gud','g4elhh9joh', '2017', 'January', 'kuqizdz4ra',
'nvdvrrwls4','ymuqsvvtsb', 'rgdu9plvfk','bk7sdv9phu','b5qbn6llze','xgoqphywrt ','hscs4y9zjk ',
'soamdxxta8','erycjgj23r','ryyp51mxdq','gttk3vjmku','j882zbyvkj','9pfqnrsh1z','ubbsfohmm7',
'xshsynkvup','xwofp9z9ir','1iw7tvvnch','qeeknfuhue','riqeibnwk2','seavqk5zy5','7ef6ac6kec',
'htjhrznqkj','8vsfl9mzxx','xgoqphywrt','zd0fkfvhvx','apvbu2b0jd','mstwl628xe','4hnxkr3ehw','mjij7hg3eu',
'1majwrga3d','x6fuuxxyxe','6eqfmrzrnv','h1zi5xrkeo','kju0moxchk','trux3wzr3u','suanjs6ccz',
'ecf5p4hjfz','m5ur4vv6uh','8j7y900vgk','7ef6ac6kec','d0aowhoh4x','aqqzmt10x7','zauqz4jfwv',
'bmvjz1iv2a','gtowswxinv','1w3lvkpese','8n4abo9ihp','f6jo60i0ul','od7l8vpgjq','odlz2ndrta',
'9tszrcc83j','6ocn9jfmag','qyt4bchvur','wkqhymcya3','tp4bkvtobq','baqzda3s2e','March','April',
'op2xdzxvnc','d7es6ie4fy','proclamation','hcq9kmkc4e','rf9aivvb7g','sutyxbzer9','s0t3ctqc40','aw0av82xde'])
# defines all stop words
my_stop = text.ENGLISH_STOP_WORDS.union(more_stop)
# Instantiate TfidfVectorizer to remove common english words, and any word used in 99% of the documents
tfidf = TfidfVectorizer(stop_words = my_stop , max_df = 0.99)
# create matrix representation of all documents
text_mat = tfidf.fit_transform(comb_text.texts)
# make a list of feature words
words = tfidf.get_feature_names()
# instantiate model
NMF_model = NMF(n_components=200 , init = 'nndsvd')
# fit the model
NMF_model.fit(text_mat)
# transform the text frequecy matrix using the fitted NMF model
nmf_features = NMF_model.transform(text_mat)
# create a dataframe with words as a columns, NMF components as rows
components_df = pd.DataFrame(NMF_model.components_, columns = words)
# create dictionary with the key = component, value = top 5 words
topic_dict = {}
for i in range(NMF_model.n_components): #iterate over all 200 components, matching n_components above
component = components_df.iloc[i, :]
topic_dict[i] = component.nlargest()
# look at a few of the component topics
print(topic_dict[124].index)
print(topic_dict[10].index)
# normalize previouly found nmf features
norm_features = normalize(nmf_features)
#dataframe of document's NMF features, where rows are documents and columns are NMF components
df_norms = pd.DataFrame(norm_features)
# initialize empty dictionary
similarity_dict= {}
# loop through each row of the df_norms dataframe
for i in range(len(norm_features)):
# isolate one row, by ID number
row = df_norms.loc[i]
# calculate the top cosine similarities
top_sim = (df_norms.dot(row)).nlargest()
# append results to dictionary
similarity_dict[i] = (top_sim.index, top_sim)
# dataframe with document ID and labels
doc_label_df = comb_text[['label', 'ID']].copy().set_index('ID')
# inialize list for the sum of all similar documents label
label_sums =[]
similarity_score_sum = []
# loop through all of the documents
for doc_num in doc_label_df.index:
# sum the similarity scores
similarity_sum = similarity_dict[doc_num][1].sum()
similarity_score_sum.append(similarity_sum)
#find the list of similar documents
similar_doc_ID_list = list(similarity_dict[doc_num][0])
# loop through labels
s_label = 0
for ID_num in similar_doc_ID_list:
# sum the label values for each similar document
s_label = s_label + doc_label_df.loc[ID_num].label
# append the sum of the labels for ONE document
label_sums.append(s_label)
# add the similarity score sum to dataframe as separate column
doc_label_df['similarity_score_sum'] = similarity_score_sum
# add the similar document's summed label value to the dataframe as a separate column
doc_label_df['sum_of_labels'] = label_sums
# Filter dataframe for federal documents with similar tweets, and vice versa
df_filtered = doc_label_df[(doc_label_df['sum_of_labels'] != 0) & (doc_label_df['sum_of_labels'] != 5)].copy().reset_index()
# Look at the ones that have all top 5 documents with a cosine similarity score of 0.9 or above.
#The sum of scores need to be 4.6 or higher
similar_score_min = 4.6
highly_similar = df_filtered[df_filtered.similarity_score_sum >= similar_score_min]
# create a list of all the group lists
doc_groups = []
for doc_id in highly_similar.ID:
doc_groups.append(sorted(list(similarity_dict[doc_id][0])))
# make the interior lists tuples, then make a set of them
unique_groups = list(set([tuple(x) for x in doc_groups]))
# make a list of groups that are similar
similar_groups = []
for num1 in range(len(unique_groups)):
for num2 in range(len(unique_groups)):
crossover_count = len((set(unique_groups[num1]) & set(unique_groups[num2])))
if crossover_count == 4:
lst = [num1, num2]
lst.sort(key=int)
if lst not in similar_groups:
similar_groups.append(lst)
# create list of document ID's of similar documents
similar_docs = []
for group1, group2 in similar_groups:
combo = list(set(unique_groups[group1]) & set(unique_groups[group2]))
if combo not in similar_docs:
similar_docs.append(combo)
similar_docs
# print document IDs grouped together
print(comb_text.texts.loc[1471])
print(comb_text.texts.loc[111])
print(comb_text.texts.loc[2991])
print(comb_text.texts.loc[2997])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Part 1
Step2: Clean-up data types and identify hashtags and @mentions
Step3: Save clean twitter data as pickle
Step4: Scrape and format Data from Federal Register
Step5: Create a dataframe with the published date and text for each pdf
Step6: Save federal register data as a pickle file to use in later analysis
Step7: Part 2
Step8: Most used hashtags
Step9: Most used @tags
Step10: Top used words for the twitter data and the federal document data
Step11: Frequency plots for most used words in the federal data and twitter data
Step12: Part 2 Conclusions
Step13: Identify nonsense words for both twitter and federal data
Step14: Part 3B
Step15: Topic analysis using NMF
Step16: Find the top 5 topic words for each component
Step17: Results of part 3 A and B
Step18: Part 4
Step19: Part 4b
Step20: Group Again
Step21: Part 5
|
9,034
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
objective = np.poly1d([1.3, 4.0, 0.6])
print objective
import scipy.optimize as opt
x_ = opt.fmin(objective, [3])
print "solved: x={}".format(x_)
%matplotlib inline
x = np.linspace(-4, 1, 101)  # the number of samples must be an integer
import matplotlib.pylab as mpl
mpl.plot(x, objective(x))
mpl.plot(x_, objective(x_), 'ro')
import scipy.special as ss
import scipy.optimize as opt
import numpy as np
import matplotlib.pylab as mpl
x = np.linspace(2, 7, 200)
# 1st order Bessel
j1x = ss.j1(x)
mpl.plot(x, j1x)
# use scipy.optimize's more modern "results object" interface
result = opt.minimize_scalar(ss.j1, method="bounded", bounds=[2, 4])
j1_min = ss.j1(result.x)
mpl.plot(result.x, j1_min,'ro')
import mystic.models as models
print(models.rosen.__doc__)
import mystic
mystic.model_plotter(mystic.models.rosen, fill=True, depth=True, scale=1, bounds="-3:3:.1, -1:5:.1, 1")
import scipy.optimize as opt
import numpy as np
# initial guess
x0 = [1.3, 1.6, -0.5, -1.8, 0.8]
result = opt.minimize(opt.rosen, x0)
print result.x
# number of function evaluations
print result.nfev
# again, but this time provide the derivative
result = opt.minimize(opt.rosen, x0, jac=opt.rosen_der)
print result.x
# number of function evaluations and derivative evaluations
print result.nfev, result.njev
print ''
# however, note for a different x0...
for i in range(5):
x0 = np.random.randint(-20,20,5)
result = opt.minimize(opt.rosen, x0, jac=opt.rosen_der)
print "{} @ {} evals".format(result.x, result.nfev)
# http://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html#tutorial-sqlsp
'''
Maximize: f(x) = 2*x0*x1 + 2*x0 - x0**2 - 2*x1**2
Subject to: x0**3 - x1 == 0
x1 >= 1
'''
import numpy as np
def objective(x, sign=1.0):
return sign*(2*x[0]*x[1] + 2*x[0] - x[0]**2 - 2*x[1]**2)
def derivative(x, sign=1.0):
dfdx0 = sign*(-2*x[0] + 2*x[1] + 2)
dfdx1 = sign*(2*x[0] - 4*x[1])
return np.array([ dfdx0, dfdx1 ])
# unconstrained
result = opt.minimize(objective, [-1.0,1.0], args=(-1.0,),
jac=derivative, method='SLSQP', options={'disp': True})
print("unconstrained: {}".format(result.x))
cons = ({'type': 'eq',
'fun' : lambda x: np.array([x[0]**3 - x[1]]),
'jac' : lambda x: np.array([3.0*(x[0]**2.0), -1.0])},
{'type': 'ineq',
'fun' : lambda x: np.array([x[1] - 1]),
'jac' : lambda x: np.array([0.0, 1.0])})
# constrained
result = opt.minimize(objective, [-1.0,1.0], args=(-1.0,), jac=derivative,
constraints=cons, method='SLSQP', options={'disp': True})
print("constrained: {}".format(result.x))
import scipy.optimize as opt
# constrained: linear (i.e. A*x + b)
print opt.cobyla.fmin_cobyla
print opt.linprog
# constrained: quadratic programming (i.e. up to x**2)
print opt.fmin_slsqp
# http://cvxopt.org/examples/tutorial/lp.html
'''
minimize: f = 2*x0 + x1
subject to:
-x0 + x1 <= 1
x0 + x1 >= 2
x1 >= 0
x0 - 2*x1 <= 4
'''
import cvxopt as cvx
from cvxopt import solvers as cvx_solvers
A = cvx.matrix([ [-1.0, -1.0, 0.0, 1.0], [1.0, -1.0, -1.0, -2.0] ])
b = cvx.matrix([ 1.0, -2.0, 0.0, 4.0 ])
cost = cvx.matrix([ 2.0, 1.0 ])
sol = cvx_solvers.lp(cost, A, b)
print(sol['x'])
# http://cvxopt.org/examples/tutorial/qp.html
'''
minimize: f = 2*x1**2 + x2**2 + x1*x2 + x1 + x2
subject to:
x1 >= 0
x2 >= 0
x1 + x2 == 1
'''
import cvxopt as cvx
from cvxopt import solvers as cvx_solvers
Q = 2*cvx.matrix([ [2, .5], [.5, 1] ])
p = cvx.matrix([1.0, 1.0])
G = cvx.matrix([[-1.0,0.0],[0.0,-1.0]])
h = cvx.matrix([0.0,0.0])
A = cvx.matrix([1.0, 1.0], (1,2))
b = cvx.matrix(1.0)
sol = cvx_solvers.qp(Q, p, G, h, A, b)
print(sol['x'])
import scipy.optimize as opt
# probabilstic solvers, that use random hopping/mutations
print opt.differential_evolution
print opt.basinhopping
print opt.anneal  # deprecated in SciPy 0.14 and removed in 0.16; use basinhopping instead
import scipy.optimize as opt
# bounds instead of an initial guess
bounds = [(-10., 10)]*5
for i in range(10):
result = opt.differential_evolution(opt.rosen, bounds)
print result.x,
# number of function evaluations
print '@ {} evals'.format(result.nfev)
import scipy.optimize as opt
import scipy.stats as stats
import numpy as np
# Define the function to fit.
def function(x, a, b, f, phi):
result = a * np.exp(-b * np.sin(f * x + phi))
return result
# Create a noisy data set around the actual parameters
true_params = [3, 2, 1, np.pi/4]
print "target parameters: {}".format(true_params)
x = np.linspace(0, 2*np.pi, 25)
exact = function(x, *true_params)
noisy = exact + 0.3*stats.norm.rvs(size=len(x))
# Use curve_fit to estimate the function parameters from the noisy data.
initial_guess = [1,1,1,1]
estimated_params, err_est = opt.curve_fit(function, x, noisy, p0=initial_guess)
print "solved parameters: {}".format(estimated_params)
# err_est is an estimate of the covariance matrix of the estimates
print "covariance: {}".format(err_est.diagonal())
import matplotlib.pylab as mpl
mpl.plot(x, noisy, 'ro')
mpl.plot(x, function(x, *estimated_params))
import numpy as np
import scipy.optimize as opt
def system(x,a,b,c):
x0, x1, x2 = x
eqs= [
3 * x0 - np.cos(x1*x2) + a, # == 0
x0**2 - 81*(x1+0.1)**2 + np.sin(x2) + b, # == 0
np.exp(-x0*x1) + 20*x2 + c # == 0
]
return eqs
# coefficients
a = -0.5
b = 1.06
c = (10 * np.pi - 3.0) / 3
# initial guess
x0 = [0.1, 0.1, -0.1]
# Solve the system of non-linear equations.
result = opt.root(system, x0, args=(a, b, c))
print "root:", result.x
print "solution:", result.fun
import numpy as np
import scipy.stats as stats
# Create clean data.
x = np.linspace(0, 4.0, 100)
y = 1.5 * np.exp(-0.2 * x) + 0.3
# Add a bit of noise.
noise = 0.1 * stats.norm.rvs(size=100)
noisy_y = y + noise
# Fit noisy data with a linear model.
linear_coef = np.polyfit(x, noisy_y, 1)
linear_poly = np.poly1d(linear_coef)
linear_y = linear_poly(x)
# Fit noisy data with a quadratic model.
quad_coef = np.polyfit(x, noisy_y, 2)
quad_poly = np.poly1d(quad_coef)
quad_y = quad_poly(x)
import matplotlib.pylab as mpl
mpl.plot(x, noisy_y, 'ro')
mpl.plot(x, linear_y)
mpl.plot(x, quad_y)
#mpl.plot(x, y)
import mystic.models as models
print models.zimmermann.__doc__
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The "optimizer"
Step2: Additional components
Step3: The gradient and/or hessian
Step4: The penalty functions
Step5: Optimizer classifications
Step6: Notice how much nicer it is to see the optimizer "trajectory". Now, instead of a single number, we have the path the optimizer took. scipy.optimize has a version of this, with options={'retall'
Step7: Gradient descent and steepest descent
Step8: Not Covered
Step9: Parameter estimation
Step10: Standard diagnostic tools
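The gradient-descent/steepest-descent discussion above can be made concrete with a small side-by-side run. This is a hedged sketch, not the notebook's own code: it contrasts a gradient-free simplex solver with a gradient-based quasi-Newton solver on the same Rosenbrock problem, using an arbitrary starting point.

```python
import numpy as np
import scipy.optimize as opt

x0 = [1.3, 0.7, 0.8, 1.9, 1.2]

# Gradient-free: Nelder-Mead simplex (no derivative information used)
res_nm = opt.minimize(opt.rosen, x0, method="Nelder-Mead",
                      options={"xatol": 1e-8})

# Gradient-based: BFGS with the analytic derivative
res_bfgs = opt.minimize(opt.rosen, x0, method="BFGS", jac=opt.rosen_der)

print("Nelder-Mead: {} @ {} evals".format(res_nm.x, res_nm.nfev))
print("BFGS:        {} @ {} evals".format(res_bfgs.x, res_bfgs.nfev))
```

Both runs should land on the minimum at all-ones, but the gradient-based solver gets there with far fewer function evaluations — the usual trade-off when a cheap derivative is available.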
|
9,035
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd # data handeling
import numpy as np # numeriacal computing
import matplotlib.pyplot as plt # plotting core
import seaborn as sns # higher level plotting tools
%matplotlib inline
sns.set()
def h(X,a) : # model h(X) = Xa
h = np.dot(X,a)
return h
def a_opt(X,y) : # linear regression solution a = (X'X)^-1 X'y = pinv(X)y
a_opt = np.dot( np.linalg.pinv(X), y)
return a_opt
def J(a,X,y) : # Cost function
m = y.size
J = 1/(2*m) * np.dot( ( h(X,a) - y ).T, ( h(X,a) - y ) )
return J
def R2(a,X,y) : # R squared
y_mean = y.mean()
R2 = 1 - ( np.dot( ( h(X,a) - y ).T, ( h(X,a) - y ) ) / np.dot( ( y - y_mean ).T, ( y - y_mean ) ) )
return R2
x = np.array([1, 1.2, 1.4, 1.7, 2, 2.5, 3, 4, 5, 6, 7, 8, 9, 10])
y = np.array([7, 6 , 5, 4, 3, 2, 1, .5, .4, .35, .3, .25 ,.2,.15])
plt.plot( x,y, "o", color='g', label='training data'); plt.legend();
X = np.column_stack( (np.ones((x.size,1)) , x )) # construct the augmented matrix X
a = a_opt(X,y) # solve the regression
fig, ax = plt.subplots()
ax.plot( x,y, "o", color='g', label='training data') # plot the test data
xt = np.linspace(1,10,100)
ax.plot(xt, a[0] + a[1]*xt , label='h(x) = %.2f + %.2f x' %(a[0], a[1])) # plot the model function
ax.legend()
print('a =', a) # print the values of a
print('J =', J(a,X,y)) # value of Cost function
print("R2 =", R2(a,X,y)) # R squared
X = np.column_stack( (np.ones((x.size,1)) , x**2 )) # construct the augmented matrix X
a = a_opt(X,y) # solve the regression
fig, ax = plt.subplots()
ax.plot( x,y, "o", color='g', label='training data') # plot the test data
xt = np.linspace(1,10,100)
ax.plot(xt, a[0] + a[1]*xt**2 , label='h(x) = %.2f + %.2f x**2' %(a[0], a[1])) # plot the model function
ax.legend()
print('a =', a) # print the values of a
print('J =', J(a,X,y)) # value of Cost function
print("R2 =", R2(a,X,y)) # R squared
X = np.column_stack( (np.ones((x.size,1)) , x, x**2 )) # construct the augmented matrix X
a = a_opt(X,y) # solve the regression
fig, ax = plt.subplots()
ax.plot( x,y, "o", color='g', label='training data') # plot the test data
xt = np.linspace(1,11,100)
ax.plot(xt, a[0] + a[1]*xt + a[2]*xt**2 , label='h(x) = %.2f + %.2f x + %.2f x**2' %(a[0], a[1], a[2])) # plot the model function
ax.legend()
print('a =', a) # print the values of a
print('J =', J(a,X,y)) # value of Cost function
print("R2 =", R2(a,X,y)) # R squared
X = np.column_stack( (np.ones((x.size,1)) , 1/x )) # construct the augmented matrix X
a = a_opt(X,y) # solve the regression
fig, ax = plt.subplots()
ax.plot( x,y, "o", color='g', label='training data') # plot the test data
xt = np.linspace(1,11,100)
ax.plot(xt, a[0] + a[1]/xt , label='h(x) = %.2f + %.2f * 1/x' %(a[0], a[1])) # plot the model function
ax.legend()
print('a =', a) # print the values of a
print('J =', J(a,X,y)) # value of Cost function
print("R2 =", R2(a,X,y)) # R squared
X = np.column_stack( (np.ones((x.size,1)) , np.exp(-x) )) # construct the augmented matrix X
a = a_opt(X,y) # solve the regression
fig, ax = plt.subplots()
ax.plot( x,y, "o", color='g', label='training data') # plot the test data
xt = np.linspace(1,11,100)
ax.plot(xt, a[0] + a[1]*np.exp(-xt) , label='h(x) = %.2f + %.2f * exp(-x)' %(a[0], a[1])) # plot the model function
ax.legend()
print('a =', a) # print the values of a
print('J =', J(a,X,y)) # value of Cost function
print("R2 =", R2(a,X,y)) # R squared
def f(x) : # sourse of the data
f = x*np.sin(x/2)
return f
np.random.seed(22)
noise = np.random.randn(12) # some random "normal" noise
x = np.linspace(0,10,6) # x for training-set
y = f(x) + noise[0:6] # y for training-set
xtest = np.linspace(-1,9,6) # x for test-set
ytest = f(xtest) + noise[6:12] # y for test-set
fig, ax = plt.subplots()
ax.plot(x,y, 'o', color='g', label='training data' )
ax.plot(xtest,ytest, 'o', color='r',label='testset data')
xt = np.linspace(-2,11,100)
ax.plot(xt, f(xt), '.', color='c')
plt.legend()
X = np.column_stack( (np.ones((x.size,1)) , x )) # construct the augmented matrix X
Xtest = np.column_stack( (np.ones((x.size,1)) , xtest )) # needed for error analysis
a = a_opt(X,y) # solve the regression
fig, ax = plt.subplots()
ax.plot( x,y, "o", color='g', label='training data') # plot the test data
ax.plot(xtest,ytest, 'o', color='r', label='testset data')
xt = np.linspace(-1,10,100)
ax.plot(xt, f(xt), '.', color='c', label='source function')
ax.plot(xt, a[0] + a[1]*xt ,
label='h(x) = %.2f + %.2f*x'
%(a[0],a[1])) # plot the model function
ax.legend()
print('a =', a) # print the values of a
print('training-set: ','J =', J(a,X,y), " R2 =", R2(a,X,y))
print('test-set: ','J =', J(a,Xtest,ytest), " R2 =", R2(a,Xtest,ytest))
X = np.column_stack( (np.ones((x.size,1)) , x, x**2 )) # construct the augmented matrix X
Xtest = np.column_stack( (np.ones((x.size,1)) , xtest, xtest**2 )) # needed for error analysis
a = a_opt(X,y) # solve the regression
fig, ax = plt.subplots()
ax.plot( x,y, "o", color='g', label='training data') # plot the test data
ax.plot(xtest,ytest, 'o', color='r', label='testset data')
xt = np.linspace(-1,10,100)
ax.plot(xt, f(xt), '.', color='c', label='source function')
ax.plot(xt, a[0] + a[1]*xt + a[2]*xt**2 ,
label='h(x) = %.2f + %.2f*x + %.2f*x**2'
%(a[0],a[1],a[2])) # plot the model function
ax.legend()
print('a =', a) # print the values of a
print('training-set: ','J =', J(a,X,y), " R2 =", R2(a,X,y))
print('test-set: ','J =', J(a,Xtest,ytest), " R2 =", R2(a,Xtest,ytest))
X = np.column_stack( (np.ones((x.size,1)) , x, x**2, x**3 )) # construct the augmented matrix X
Xtest = np.column_stack( (np.ones((x.size,1)) , xtest, xtest**2, xtest**3 )) # needed for error analysis
a = a_opt(X,y) # solve the regression
fig, ax = plt.subplots()
ax.plot( x,y, "o", color='g', label='training data') # plot the test data
ax.plot(xtest,ytest, 'o', color='r', label='testset data')
xt = np.linspace(-1,10,100)
ax.plot(xt, f(xt), '.', color='c', label='source function')
ax.plot(xt, a[0] + a[1]*xt + a[2]*xt**2 + a[3]*xt**3,
label='h(x) = %.2f + %.2f*x + %.2f*x**2 + %.2f*x**3'
%(a[0],a[1],a[2],a[3])) # plot the model function
ax.legend()
print('a =', a) # print the values of a
print('training-set: ','J =', J(a,X,y), " R2 =", R2(a,X,y))
print('test-set: ','J =', J(a,Xtest,ytest), " R2 =", R2(a,Xtest,ytest))
X = np.column_stack( (np.ones((x.size,1)) , x, x**2, x**3, x**4 )) # construct the augmented matrix X
Xtest = np.column_stack( (np.ones((x.size,1)) , xtest, xtest**2, xtest**3, xtest**4 )) # needed for error analysis
a = a_opt(X,y) # solve the regression
fig, ax = plt.subplots()
ax.plot( x,y, "o", color='g', label='training data') # plot the test data
ax.plot(xtest,ytest, 'o', color='r', label='testset data')
xt = np.linspace(-1,10,100)
ax.plot(xt, f(xt), '.', color='c', label='source function')
ax.plot(xt, a[0] + a[1]*xt + a[2]*xt**2 + a[3]*xt**3 + a[4]*xt**4,
label='h(x) = %.2f + %.2f*x + %.2f*x**2 + %.2f*x**3 + %.2f*x**4'
%(a[0],a[1],a[2],a[3],a[4])) # plot the model function
ax.legend()
print('a =', a) # print the values of a
print('training-set: ','J =', J(a,X,y), " R2 =", R2(a,X,y))
print('test-set: ','J =', J(a,Xtest,ytest), " R2 =", R2(a,Xtest,ytest))
X = np.column_stack( (np.ones((x.size,1)) , x, x**2, x**3, x**4, x**5 )) # construct the augmented matrix X
Xtest = np.column_stack( (np.ones((x.size,1)) , xtest, xtest**2, xtest**3, xtest**4, xtest**5 )) # needed for error analysis
a = a_opt(X,y) # solve the regression
fig, ax = plt.subplots()
ax.plot( x,y, "o", color='g', label='training data') # plot the test data
ax.plot(xtest,ytest, 'o', color='r', label='testset data')
xt = np.linspace(-1,10,100)
ax.plot(xt, f(xt), '.', color='c', label='source function')
ax.plot(xt, a[0] + a[1]*xt + a[2]*xt**2 + a[3]*xt**3 + a[4]*xt**4 + a[5]*xt**5,
label='h(x) = %.2f + %.2f*x + %.2f*x**2 + %.2f*x**3 + %.2f*x**4 + %.2f*x**5'
%(a[0],a[1],a[2],a[3],a[4],a[5])) # plot the model function
ax.legend()
print('a =', a) # print the values of a
print('training-set: ','J =', J(a,X,y), " R2 =", R2(a,X,y))
print('test-set: ','J =', J(a,Xtest,ytest), " R2 =", R2(a,Xtest,ytest))
X = np.column_stack( (np.ones((x.size,1)) , x, x**2, x**3, x**4, x**5, x**6, x**7, x**8 )) # construct the augmented matrix X
Xtest = np.column_stack( (np.ones((x.size,1)) , xtest, xtest**2, xtest**3, xtest**4, xtest**5, xtest**6, xtest**7, xtest**8 )) # needed for error analysis
a = a_opt(X,y) # solve the regression
fig, ax = plt.subplots()
ax.plot( x,y, "o", color='g', label='training data') # plot the test data
ax.plot(xtest,ytest, 'o', color='r', label='testset data')
xt = np.linspace(-1,10,100)
ax.plot(xt, f(xt), '.', color='c', label='source function')
ax.plot(xt, a[0] + a[1]*xt + a[2]*xt**2 + a[3]*xt**3 + a[4]*xt**4 + a[5]*xt**5 + a[6]*xt**6 + a[7]*xt**7 + a[8]*xt**8,
label='h(x) = %.2f + %.2f*x + %.2f*x**2 + %.2f*x**3 + %.2f*x**4 + %.2f*x**5 + %.2f*x**6 + %.2f*x**7 + %.2f*x**8'
%(a[0],a[1],a[2],a[3],a[4],a[5],a[6],a[7],a[8])) # plot the model function
ax.legend()
print('a =', a) # print the values of a
print('training-set: ','J =', J(a,X,y), " R2 =", R2(a,X,y))
print('test-set: ','J =', J(a,Xtest,ytest), " R2 =", R2(a,Xtest,ytest))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Here's the example data and a plot,
Step2: To start let's just try fitting a straight line $h(x) = a_0 + a_1x$. We'll construct the augmented matrix $X$, solve the regression equation to find the best values of $a$, then plot the result and look at the values of $J$ and $R^2$.
Step3: That's not so good! No surprise! Next let's try changing $x$ to $x^2$, thinking that a simple parabola may be a better fit.
Step4: That's worse! It looks like the regression found a minimum for $a$ that has the curve opposite of what I expected.
Step5: That is a better fit. The cost $J$ has gone down and $R^2$ is closer to 1. However, the model is not behaving the way we want (it's rising at the end of the data).
Step6: That is a significant improvement and now $R^2$ is getting closer to 1. It's still not converging to a constant value, so let's try one more model function.
Step7: That is a very good fit and the model has the behavior that we want. $R^2$ is also very close to 1, indicating that the fit to the training data is very good, and the cost $J$ is near 0.
Step8: Here is a plot of all the data,
Step9: Now we will start generating models with increasing powers of $x$ and looking at the goodness of fit.
Step10: The fit is not very good, as expected, but note that $J$ and $R^2$ for the training and test sets are similar.
Step11: The value of $J$ has come down and $R^2$ is better for the training and test set.
Step12: Notice that $R^2$ is improving for the training-set but getting worse for the test-set.
Step13: Notice now that the training-set fit is looking very good but the test-set is not a good fit.
Step14: Now the training-set fit is exact! The cost function $J$ is 0 and $R^2$ is 1. The test-set still has a poor fit. Even though the model function fits the training-set perfectly, it is not a very good model overall. It's not as bad as it could be!
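The progression described above — training fit improving while the test fit deteriorates — can be condensed into a short sweep over polynomial degree. This is a hedged sketch with its own synthetic data and random seed (illustrative assumptions, not the notebook's exact arrays): training $R^2$ is non-decreasing as terms are added, and the degree-5 model interpolates all 6 training points exactly.

```python
import numpy as np

def r2(y, yhat):
    # Coefficient of determination
    return 1.0 - np.sum((y - yhat)**2) / np.sum((y - y.mean())**2)

rng = np.random.RandomState(0)
x = np.linspace(0, 10, 6)                 # 6 training points
y = x * np.sin(x / 2) + rng.randn(6)      # noisy samples of x*sin(x/2)
xtest = np.linspace(-1, 9, 6)             # 6 test points
ytest = xtest * np.sin(xtest / 2) + rng.randn(6)

for deg in range(1, 6):
    p = np.poly1d(np.polyfit(x, y, deg))
    print("degree %d: train R2 = %8.4f, test R2 = %8.4f"
          % (deg, r2(y, p(x)), r2(ytest, p(xtest))))
```

Watching the two columns diverge at high degree is the whole overfitting story in four lines of output.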
|
9,036
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mri', 'mri-esm2-0', 'ocnbgchem')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Type
Step7: 1.4. Elemental Stoichiometry
Step8: 1.5. Elemental Stoichiometry Details
Step9: 1.6. Prognostic Variables
Step10: 1.7. Diagnostic Variables
Step11: 1.8. Damping
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Step13: 2.2. Timestep If Not From Ocean
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Step15: 3.2. Timestep If Not From Ocean
Step16: 4. Key Properties --> Transport Scheme
Step17: 4.2. Scheme
Step18: 4.3. Use Different Scheme
Step19: 5. Key Properties --> Boundary Forcing
Step20: 5.2. River Input
Step21: 5.3. Sediments From Boundary Conditions
Step22: 5.4. Sediments From Explicit Model
Step23: 6. Key Properties --> Gas Exchange
Step24: 6.2. CO2 Exchange Type
Step25: 6.3. O2 Exchange Present
Step26: 6.4. O2 Exchange Type
Step27: 6.5. DMS Exchange Present
Step28: 6.6. DMS Exchange Type
Step29: 6.7. N2 Exchange Present
Step30: 6.8. N2 Exchange Type
Step31: 6.9. N2O Exchange Present
Step32: 6.10. N2O Exchange Type
Step33: 6.11. CFC11 Exchange Present
Step34: 6.12. CFC11 Exchange Type
Step35: 6.13. CFC12 Exchange Present
Step36: 6.14. CFC12 Exchange Type
Step37: 6.15. SF6 Exchange Present
Step38: 6.16. SF6 Exchange Type
Step39: 6.17. 13CO2 Exchange Present
Step40: 6.18. 13CO2 Exchange Type
Step41: 6.19. 14CO2 Exchange Present
Step42: 6.20. 14CO2 Exchange Type
Step43: 6.21. Other Gases
Step44: 7. Key Properties --> Carbon Chemistry
Step45: 7.2. PH Scale
Step46: 7.3. Constants If Not OMIP
Step47: 8. Tracers
Step48: 8.2. Sulfur Cycle Present
Step49: 8.3. Nutrients Present
Step50: 8.4. Nitrous Species If N
Step51: 8.5. Nitrous Processes If N
Step52: 9. Tracers --> Ecosystem
Step53: 9.2. Upper Trophic Levels Treatment
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Step55: 10.2. Pft
Step56: 10.3. Size Classes
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Step58: 11.2. Size Classes
Step59: 12. Tracers --> Disolved Organic Matter
Step60: 12.2. Lability
Step61: 13. Tracers --> Particules
Step62: 13.2. Types If Prognostic
Step63: 13.3. Size If Prognostic
Step64: 13.4. Size If Discrete
Step65: 13.5. Sinking Speed If Prognostic
Step66: 14. Tracers --> Dic Alkalinity
Step67: 14.2. Abiotic Carbon
Step68: 14.3. Alkalinity
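The DOC.set_id / DOC.set_value pattern that the generated code above repeats for every property can be sketched with a minimal mock document object. This is an illustrative stand-in only — the real es-doc notebook API behind DOC is richer — and the MockDoc class below is an assumption, not the actual library:

```python
# Minimal mock of the DOC object used by the CMIP6 notebook cells above.
# MockDoc is an illustrative assumption; the real es-doc API differs.
class MockDoc:
    def __init__(self):
        self.values = {}          # property id -> list of recorded values
        self._current_id = None   # id from the most recent set_id call

    def set_id(self, property_id):
        # Select the property that subsequent set_value calls will fill.
        self._current_id = property_id
        self.values.setdefault(property_id, [])

    def set_value(self, value):
        # Record a value against the currently selected property id.
        if self._current_id is None:
            raise RuntimeError("call set_id before set_value")
        self.values[self._current_id].append(value)

DOC = MockDoc()
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
DOC.set_value("Generic")
```

Multi-valued properties (the PROPERTY VALUE(S) entries above) would then be handled by calling set_value once per choice after a single set_id.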
|
9,037
|
<ASSISTANT_TASK:>
Python Code:
#from imp import *
#s=load_source('sygma','/home/nugrid/nugrid/SYGMA/SYGMA_online/SYGMA_dev/sygma.py')
#%pylab nbagg
import sys
import sygma as s
print (s.__file__)
s.__file__
#import matplotlib
#matplotlib.use('nbagg')
import matplotlib.pyplot as plt
#matplotlib.use('nbagg')
import numpy as np
from scipy.integrate import quad
from scipy.interpolate import UnivariateSpline
import os
# Trigger interactive or non-interactive depending on command line argument
__RUNIPY__ = sys.argv[0]
if __RUNIPY__:
%matplotlib inline
else:
%pylab nbagg
k_N=1e11*0.35/ (1**-0.35 - 30**-0.35) #(I)
N_tot=k_N/1.35 * (1**-1.35 - 30**-1.35) #(II)
print (N_tot)
Yield_tot=0.1*N_tot
print (Yield_tot/1e11)
s1=s.sygma(iolevel=0,mgal=1e11,dt=1e7,imf_type='salpeter',imf_bdys=[1,30],iniZ=0.02,hardsetZ=0.0001,
table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False, sn1a_table='yield_tables/sn1a_h1.txt',
iniabu_table='yield_tables/iniabu/iniab_h1.ppn',pop3_table='yield_tables/popIII_h1.txt')
Yield_tot_sim=s1.history.ism_iso_yield[-1][0]
#% matplotlib inline
import read_yields as ry
path = os.environ['SYGMADIR']+'/yield_tables/agb_and_massive_stars_nugrid_MESAonly_fryer12delay.txt'
#path='/home/christian/NuGrid/SYGMA_PROJECT/NUPYCEE/new/nupycee.bitbucket.org/yield_tables/isotope_yield_table.txt'
ytables = ry.read_nugrid_yields(path,excludemass=[32,60])
zm_lifetime_grid=s1.zm_lifetime_grid_current #__interpolate_lifetimes_grid()
#return [[metallicities Z1,Z2,...], [masses], [[log10(lifetimesofZ1)],
# [log10(lifetimesofZ2)],..] ]
#s1.__find_lifetimes()
#minm1 = self.__find_lifetimes(round(self.zmetal,6),mass=[minm,maxm], lifetime=lifetimemax1)
print (Yield_tot_sim)
print (Yield_tot)
print ('ratio should be 1 : ',Yield_tot_sim/Yield_tot)
Yield_agb= ( k_N/1.35 * (1**-1.35 - 8.**-1.35) ) * 0.1
Yield_massive= ( k_N/1.35 * (8.**-1.35 - 30**-1.35) ) * 0.1
print ('Should be 1:',Yield_agb/s1.history.ism_iso_yield_agb[-1][0])
print ('Should be 1:',Yield_massive/s1.history.ism_iso_yield_massive[-1][0])
print ('Test total number of SNII agree with massive star yields: ',sum(s1.history.sn2_numbers)*0.1/Yield_massive)
print ( sum(s1.history.sn2_numbers))
s1.plot_totmasses(source='agb')
s1.plot_totmasses(source='massive')
s1.plot_totmasses(source='all')
s1.plot_totmasses(source='sn1a')
s1=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,imf_type='salpeter',alphaimf=2.35,\
imf_bdys=[1,30],iniZ=0,hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False, \
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
Yield_tot_sim=s1.history.ism_iso_yield[-1][0]
s1.plot_mass(specie='H',label='H, sim',color='k',shape='-',marker='o',markevery=800)
m=[1,1.65,2,3,4,5,6,7,12,15,20,25]
ages=[5.67e9,1.211e9,6.972e8,2.471e8,1.347e8,8.123e7,5.642e7,4.217e7,1.892e7,1.381e7,9.895e6,7.902e6]
def yields(m,k_N):
return ( k_N/1.35 * (m**-1.35 - 30.**-1.35) ) * 0.1
yields1=[]
for m1 in m:
yields1.append(yields(m1,k_N))
plt.plot(ages,yields1,marker='+',linestyle='',markersize=15,label='H, semi')
plt.legend(loc=4)
k_N=1e11*0.35/ (5**-0.35 - 20**-0.35)
N_tot=k_N/1.35 * (5**-1.35 - 20**-1.35)
Yield_tot=0.1*N_tot
s1=s.sygma(iolevel=0,mgal=1e11,dt=1e9,tend=1.3e10,imf_type='salpeter',\
imf_bdys=[5,20],hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False, \
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
Yield_tot_sim=s1.history.ism_iso_yield[-1][0]
print ('Should be 1:',Yield_tot_sim/Yield_tot)
k_N=1e11*0.35/ (1**-0.35 - 5**-0.35)
N_tot=k_N/1.35 * (1**-1.35 - 5**-1.35)
Yield_tot=0.1*N_tot
s1=s.sygma(iolevel=0,mgal=1e11,dt=1e9,tend=1.3e10,imf_type='salpeter',alphaimf=2.35,\
imf_bdys=[1,5],hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',\
sn1a_on=False, sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
Yield_tot_sim=s1.history.ism_iso_yield[-1][0]
print ('Should be 1: ',Yield_tot_sim/Yield_tot)
alphaimf = 1.5 #Set test alphaimf
k_N=1e11*(alphaimf-2)/ (-1**-(alphaimf-2) + 30**-(alphaimf-2))
N_tot=k_N/(alphaimf-1) * (-1**-(alphaimf-1) + 30**-(alphaimf-1))
Yield_tot=0.1*N_tot
s1=s.sygma(iolevel=0,mgal=1e11,dt=1e9,tend=1.3e10,imf_type='alphaimf',alphaimf=1.5,imf_bdys=[1,30],hardsetZ=0.0001,
table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False, sn1a_table='yield_tables/sn1a_h1.txt',
iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
Yield_tot_sim=s1.history.ism_iso_yield[-1][0]
print ('Should be 1 :',Yield_tot/Yield_tot_sim)
def imf_times_m(mass):
if mass<=1:
return 0.158 * np.exp( -np.log10(mass/0.079)**2 / (2.*0.69**2))
else:
return mass*0.0443*mass**(-2.3)
k_N= 1e11/ (quad(imf_times_m,0.01,30)[0] )
N_tot=k_N/1.3 * 0.0443* (1**-1.3 - 30**-1.3)
Yield_tot=N_tot * 0.1
s1=s.sygma(iolevel=0,mgal=1e11,dt=1e9,tend=1.3e10,imf_type='chabrier',imf_bdys=[0.01,30],
hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False,
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
Yield_tot_sim=s1.history.ism_iso_yield[-1][0]
print (Yield_tot)
print (Yield_tot_sim)
print ('Should be 1 :',Yield_tot/Yield_tot_sim)
plt.figure(11)
s1.plot_mass(fig=11,specie='H',label='H',color='k',shape='-',marker='o',markevery=800)
m=[1,1.65,2,3,4,5,6,7,12,15,20,25]
ages=[5.67e9,1.211e9,6.972e8,2.471e8,1.347e8,8.123e7,5.642e7,4.217e7,1.892e7,1.381e7,9.895e6,7.902e6]
def yields(m,k_N):
return ( k_N/1.3 * 0.0443*(m**-1.3 - 30.**-1.3) ) * 0.1
yields1=[]
for m1 in m:
yields1.append(yields(m1,k_N))
plt.plot(ages,yields1,marker='+',linestyle='',markersize=20,label='semi')
plt.legend(loc=4)
def imf_times_m(mass):
p0=1.
p1=0.08**(-0.3+1.3)
p2=0.5**(-1.3+2.3)
p3= 1**(-2.3+2.3)
if mass<0.08:
return mass*p0*mass**(-0.3)
elif mass < 0.5:
return mass*p1*mass**(-1.3)
else: #mass>=0.5:
return mass*p1*p2*mass**(-2.3)
k_N= 1e11/ (quad(imf_times_m,0.01,30)[0] )
p1=0.08**(-0.3+1.3)
p2=0.5**(-1.3+2.3)
N_tot=k_N/1.3 * p1*p2*(1**-1.3 - 30**-1.3)
Yield_tot=N_tot * 0.1
s1=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,imf_type='kroupa',imf_bdys=[0.01,30],
hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False,
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
Yield_tot_sim=s1.history.ism_iso_yield[-1][0]
print ('Should be 1: ',Yield_tot/Yield_tot_sim)
plt.figure(111)
s1.plot_mass(fig=111,specie='H',label='H',color='k',shape='-',marker='o',markevery=800)
m=[1,1.65,2,3,4,5,6,7,12,15,20,25]
ages=[5.67e9,1.211e9,6.972e8,2.471e8,1.347e8,8.123e7,5.642e7,4.217e7,1.892e7,1.381e7,9.895e6,7.902e6]
def yields(m,k_N):
return ( k_N/1.3 *p1*p2* (m**-1.3 - 30.**-1.3) ) * 0.1
yields1=[]
for m1 in m:
yields1.append(yields(m1,k_N))
plt.plot(ages,yields1,marker='+',linestyle='',markersize=20,label='semi')
plt.legend(loc=4)
s1=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,sn1a_on=False,sn1a_rate='maoz',imf_type='salpeter',
imf_bdys=[1,30],hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
s2=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,sn1a_on=True,sn1a_rate='maoz',imf_type='salpeter',
imf_bdys=[1,30],hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
print ((s1.history.ism_elem_yield_1a[0]),(s1.history.ism_elem_yield_1a[-1]))
print ((s1.history.ism_elem_yield[0]),(s1.history.ism_elem_yield[-1]))
print ((s2.history.ism_elem_yield_1a[0]),(s2.history.ism_elem_yield_1a[-1]))
print ((s2.history.ism_elem_yield[0]),(s2.history.ism_elem_yield[-1]))
print ((s1.history.ism_elem_yield[-1][0] + s2.history.ism_elem_yield_1a[-1][0])/s2.history.ism_elem_yield[-1][0])
s2.plot_mass(fig=33,specie='H-1',source='sn1a') #plot s1 data (without sn) cannot be plotted -> error, maybe change plot function?
plt.figure(99)
#interpolate_lifetimes_grid=s22.__interpolate_lifetimes_grid
#ytables=ry.read_nugrid_yields('yield_tables/isotope_yield_table_h1.txt')
#zm_lifetime_grid=interpolate_lifetimes_grid(ytables,iolevel=0) 1e7
s1=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,sn1a_on=True,sn1a_rate='exp',
imf_type='salpeter',imf_bdys=[1,30],hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
Yield_tot_sim=s1.history.ism_iso_yield_1a[-1][0]
zm_lifetime_grid=s1.zm_lifetime_grid_current
idx_z = (np.abs(zm_lifetime_grid[0]-0.0001)).argmin() #Z=0
grid_masses=zm_lifetime_grid[1][::-1]
grid_lifetimes=zm_lifetime_grid[2][idx_z][::-1]
spline_degree1=2
smoothing1=0
boundary=[None,None]
spline_lifetime = UnivariateSpline(grid_lifetimes,np.log10(grid_masses),bbox=boundary,k=spline_degree1,s=smoothing1)
plt.plot(grid_masses,grid_lifetimes,label='spline fit grid points (SYGMA)')
plt.xlabel('Mini/Msun')
plt.ylabel('log lifetime')
m=[1,1.65,2,3,4,5,6,7,12,15,20,25]
ages=[5.67e9,1.211e9,6.972e8,2.471e8,1.347e8,8.123e7,5.642e7,4.217e7,1.892e7,1.381e7,9.895e6,7.902e6]
plt.plot(np.array(m),np.log10(np.array(ages)),marker='+',markersize=20,label='input yield grid',linestyle='None')
plt.plot(10**spline_lifetime(np.log10(ages)),np.log10(ages),linestyle='--',label='spline fit SNIa')
plt.legend()
#plt.yscale('log')
#following inside function wiersma09_efolding
#if timemin ==0:
# timemin=1
from scipy.integrate import dblquad
def spline1(x):
#x=t
minm_prog1a=3
#if minimum progenitor mass is larger than 3Msun due to IMF range:
#if self.imf_bdys[0]>3:
# minm_prog1a=self.imf_bdys[0]
return max(minm_prog1a,10**spline_lifetime(np.log10(x)))
def f_wd_dtd(m,t):
#print ('time ',t)
#print ('mass ',m)
mlim=10**spline_lifetime(np.log10(t))
maxm_prog1a=8
#if maximum progenitor mass is smaller than 8Msun due to IMF range:
#if 8>self.imf_bdys[1]:
# maxm_prog1a=self.imf_bdys[1]
if mlim>maxm_prog1a:
return 0
else:
#Delay time distribution function (DTD)
tau= 2e9
mmin=0
mmax=0
inte=0
#follwing is done in __imf()
def g2(mm):
return mm*mm**-2.35
norm=1./quad(g2,1,30)[0]
#print ('IMF test',norm*m**-2.35)
#imf normalized to 1Msun
return norm*m**-2.35* np.exp(-t/tau)/tau
a= 0.01 #normalization parameter
#if spline(np.log10(t))
#a=1e-3/()
a=1e-3/(dblquad(f_wd_dtd,0,1.3e10,lambda x: spline1(x), lambda x: 8)[0] )
n1a= a* dblquad(f_wd_dtd,0,1.3e10,lambda x: spline1(x), lambda x: 8)[0]
# in principle, since the normalization nb_1a_per_m is set, the above calculation is no longer necessary
Yield_tot=n1a*1e11*0.1 *1 #7 #special factor
print (Yield_tot_sim)
print (Yield_tot)
print ('Should be 1: ', Yield_tot_sim/Yield_tot)
s1.plot_mass(specie='H',source='sn1a',label='H',color='k',shape='-',marker='o',markevery=800)
yields1=[]
ages1=[]
a= 0.01 #normalization parameter
a=1e-3/(dblquad(f_wd_dtd,0,1.3e10,lambda x: spline1(x), lambda x: 8)[0] )
for m1 in m:
t=ages[m.index(m1)]
yields= a* dblquad(f_wd_dtd,0,t,lambda x: spline1(x), lambda x: 8)[0] *1e11*0.1 #special factor
yields1.append(yields)
ages1.append(t)
plt.plot(ages1,yields1,marker='+',linestyle='',markersize=20,label='semi')
plt.legend(loc=4)
print (sum(s1.wd_sn1a_range1)/sum(s1.wd_sn1a_range))
s1.plot_sn_distr(xaxis='time',fraction=False)
s2=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,sn1a_rate='gauss',imf_type='salpeter',
imf_bdys=[1,30],hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=True,
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
Yield_tot_sim=s2.history.ism_iso_yield_1a[-1][0]
zm_lifetime_grid=s2.zm_lifetime_grid_current
idx_z = (np.abs(zm_lifetime_grid[0]-0.0001)).argmin() #Z=0
grid_masses=zm_lifetime_grid[1][::-1]
grid_lifetimes=zm_lifetime_grid[2][idx_z][::-1]
spline_degree1=2
smoothing1=0
boundary=[None,None]
spline = UnivariateSpline(grid_lifetimes,np.log10(grid_masses),bbox=boundary,k=spline_degree1,s=smoothing1)
from scipy.integrate import dblquad
def spline1(x):
#x=t
return max(3.,10**spline(np.log10(x)))
def f_wd_dtd(m,t):
#print ('time ',t)
#print ('mass ',m)
mlim=10**spline(np.log10(t))
#print ('mlim',mlim)
if mlim>8.:
return 0
else:
#mmin=max(3.,massfunc(t))
#mmax=8.
#imf=self.__imf(mmin,mmax,1)
#Delay time distribution function (DTD)
tau= 1e9 #3.3e9 #characteristic delay time
sigma=0.66e9#0.25*tau
#sigma=0.2#narrow distribution
#sigma=0.5*tau #wide distribution
mmin=0
mmax=0
inte=0
def g2(mm):
return mm*mm**-2.35
norm=1./quad(g2,1,30)[0]
#imf normalized to 1Msun
return norm*m**-2.35* 1./np.sqrt(2*np.pi*sigma**2) * np.exp(-(t-tau)**2/(2*sigma**2))
#a= 0.0069 #normalization parameter
#if spline(np.log10(t))
a=1e-3/(dblquad(f_wd_dtd,0,1.3e10,lambda x: spline1(x), lambda x: 8)[0] )
n1a= a* dblquad(f_wd_dtd,0,1.3e10,lambda x: spline1(x), lambda x: 8)[0]
Yield_tot=n1a*1e11*0.1 #special factor
print (Yield_tot_sim)
print (Yield_tot)
print ('Should be 1: ', Yield_tot_sim/Yield_tot)
s2.plot_mass(fig=988,specie='H',source='sn1a',label='H',color='k',shape='-',marker='o',markevery=800)
yields1=[]
ages1=[]
m=[1,1.65,2,3,4,5,6,7,12,15,20,25]
ages=[5.67e9,1.211e9,6.972e8,2.471e8,1.347e8,8.123e7,5.642e7,4.217e7,1.892e7,1.381e7,9.895e6,7.902e6]
for m1 in m:
t=ages[m.index(m1)]
yields= a* dblquad(f_wd_dtd,0,t,lambda x: spline1(x), lambda x: 8)[0] *1e11*0.1 #special factor
yields1.append(yields)
ages1.append(t)
plt.plot(ages1,yields1,marker='+',linestyle='',markersize=20,label='semi')
plt.legend(loc=2)
print (sum(s2.wd_sn1a_range1)/sum(s2.wd_sn1a_range))
s2=s.sygma(iolevel=0,mgal=1e11,dt=1e8,tend=1.3e10,sn1a_rate='maoz',imf_type='salpeter',
imf_bdys=[1,30],special_timesteps=-1,hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',
sn1a_on=True, sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
Yield_tot_sim=s2.history.ism_iso_yield_1a[-1][0]
from scipy.interpolate import UnivariateSpline
zm_lifetime_grid=s2.zm_lifetime_grid_current
idx_z = (np.abs(zm_lifetime_grid[0]-0.0001)).argmin() #Z=0
grid_masses=zm_lifetime_grid[1][::-1]
grid_lifetimes=zm_lifetime_grid[2][idx_z][::-1]
spline_degree1=2
smoothing1=0
boundary=[None,None]
spline_lifetime = UnivariateSpline(grid_lifetimes,np.log10(grid_masses),bbox=boundary,k=spline_degree1,s=smoothing1)
from scipy.integrate import quad
def spline1(t):
minm_prog1a=3
#if minimum progenitor mass is larger than 3Msun due to IMF range:
return max(minm_prog1a,10**spline_lifetime(np.log10(t)))
#function giving the total (accumulated) number of WDs at each timestep
def wd_number(m,t):
#print ('time ',t)
#print ('mass ',m)
mlim=10**spline_lifetime(np.log10(t))
maxm_prog1a=8
if mlim>maxm_prog1a:
return 0
else:
mmin=0
mmax=0
inte=0
#normalized to 1msun!
def g2(mm):
return mm*mm**-2.35
norm=1./quad(g2,1,30)[0]
return norm*m**-2.35 #self.__imf(mmin,mmax,inte,m)
def maoz_sn_rate(m,t):
return wd_number(m,t)* 4.0e-13 * (t/1.0e9)**-1
def maoz_sn_rate_int(t):
return quad( maoz_sn_rate,spline1(t),8,args=t)[0]
#in this formula, (paper) sum_sn1a_progenitors number of
maxm_prog1a=8
longtimefornormalization=1.3e10 #yrs
fIa=0.00147
fIa=1e-3
#A = (fIa*s2.number_stars_born[1]) / quad(maoz_sn_rate_int,0,longtimefornormalization)[0]
A = 1e-3 / quad(maoz_sn_rate_int,0,longtimefornormalization)[0]
print ('Norm. constant A:',A)
n1a= A* quad(maoz_sn_rate_int,0,1.3e10)[0]
Yield_tot=n1a*1e11*0.1 #specialfactor
print (Yield_tot_sim)
print (Yield_tot)
print ('Should be 1: ', Yield_tot_sim/Yield_tot)
s2.plot_mass(fig=44,specie='H',source='sn1a',label='H',color='k',shape='-',marker='o',markevery=800)
yields1=[]
ages1=[]
m=[1,1.65,2,3,4,5,6,7,12,15,20,25]
ages=[5.67e9,1.211e9,6.972e8,2.471e8,1.347e8,8.123e7,5.642e7,4.217e7,1.892e7,1.381e7,9.895e6,7.902e6]
for m1 in m:
t=ages[m.index(m1)]
#yields= a* dblquad(wdfrac,0,t,lambda x: spline1(x), lambda x: 8)[0] *1e11*0.1
yields= A*quad(maoz_sn_rate_int,0,t)[0] *1e11*0.1 #special factor
yields1.append(yields)
ages1.append(t)
plt.plot(ages1,yields1,marker='+',linestyle='',markersize=20,label='semi')
plt.legend(loc=2)
plt.legend(loc=3)
s1=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,special_timesteps=-1,imf_type='salpeter',
imf_bdys=[1,30],hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False,
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn',
stellar_param_on=False)
print ('Should be 0: ',s1.history.age[0])
print ('Should be 1: ',s1.history.age[-1]/1.3e10)
print ('Should be 1: ',s1.history.timesteps[0]/1e7)
print ('Should be 1: ',s1.history.timesteps[-1]/1e7)
print ('Should be 1: ',sum(s1.history.timesteps)/1.3e10)
s2=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.5e9,special_timesteps=200,imf_type='salpeter',
imf_bdys=[1,30],hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False,
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
print ('Should be 0: ',s2.history.age[0])
print ('Should be 1: ',s2.history.age[-1]/1.5e9)
print ('Should be 201: ',len(s2.history.age))
print ('Should be 1: ',s2.history.timesteps[0]/1e7)
#print ('in dt steps: ',s2.history.timesteps[1]/1e7,s1.history.timesteps[2]/1e7,'..; larger than 1e7 at step 91!')
print ('Should be 200: ',len(s2.history.timesteps))
print ('Should be 1: ',sum(s2.history.timesteps)/1.5e9)
plt.figure(55)
plt.plot(s1.history.age[1:],s1.history.timesteps,label='linear (constant) scaled',marker='+')
plt.plot(s2.history.age[1:],s2.history.timesteps,label='log scaled',marker='+')
plt.yscale('log');plt.xscale('log')
plt.xlabel('age/years');plt.ylabel('timesteps/years');plt.legend(loc=4)
s3=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,special_timesteps=-1,imf_type='salpeter',imf_bdys=[1,30],
hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False, sn1a_table='yield_tables/sn1a_h1.txt',
iniabu_table='yield_tables/iniabu/iniab_h1.ppn',stellar_param_on=False)
s4=s.sygma(iolevel=0,mgal=1e11,dt=1.3e10,tend=1.3e10,special_timesteps=-1,imf_type='salpeter',imf_bdys=[1,30],
hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False, sn1a_table='yield_tables/sn1a_h1.txt',
iniabu_table='yield_tables/iniabu/iniab_h1.ppn',stellar_param_on=False)
s5=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,special_timesteps=200,imf_type='salpeter',imf_bdys=[1,30],
hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False, sn1a_table='yield_tables/sn1a_h1.txt',
iniabu_table='yield_tables/iniabu/iniab_h1.ppn',stellar_param_on=False)
s6=s.sygma(iolevel=0,mgal=1e11,dt=1.3e10,tend=1.3e10,special_timesteps=200,imf_type='salpeter',imf_bdys=[1,30],
hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False, sn1a_table='yield_tables/sn1a_h1.txt',
iniabu_table='yield_tables/iniabu/iniab_h1.ppn',stellar_param_on=False)
#print (s3.history.ism_iso_yield[-1][0] == s4.history.ism_iso_yield[-1][0] why false?)
print ('should be 1 ',s3.history.ism_iso_yield[-1][0]/s4.history.ism_iso_yield[-1][0])
#print (s3.history.ism_iso_yield[-1][0],s4.history.ism_iso_yield[-1][0])
print ('should be 1',s5.history.ism_iso_yield[-1][0]/s6.history.ism_iso_yield[-1][0])
#print (s5.history.ism_iso_yield[-1][0],s6.history.ism_iso_yield[-1][0])
s1=s.sygma(iolevel=0,mgal=1e7,dt=1e7,tend=1.3e10,hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',
sn1a_on=False, sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
s2=s.sygma(iolevel=0,mgal=1e8,dt=1e8,tend=1.3e10,hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',
sn1a_on=False, sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
s3=s.sygma(iolevel=0,mgal=1e9,dt=1e9,tend=1.3e10,hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',
sn1a_on=False, sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
print ('At timestep 0: ',sum(s1.history.ism_elem_yield[0])/1e7,sum(s2.history.ism_elem_yield[0])/1e8,sum(s3.history.ism_elem_yield[0])/1e9)
print ('At timestep 0: ',sum(s1.history.ism_iso_yield[0])/1e7,sum(s2.history.ism_iso_yield[0])/1e8,sum(s3.history.ism_iso_yield[0])/1e9)
print ('At last timestep, should be the same fraction: ',sum(s1.history.ism_elem_yield[-1])/1e7,sum(s2.history.ism_elem_yield[-1])/1e8,sum(s3.history.ism_elem_yield[-1])/1e9)
print ('At last timestep, should be the same fraction: ',sum(s1.history.ism_iso_yield[-1])/1e7,sum(s2.history.ism_iso_yield[-1])/1e8,sum(s3.history.ism_iso_yield[-1])/1e9)
s1=s.sygma(iolevel=0,mgal=1e11,dt=7e6,tend=1e8,imf_type='salpeter',imf_bdys=[1,30],hardsetZ=0.0001,
table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=True, sn1a_table='yield_tables/sn1a_h1.txt',
iniabu_table='yield_tables/iniabu/iniab_h1.ppn',pop3_table='yield_tables/popIII_h1.txt')
s2=s.sygma(iolevel=0,mgal=1e11,dt=7e6,tend=1e8,special_timesteps=-1,imf_type='salpeter',imf_bdys=[1,30],
hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=True,
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn',
pop3_table='yield_tables/popIII_h1.txt')
s3=s.sygma(iolevel=0,mgal=1e11,dt=1e6,tend=1e8,special_timesteps=-1,imf_type='salpeter',imf_bdys=[1,30],
hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=True, sn1a_table='yield_tables/sn1a_h1.txt',
iniabu_table='yield_tables/iniabu/iniab_h1.ppn',pop3_table='yield_tables/popIII_h1.txt')
s4=s.sygma(iolevel=0,mgal=1e11,dt=3e7,tend=1e8,special_timesteps=-1,imf_type='salpeter',imf_bdys=[1,30],hardsetZ=0.0001,
table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=True, sn1a_table='yield_tables/sn1a_h1.txt',
iniabu_table='yield_tables/iniabu/iniab_h1.ppn',pop3_table='yield_tables/popIII_h1.txt')
s1.plot_sn_distr(rate=True,rate_only='sn2',label1='SN1a, rate, 1',label2='SNII, rate 1',marker1='o',marker2='s',shape2='-',markevery=1)
s2.plot_sn_distr(rate=True,rate_only='sn2',label1='SN1a, rate, 2',label2='SNII rate 2',marker1='d',marker2='p',markevery=1,shape2='-.')
s4.plot_sn_distr(rate=True,rate_only='sn2',label1='SN1a, rate, 2',label2='SNII rate 2',marker1='d',marker2='+',markevery=1,shape2=':',color2='y')
s3.plot_sn_distr(rate=True,rate_only='sn2',label1='SN1a, rate, 2',label2='SNII rate 2',marker1='d',marker2='x',markevery=1,shape2='--')
plt.xlim(6e6,7e7)
plt.vlines(7e6,1e2,1e9)
plt.ylim(1e2,1e4)
print (s1.history.sn2_numbers[1]/s1.history.timesteps[0])
print (s2.history.sn2_numbers[1]/s2.history.timesteps[0])
#print (s1.history.timesteps[:5])
#print (s2.history.timesteps[:5])
s3=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,imf_type='salpeter',imf_bdys=[1,30],hardsetZ=0.0001,
table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=True, sn1a_table='yield_tables/sn1a_h1.txt',
iniabu_table='yield_tables/iniabu/iniab_h1.ppn',pop3_table='yield_tables/popIII_h1.txt',
stellar_param_on=False)
s4=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,special_timesteps=-1,imf_type='salpeter',imf_bdys=[1,30],
hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=True,
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn',
pop3_table='yield_tables/popIII_h1.txt',stellar_param_on=False)
s3.plot_sn_distr(fig=66,rate=True,rate_only='sn1a',label1='SN1a, rate',label2='SNII, rate',marker1='o',marker2='s',markevery=1)
s4.plot_sn_distr(fig=66,rate=True,rate_only='sn1a',label1='SN1a, number',label2='SNII number',marker1='d',marker2='p')
plt.xlim(3e7,1e10)
s1.plot_sn_distr(fig=77,rate=True,marker1='o',marker2='s',markevery=5)
s2.plot_sn_distr(fig=77,rate=True,marker1='x',marker2='^',markevery=1)
#s1.plot_sn_distr(rate=False)
#s2.plot_sn_distr(rate=True)
#s2.plot_sn_distr(rate=False)
plt.xlim(1e6,1.5e10)
#plt.ylim(1e2,1e4)
s1=s.sygma(iolevel=0,imf_bdys=[1.65,30],transitionmass=8,mgal=1e11,dt=1e7,tend=1.3e10,imf_type='salpeter',
hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False,
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
s2=s.sygma(iolevel=0,imf_bdys=[1.65,30],transitionmass=10,mgal=1e11,dt=1e7,tend=1.3e10,imf_type='salpeter',
hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False,
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
Yield_tot_sim_8=s1.history.ism_iso_yield_agb[-1][0]
Yield_tot_sim_10=s2.history.ism_iso_yield_agb[-1][0]
alphaimf=2.35
k_N=1e11*(alphaimf-2)/ (-1.65**-(alphaimf-2) + 30**-(alphaimf-2))
N_tot=k_N/(alphaimf-1) * (-1.65**-(alphaimf-1) + 8**-(alphaimf-1))
Yield_tot_8=0.1*N_tot
N_tot=k_N/(alphaimf-1) * (-1.65**-(alphaimf-1) + 10**-(alphaimf-1))
Yield_tot_10=0.1*N_tot
#N_tot=k_N/(alphaimf-1) * (-1.65**-(alphaimf-1) + 5**-(alphaimf-1))
#Yield_tot_5=0.1*N_tot
print ('1:',Yield_tot_sim_8/Yield_tot_8)
print ('1:',Yield_tot_sim_10/Yield_tot_10)
#print ('1:',Yield_tot_sim_5/Yield_tot_5)
s0=s.sygma(iolevel=0,iniZ=0.0001,imf_bdys=[0.01,100],imf_yields_range=[1,100],
hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False,
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: IMF notes
Step2: The total number of stars $N_{tot}$ is then
Step3: With a yield ejected of $0.1 Msun$, the total amount ejected is
Step4: compared to the simulation
Step5: Compare both results
Step6: Test of distinguishing between massive and AGB sources
Step7: Calculating yield ejection over time
Step8: Simulation results in the plot above should agree with semi-analytical calculations.
Step9: Select imf_bdys=[1,5]
Step10: Results
Step11: Test of parameter imf_type
Step12: Chabrier
Step13: Simulation should agree with semi-analytical calculations for Chabrier IMF.
Step14: Simulation results compared with semi-analytical calculations for Kroupa IMF.
Step15: Test of parameter sn1a_rate (DTD)
Step16: Small test
Step17: Simulation results compared with semi-analytical calculations for the SNIa sources with Wiersma (exp) implementation.
Step18: Wiersma (Gauss)
Step19: Simulation results compared with semi-analytical calculations for the SNIa sources with Wiersma (Gauss) implementation.
Step20: SNIa implementation
Step21: Check trend
Step22: Test of parameter tend, dt and special_timesteps
Step23: First timestep size of 1e7, then in log space to tend with a total number of steps of 200; Note
Step24: Choice of dt should not change final composition
Step25: Test of parameter mgal - the total mass of the SSP
Step26: Test of SN rate
Step27: Rate does not depend on timestep type
Step28: Test of parameter transitionmass
Step29: imf_yield_range - include yields only in this mass range
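The recurring semi-analytical check in the steps above — normalize a Salpeter IMF dN/dm = k_N * m**-2.35 so the total stellar mass equals mgal, count the stars, and multiply by the 0.1 Msun ejected per star — can be condensed into one helper. The default bounds and per-star yield mirror the test setup; treat this as a sketch of the bookkeeping, not part of SYGMA itself:

```python
# Sketch of the Salpeter-IMF bookkeeping behind the "Should be 1" checks:
# dN/dm = k_N * m**-2.35, normalized so the stellar mass born between
# mlow and mup equals mgal; each star is assumed to eject 0.1 Msun of H.
def salpeter_totals(mgal=1e11, mlow=1.0, mup=30.0, yield_per_star=0.1):
    # mass normalization: integral of m * k_N * m**-2.35 over [mlow, mup] = mgal
    k_N = mgal * 0.35 / (mlow**-0.35 - mup**-0.35)
    # total number of stars: integral of k_N * m**-2.35 over [mlow, mup]
    N_tot = k_N / 1.35 * (mlow**-1.35 - mup**-1.35)
    return k_N, N_tot, yield_per_star * N_tot

k_N, N_tot, Yield_tot = salpeter_totals()
```

Note that the per-source checks above (AGB vs. massive stars) keep k_N from the full [1, 30] normalization and restrict only the mass range of the counting integral, rather than re-normalizing over the sub-range.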
|
9,038
|
<ASSISTANT_TASK:>
Python Code:
import sys,os,glob
from collections import OrderedDict
import numpy as np
from utils.misc import readPickle, createIfAbsent
sys.path.append('../')
from optvaedatasets.load import loadDataset as loadDataset_OVAE
from sklearn.feature_extraction.text import TfidfTransformer
default_params = readPickle('../optvaeutils/default_settings.pkl')[0]
for k in default_params:
print '(',k,default_params[k],')',
print
default_params['opt_type'] = 'finopt' #set to finopt to optimize var. params, none otherwise
default_params['n_steps'] = 5
#temporary directory where checkpoints are saved
default_params['savedir'] = './tmp'
dset = loadDataset_OVAE('rcv2')
#Visualize structure of dataset dict
for k in dset:
print k, type(dset[k]),
if hasattr(dset[k],'shape'):
print dset[k].shape
elif type(dset[k]) is not list:
print dset[k]
else:
print
#Add parameters to default_params
for k in ['dim_observations','data_type']:
default_params[k] = dset[k]
default_params['max_word_count'] =dset['train'].max()
#Create IDF
additional_attrs = {}
tfidf = TfidfTransformer(norm=None)
tfidf.fit(dset['train'])
additional_attrs['idf'] = tfidf.idf_
from optvaemodels.vae import VAE as Model
import optvaemodels.vae_learn as Learn
import optvaemodels.vae_evaluate as Evaluate
default_params['savedir']+='-rcv2-'+default_params['opt_type']
createIfAbsent(default_params['savedir'])
pfile= default_params['savedir']+'/'+default_params['unique_id']+'-config.pkl'
print 'Training model from scratch. Parameters in: ',pfile
model = Model(default_params, paramFile = pfile, additional_attrs = additional_attrs)
savef = os.path.join(default_params['savedir'],default_params['unique_id']) #Prefix for saving in checkpoint directory
savedata = Learn.learn( model,
dataset = dset['train'],
epoch_start = 0 ,
epoch_end = 3, #epochs -- set w/ default_params['epochs']
batch_size = default_params['batch_size'], #batch size
savefreq = default_params['savefreq'], #frequency of saving
savefile = savef,
dataset_eval= dset['valid']
)
for k in savedata:
print k, type(savedata[k]), savedata[k].shape
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Model Parameters
Step2: For the moment, we will leave everything as is. Some worthwhile parameters to note
Step3: Load dataset
Step4: Setup
Step5: Training the model
|
9,039
|
<ASSISTANT_TASK:>
Python Code:
x = np.arange(-5.0, 5.0, 0.1)
y = np.array(x > 0, dtype=int)
plt.plot(x, y)
plt.show()
x = np.arange(-5.0, 5.0, 0.1)
y = 1 / (1 + np.exp(-x))
plt.plot(x, y)
plt.show()
x = np.arange(-5.0, 5.0, 0.1)
y = np.maximum(0, x)
plt.plot(x, y)
plt.show()
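The three activations above can also be sanity-checked numerically, without plotting; a minimal sketch:

```python
import numpy as np

def step(x):
    return np.array(x > 0, dtype=int)

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def relu(x):
    return np.maximum(0, x)

x = np.array([-2.0, 0.0, 2.0])
print(step(x))     # step is 0 for x <= 0, 1 for x > 0
print(sigmoid(0))  # 0.5: the sigmoid is a smooth version of the step
print(relu(x))     # ReLU passes positives through and clips negatives to 0
```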
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Sigmoid function
Step2: ReLU function
|
9,040
|
<ASSISTANT_TASK:>
Python Code:
query_url = 'https://data.sfgov.org/resource/wbb6-uh78.json?$order=close_dttm%20DESC&$offset={}&$limit={}'
# query_url = "https://data.sfgov.org/resource/wbb6-uh78.json?$where=alarm_dttm>='2013-02-12 04:52:17'&$order=close_dttm%20DESC"
# query_url = "https://data.sfgov.org/resource/wbb6-uh78.json?$where=alarm_dttm>='2013-02-12 04:52:17'"
offset = 0
limit = 1000000
df = pd.read_json(query_url.format(offset, limit))
# df = pd.read_json(query_url)
cols_to_drop = ["automatic_extinguishing_sytem_failure_reason",
"automatic_extinguishing_sytem_type",
"battalion",
"box",
"call_number",
"detector_effectiveness",
"detector_failure_reason",
"ems_personnel",
"ems_units",
"exposure_number",
"first_unit_on_scene",
"ignition_factor_secondary",
"mutual_aid",
"no_flame_spead",
"other_personnel",
"other_units",
"station_area",
"supervisor_district"]
df = df.drop(cols_to_drop, axis=1)
for col in df.columns:
if 'dttm' in col:
df[col] = pd.to_datetime(df[col])
df.alarm_dttm.min() # The earliest timestamp of this dataset is 2013-02-12 04:52:17
df.estimated_property_loss.value_counts(dropna=False)
df.shape
# So we have 100,000 rows of data, going all the way back to February 12, 2013
# There are thoughts that there's a correlation with year and cost, especially in the Mission
len(df[df.estimated_property_loss.isnull()])
# of the 100,000 rows, 96,335 are null
96335 / float(df.shape[0])
# wow, so where are these companies getting their data about the costs associated with fires?
# it's not from the sfgov website. we'll need to table that and come back later.
df['year'] = df.alarm_dttm.apply(lambda x: x.year)
temp_df = df[df.estimated_property_loss.notnull()]
temp_df.shape
temp_df.groupby('year').sum()['estimated_property_loss']
mask = ((temp_df.zipcode.notnull()) & (temp_df.zipcode.isin([94103, 94110])))
temp_df[mask].groupby('year').sum()['estimated_property_loss']
# So based on the above data yes, the 2015 fires for those two zipcodes doubled,
# and we can look into why, but could it be a symptom of population growth?
# this article http://sf.curbed.com/2016/7/1/12073544/mission-fires-arson-campos
# said that there were 2,788 blazes... but that's wrong, it's 2,788 units impacted.
# One blaze could impact multiple units
#
# This infographic shows number of units impacted by fire by neighborhood,
# but isn't this seriously misleading? https://infogr.am/sf_fires_by_zip-3
#
# Ok, no seriously, I'm setting aside this mission research, because the upside for getting it right is low
# but the downside for getting it wrong is very impactful. Not the sort of press we want
# TODO: check this out and compare it to the data set
# https://celestelecomptedotcom.files.wordpress.com/2015/04/15-04-05_wfs-greater-alarms-01-01-01-04-05-15.pdf
mask = ((temp_df.zipcode.notnull()) &
(temp_df.zipcode.isin([94103, 94110])) &
(temp_df.year == 2014))
temp_df[mask].groupby('year').sum()['estimated_property_loss']
mask = ((df.estimated_property_loss.notnull()))
df[mask].groupby('year').sum()['estimated_property_loss']
# So based on the above data yes, the 2015 fires for those two zipcodes doubled,
# and we can look into why, but could it be a symptom of population growth?
# according to the document mentioned above and the report, it says that the population size shrunk. OK...
# but the data that is being looked at is a HUGE period. There was a census report in 2000, and then another one
# that's a large bucket of 2009-2013. The change reported was a 9% decrease, not exactly a huge boom.
# My next theory is that the reason that the cost increased is simply that they got better about capturing records
# for certain areas
# Let's try a little experiment
# let's look at which fire areas are better at keeping records, shall we?
df['loss_recorded'] = 0
mask = ((df.estimated_property_loss.notnull()))
df.loc[mask, 'loss_recorded'] = 1
mask = ((df.zipcode.notnull()))
zipgroup = df[mask].groupby(['zipcode'])
zipgroup.mean()['loss_recorded'].plot(kind='barh')
# the above chart shows the likelihood that the estimated_property_loss value
# is recorded based on zipcode.
# Mission District is within 94103, 94110 zipcodes
#
zipgroup.mean()['loss_recorded'][94103]
zipgroup.mean()['loss_recorded'][94110]
mask = ((df.estimated_property_loss.notnull()) &
(df.zipcode == 94110))
sns.distplot(df[mask].estimated_property_loss)
mask = ((df.estimated_property_loss.notnull()) &
(df.zipcode == 94103))
sns.distplot(df[mask].estimated_property_loss)
df['estimated_property_loss'] = pd.to_numeric(df['estimated_property_loss'])
df['estimated_property_loss'] = df['estimated_property_loss'].fillna(0)
df.info()
mask = ((df.estimated_property_loss.notnull()) &
(df.zipcode == 94103))
df[mask].estimated_property_loss.value_counts(dropna=False, normalize=True, bins=50)
df['month'] = df.alarm_dttm.apply(lambda x: x.month)
mask = ((df.month == 6) & (df.year == 2016))
df[mask].describe()
df.describe()
df.alarm_dttm.min()
df.alarm_dttm.max()
# what is odd is how the fire civilian fatalities have a max value of 1, which makes it concerning that the dataset
# is inaccurate and needs to be cleaned more carefully before we proceed.
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: According to wikipeda, the mission district falls into two zipcodes, 94103, 94110
Step2: Initial Conclusions
Step3: Disclaimers from the Fire Marshal
|
9,041
|
<ASSISTANT_TASK:>
Python Code:
import os
os.mkdir("/tmp/park-python")
try:
os.rmdir("/tmp/park-python")
except IOError as err:
print(err)
path = "/tmp/park-python/lectures/04"
if not os.path.exists(path):
os.makedirs(path)
os.rmdir("/tmp/park-python")
import shutil
shutil.rmtree("/tmp/park-python")
import pprint
pprint.pprint(list(os.walk(os.curdir)))
# open a file descriptor for writing
f = open("/tmp/example.txt", "w")
# write the contents
f.write("Технопарк\n")
# be sure to close it
f.close()
# open a file descriptor for reading
f = open("/tmp/example.txt", "r")
# read the entire contents.
data = f.read()
# be sure to close it!
f.close()
print(data)
# using a context manager
with open("/tmp/example.txt", "a") as f:
f.write("МГТУ\n")
with open("/tmp/example.txt", "r") as f:
print(f.readlines())
# read the file line by line, without loading it fully into memory
with open("/tmp/example.txt", "r") as f:
for line in f:
print(repr(line))
# To check the integrity of the saved file
import hashlib
def hash_file(filename):
    h = hashlib.sha1()
    # open the file in binary mode.
    with open(filename,'rb') as file:
        chunk = 0
        while chunk != b'':
            # read in chunks of 1024 bytes
            chunk = file.read(1024)
            h.update(chunk)
    # hex representation of the resulting digest.
    return h.hexdigest()
print(hash_file("/tmp/example.txt"))
print(hash_file("/tmp/example.txt"))
with open("/tmp/example.txt", "a") as f:
    f.write("1")
print("After changes:", hash_file("/tmp/example.txt"))
import sys
print(sys.stdin)
print(sys.stdout)
print(sys.stderr)
print(sys.stdin.fileno())
print(sys.stdout.fileno())
sys.stdout.write("where am I")
def get_max_length_word(sentence):
longest_word = None
words = sentence.split()
for word in words:
if not longest_word or len(word) > len(longest_word):
longest_word = word
return longest_word
import unittest
class LongestWordTestCase(unittest.TestCase):
def test_sentences(self):
sentences = [
["Beautiful is better than ugly.", "Beautiful"],
["Complex is better than complicated.", "complicated"]
]
for sentence, correct_word in sentences:
self.assertEqual(get_max_length_word(sentence), correct_word)
# Real projects usually use the automatic test discovery mechanism.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(LongestWordTestCase)
unittest.TextTestRunner().run(suite)
class BoomException(Exception):
pass
class Material:
def __init__(self, name, reacts_with=None):
self.name = name
self.reacts_with = reacts_with or []
def __repr__(self):
return self.name
class Alchemy:
def __init__(self):
self.materials = []
def add(self, material):
for existing_material in self.materials:
if material.name not in existing_material.reacts_with:
continue
self.materials = []
raise BoomException("{0} + {1}".format(existing_material.name, material.name))
self.materials.append(material)
def remove(self, material):
self.materials.remove(material)
# 2Na + 2H2O = 2NaOH + H2 (Do not repeat at home!!! Lye is extremely dangerous!)
alchemy = Alchemy()
material_ca = Material("Ca", reacts_with=[])
material_h20 = Material("H2O", reacts_with=["Na"])
material_na = Material("Na", reacts_with=["H2O"])
alchemy.add(material_ca)
alchemy.add(material_h20)
try:
alchemy.add(material_na)
except BoomException:
print("We are alive! But all items lost!")
import unittest
class AlchemyTest(unittest.TestCase):
def setUp(self):
self.alchemy = Alchemy()
def test_add(self):
self.alchemy.add(Material("C"))
self.alchemy.add(Material("F"))
self.assertEqual(len(self.alchemy.materials), 2)
def test_remove(self):
material_c = Material("C")
self.alchemy.add(material_c)
self.assertEqual(len(self.alchemy.materials), 1)
self.alchemy.remove(material_c)
self.assertEqual(len(self.alchemy.materials), 0)
def test_boom(self):
material_na = Material("Na", reacts_with=["H2O"])
material_h20 = Material("H2O", reacts_with=["Na"])
self.alchemy.add(material_na)
self.assertRaises(BoomException, self.alchemy.add, material_h20)
self.assertEqual(len(self.alchemy.materials), 0)
# Real projects usually use the automatic test discovery mechanism.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(AlchemyTest)
unittest.TextTestRunner().run(suite)
import requests
def get_location_city(ip):
data = requests.get("https://freegeoip.net/json/{ip}".format(ip=ip)).json()
return data["city"]
def get_ip():
data = requests.get("https://httpbin.org/ip").json()
return data["origin"]
get_location_city(get_ip())
import math
def fake_sqrt(num):
return 42
original_sqrt = math.sqrt
math.sqrt = fake_sqrt
# call the function we patched.
print(math.sqrt(16))
math.sqrt = original_sqrt
math.sqrt(16)
import unittest
from unittest.mock import patch, Mock
class FakeIPResponse:
def json(self):
return {"origin": "127.0.0.1"}
class LongestWordTestCase(unittest.TestCase):
@patch('requests.get', Mock(return_value=FakeIPResponse()))
def test_get_ip(self):
self.assertEqual(get_ip(), "127.0.0.1")
suite = unittest.defaultTestLoader.loadTestsFromTestCase(LongestWordTestCase)
unittest.TextTestRunner().run(suite)
from unittest.mock import Mock
mock = Mock()
mock.method(1, 2, 3, test='wow')
mock.method.assert_called_with(1, 2, 3, test='wow')
mock.non_existing_method.assert_not_called()
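`patch` also works as a context manager, which undoes the patch automatically even if the body raises; a small sketch reusing the math.sqrt idea from above:

```python
import math
from unittest.mock import patch

with patch("math.sqrt", return_value=42):
    assert math.sqrt(16) == 42  # patched inside the with-block

assert math.sqrt(16) == 4.0     # original restored on exit
```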
STEPS = 50000000
# A simple program that adds up numbers.
def worker(steps):
count = 0
for i in range(steps):
count += 1
return count
%timeit -n1 -r1 worker(STEPS)
print("Remind the instructor to show Activity Monitor")
import threading
import queue
result_queue = queue.Queue()
STEPS = 50000000
NUM_THREADS = 2
def worker(steps):
count = 0
for i in range(steps):
count += 1
result_queue.put(count)
def get_count_threaded():
count = 0
threads = []
for i in range(NUM_THREADS):
t = threading.Thread(target=worker, args=(STEPS//NUM_THREADS,))
threads.append(t)
t.start()
for i in range(NUM_THREADS):
count += result_queue.get()
return count
%timeit -n1 -r1 get_count_threaded()
import multiprocessing
NUM_PROCESSES = 2
STEPS = 50000000
result_queue = multiprocessing.Queue()
def worker(steps):
count = 0
for i in range(steps):
count += 1
result_queue.put(count)
def get_count_in_processes():
count = 0
processes = []
for i in range(NUM_PROCESSES):
p = multiprocessing.Process(target=worker, args=(STEPS//NUM_PROCESSES,))
processes.append(p)
p.start()
for i in range(NUM_PROCESSES):
count += result_queue.get()
return count
%timeit -n1 -r1 get_count_in_processes()
import requests
STEPS = 100
def download():
requests.get("http://127.0.0.1:8000").text
# A simple program that downloads the content of a URL. A typical IO-bound task.
def worker(steps):
for i in range(steps):
download()
%timeit -n1 -r1 worker(STEPS)
import threading
STEPS = 100
NUM_THREADS = 2
def worker(steps):
count = 0
for i in range(steps):
download()
def run_worker_threaded():
threads = []
for i in range(NUM_THREADS):
t = threading.Thread(target=worker, args=(STEPS//NUM_THREADS,))
threads.append(t)
t.start()
for t in threads:
t.join()
%timeit -n1 -r1 run_worker_threaded()
import multiprocessing
NUM_PROCESSES = 2
def worker(steps):
count = 0
for i in range(steps):
download()
def run_worker_in_processes():
processes = []
for i in range(NUM_PROCESSES):
p = multiprocessing.Process(target=worker, args=(STEPS//NUM_PROCESSES,))
processes.append(p)
p.start()
for p in processes:
p.join()
%timeit -n1 -r1 run_worker_in_processes()
import concurrent.futures
import requests
STEPS = 100
def download():
return requests.get("http://127.0.0.1:8000").text
def run_in_executor():
executor = concurrent.futures.ThreadPoolExecutor(max_workers=64)
future_to_url = {executor.submit(download): i for i in range(STEPS)}
for future in concurrent.futures.as_completed(future_to_url):
i = future_to_url[future]
try:
data = future.result()
except Exception as exc:
print('%d generated an exception: %s' % (i, exc))
else:
pass
#print('%d page is %d bytes' % (i, len(data)))
executor.shutdown()
%timeit -n1 -r1 run_in_executor()
counter = 0
def worker(num):
global counter
for i in range(num):
counter += 1
worker(1000000)
print(counter)
import threading
counter = 0
def worker(num):
global counter
for i in range(num):
counter += 1
threads = []
for i in range(10):
t = threading.Thread(target=worker, args=(100000,))
threads.append(t)
t.start()
for t in threads:
t.join()
#print(counter)
import threading
counter = 0
lock = threading.Lock()
def worker(num):
global counter
for i in range(num):
lock.acquire()
counter += 1
lock.release()
threads = []
for i in range(10):
t = threading.Thread(target=worker, args=(100000,))
threads.append(t)
t.start()
for t in threads:
t.join()
print(counter)
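The acquire()/release() pair above is usually written as a with-block, which releases the lock even if the body raises an exception:

```python
import threading

counter = 0
lock = threading.Lock()

def worker(num):
    global counter
    for _ in range(num):
        with lock:  # acquire() on entry, release() on exit
            counter += 1

threads = [threading.Thread(target=worker, args=(10000,)) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 100000
```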
# deadlock example
import threading
counter = 0
lock = threading.Lock()
def print_counter():
lock.acquire()
print(counter)
lock.release()
def worker():
global counter
lock.acquire()
print_counter()
counter += 1
lock.release()
worker()
import time
def request(i):
print(f"Sending request {i+1}")
time.sleep(1)
print(f"Got response from request {i+1}")
print()
for i in range(5):
request(i)
import time
def request(i):
print("Sending request %d" % i)
def on_data(data):
print("Got response from request %d" % i)
return on_data
callbacks = []
for i in range(5):
cb = request(i)
callbacks.append(cb)
time.sleep(1)
for cb in callbacks:
cb("data")
def simple_gen():
yield 1
yield 2
gen = simple_gen()
print(next(gen))
print(next(gen))
print(next(gen))
gen = simple_gen()
for i in gen:
print(i)
def fib():
a, b = 0, 1
while True:
yield a
a, b = b, a + b
gen = fib()
for i in range(6):
print(next(gen))
def coro():
next_value = yield "Hello"
yield next_value
c = coro()
print(next(c))
print(c.send("World"))
import time
def request(i):
print("Sending request %d" % i)
data = yield
print("Got response from request %d" % i)
generators = []
for i in range(5):
gen = request(i)
generators.append(gen)
next(gen)
time.sleep(1)
for gen in generators:
try:
gen.send("data")
except StopIteration:
pass
import asyncio
loop = asyncio.get_event_loop()
loop.run_forever()
import asyncio
def cb():
print("callback called")
loop.stop()
loop = asyncio.get_event_loop()
loop.call_later(delay=3, callback=cb)
print("start event loop")
loop.run_forever()
import asyncio
@asyncio.coroutine
def return_after_delay():
yield from asyncio.sleep(3)
print("return called")
loop = asyncio.get_event_loop()
print("start event loop")
loop.run_until_complete(return_after_delay())
import asyncio
async def return_after_delay():
await asyncio.sleep(3)
print("return called")
loop = asyncio.get_event_loop()
print("start event loop")
loop.run_until_complete(return_after_delay())
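Since Python 3.7 the loop boilerplate above can be replaced with asyncio.run, which creates a fresh event loop, runs the coroutine to completion, and closes the loop. (Inside Jupyter, which already runs its own loop, a top-level await is used instead.)

```python
import asyncio

async def return_after_delay():
    await asyncio.sleep(0.1)
    return "done"

result = asyncio.run(return_after_delay())
print(result)  # done
```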
import asyncio
async def get_data():
await asyncio.sleep(1)
return "boom"
async def request(i):
print(f"Sending request {i+1}")
data = await get_data()
print(f"Got response from request {i+1}: {data}")
loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.gather(*[request(i) for i in range(5)]))
import asyncio
async def get_data():
await asyncio.sleep(1)
raise ValueError
async def request(i):
print("Sending request %d" % i)
try:
data = await get_data()
except ValueError:
print("Error in request %d" % i)
else:
print("Got response from request %d" % i)
loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.gather(*[request(i) for i in range(5)]))
import aiohttp
import asyncio
STEPS = 100
async def download(loop):
async with aiohttp.ClientSession(loop=loop) as session:
async with session.get("http://127.0.0.1:8000") as response:
return await response.text()
async def worker(steps, loop):
await asyncio.gather(*[download(loop) for x in range(steps)])
loop = asyncio.get_event_loop()
%timeit -n1 -r1 loop.run_until_complete(worker(STEPS, loop))
# future example.
import asyncio
async def slow_operation(future):
try:
await asyncio.wait_for(asyncio.sleep(1), 2)
except asyncio.TimeoutError:
future.set_exception(ValueError("Error sleeping"))
else:
future.set_result('Future is done!')
def got_result(future):
if future.exception():
print("Exception:", type(future.exception()))
else:
print(future.result())
loop.stop()
loop = asyncio.get_event_loop()
future = asyncio.Future()
future.add_done_callback(got_result)
asyncio.ensure_future(slow_operation(future))
loop.run_forever()
# offload the blocking calls to a thread pool
import asyncio
import requests
async def main():
loop = asyncio.get_event_loop()
future1 = loop.run_in_executor(None, requests.get, 'http://127.0.0.1:8000')
future2 = loop.run_in_executor(None, requests.get, 'http://127.0.0.1:8000')
response1 = await future1
response2 = await future2
print(response1.text)
print(response2.text)
loop = asyncio.get_event_loop()
loop.run_until_complete(main())
# asyncio + multiprocessing
import aiohttp
import asyncio
import multiprocessing
NUM_PROCESSES = 2
STEPS = 100
async def download(loop):
async with aiohttp.ClientSession(loop=loop) as session:
async with session.get("http://127.0.0.1:8000") as response:
return await response.text()
async def worker(steps, loop):
await asyncio.gather(*[download(loop) for x in range(steps)], loop=loop)
def run(steps):
loop = asyncio.new_event_loop()
loop.run_until_complete(worker(steps, loop))
def run_in_processes():
processes = []
for i in range(NUM_PROCESSES):
p = multiprocessing.Process(target=run, args=(STEPS//NUM_PROCESSES,))
processes.append(p)
p.start()
for p in processes:
p.join()
%timeit -n1 -r1 run_in_processes()
import subprocess
import os
result = subprocess.run(["ls", "-l", os.getcwd()], stdout=subprocess.PIPE)
print(result.stdout)
# using the shell
result = subprocess.run(
"ls -l " + os.getcwd() + "|grep debug",
stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True
)
print(result.stdout)
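Two useful subprocess.run options not shown above: text=True decodes stdout into str instead of bytes, and check=True raises CalledProcessError on a non-zero exit code. sys.executable is used here so the sketch does not depend on a particular shell command:

```python
import subprocess
import sys

# run a child Python process and capture its decoded output
result = subprocess.run(
    [sys.executable, "-c", "print('hello')"],
    stdout=subprocess.PIPE, text=True, check=True,
)
print(result.stdout)      # 'hello\n', already a str
print(result.returncode)  # 0
```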
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Working with files
Step2: "r" – open for reading (this is the default value).
Step3: stdin, stdout, stderr
Step4: Since the stdout and stderr descriptors are redefined in the Jupyter notebook, let's see where they lead
Step5: And they lead right into this notebook
Step6: What can go wrong? Anything at all
Step7: <table border="1" class="docutils" align="left">
Step8: unittest.mock
Step9: First, let's look at what monkey patching is.
Step10: https
Step11: <div style="float
Step12: <div style="float
Step13: Why do we need threads at all, then?
Step14: <div style="float
Step15: <div style="float
Step16: As we can see, threads gave a better result (MacBook Pro 2016, 64 threads).
Step17: Similarly, a ProcessPoolExecutor can be used to move the work into a pool of processes.
Step18: Back to the URL-downloading example
Step19: Each request takes 1 second, and we wait 5 seconds for 5 requests, even though we could have sent them one after another, received all the results after one second, and processed them.
Step20: Generators
Step21: The first advantage
Step22: The second advantage
Step23: We can work with an infinite stream of data. Individual generators can exchange results as they become ready, that is, handle several parallel tasks. These tasks do not necessarily depend on each other.
Step24: In the context of this lecture it is important to understand that in Python the execution of a generator function can be paused to wait for the needed data and then resumed from the point of interruption. The local execution context is preserved, and while we wait for data the interpreter can do other useful work
Step25: <div style="float
Step26: In Python 3.4, a coroutine's result was obtained with the yield from construct (https
Step27: Version 3.5 introduced special keywords that allow programming in an asynchronous style
Step28: For the program to run asynchronously, we need the primitives provided by the asyncio library
Step29: Exceptions when working with coroutines behave exactly the same as in synchronous code
Step30: Examples of other Event Loop implementations
Step31:
Step32: Closing example (asyncio + multiprocessing)
Step33: Subprocess. If time remains...
|
9,042
|
<ASSISTANT_TASK:>
Python Code:
# Syntax error
x = 1; y = 2
b = x == y # Boolean variable that is true when x & y have the same value
b = 1 == 2 # Syntax error
b
# Exception - invalid operation
a = 0
5/a # Division by zero
# Exception - invalid operation
input = '40'
float(input)/11 # Incompatiable types for the operation
# Incorrect logic
import math
x = 55
math.sin(x)**2 + math.cos(x)**2 == 1 # Should be math.cos(x)**2
def entropy(ps):
items = ps * np.log(ps)
return -np.sum(items)
ps = [0.1, 0.3, 0.5, 0.7, 0.9]
entropy(ps)
def entropy(ps):
    print(ps)
items = ps * np.log(ps)
return -np.sum(items)
ps = "0.1, 0.3, 0.5, 0.7, 0.9"
entropy(ps)
def entropy(ps):
    print(ps)
items = ps * np.log(ps)
return -np.sum(items)
ps = [0.1, 0.3, 0.5, 0.7, 0.9]
entropy(ps)
# Create a vector of probabilities.
ps = np.arange(5.)
ps /= ps.sum()
ps
entropy(ps)
def entropy1(ps):
print("ps=%s" % str(ps))
items = ps * np.log(ps)
if np.isnan(items[0]):
print(items)
return -np.sum(items)
entropy1([.1, .2])
np.isnan(np.nan)
def entropy2(ps):
ps = np.asarray(ps) # convert p to array if necessary
print(ps)
items = []
for val in ps:
item = val * np.log(val)
if np.isnan(item):
print("%f makes a nan" % val)
items.append(item)
#items = ps * np.log(ps)
return -np.sum(items)
entropy2(ps)
def entropy(ps):
items = ps * np.log(ps)
if np.isnan(items[0]):
import pdb; pdb.set_trace()
return -np.sum(items)
ps = [0, .1, .1, .3]
entropy(ps)
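Once the debugger shows that the nan comes from the 0 * log(0) term, one fix is to treat those terms as 0 (their mathematical limit). A sketch of such a version, not part of the original lesson:

```python
import numpy as np

def entropy_safe(ps):
    ps = np.asarray(ps, dtype=float)
    # replace zeros by 1 inside the log so 0 * log(0) terms become 0
    items = np.where(ps > 0, ps * np.log(np.where(ps > 0, ps, 1.0)), 0.0)
    return -np.sum(items)

print(entropy_safe([0, 0.1, 0.1, 0.3]))  # finite, no nan
```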
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Question
Step2: What's the bug here and how do we resolve?
Step3: We should have documented the inputs to the function!
Step4: Now it works fine for the first set of inputs. Let's try other inputs.
Step5: We get nan, which stands for "Not a Number". What's going on here?
Step6: By printing some of the intermediate items, we see the problem
Step7: Using a Debugger
Step8: This can be a more convenient way to debug programs and step through the actual execution.
|
9,043
|
<ASSISTANT_TASK:>
Python Code:
# load the required packages
# PyTorch packages
import torch
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
# packages for numerical computing and plotting
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
# load the machine-learning package, mainly for 2-D visualization of the word vectors
from sklearn.decomposition import PCA
# load the Word2Vec packages
import gensim as gensim
from gensim.models import Word2Vec
from gensim.models.keyedvectors import KeyedVectors
from gensim.models.word2vec import LineSentence
# load the regular-expression package
import re
# display plots directly in the notebook
%matplotlib inline
# 加载中文词向量,下载地址为:链接:http://pan.baidu.com/s/1gePQAun 密码:kvtg
# 该中文词向量库是由尹相志提供,训练语料来源为:微博、人民日报、上海热线、汽车之家等,包含1366130个词向量
#word_vectors = KeyedVectors.load_word2vec_format('vectors.bin', binary=True, unicode_errors='ignore')
#len(word_vectors.vocab)
# 加载中文的词向量,下载地址为:http://nlp.stanford.edu/data/glove.6B.zip,解压后将glove.6B.100d.txt文件拷贝到与本notebook
# 文件一致的文件夹洗面。
#f = open('glove.6B.100d.txt', 'r')
#i = 1
# 将英文的词向量都存入如下的字典中
#word_vectors_en = {}
#with open('glove.6B.100d.txt') as f:
# for line in f:
# numbers = line.split()
# word = numbers[0]
# vectors = np.array([float(i) for i in numbers[1 : ]])
# word_vectors_en[word] = vectors
# i += 1
#print(len(word_vectors_en))
# 中文的一二三四五列表
#cn_list = {'一', '二', '三', '四', '五', '六', '七', '八', '九', '零'}
# 阿拉伯数字的12345列表
#en_list = {'1', '2', '3', '4', '5', '6', '7', '8', '9', '0'}
# 英文数字的列表
#en_list = {'one', 'two', 'three', 'four', 'five', 'six', 'seven', 'eight', 'nine', 'zero'}
# 对应词向量都存入到列表中
#cn_vectors = [] #中文的词向量列表
#en_vectors = [] #英文的词向量列表
#for w in cn_list:
# cn_vectors.append(word_vectors[w])
#for w in en_list:
# en_vectors.append(word_vectors_en[w])
# 将这些词向量统一转化为矩阵
#cn_vectors = np.array(cn_vectors)
#en_vectors = np.array(en_vectors)
# 降维实现可视化
#X_reduced = PCA(n_components=2).fit_transform(cn_vectors)
#Y_reduced = PCA(n_components = 2).fit_transform(en_vectors)
# 绘制所有单词向量的二维空间投影
#f, (ax1, ax2) = plt.subplots(1, 2, figsize = (10, 8))
#ax1.plot(X_reduced[:, 0], X_reduced[:, 1], 'o')
#ax2.plot(Y_reduced[:, 0], Y_reduced[:, 1], 'o')
#zhfont1 = matplotlib.font_manager.FontProperties(fname='/Library/Fonts/华文仿宋.ttf', size=16)
#for i, w in enumerate(cn_list):
# ax1.text(X_reduced[i, 0], X_reduced[i, 1], w, fontproperties = zhfont1, alpha = 1)
#for i, w in enumerate(en_list):
# ax2.text(Y_reduced[i, 0], Y_reduced[i, 1], w, alpha = 1)
#original_words = []
#with open('dictionary.txt', 'r') as f:
# dataset = []
# for line in f:
# itm = line.split('\t')
# eng = itm[0]
# chn = itm[1].strip()
# if eng in word_vectors_en and chn in word_vectors:
# data = word_vectors_en[eng]
# target = word_vectors[chn]
# 将中英文词对做成数据集
# dataset.append([data, target])
# original_words.append([eng, chn])
#print(len(dataset)) # 共有4962个单词做为总的数据集合
# 建立训练集、测试集和校验集
# 训练集用来训练神经网络,更改网络的参数;校验集用来判断网络模型是否过拟合:当校验集的损失数值超过训练集的时候,即为过拟合
# 测试集用来检验模型的好坏
#indx = np.random.permutation(range(len(dataset)))
#dataset = [dataset[i] for i in indx]
#original_words = [original_words[i] for i in indx]
#train_size = 500
#train_data = dataset[train_size:]
#valid_data = dataset[train_size // 2 : train_size]
#test_data = dataset[: train_size // 2]
#test_words = original_words[: train_size // 2]
# 开始训练一个多层神经网络,将一个100维度的英文向量映射为200维度的中文词向量,隐含层节点为30
#input_size = 100
#output_size = 200
#hidden_size = 30
# 新建一个神经网络,包含一个隐含层
#model = nn.Sequential(nn.Linear(input_size, hidden_size),
#                      nn.Tanh(),
#                      nn.Linear(hidden_size, output_size)
#                      )
# 构造损失函数
#criterion = torch.nn.MSELoss()
# 构造优化器
#optimizer = torch.optim.Adam(model.parameters(), lr = 0.0001)
# 总的循环周期
#num_epoch = 100
#train for num_epoch epochs; each epoch loops over all the data
#results = []
#for epoch in range(num_epoch):
# train_loss = []
#    for data in train_data:
#        read in the data
#        x = Variable(torch.FloatTensor(data[0])).unsqueeze(0)
#        y = Variable(torch.FloatTensor(data[1])).unsqueeze(0)
#        model prediction
#        output = model(x)
#        train with the backpropagation algorithm
#        optimizer.zero_grad()
#        loss = criterion(output, y)
#        train_loss.append(loss.data.numpy()[0])
#        loss.backward()
#        optimizer.step()
#    check performance on the validation set
#    valid_loss = []
#    for data in valid_data:
#        x = Variable(torch.FloatTensor(data[0])).unsqueeze(0)
#        y = Variable(torch.FloatTensor(data[1])).unsqueeze(0)
# output = model(x)
# loss = criterion(output, y)
# valid_loss.append(loss.data.numpy()[0])
# results.append([np.mean(train_loss), np.mean(valid_loss)])
# print('{}轮,训练Loss: {:.2f}, 校验Loss: {:.2f}'.format(epoch, np.mean(train_loss), np.mean(valid_loss)))
# 绘制图形
#a = [i[0] for i in results]
#b = [i[1] for i in results]
#plt.plot(a, 'o', label = 'Training Loss')
#plt.plot(b, 's', label = 'Validation Loss')
#plt.xlabel('Epoch')
#plt.ylabel('Loss Function')
#plt.legend()
# 在测试集上验证准确度
# 检验标准有两个:一个是直接用预测的词和标准答案做全词匹配;另一个是做单字的匹配
#exact_same = 0 #全词匹配数量
#one_same = 0 #单字匹配数量
#results = []
#for i, data in enumerate(test_data):
#    x = Variable(torch.FloatTensor(data[0])).unsqueeze(0)
#    produce the model's output
#    output = model(x)
# output = output.squeeze().data.numpy()
# 从中文词向量中找到与输出向量最相似的向量
# most_similar = word_vectors.wv.similar_by_vector(output, 1)
# 将标准答案中的词与最相似的向量所对应的词打印出来
# results.append([original_words[i][1], most_similar[0][0]])
# 全词匹配
# if original_words[i][1] == most_similar[0][0]:
# exact_same += 1
# 某一个字匹配
# if list(set(list(original_words[i][1])) & set(list(most_similar[0][0]))) != []:
# one_same += 1
#print("精确匹配率:{:.2f}".format(1.0 * exact_same / len(test_data)))
#print('一字匹配率:{:.2f}'.format(1.0 * one_same / len(test_data)))
#print(results)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Step one: load the word vectors
Step2: Step two: visualize the relative positions, within each language's word-vector space, of a group of words with the same meaning
Step3: Conclusion: we can see that the relationships among the Chinese numerals (一, 二, etc.) are very similar to the relationships among the English numerals
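The "most similar vector" lookup that the code above performs with word_vectors.wv.similar_by_vector boils down to cosine similarity. A minimal, library-free sketch with toy vectors (the vocabulary and vectors here are invented purely for illustration):

```python
import numpy as np

def most_similar(query, vocab):
    # vocab: dict of word -> vector; return the word whose vector has
    # the highest cosine similarity with the query vector
    def cos(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(vocab, key=lambda w: cos(query, vocab[w]))

vocab = {'one': np.array([1.0, 0.1]),
         'two': np.array([0.9, 0.2]),
         'tree': np.array([-0.5, 1.0])}
best = most_similar(np.array([1.0, 0.0]), vocab)  # -> 'one'
```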
|
9,044
|
<ASSISTANT_TASK:>
Python Code:
%load_ext sql
%sql mysql://steinam:steinam@localhost/celko
%%sql
select * from Register;
%%sql
SELECT R1.course_nbr, R1.student_name,
MIN(R1.teacher_name) as Teacher_1, NULL
FROM Register AS R1
GROUP BY R1.course_nbr, R1.student_name
HAVING COUNT(*) = 1
UNION
SELECT R1.course_nbr, R1.student_name,
MIN(R1.teacher_name) as Teacher_1,
MAX(R1.teacher_name) as Teacher_2
FROM Register AS R1
GROUP BY R1.course_nbr, R1.student_name
HAVING COUNT(*) = 2
UNION
SELECT R1.course_nbr, R1.student_name,
MIN(R1.teacher_name) as Teacher_1, '--More--' as Teacher_2
FROM Register AS R1
GROUP BY R1.course_nbr, R1.student_name
HAVING COUNT(*) > 2;
%%sql
SELECT course_nbr, student_name, MIN(teacher_name) as Teacher_1,
CASE COUNT(*) WHEN 1 THEN NULL
WHEN 2 THEN MAX(teacher_name)
ELSE '--More--' END as Teacher_2
FROM Register
GROUP BY course_nbr, student_name;
%%sql
-- alternative syntax, possibly easier to understand
SELECT course_nbr, student_name, MIN(teacher_name) as Teacher_1,
CASE WHEN COUNT(*) = 1 THEN NULL
WHEN COUNT(*) = 2 THEN MAX(teacher_name)
ELSE '--More--' END as Teacher_2
FROM Register
GROUP BY course_nbr, student_name;
%%sql
create table seats
(seat integer);
insert into seats(seat) values(0);
insert into seats(seat) values(1001);
insert into seats(seat) values(101);
CREATE VIEW Firstseat (seat)
AS SELECT (seat + 1)
FROM seats
WHERE (seat + 1) NOT IN
(SELECT seat FROM seats)
AND (seat + 1) < 1001;
CREATE VIEW Lastseat (seat)
AS SELECT (seat - 1)
FROM seats
WHERE (seat - 1) NOT IN
(SELECT seat FROM seats)
AND (seat - 1) > 0;
-- uses the two views
SELECT F1.seat AS start, L1.seat AS finish,
((L1.seat - F1.seat) + 1) AS available
FROM Firstseat F1, Lastseat L1
WHERE L1.seat = (SELECT MIN(L2.seat)
FROM Lastseat AS L2
WHERE F1.seat <= L2.seat)
order by start;
-- needs no view
SELECT (R1.seat + 1) AS start,
(MIN(R2.seat) - 1) AS finish,
abs((R1.seat + 1) - (MIN(R2.seat))) as free
FROM seats AS R1
INNER JOIN
seats AS R2
ON R2.seat > R1.seat
GROUP BY R1.seat
HAVING (R1.seat + 1) < MIN(R2.seat);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Solution 1
Step2: Solution 2
Step3: Solution 3
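As a hedged aside (using SQLite via Python's stdlib rather than the MySQL session above), the view-free gap query from the code can be exercised directly:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE seats (seat INTEGER)')
con.executemany('INSERT INTO seats(seat) VALUES (?)', [(0,), (1001,), (101,)])

# for each occupied seat, a gap starts one past it and ends one before
# the next occupied seat, provided at least one number lies in between
rows = sorted(con.execute('''
    SELECT (R1.seat + 1) AS start,
           (MIN(R2.seat) - 1) AS finish
    FROM seats AS R1
    JOIN seats AS R2 ON R2.seat > R1.seat
    GROUP BY R1.seat
    HAVING (R1.seat + 1) < MIN(R2.seat)
''').fetchall())
# rows -> [(1, 100), (102, 1000)]
```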
|
9,045
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import cv2
import sys
import os
sys.path.insert(0, os.path.abspath('..'))
import salientregions as sr
%pylab inline
#Load the image
path_to_image = 'images/graffiti.jpg'
img = cv2.imread(path_to_image)
sr.show_image(img)
det = sr.SalientDetector(SE_size_factor=0.20,
lam_factor=4)
regions = det.detect(img,
find_holes=True,
find_islands=True,
find_indentations=True,
find_protrusions=True,
visualize=True)
print(regions.keys())
num_regions, features_standard, features_poly = sr.binary_mask2ellipse_features(regions, min_square=False)
print("number of features per saliency type: ", num_regions)
sr.visualize_ellipses(regions["holes"], features_standard["holes"])
sr.visualize_ellipses(regions["islands"], features_standard["islands"])
sr.visualize_ellipses(regions["indentations"], features_standard["indentations"])
sr.visualize_ellipses(regions["protrusions"], features_standard["protrusions"])
#print "Elliptic polynomial features:", features_poly
sr.visualize_elements_ellipses(img, features_standard);
total_num_regions = sr.save_ellipse_features2file(num_regions, features_poly, 'poly_features.txt')
print("total_num_regions", total_num_regions)
import sys, os
sys.path.insert(0, os.path.abspath('..'))
import salientregions as sr
total_num_regions, num_regions, features = sr.load_ellipse_features_from_file('poly_features.txt')
print("total_num_regions: ", total_num_regions)
print("number of features per saliency type: ", num_regions)
#print "features: ", features
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First, we load the image and show it.
Step2: Now we create a SalientDetector object, with some parameters.
Step3: We ask the SalientDetector to detect all types of regions
Step4: We can also output the regions as ellipses
Step5: We can also save the elliptic parameters in text files. Below is an example of saving the polynomial coefficients of all regions represented as ellipses.
Step6: To load the saved features from file, use the loading function
|
9,046
|
<ASSISTANT_TASK:>
Python Code:
print hash_obj([0, 1, 2])
bits = 64*2
n_elements = 200
np.log10(2*2**bits/(n_elements*(n_elements-1)))
l = list(['zero', 'one', 'two'])
l.__getitem__(0)
[x for x in l.__iter__()]
l.__setitem__(1, 1)
print l
l.__getslice__(1,3)
l.__setslice__(1, 3, ('b', 'c'))
print l
from functools import total_ordering
from collections import OrderedDict
from pisa.core.param import Param, ParamSet
p0 = Param(name='c', value=1.5, prior=None, range=[1,2],
is_fixed=False, is_discrete=False, tex=r'\int{\rm c}')
p1 = Param(name='a', value=2.5, prior=None, range=[1,5],
is_fixed=False, is_discrete=False, tex=r'{\rm a}')
p2 = Param(name='b', value=1.5, prior=None, range=[1,2],
is_fixed=False, is_discrete=False, tex=r'{\rm b}')
c = ParamSet(p0, p1, p2)
print c.values
print c[0]
c[0].value = 1
print c.values
print c.tex
c.values = [3, 2, 1]
print c.values
print c.values[0]
print c[0].value
print 'priors:', c.priors
print 'names:', c.names
print c['a']
print c['a'].value
c['a'].value = 33
print c['a'].value
print c['c'].is_fixed
c['c'].is_fixed = True
print c['c'].is_fixed
print c.are_fixed
c.fix('c')
print c.are_fixed
c.unfix('a')
print c.are_fixed
c.unfix([0,1,2])
print c.are_fixed
fixed_params = c.fixed
print fixed_params.are_fixed
free_params = c.free
print free_params.are_fixed
print c.free.values
print c.values_hash
print c.fixed.values_hash
print c.free.values_hash
print c[0].state
print c.state_hash
print c.fixed.state_hash
print c.free.state_hash
print 'fixed:', c.fixed.names
print 'fixed, discrete:', c.fixed.discrete.names
print 'fixed, continuous:', c.fixed.continuous.names
print 'free:', c.free.names
print 'free, discrete:', c.free.discrete.names
print 'free, continuous:', c.free.continuous.names
print 'continuous, free:', c.continuous.free.names
print 'free, continuous hash:', c.free.continuous.values_hash
print 'continuous, free hash:', c.continuous.free.values_hash
print c.b.prior
print c.priors_llh
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Testing out a container class (ParamSet) for parameters
|
9,047
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from math import log
from sklearn import linear_model
#comment below if not using ipython notebook
%matplotlib inline
#read csv
anscombe_i = pd.read_csv('../datasets/anscombe_i.csv')
plt.scatter(anscombe_i.x, anscombe_i.y, color='black')
plt.ylabel("Y")
plt.xlabel("X")
regr_i = linear_model.LinearRegression()
#We need to reshape the data to be a matrix
# with only one column
X = anscombe_i.x.reshape((len(anscombe_i.x), 1))
y = anscombe_i.y.reshape((len(anscombe_i.y), 1))
#Fit a line
regr_i.fit(X,y)
# The coefficients
print('Coefficients: \n', regr_i.coef_)
# The mean square error
print("Residual sum of squares: %.2f"
% np.mean((regr_i.predict(X) - y) ** 2))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % regr_i.score(X, y))
plt.plot(X,regr_i.predict(X), color='green',
linewidth=3)
plt.scatter(anscombe_i.x, anscombe_i.y, color='black')
plt.ylabel("X")
plt.xlabel("y")
from pylab import *
# determine the line-fit
k,d = polyfit(anscombe_i.x,y,1)
yfit = k*anscombe_i.x+d
# plot the data
figure(1)
scatter(anscombe_i.x,y, color='black')
plot(anscombe_i.x, yfit, 'green')
#plot line from point to regression line
for ii in range(len(X)):
plot([anscombe_i.x[ii], anscombe_i.x[ii]], [yfit[ii], y[ii]], 'k')
xlabel('X')
ylabel('Y')
import pylab as P
figure(1)
scatter(anscombe_i.x,y, color='black')
plot(anscombe_i.x, yfit, 'green')
#plot line from point to regression line
for ii in range(len(X)):
plot([anscombe_i.x[ii], anscombe_i.x[ii]], [yfit[ii], y[ii]], 'k')
xlabel('X')
ylabel('Y')
residual_error= anscombe_i.y - yfit
error_mean = np.mean(residual_error)
error_sigma = np.std(residual_error)
plt.figure(2)
plt.scatter(anscombe_i.x,residual_error,label='residual error')
plt.xlabel("X")
plt.ylabel("residual error")
plt.figure(3)
n, bins, patches = plt.hist(residual_error, 10, normed=1, facecolor='blue', alpha=0.75)
y_pdf = P.normpdf( bins, error_mean, error_sigma)
l = P.plot(bins, y_pdf, 'k--', linewidth=1.5)
plt.xlabel("residual error in y")
plt.title("Residual Distribution")
# load statsmodels as alias ``sm``
import statsmodels.api as sm
y = anscombe_i.y
X = anscombe_i.x
# Adds a constant term to the predictor
# y = mx +b
X = sm.add_constant(X)
#fit ordinary least squares
est = sm.OLS(y, X)
est = est.fit()
est.summary()
plt.scatter(anscombe_i.x, anscombe_i.y, color='black')
X_prime = np.linspace(min(anscombe_i.x), max(anscombe_i.x), 100)[:, np.newaxis]
# add constant as we did before
X_prime = sm.add_constant(X_prime)
y_hat = est.predict(X_prime)
# Add the regression line (provides same as above)
plt.plot(X_prime[:, 1], y_hat, 'r')
import seaborn as sns
#this just makes the plots pretty (in my opinion)
sns.set(style="darkgrid", color_codes=True)
g = sns.jointplot("x", "y", data=anscombe_i, kind="reg",
xlim=(0, 20), ylim=(0, 12), color="r", size=7)
X = anscombe_i.x.reshape((len(anscombe_i.x), 1))
y = anscombe_i.y.reshape((len(anscombe_i.y), 1))
k,d = polyfit(anscombe_i.y,anscombe_i.x,1)
xfit = k*y+d
figure(2)
# plot the data
scatter(anscombe_i.x,y, color='black')
plot(xfit, y, 'blue')
for ii in range(len(y)):
plot([xfit[ii], anscombe_i.x[ii]], [y[ii], y[ii]], 'k')
xlabel('X')
ylabel('Y')
from scipy.odr import Model, Data, ODR
from scipy.stats import linregress
import numpy as np
def orthoregress(x, y):
# get initial guess by first running linear regression
linregression = linregress(x, y)
model = Model(fit_function)
data = Data(x, y)
od = ODR(data, model, beta0=linregression[0:2])
out = od.run()
return list(out.beta)
def fit_function(p, x):
#return y = m x + b
return (p[0] * x) + p[1]
m, b = orthoregress(anscombe_i.x, anscombe_i.y)
# determine the line-fit
y_ortho_fit = m*anscombe_i.x+b
# plot the data
scatter(anscombe_i.x,anscombe_i.y, color = 'black')
plot(anscombe_i.x, y_ortho_fit, 'r')
xlabel('X')
ylabel('Y')
scatter(anscombe_i.x,anscombe_i.y,color = 'black')
plot(xfit, anscombe_i.y, 'b', label= "horizontal residuals")
plot(anscombe_i.x, yfit, 'g', label= "vertical residuals")
plot(anscombe_i.x, y_ortho_fit, 'r', label = "perpendicular residuals" )
plt.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc=3,
ncol=2, mode="expand", borderaxespad=0.)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now let's read the first set of data, and make a simple scatter plot.
Step2: Luckily for us, we do not need to implement linear regression, since scikit-learn already has a very efficient implementation. The straight line can be seen in the plot below, showing how linear regression attempts to draw a straight line that will best minimize the residual sum of squares between the observed responses in the dataset, and the responses predicted by the linear approximation.
Step3: Residuals
Step4: Now let us plot the residual (y - y predicted) vs x.
Step5: As seen in the histogram, the residual error should be (somewhat) normally distributed and centered around zero. This post explains why.
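As a standalone sanity check (synthetic data, not the Anscombe set), OLS with an intercept always leaves residuals that average to zero, which is why the histogram should be centered there:

```python
import numpy as np

rng = np.random.RandomState(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, size=x.shape)

# least-squares fit of y = k*x + d
k, d = np.polyfit(x, y, 1)
residuals = y - (k * x + d)

# with an intercept in the model, residuals sum to zero up to rounding
assert abs(residuals.mean()) < 1e-9
```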
Step6: The important parts of the summary are the coefficient estimates, their standard errors and p-values, and the R-squared.
Step7: If we want to be even fancier, we can use the seaborn library to plot linear regression with marginal distributions, which also states the Pearson r and p-value on the plot. Using the statsmodels approach is more rigorous, but sns provides quick visualizations.
Step8: Usually we calculate the (vertical) residual, or the difference in the observed and predicted in the y. This is because "the use of the least squares method to calculate the best-fitting line through a two-dimensional scatter plot typically requires the user to assume that one of the variables depends on the other. (We calculate the difference in the y) However, in many cases the relationship between the two variables is more complex, and it is not valid to say that one variable is independent and the other is dependent. When analysing such data researchers should consider plotting the three regression lines that can be calculated for any two-dimensional scatter plot."
Step9: Total Least Squares Regression
Step10: Plotting all three regression lines gives a fuller picture of the data, and comparing their slopes provides a simple graphical assessment of the correlation coefficient. Plotting the orthogonal regression line (red) provides additional information because it makes no assumptions about the dependence or independence of the variables; as such, it appears to more accurately describe the trend in the data compared to either of the ordinary least squares regression lines.
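As a hedged aside, the orthogonal (total least squares) line can also be derived from the principal axis of the centered data, without scipy.odr — a sketch, not the notebook's implementation:

```python
import numpy as np

def tls_fit(x, y):
    # the TLS line runs along the leading singular vector of the
    # centered (x, y) point cloud, i.e. its direction of largest variance
    x, y = np.asarray(x, float), np.asarray(y, float)
    X = np.column_stack([x - x.mean(), y - y.mean()])
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    dx, dy = vt[0]          # direction vector (sign ambiguity cancels below)
    m = dy / dx             # slope
    b = y.mean() - m * x.mean()
    return m, b

m, b = tls_fit([0, 1, 2, 3], [0.1, 0.9, 2.1, 2.9])
```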
|
9,048
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import ipywidgets as widgets
import bqplot.pyplot as plt
y = np.random.randn(100).cumsum() # simple random walk
# create a button
update_btn = widgets.Button(description='Update', button_style='success')
# create a figure widget
fig1 = plt.figure(animation_duration=750)
line = plt.plot(y)
# define an on_click function
def on_btn_click(btn):
# update the y attribute of line mark
line.y = np.random.randn(100).cumsum() # another random walk
# register the on_click function
update_btn.on_click(on_btn_click)
# stack button and figure using VBox
widgets.VBox([fig1, update_btn])
import pandas as pd
# create a dummy time series for 5 dummy stock tickers
dates = pd.date_range(start='20180101', end='20181231')
n = len(dates)
tickers = list('ABCDE')
prices = pd.DataFrame(np.random.randn(n, 5).cumsum(axis=0), columns=tickers)
# create a dropdown menu for tickers
dropdown = widgets.Dropdown(description='Ticker', options=tickers)
# create figure for plotting time series
current_ticker = dropdown.value
fig_title_tmpl = '"{}" Time Series' # string template for title of the figure
fig2 = plt.figure(title=fig_title_tmpl.format(current_ticker))
fig2.layout.width = '900px'
time_series = plt.plot(dates, prices[current_ticker])
plt.xlabel('Date')
plt.ylabel('Price')
# 1. create a callback which updates the plot when dropdown item is selected
def update_plot(*args):
selected_ticker = dropdown.value
# update the y attribute of the mark by selecting
# the column from the price data frame
time_series.y = prices[selected_ticker]
# update the title of the figure
fig2.title = fig_title_tmpl.format(selected_ticker)
# 2. register the callback by using the 'observe' method
dropdown.observe(update_plot, 'value')
# stack the dropdown and fig widgets using VBox
widgets.VBox([dropdown, fig2])
# create two dropdown menus for X and Y attributes of scatter
x_dropdown = widgets.Dropdown(description='X', options=tickers, value='A')
y_dropdown = widgets.Dropdown(description='Y', options=tickers, value='B')
# create figure for plotting the scatter
x_ticker = x_dropdown.value
y_ticker = y_dropdown.value
# set up fig_margin to allow space to display color bar
fig_margin = dict(top=20, bottom=40, left=60, right=80)
fig3 = plt.figure(animation_duration=1000, fig_margin=fig_margin)
# custom axis options for color data
axes_options = {'color': {'tick_format': '%m/%y',
'side': 'right',
'num_ticks': 5}}
scatter = plt.scatter(x=prices[x_ticker],
y=prices[y_ticker],
color=dates, # represent chronology using color scale
stroke='black',
colors=['red'],
default_size=32,
axes_options=axes_options)
plt.xlabel(x_ticker)
plt.ylabel(y_ticker)
# 1. create a callback which updates the plot when dropdown item is selected
def update_scatter(*args):
x_ticker = x_dropdown.value
y_ticker = y_dropdown.value
# update the x and y attributes of the mark by selecting
# the column from the price data frame
with scatter.hold_sync():
scatter.x = prices[x_ticker]
scatter.y = prices[y_ticker]
# update the title of the figure
plt.xlabel(x_ticker)
plt.ylabel(y_ticker)
# 2. register the callback by using the 'observe' method
x_dropdown.observe(update_scatter, 'value')
y_dropdown.observe(update_scatter, 'value')
# stack the dropdown and fig widgets using VBox
widgets.VBox([widgets.HBox([x_dropdown, y_dropdown]), fig3])
funcs = dict(sin=np.sin, cos=np.cos, tan=np.tan, sinh=np.sinh, tanh=np.tanh)
dropdown = widgets.Dropdown(options=funcs, description='Function')
fig = plt.figure(title='sin(x)', animation_duration=1000)
# create x and y data attributes for the line chart
x = np.arange(-10, 10, .1)
y = np.sin(x)
line = plt.plot(x, y ,'m')
def update_line(*args):
f = dropdown.value
fig.title = f'{f.__name__}(x)'
line.y = f(line.x)
dropdown.observe(update_line, 'value')
widgets.VBox([dropdown, fig])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Update the plot on a button click
Step2: Let's look at an example where we link a plot to a dropdown menu
Step3: Let's now create a scatter plot where we select X and Y data from the two dropdown menus
Step4: In the example below, we'll look at plots of trigonometric functions
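All of the dropdown examples hinge on the observe callback pattern. As a hedged, widget-free sketch of that pattern (a plain-Python stand-in, not the actual traitlets machinery):

```python
class Observable:
    # minimal stand-in for a traitlets-style observable 'value'
    def __init__(self, value):
        self._value = value
        self._callbacks = []
    def observe(self, fn):
        self._callbacks.append(fn)
    @property
    def value(self):
        return self._value
    @value.setter
    def value(self, new):
        self._value = new
        for fn in self._callbacks:
            fn(new)

dropdown = Observable('A')
selected = []
dropdown.observe(selected.append)   # like dropdown.observe(update_plot, 'value')
dropdown.value = 'B'                # assignment triggers the callback
```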
|
9,049
|
<ASSISTANT_TASK:>
Python Code:
from nipype import DataGrabber, Node
# Create DataGrabber node
dg = Node(DataGrabber(infields=['subject_id', 'task_id'],
outfields=['anat', 'func']),
name='datagrabber')
# Location of the dataset folder
dg.inputs.base_directory = '/data/ds102'
# Necessary default parameters
dg.inputs.template = '*'
dg.inputs.sort_filelist = True
dg.inputs.template_args = {'anat': [['subject_id']],
'func': [['subject_id', 'task_id']]}
dg.inputs.field_template = {'anat': '%s/anat/*_T1w.nii.gz',
'func': '%s/func/*run-%d_bold.nii.gz'}
# Using the IdentityInterface
from nipype import IdentityInterface
infosource = Node(IdentityInterface(fields=['subject_id', 'contrasts']),
name="infosource")
infosource.inputs.contrasts = 1
subject_list = ['sub-01',
'sub-02',
'sub-03',
'sub-04',
'sub-05']
infosource.iterables = [('subject_id', subject_list)]
# Specifying the input fields of DataGrabber directly
dg.inputs.subject_id = 'sub-01'
dg.inputs.task_id = 1
print dg.run().outputs
from nipype import SelectFiles, Node
# String template with {}-based strings
templates = {'anat': '{subject_id}/anat/{subject_id}_T1w.nii.gz',
'func': '{subject_id}/func/{subject_id}_task-flanker_run-{task_id}_bold.nii.gz'}
# Create SelectFiles node
sf = Node(SelectFiles(templates),
name='selectfiles')
# Location of the dataset folder
sf.inputs.base_directory = '/data/ds102'
# Feed {}-based placeholder strings with values
sf.inputs.subject_id = 'sub-01'
sf.inputs.task_id = '1'
print sf.run().outputs
from nipype import SelectFiles, Node
from os.path import abspath as opap
# String template with {}-based strings
templates = {'anat': '{subject_id}/anat/{subject_id}_T1w.nii.gz',
'func': '{subject_id}/func/{subject_id}_task-flanker_run-[1,2]_bold.nii.gz'}
# Create SelectFiles node
sf = Node(SelectFiles(templates),
name='selectfiles')
# Location of the dataset folder
sf.inputs.base_directory = '/data/ds102'
# Feed {}-based placeholder strings with values
sf.inputs.subject_id = 'sub-01'
# Print SelectFiles output
print sf.run().outputs
from nipype.interfaces.freesurfer import FSCommand
from os.path import abspath as opap
# Path to your freesurfer output folder
fs_dir = opap('/data/ds102/freesurfer')
# Set SUBJECTS_DIR
FSCommand.set_default_subjects_dir(fs_dir)
from nipype import Node
from nipype.interfaces.io import FreeSurferSource
# Create FreeSurferSource node
fssource = Node(FreeSurferSource(subjects_dir=fs_dir),
name='fssource')
fssource.inputs.subject_id = 'sub001'
result = fssource.run()
print 'aparc_aseg: %s\n' % result.outputs.aparc_aseg
print 'brainmask: %s\n' % result.outputs.brainmask
print 'inflated: %s\n' % result.outputs.inflated
fssource.inputs.hemi = 'lh'
result = fssource.run()
result.outputs.inflated
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Second, we know that the two files we desire are at the following location
Step2: Now, comes the most important part of DataGrabber. We need to specify the template structure to find the specific data. This can be done as follows.
Step3: You'll notice that we use %s, %02d and * for placeholders in the data paths. %s is a placeholder for a string and is filled out by subject_id. %02d is a placeholder for an integer number and is filled out by task_id. * is used as a wild card, e.g. a placeholder for any possible string combination. This is all to set up the DataGrabber node.
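As a rough standalone sketch (plain Python, no nipype needed) of how such %-style templates get filled before globbing — the templates and values mirror the ones used in the code:

```python
field_template = {'anat': '%s/anat/*_T1w.nii.gz',
                  'func': '%s/func/*run-%d_bold.nii.gz'}
template_args = {'anat': [['sub-01']],
                 'func': [['sub-01', 1]]}

# DataGrabber substitutes the infield values into each template;
# '*' stays behind as a filesystem wildcard for the subsequent glob
filled = {key: tmpl % tuple(template_args[key][0])
          for key, tmpl in field_template.items()}
# filled['func'] -> 'sub-01/func/*run-1_bold.nii.gz'
```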
Step4: Now you only have to connect infosource with your DataGrabber and run the workflow to iterate over subjects 1, 2 and 3.
Step5: Now let's run the DataGrabber node and look at the output
Step6: SelectFiles
Step7: Let's check if we get what we wanted.
Step8: Perfect! But why is SelectFiles more flexible than DataGrabber? First, you perhaps noticed that with the {}-based string, we can reuse the same input (e.g. subject_id) multiple times in the same string, without feeding it multiple times into the template.
Step9: As you can see, now func contains two file paths, one for the first and one for the second run. As a side note, you could also have gotten the same thing with the wildcard *
Step10: To create the FreeSurferSource node, do as follows
Step11: Let's now run it for a specific subject.
Step12: Did it work? Let's try to access multiple FreeSurfer outputs
Step13: It seems to be working as it should. But as you can see, the inflated output actually contains the file location for both hemispheres. With FreeSurferSource we can also restrict the file selection to a single hemisphere. To do this, we use the hemi input field
Step14: Let's take a look again at the inflated output.
|
9,050
|
<ASSISTANT_TASK:>
Python Code:
def example1(x_1, x_2):
z = x_1**0.5*x_2*0.5
return z
fig = pl.figure()
ax = Axes3D(fig)
X = np.arange(0, 1, 0.1)
Y = np.arange(0, 1, 0.1)
X, Y = np.meshgrid(X, Y)
Z = example1(X, Y)
ax.plot_surface(X, Y, Z, rstride=1, cstride=1)
pl.show()
nn = NN()
x_1 = Symbol('x_1')
x_2 = Symbol('x_2')
f = x_1**0.5*x_2*0.5
nn.set_input_layer(2)
nn.set_hidden_layer(2)
nn.set_output_layer(2)
nn.setup()
nn.initialize()
idata = [1, 2]
nn.supervised_function(f, idata)
nn.simulate(1, 0.1)
X = np.arange(0, 1, 0.2)
Y = np.arange(0, 1, 0.2)
print X, Y
X = np.arange(0, 1, 0.2)
Y = np.arange(0, 1, 0.2)
a = np.array([])
b = np.array([])
c = np.array([])
nn = NN()
nn.set_network()
for x in X:
for y in Y:
a = np.append(a, x)
b = np.append(b, y)
for i in range(100):
l = np.random.choice([i for i in range(len(a))])
m = nn.main(1, f, [a[l], b[l]], 0.5)
for x in X:
for y in Y:
idata = [x, y]
c = np.append(c, nn.realize(f, idata))
a
b
c
fig = pl.figure()
ax = Axes3D(fig)
ax.scatter(a, b, c)
pl.show()
X = np.arange(0, 1, 0.2)
Y = np.arange(0, 1, 0.2)
a = np.array([])
b = np.array([])
c = np.array([])
nn = NN()
nn.set_network()
for x in X:
for y in Y:
a = np.append(a, x)
b = np.append(b, y)
for i in range(10000):
l = np.random.choice([i for i in range(len(a))])
m = nn.main(1, f, [a[l], b[l]], 0.5)
for x in X:
for y in Y:
idata = [x, y]
c = np.append(c, nn.realize(f, idata))
fig = pl.figure()
ax = Axes3D(fig)
ax.scatter(a, b, c)
pl.show()
# Here, inspect the hidden layer:
nn.hidden_layer
x_1 = Symbol('x_1')
x_2 = Symbol('x_2')
f = (1+sin(4*math.pi*x_1))*x_2*1/2
X = np.arange(0, 1, 0.2)
Y = np.arange(0, 1, 0.2)
a = np.array([])
b = np.array([])
c = np.array([])
nn = NN()
nn.set_network()
for x in X:
for y in Y:
a = np.append(a, x)
b = np.append(b, y)
for i in range(1000):
l = np.random.choice([i for i in range(len(a))])
m = nn.main(1, f, [a[l], b[l]], 0.5)
for x in X:
for y in Y:
idata = [x, y]
c = np.append(c, nn.realize(f, idata))
fig = pl.figure()
ax = Axes3D(fig)
ax.scatter(a, b, c)
pl.show()
X = np.arange(0, 1, 0.2)
Y = np.arange(0, 1, 0.2)
a = np.array([])
b = np.array([])
c = np.array([])
nn = NN()
nn.set_network(h=5)
for x in X:
for y in Y:
a = np.append(a, x)
b = np.append(b, y)
for i in range(1000):
l = np.random.choice([i for i in range(len(a))])
m = nn.main(1, f, [a[l], b[l]], 0.5)
for x in X:
for y in Y:
idata = [x, y]
c = np.append(c, nn.realize(f, idata))
fig = pl.figure()
ax = Axes3D(fig)
ax.scatter(a, b, c)
pl.show()
X = np.arange(-1, 1, 0.1)
Y = np.arange(-1, 1, 0.1)
print X, Y
fig = pl.figure()
ax = Axes3D(fig)
f = (1+sin(4*math.pi*x_1))*x_2*1/2
X = np.arange(-1, 1, 0.1)
Y = np.arange(-1, 1, 0.1)
a = np.array([])
b = np.array([])
c = np.array([])
fig = plt.figure()
nn = NN()
nn.set_network()
for x in X:
for y in Y:
a = np.append(a, x)
b = np.append(b, y)
c = np.append(c, nn.main2(50, f, [x, y], 0.8))
for i in range(50):
l = np.random.choice([i for i in range(len(a))])
m = nn.main2(20, f, [a[l], b[l]], 0.5)
c[l] = m
a = np.array([])
b = np.array([])
c = np.array([])
for x in X:
for y in Y:
a = np.append(a, x)
b = np.append(b, y)
c = np.append(c, nn.realize(f, [x, y]))
ax.scatter(a, b, c)
ax.set_zlim(0, 1)
pl.show()
def example2(x_1, x_2):
z = (1+np.sin(4*math.pi*x_1))*x_2*1/2
return z
fig = pl.figure()
ax = Axes3D(fig)
X = np.arange(-1, 1, 0.1)
Y = np.arange(-1, 1, 0.1)
X, Y = np.meshgrid(X, Y)
Z = example2(X, Y)
ax.plot_surface(X, Y, Z, rstride=1, cstride=1)
ax.set_zlim(-1, 1)
pl.show()
X = np.arange(0, 1, 0.2)
Y = np.arange(0, 1, 0.2)
a = np.array([])
b = np.array([])
c = np.array([])
for x in X:
for y in Y:
a = np.append(a, x)
b = np.append(b, y)
evl = np.array([])
for i in range(len(a)):
nn = NN()
nn.set_network()
for j in range(1):
l = np.random.choice([i for i in range(len(a))])
if l != i:
m = nn.main(1, f, [a[l], b[l]], 0.5)
idata = [a[i], b[i]]
est = nn.realize(f, idata)
evl = np.append(evl, math.fabs(est - nn.supervised_data))
np.average(evl)
X = np.arange(0, 1, 0.2)
Y = np.arange(0, 1, 0.2)
a = np.array([])
b = np.array([])
c = np.array([])
nn = NN()
nn.set_network(h=7)
for x in X:
for y in Y:
a = np.append(a, x)
b = np.append(b, y)
evl = np.array([])
for i in range(len(a)):
for j in range(10000):
nn = NN()
nn.set_network()
l = np.random.choice([i for i in range(len(a))])
if l != i:
m = nn.main(1, f, [a[l], b[l]], 0.5)
idata = [a[i], b[i]]
evl = np.append(evl, math.fabs(nn.realize(f, idata) - nn.supervised_data))
evl
np.average(evl)
X = np.arange(0, 1, 0.2)
Y = np.arange(0, 1, 0.2)
a = np.array([])
b = np.array([])
c = np.array([])
for x in X:
for y in Y:
a = np.append(a, x)
b = np.append(b, y)
evl = np.array([])
for i in range(len(a)):
for j in range(100):
nn = NN()
nn.set_network()
l = np.random.choice([i for i in range(len(a))])
if l != i:
m = nn.main(1, f, [a[l], b[l]], 0.5)
idata = [a[i], b[i]]
est = nn.realize(f, idata)
evl = np.append(evl, math.fabs(est - nn.supervised_data))
np.average(evl)
X = np.arange(0, 1, 0.2)
Y = np.arange(0, 1, 0.2)
a = np.array([])
b = np.array([])
c = np.array([])
for x in X:
for y in Y:
a = np.append(a, x)
b = np.append(b, y)
evl = np.array([])
for i in range(len(a)):
for j in range(100):
nn = NN()
nn.set_network(h=5)
l = np.random.choice([i for i in range(len(a))])
if l != i:
m = nn.main(1, f, [a[l], b[l]], 0.5)
idata = [a[i], b[i]]
est = nn.realize(f, idata)
evl = np.append(evl, math.fabs(est - nn.supervised_data))
np.average(evl)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <P>The function used in the assignment example is as follows.</P>
Step2: Usage is explained below.
Step3: Run the functions that create the input, hidden, and output layers. The number of units in each layer is passed as the argument.
Step4: <p>nn.set_hidden_layer() also creates, at the same time, the hidden layer before the sigmoid transformation.</p>
Step5: nn.supervised_function(f, idata) creates the supervised (teacher) data. It takes a function and sample data as arguments.
Step6: nn.simulate(N, eta) takes the number of updates and the learning rate as arguments. Normally it should probably be run with N=1, but this was built as an experiment. After learning N times it returns the output layer.
Step7: nn.calculation() computes from the input layer through to the output layer without learning. It is also used inside nn.simulate().
Step8: This is a combination of the above.
Step9: For example, inputting (0, 0) returns 0.52328635 (that is, a[0] and b[0] are input and the value of c[0] is returned).
Step10: Stochastic gradient descent was repeated 100 times, and visually the result seems to be getting closer. Let's increase the number of iterations to 10000.
Step11: Visually it appears to have gotten quite close.
Step12: Let's try the same thing with the function used in the assignment example.
Step13: Since it cannot approximate well, let's change the number of hidden units and try 5.
Step14: Compared to the target, it differs greatly.
Step15: ...and during training, a single randomly chosen sample was trained on dozens of times.
Step16: Ideally it should take the shape shown below, so it is hard to say this is working well.
Step17: The process of learning a Cobb-Douglas production function by the same method was animated. It is unclear what this method amounts to, but the learning worked reasonably well.
Step18: Next, let's make the number of iterations sufficiently large (1000).
Step19: Since this is the average error, smaller is better.
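Since the NN class itself is not shown in this excerpt, here is a hedged, self-contained sketch of the forward pass the steps describe (input layer → sigmoid hidden layer → sigmoid output layer); the weights here are random placeholders, not the notebook's learned parameters:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W1, b1, W2, b2):
    # hidden layer before the sigmoid transform (cf. set_hidden_layer)
    h_pre = W1 @ x + b1
    h = sigmoid(h_pre)
    # output layer, also passed through the sigmoid
    return sigmoid(W2 @ h + b2)

rng = np.random.RandomState(0)
W1, b1 = rng.randn(2, 2), rng.randn(2)
W2, b2 = rng.randn(2, 2), rng.randn(2)
out = forward(np.array([1.0, 2.0]), W1, b1, W2, b2)
# sigmoid outputs always lie strictly between 0 and 1
```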
|
9,051
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from random import random, randint, choice
from itertools import cycle, ifilter, imap, islice, izip, starmap, tee
from collections import defaultdict
from operator import add, mul
from pymonad.Maybe import *
from pymonad.Reader import *
a = range(0, 9)
zip(*[iter(a)]*4)
def zip_with_(f, a, b):
return (f(*z) for z in izip(a, b))
def zip_with(f, *a):
return starmap(f, izip(*a))
b = range(10, 20)
list(zip_with(add, a, reversed(b)))
@curry
def take(n, a):
return islice(a, None, n)
@curry
def drop(n, a):
return islice(a, n, None)
def double_every_other(a):
return zip_with(mul, a, cycle([1,2]))
x = cycle([1, 2, 3])
take15 = take(15)
list(take15(x))
y = double_every_other(x)
list(take15(y))
def rotate(n, a):
return (x for x, _ in izip(islice(cycle(a), n , None), a))
h = "_hello_lazy_world"
r = rotate(6, h)
''.join(list(r))
def const(x, y):
return x
def rotate1(n, a):
return zip_with(const, drop(n, cycle(a)), a)
r1 = rotate1(6, h)
''.join(list(r1))
class Stream(object):
def __init__(self, data):
self.data = iter(data)
def __iter__(self):
class iterator(object):
def __init__(self, it):
self.it = it.data
def next(self):
return next(self.it)
return iterator(self)
def filter(self, pred):
return Stream(ifilter(pred, self))
def map(self, fn):
return Stream(imap(fn, self))
s = Stream(range(20)). \
filter(lambda x: x % 7 == 0). \
map(lambda x: x * x)
list(s)
def make_stream(data):
try:
return Just(iter(data))
except TypeError:
return Nothing
@curry
def filter_stream(pred, stream):
return Just(ifilter(pred, stream))
@curry
def map_stream(fn, stream):
return Just(imap(fn, stream))
def eval_stream(stream):
if isinstance(stream, Just):
return stream.value
else:
return ()
my_filter = filter_stream(lambda x: x % 7 == 0)
my_map = map_stream(lambda x: x * x)
st = make_stream(range(20))
st1 = st >> my_filter >> my_map
list(eval_stream(st1))
@curry
def filter_stream1(pred, stream):
return ifilter(pred, stream)
@curry
def map_stream1(fn, stream):
return imap(fn, stream)
my_filter1 = filter_stream1(lambda x: x % 7 == 0)
my_map1 = map_stream1(lambda x: x * x)
my_transform = my_filter1 * my_map1
st2 = make_stream(range(20))
st3 = my_transform * st2
list(eval_stream(st3))
def repeatedly(f, *args, **kwargs):
return (f(*args, **kwargs) for _ in iter(int, 1))
def generate_data():
return (choice(['a', 'b', 'c']),
float("{:.2f}".format(random())),
randint(1, 20))
data = take(100, repeatedly(generate_data))
sample = take(10, repeatedly(generate_data))
list(sample)
def bin_labels(bins, label_format=None):
a, b = tee(bins)
next(b)
label_format = label_format or '>= %.2f, < %.2f'
return [label_format % b for b in izip(a, b)]
def bin_data(x, y, bin_edges):
data_to_bins = izip(np.digitize(x, bin_edges), y)
bin_sums = defaultdict(int)
for index, count in data_to_bins:
bin_sums[index] += count
return [bin_sums.get(index, 0) for index in xrange(len(bin_edges))]
by_label = defaultdict(list)
d1, d2 = tee(data)
for label, score, count in d1:
by_label[label].append([score, count])
num_bins = 20
_, score, _ = izip(*d2)
bins = np.linspace(min(score), max(score), num_bins)
bin_lbl = bin_labels(bins)
series = []
for label, records in by_label.iteritems():
_, count = izip(*records)
series.append({'label': label, 'data': bin_data(score, count, bins)})
result = {'series': series,
'bins': bin_lbl}
fig, ax = plt.subplots(figsize=(18,6))
bar_width = 0.2
colors = ['r', 'g', 'b']
for k, item in enumerate(result['series']):
index = np.arange(len(item['data']))
plt.bar(index + k * bar_width,
item['data'],
color = colors[k],
width = bar_width,
label = item['label'])
plt.xlabel('Scores')
plt.ylabel('Count')
plt.title('Count by scores')
plt.xticks(index + bar_width, result['bins'], rotation=70)
plt.legend()
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The ubiquitous zip
Step2: In fact, it is mentioned in the documentation in the section on built-in functions. I guess, it is one of those bits that are easily skipped on first reading. The reason this example works the way it does, namely, partitioning the iterable into chunks of the given length, is that zip evaluates its arguments strictly left-to-right. Hence, all the copies of the iterator get advanced together and dereferenced one by one.
Step3: Another, more generic, implementation is possible using the starmap function.
Step4: Using zip_with we can express operations on sequences in a more functional way
Step5: Before we proceed, let's introduce another bit of syntactical convenience
Step6: @curry will be explained later, for now just think of it as a fancy way of applying the idea behind functools.partial to forge partial application in Python.
Step7: Lazy evaluation allows us to work with infinite lists without much trouble
Step8: In another amazing example which I first came across in a SO answer zip is used to rotate a (potentially empty or infinite) sequence. In order to emphasize the Haskell influence, let us first write it without all of the convenience functions defined above
Step9: Now we rewrite the same with more Haskell flavour
Step10: Contextual iterators and monads
Step11: We may notice that the resulting flow has certain traits of a composable contextual computation - something that monads were introduced to deal with. Indeed, we start by putting the original data in a "streaming context". Each public method then extracts the actual data, applies some transformation and wraps it back into the context before passing on. This document is not going to descend into yet another tutorial on monads. Instead we will use the PyMonad in a somewhat crude manner to demonstrate how the same goal can be achieved in a more functional way.
Step12: Now express the operations performed by the methods as monadic functions.
Step13: At the end of the chain we will also need a way out of the context to continue working with the data. We do not intend to be 100% pure after all.
Step14: Now we can also partially apply our function for better readability and potential reuse.
Step15: Finally, execute the flow.
Step16: We can take a different approach and work with ordinary functions instead
Step17: We can partially apply them, as before, and then use the overloaded * operator to denote curried function composition.
Step18: Finally the transformation is applied to the "contextualized" data using the applicative style instead of the bind operator.
Step19: Splitting and unpacking
Step20: Now let's produce some mock data. Although using effectful functions in comprehensions is generally frowned upon in Python, here it allows us to achieve a clean separation of the generating (repeatedly) and consuming (take) parts of the logic.
Step21: Now we define a function formatting labels for the plot. This is a typical example of using tee to traverse a sequence in a pairwise manner.
Step22: Here we aggregate data in the bins
Step23: Finally we put together a dictionary containing all the data for the plot
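The iterator-copy trick behind Step 2 (zip partitioning an iterable into fixed-size chunks) can be sketched in isolation; this is an illustrative Python 3 snippet, not part of the original notebook:

```python
def chunks(iterable, n):
    # zip pulls from n references to the *same* iterator, so the
    # references advance together and each tuple is the next chunk
    return zip(*[iter(iterable)] * n)

print(list(chunks(range(6), 2)))  # -> [(0, 1), (2, 3), (4, 5)]
```

Trailing elements that do not fill a complete chunk are silently dropped, which is why `itertools.zip_longest` is the usual alternative when padding is wanted.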
|
9,052
|
<ASSISTANT_TASK:>
Python Code:
# Install the SDK
!pip3 install 'kfp>=0.1.31.2' --quiet
import kfp.deprecated as kfp
import kfp.deprecated.components as components
#Define a Python function
def add(a: float, b: float) -> float:
'''Calculates sum of two arguments'''
return a + b
add_op = components.create_component_from_func(add)
#Advanced function
#Demonstrates imports, helper functions and multiple outputs
from typing import NamedTuple
def my_divmod(dividend: float, divisor:float) -> NamedTuple('MyDivmodOutput', [('quotient', float), ('remainder', float), ('mlpipeline_ui_metadata', 'UI_metadata'), ('mlpipeline_metrics', 'Metrics')]):
'''Divides two numbers and calculate the quotient and remainder'''
#Imports inside a component function:
import numpy as np
#This function demonstrates how to use nested functions inside a component function:
def divmod_helper(dividend, divisor):
return np.divmod(dividend, divisor)
(quotient, remainder) = divmod_helper(dividend, divisor)
from tensorflow.python.lib.io import file_io
import json
# Exports a sample tensorboard:
metadata = {
'outputs' : [{
'type': 'tensorboard',
'source': 'gs://ml-pipeline-dataset/tensorboard-train',
}]
}
# Exports two sample metrics:
metrics = {
'metrics': [{
'name': 'quotient',
'numberValue': float(quotient),
},{
'name': 'remainder',
'numberValue': float(remainder),
}]}
from collections import namedtuple
divmod_output = namedtuple('MyDivmodOutput', ['quotient', 'remainder', 'mlpipeline_ui_metadata', 'mlpipeline_metrics'])
return divmod_output(quotient, remainder, json.dumps(metadata), json.dumps(metrics))
my_divmod(100, 7)
divmod_op = components.create_component_from_func(my_divmod, base_image='tensorflow/tensorflow:1.11.0-py3')
import kfp.deprecated.dsl as dsl
@dsl.pipeline(
name='calculation-pipeline',
description='A toy pipeline that performs arithmetic calculations.'
)
def calc_pipeline(
a=7,
b=8,
c=17,
):
#Passing pipeline parameter and a constant value as operation arguments
add_task = add_op(a, 4) #Returns a dsl.ContainerOp class instance.
#Passing a task output reference as operation arguments
#For an operation with a single return value, the output reference can be accessed using `task.output` or `task.outputs['output_name']` syntax
divmod_task = divmod_op(add_task.output, b)
#For an operation with a multiple return values, the output references can be accessed using `task.outputs['output_name']` syntax
result_task = add_op(divmod_task.outputs['quotient'], c)
#Specify pipeline argument values
arguments = {'a': 7, 'b': 8}
#Submit a pipeline run
kfp.Client().create_run_from_pipeline_func(calc_pipeline, arguments=arguments)
# Run the pipeline on a separate Kubeflow Cluster instead
# (use if your notebook is not running in Kubeflow - e.x. if using AI Platform Notebooks)
# kfp.Client(host='<ADD KFP ENDPOINT HERE>').create_run_from_pipeline_func(calc_pipeline, arguments=arguments)
#vvvvvvvvv This link leads to the run information page. (Note: There is a bug in JupyterLab that modifies the URL and makes the link stop working)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Simple function that just adds two numbers
Step2: Convert the function to a pipeline operation
Step3: A bit more advanced function which demonstrates how to use imports and helper functions, and how to produce multiple outputs.
Step4: Test running the python function directly
Step5: Convert the function to a pipeline operation
Step6: Define the pipeline
Step7: Submit the pipeline for execution
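The multiple-output pattern from Step 3 can be exercised without any pipeline machinery; a minimal sketch (the `DivmodOutput` and `my_divmod_plain` names are illustrative, not part of the KFP API):

```python
from typing import NamedTuple

class DivmodOutput(NamedTuple):
    quotient: float
    remainder: float

def my_divmod_plain(dividend: float, divisor: float) -> DivmodOutput:
    # same arithmetic as the component above, minus the metadata/metrics outputs
    q, r = divmod(dividend, divisor)
    return DivmodOutput(float(q), float(r))

print(my_divmod_plain(100, 7))  # -> DivmodOutput(quotient=14.0, remainder=2.0)
```

Because the outputs are named, downstream steps can refer to them individually, as the pipeline definition above does with `divmod_task.outputs['quotient']`.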
|
9,053
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Jean-Remi King <jeanremi.king@gmail.com>
# Alexandre Gramfort <alexandre.gramfort@inria.fr>
# Denis Engemann <denis.engemann@gmail.com>
#
# License: BSD (3-clause)
import matplotlib.pyplot as plt
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
import mne
from mne.datasets import sample
from mne.decoding import GeneralizingEstimator
print(__doc__)
# Preprocess data
data_path = sample.data_path()
# Load and filter data, set up epochs
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
events_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
raw = mne.io.read_raw_fif(raw_fname, preload=True)
picks = mne.pick_types(raw.info, meg=True, exclude='bads') # Pick MEG channels
raw.filter(1., 30., fir_design='firwin') # Band pass filtering signals
events = mne.read_events(events_fname)
event_id = {'Auditory/Left': 1, 'Auditory/Right': 2,
'Visual/Left': 3, 'Visual/Right': 4}
tmin = -0.050
tmax = 0.400
# decimate to make the example faster to run, but then use verbose='error' in
# the Epochs constructor to suppress warning about decimation causing aliasing
decim = 2
epochs = mne.Epochs(raw, events, event_id=event_id, tmin=tmin, tmax=tmax,
proj=True, picks=picks, baseline=None, preload=True,
reject=dict(mag=5e-12), decim=decim, verbose='error')
clf = make_pipeline(StandardScaler(), LogisticRegression(solver='lbfgs'))
time_gen = GeneralizingEstimator(clf, scoring='roc_auc', n_jobs=1,
verbose=True)
# Fit classifiers on the epochs where the stimulus was presented to the left.
# Note that the experimental condition y indicates auditory or visual
time_gen.fit(X=epochs['Left'].get_data(),
y=epochs['Left'].events[:, 2] > 2)
scores = time_gen.score(X=epochs['Right'].get_data(),
y=epochs['Right'].events[:, 2] > 2)
fig, ax = plt.subplots(1)
im = ax.matshow(scores, vmin=0, vmax=1., cmap='RdBu_r', origin='lower',
extent=epochs.times[[0, -1, 0, -1]])
ax.axhline(0., color='k')
ax.axvline(0., color='k')
ax.xaxis.set_ticks_position('bottom')
ax.set_xlabel('Testing Time (s)')
ax.set_ylabel('Training Time (s)')
ax.set_title('Generalization across time and condition')
plt.colorbar(im, ax=ax)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We will train the classifier on all left visual vs auditory trials
Step2: Score on the epochs where the stimulus was presented to the right.
Step3: Plot
|
9,054
|
<ASSISTANT_TASK:>
Python Code:
!hybridizer-cuda ./01-vector-add/01-vector-add.cs -o ./01-vector-add/vectoradd.exe -run
!hybridizer-cuda ./01-vector-add/01-vector-add.cs -o ./01-vector-add/parallel-vectoradd.exe -run
!hybridizer-cuda ./02-gpu-vector-add/02-gpu-vector-add.cs -o ./02-gpu-vector-add/gpu-vectoradd.exe -run
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Introduce Parallelism
Step2: Run Code on the GPU
|
9,055
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
from modshogun import *
#Needed lists for the final plot
classifiers_linear = []*10
classifiers_non_linear = []*10
classifiers_names = []*10
fadings = []*10
shogun_feats_linear = RealFeatures(CSVFile(os.path.join(SHOGUN_DATA_DIR, 'toy/classifier_binary_2d_linear_features_train.dat')))
shogun_labels_linear = BinaryLabels(CSVFile(os.path.join(SHOGUN_DATA_DIR, 'toy/classifier_binary_2d_linear_labels_train.dat')))
shogun_feats_non_linear = RealFeatures(CSVFile(os.path.join(SHOGUN_DATA_DIR, 'toy/classifier_binary_2d_nonlinear_features_train.dat')))
shogun_labels_non_linear = BinaryLabels(CSVFile(os.path.join(SHOGUN_DATA_DIR, 'toy/classifier_binary_2d_nonlinear_labels_train.dat')))
feats_linear = shogun_feats_linear.get_feature_matrix()
labels_linear = shogun_labels_linear.get_labels()
feats_non_linear = shogun_feats_non_linear.get_feature_matrix()
labels_non_linear = shogun_labels_non_linear.get_labels()
def plot_binary_data(plot,X_train, y_train):
'''This function plots 2D binary data with different colors for different labels.'''
plot.xlabel(r"$x$")
plot.ylabel(r"$y$")
plot.plot(X_train[0, np.argwhere(y_train == 1)], X_train[1, np.argwhere(y_train == 1)], 'ro')
plot.plot(X_train[0, np.argwhere(y_train == -1)], X_train[1, np.argwhere(y_train == -1)], 'bo')
def compute_plot_isolines(classifier,features,size=200,fading=True):
'''This function computes the classification of points on the grid
to get the decision boundaries used in plotting.'''
x1 = np.linspace(1.2*min(features[0]), 1.2*max(features[0]), size)
x2 = np.linspace(1.2*min(features[1]), 1.2*max(features[1]), size)
x, y = np.meshgrid(x1, x2)
plot_features=RealFeatures(np.array((np.ravel(x), np.ravel(y))))
if fading == True:
plot_labels = classifier.apply(plot_features).get_values()
else:
plot_labels = classifier.apply(plot_features).get_labels()
z = plot_labels.reshape((size, size))
return x,y,z
def plot_model(plot,classifier,features,labels,fading=True):
'''This function plots an input classification model.'''
x,y,z = compute_plot_isolines(classifier,features,fading=fading)
plot.pcolor(x,y,z,cmap='RdBu_r')
plot.contour(x, y, z, linewidths=1, colors='black')
plot_binary_data(plot,features, labels)
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.title("Linear Features")
plot_binary_data(plt,feats_linear, labels_linear)
plt.subplot(122)
plt.title("Non Linear Features")
plot_binary_data(plt,feats_non_linear, labels_non_linear)
plt.figure(figsize=(15,5))
c = 0.5
epsilon =1e-3
svm_linear = LibLinear(c,shogun_feats_linear,shogun_labels_linear)
svm_linear.set_liblinear_solver_type(L2R_L2LOSS_SVC)
svm_linear.set_epsilon(epsilon)
svm_linear.train()
classifiers_linear.append(svm_linear)
classifiers_names.append("SVM Linear")
fadings.append(True)
plt.subplot(121)
plt.title("Linear SVM - Linear Features")
plot_model(plt,svm_linear,feats_linear,labels_linear)
svm_non_linear = LibLinear(c,shogun_feats_non_linear,shogun_labels_non_linear)
svm_non_linear.set_liblinear_solver_type(L2R_L2LOSS_SVC)
svm_non_linear.set_epsilon(epsilon)
svm_non_linear.train()
classifiers_non_linear.append(svm_non_linear)
plt.subplot(122)
plt.title("Linear SVM - Non Linear Features")
plot_model(plt,svm_non_linear,feats_non_linear,labels_non_linear)
gaussian_c=0.7
gaussian_kernel_linear=GaussianKernel(shogun_feats_linear, shogun_feats_linear, 100)
gaussian_svm_linear=LibSVM(gaussian_c, gaussian_kernel_linear, shogun_labels_linear)
gaussian_svm_linear.train()
classifiers_linear.append(gaussian_svm_linear)
fadings.append(True)
gaussian_kernel_non_linear=GaussianKernel(shogun_feats_non_linear, shogun_feats_non_linear, 100)
gaussian_svm_non_linear=LibSVM(gaussian_c, gaussian_kernel_non_linear, shogun_labels_non_linear)
gaussian_svm_non_linear.train()
classifiers_non_linear.append(gaussian_svm_non_linear)
classifiers_names.append("SVM Gaussian Kernel")
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.title("SVM Gaussian Kernel - Linear Features")
plot_model(plt,gaussian_svm_linear,feats_linear,labels_linear)
plt.subplot(122)
plt.title("SVM Gaussian Kernel - Non Linear Features")
plot_model(plt,gaussian_svm_non_linear,feats_non_linear,labels_non_linear)
sigmoid_c = 0.9
sigmoid_kernel_linear = SigmoidKernel(shogun_feats_linear,shogun_feats_linear,200,1,0.5)
sigmoid_svm_linear = LibSVM(sigmoid_c, sigmoid_kernel_linear, shogun_labels_linear)
sigmoid_svm_linear.train()
classifiers_linear.append(sigmoid_svm_linear)
classifiers_names.append("SVM Sigmoid Kernel")
fadings.append(True)
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.title("SVM Sigmoid Kernel - Linear Features")
plot_model(plt,sigmoid_svm_linear,feats_linear,labels_linear)
sigmoid_kernel_non_linear = SigmoidKernel(shogun_feats_non_linear,shogun_feats_non_linear,400,2.5,2)
sigmoid_svm_non_linear = LibSVM(sigmoid_c, sigmoid_kernel_non_linear, shogun_labels_non_linear)
sigmoid_svm_non_linear.train()
classifiers_non_linear.append(sigmoid_svm_non_linear)
plt.subplot(122)
plt.title("SVM Sigmoid Kernel - Non Linear Features")
plot_model(plt,sigmoid_svm_non_linear,feats_non_linear,labels_non_linear)
poly_c = 0.5
degree = 4
poly_kernel_linear = PolyKernel(shogun_feats_linear, shogun_feats_linear, degree, True)
poly_svm_linear = LibSVM(poly_c, poly_kernel_linear, shogun_labels_linear)
poly_svm_linear.train()
classifiers_linear.append(poly_svm_linear)
classifiers_names.append("SVM Polynomial kernel")
fadings.append(True)
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.title("SVM Polynomial Kernel - Linear Features")
plot_model(plt,poly_svm_linear,feats_linear,labels_linear)
poly_kernel_non_linear=PolyKernel(shogun_feats_non_linear, shogun_feats_non_linear, degree, True)
poly_svm_non_linear = LibSVM(poly_c, poly_kernel_non_linear, shogun_labels_non_linear)
poly_svm_non_linear.train()
classifiers_non_linear.append(poly_svm_non_linear)
plt.subplot(122)
plt.title("SVM Polynomial Kernel - Non Linear Features")
plot_model(plt,poly_svm_non_linear,feats_non_linear,labels_non_linear)
multiclass_labels_linear = shogun_labels_linear.get_labels()
for i in range(0,len(multiclass_labels_linear)):
if multiclass_labels_linear[i] == -1:
multiclass_labels_linear[i] = 0
multiclass_labels_non_linear = shogun_labels_non_linear.get_labels()
for i in range(0,len(multiclass_labels_non_linear)):
if multiclass_labels_non_linear[i] == -1:
multiclass_labels_non_linear[i] = 0
shogun_multiclass_labels_linear = MulticlassLabels(multiclass_labels_linear)
shogun_multiclass_labels_non_linear = MulticlassLabels(multiclass_labels_non_linear)
naive_bayes_linear = GaussianNaiveBayes()
naive_bayes_linear.set_features(shogun_feats_linear)
naive_bayes_linear.set_labels(shogun_multiclass_labels_linear)
naive_bayes_linear.train()
classifiers_linear.append(naive_bayes_linear)
classifiers_names.append("Naive Bayes")
fadings.append(False)
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.title("Naive Bayes - Linear Features")
plot_model(plt,naive_bayes_linear,feats_linear,labels_linear,fading=False)
naive_bayes_non_linear = GaussianNaiveBayes()
naive_bayes_non_linear.set_features(shogun_feats_non_linear)
naive_bayes_non_linear.set_labels(shogun_multiclass_labels_non_linear)
naive_bayes_non_linear.train()
classifiers_non_linear.append(naive_bayes_non_linear)
plt.subplot(122)
plt.title("Naive Bayes - Non Linear Features")
plot_model(plt,naive_bayes_non_linear,feats_non_linear,labels_non_linear,fading=False)
number_of_neighbors = 10
distances_linear = EuclideanDistance(shogun_feats_linear, shogun_feats_linear)
knn_linear = KNN(number_of_neighbors,distances_linear,shogun_labels_linear)
knn_linear.train()
classifiers_linear.append(knn_linear)
classifiers_names.append("Nearest Neighbors")
fadings.append(False)
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.title("Nearest Neighbors - Linear Features")
plot_model(plt,knn_linear,feats_linear,labels_linear,fading=False)
distances_non_linear = EuclideanDistance(shogun_feats_non_linear, shogun_feats_non_linear)
knn_non_linear = KNN(number_of_neighbors,distances_non_linear,shogun_labels_non_linear)
knn_non_linear.train()
classifiers_non_linear.append(knn_non_linear)
plt.subplot(122)
plt.title("Nearest Neighbors - Non Linear Features")
plot_model(plt,knn_non_linear,feats_non_linear,labels_non_linear,fading=False)
gamma = 0.1
lda_linear = LDA(gamma, shogun_feats_linear, shogun_labels_linear)
lda_linear.train()
classifiers_linear.append(lda_linear)
classifiers_names.append("LDA")
fadings.append(True)
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.title("LDA - Linear Features")
plot_model(plt,lda_linear,feats_linear,labels_linear)
lda_non_linear = LDA(gamma, shogun_feats_non_linear, shogun_labels_non_linear)
lda_non_linear.train()
classifiers_non_linear.append(lda_non_linear)
plt.subplot(122)
plt.title("LDA - Non Linear Features")
plot_model(plt,lda_non_linear,feats_non_linear,labels_non_linear)
qda_linear = QDA(shogun_feats_linear, shogun_multiclass_labels_linear)
qda_linear.train()
classifiers_linear.append(qda_linear)
classifiers_names.append("QDA")
fadings.append(False)
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.title("QDA - Linear Features")
plot_model(plt,qda_linear,feats_linear,labels_linear,fading=False)
qda_non_linear = QDA(shogun_feats_non_linear, shogun_multiclass_labels_non_linear)
qda_non_linear.train()
classifiers_non_linear.append(qda_non_linear)
plt.subplot(122)
plt.title("QDA - Non Linear Features")
plot_model(plt,qda_non_linear,feats_non_linear,labels_non_linear,fading=False)
# create Gaussian kernel with width = 2.0
kernel = GaussianKernel(10, 2)
# create zero mean function
zero_mean = ZeroMean()
# create logit likelihood model
likelihood = LogitLikelihood()
# specify EP approximation inference method
inference_model_linear = EPInferenceMethod(kernel, shogun_feats_linear, zero_mean, shogun_labels_linear, likelihood)
# create and train GP classifier, which uses Laplace approximation
gaussian_logit_linear = GaussianProcessClassification(inference_model_linear)
gaussian_logit_linear.train()
classifiers_linear.append(gaussian_logit_linear)
classifiers_names.append("Gaussian Process Logit")
fadings.append(True)
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.title("Gaussian Process - Logit - Linear Features")
plot_model(plt,gaussian_logit_linear,feats_linear,labels_linear)
inference_model_non_linear = EPInferenceMethod(kernel, shogun_feats_non_linear, zero_mean,
shogun_labels_non_linear, likelihood)
gaussian_logit_non_linear = GaussianProcessClassification(inference_model_non_linear)
gaussian_logit_non_linear.train()
classifiers_non_linear.append(gaussian_logit_non_linear)
plt.subplot(122)
plt.title("Gaussian Process - Logit - Non Linear Features")
plot_model(plt,gaussian_logit_non_linear,feats_non_linear,labels_non_linear)
likelihood = ProbitLikelihood()
inference_model_linear = EPInferenceMethod(kernel, shogun_feats_linear, zero_mean, shogun_labels_linear, likelihood)
gaussian_probit_linear = GaussianProcessClassification(inference_model_linear)
gaussian_probit_linear.train()
classifiers_linear.append(gaussian_probit_linear)
classifiers_names.append("Gaussian Process Probit")
fadings.append(True)
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.title("Gaussian Process - Probit - Linear Features")
plot_model(plt,gaussian_probit_linear,feats_linear,labels_linear)
inference_model_non_linear = EPInferenceMethod(kernel, shogun_feats_non_linear,
zero_mean, shogun_labels_non_linear, likelihood)
gaussian_probit_non_linear = GaussianProcessClassification(inference_model_non_linear)
gaussian_probit_non_linear.train()
classifiers_non_linear.append(gaussian_probit_non_linear)
plt.subplot(122)
plt.title("Gaussian Process - Probit - Non Linear Features")
plot_model(plt,gaussian_probit_non_linear,feats_non_linear,labels_non_linear)
figure = plt.figure(figsize=(30,9))
plt.subplot(2,11,1)
plot_binary_data(plt,feats_linear, labels_linear)
for i in range(0,10):
plt.subplot(2,11,i+2)
plt.title(classifiers_names[i])
plot_model(plt,classifiers_linear[i],feats_linear,labels_linear,fading=fadings[i])
plt.subplot(2,11,12)
plot_binary_data(plt,feats_non_linear, labels_non_linear)
for i in range(0,10):
plt.subplot(2,11,13+i)
plot_model(plt,classifiers_non_linear[i],feats_non_linear,labels_non_linear,fading=fadings[i])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <a id = "section1">Data Generation and Visualization</a>
Step5: Data visualization methods.
Step6: <a id="section2" href="http
Step7: SVM - Kernels
Step8: <a id ="section2c" href="http
Step9: <a id ="section2d" href="http
Step10: <a id ="section3" href="http
Step11: <a id ="section4" href="http
Step12: <a id ="section5" href="http
Step13: <a id ="section6" href="http
Step14: <a id ="section7" href="http
Step15: <a id ="section7b">Probit Likelihood model</a>
Step16: <a id="section8">Putting It All Together</a>
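The decision-boundary grid built by `compute_plot_isolines` can be checked on toy values; a standalone NumPy sketch with assumed sizes, not taken from the notebook:

```python
import numpy as np

x1 = np.linspace(-1.0, 1.0, 4)
x2 = np.linspace(-1.0, 1.0, 3)
x, y = np.meshgrid(x1, x2)
# ravel flattens each coordinate grid; stacking gives one column per
# grid point, the 2 x (number of grid points) layout RealFeatures expects
points = np.array((np.ravel(x), np.ravel(y)))
print(points.shape)  # -> (2, 12)
```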
|
9,056
|
<ASSISTANT_TASK:>
Python Code:
import pickle
dataset = pickle.load(open('data/cafe.pkl', 'rb')) # or 'pofa.pkl'; pickle files must be opened in binary mode
# This is the neural network class, for your information.
from sklearn.decomposition import PCA
from sklearn.learning_curve import learning_curve
from sklearn.cross_validation import train_test_split
from numpy import *  # the class below relies on pylab-style names (dot, exp, ones, zeros, c_, ...)
from numpy.linalg import pinv
from numpy.random import random_sample
import matplotlib.pyplot as plt
from IPython.display import display, clear_output
from ipywidgets import interact, interact_manual, fixed, IntSlider, FloatSlider
class NeuralNetwork(object):
def __init__(self, dataset, target_name):
self.dataset = dataset
self.targets = self.dataset[target_name]
def _activation(self, x):
'''Funny tanh function.'''
z = x*2/3
y = (exp(z) - exp(-z)) / (exp(z) + exp(-z))
return 1.7159*y
def _da(self, x):
return (1.7159 - multiply(x, x) / 1.7159) * 2/3
def _inverse_activation(self, x):
z = x / 1.7159
return z
#z[z<-.999] = -.999; z[z>.999] = .999
#return arctanh(z)*3/2
def feed_forward(self, X):
'''From the input X, calculate the activations at the hidden layer and the output layer.'''
Z = self._activation(dot(c_[X, ones((X.shape[0], 1))], self.W_hidden))
return self._activation(dot(c_[Z, ones((X.shape[0], 1))], self.W_output)), Z
def back_propagate(self, inputs, hidden, output, errors):
'''Back-propagate the errors and update the weights.'''
d_output = self._da(output) * errors
d_hidden = self._da(hidden) * dot(d_output, self.W_output[:-1].T)
n_samples = inputs.shape[0]
bias = ones((n_samples, 1))
# Update momentum and weights
self.V_output = self.output_learning_rate * dot(c_[hidden, bias].T, d_output) / n_samples
self.W_output+= self.V_output
self.V_hidden = self.hidden_learning_rate * dot(c_[inputs, bias].T, d_hidden) / n_samples
self.W_hidden+= self.V_hidden
def train(self, epochs = 100, n_input = 10, n_hidden = 40,
hidden_learning_rate = 0.1, output_learning_rate = 0.1, momentum_learning_rate = 0.9,
test_size=0.2, cv=3):
'''Initialize the network and start training.'''
# Initialize variables
self.n_input = n_input
self.n_hidden = n_hidden
self.hidden_learning_rate = hidden_learning_rate
self.output_learning_rate = output_learning_rate
self.momentum_learning_rate = momentum_learning_rate
self.n_output = len(unique(self.targets))
self.V_hidden = zeros((self.n_input + 1, self.n_hidden))
self.W_hidden = random_sample(self.V_hidden.shape)
self.V_output = zeros((self.n_hidden + 1, self.n_output))
self.W_output = random_sample(self.V_output.shape)
data = self.dataset.data
# Preprocess the data using PCA
self.pca = PCA(n_components = self.n_input, whiten=True, copy=True)
X = self.pca.fit_transform(data)
self.X = X
# Start the training
rmse=zeros((epochs,2))
cerr=zeros((epochs,2))
for t in arange(epochs):
# Test then Train, since we'll use the training errors
for i, (inputs, y) in enumerate([[X_test, y_test], [X_train, y_train]]):
outputs, hidden = self.feed_forward(inputs)
target=ones(outputs.shape)*(-1.0)
target[arange(target.shape[0]),y-1]=1.0
errors = target - outputs
rmse[t, i] = sqrt((errors**2).mean()) # RMSE
cerr[t, i] = (y != (argmax(outputs, axis=1) + 1)).mean() # fraction misclassified (predicted class is argmax + 1)
yield rmse, cerr, t, epochs
# Update weights using backpropagation
self.back_propagate(inputs, hidden, outputs, errors)
def predict(self, n):
'''Returns the prediction and the reconstruction for the sample n.'''
X = self.X[n:n+1]
outputs, hidden = self.feed_forward(X)
pca_reconstruction = self.pca.inverse_transform(X)
hidden_expected = dot(self._inverse_activation(outputs), pinv(self.W_output))[:,:-1]
hidden_reconstruction = self.pca.inverse_transform(dot(self._inverse_activation(hidden_expected), pinv(self.W_hidden))[:,:-1])
return (argmax(outputs) + 1,
pca_reconstruction.reshape(self.dataset.images.shape[1:]),
hidden_reconstruction.reshape(self.dataset.images.shape[1:]))
def plot_training(axs, rmse, cerr, t, epochs):
'''Draw the plot to the specified axis.'''
axs[0].set_title("RMSE")
axs[0].set_xlabel("Training epoch")
axs[0].set_ylabel("RMSE")
axs[0].grid()
axs[0].plot(arange(t), rmse[:t])
axs[0].set_xlim([0, epochs])
axs[0].set_ylim([0, 2.0])
axs[0].legend(['Test', 'Training'], loc="best")
axs[1].set_title("Classification Error")
axs[1].set_xlabel("Training epoch")
axs[1].set_ylabel("Classification Error Rate [\%]")
axs[1].grid()
axs[1].plot(arange(t), cerr[:t]*100.0)
axs[1].set_xlim([0, epochs])
axs[1].set_ylim([0, 100.0])
axs[1].legend(['Test', 'Training'], loc="best")
def train_network(dataset, target_name, **kwargs):
global net
net = NeuralNetwork(dataset, target_name)
fig, axs = plt.subplots(1,2,figsize=(10,5))
for rmse, cerr, t, epochs in net.train(**kwargs):
if mod(t, 10) != 0:
continue
plot_training(axs, rmse, cerr, t, epochs)
clear_output(wait=True)
display(fig)
axs[0].cla()
axs[1].cla()
plt.close()
plot_training(axs, rmse, cerr, t, epochs)
clear_output(wait=True)
display(fig)
net = None # Global
interact_manual(train_network,
dataset=fixed(dataset),
target_name={'Identity': 'target', 'Gender': 'gender', 'Expression': 'expression'},
epochs= IntSlider(min=10, max=500, step=10, value=100),
n_input = IntSlider(min=1, max=(dataset.data.shape[0]-1), value=40),
n_hidden = IntSlider(min=1, max=40, value=10),
hidden_learning_rate = FloatSlider(min=0.01, max=0.3, value=0.1),
output_learning_rate = FloatSlider(min=0.01, max=0.3, value=0.1),
momentum_learning_rate = FloatSlider(min=0.5, max=0.99, value=0.9),
test_size=FloatSlider(min=0.1, max=0.5, value=0.2))
@interact
def plot_classification(image_id=(0, dataset.images.shape[0]-1)):
global net
if net is None:
print("Please train the network first!")
return
f, axs = plt.subplots(1,3, figsize=(10,5))
axs[0].imshow(dataset.images[image_id],cmap='gray')
axs[0].set_title('Id = %d, Target = %d' % (dataset.target[image_id], net.targets[image_id]))
axs[0].axis('off')
prediction, pca_r, hidden_r = net.predict(image_id)
axs[1].imshow(pca_r,cmap='gray')
axs[1].set_title('PCA reconstruction')
axs[1].axis('off')
axs[2].imshow(hidden_r,cmap='gray')
axs[2].set_title('Reconstruction (predicted %d)' % prediction)
axs[2].axis('off')
outputs, hidden = net.feed_forward(net.X)
model=PCA(n_components=2)
outputs_2d = model.fit_transform(outputs)
hidden_2d = model.fit_transform(hidden)
fig, axs = plt.subplots(1, 2, figsize=(10, 5))
axs[0].scatter(hidden_2d[:,0],hidden_2d[:,1],c=net.targets,lw=0)
axs[0].set_title('Hidden Nodes')
axs[1].scatter(outputs_2d[:,0],outputs_2d[:,1],c=net.targets,lw=0)
axs[1].set_title('Output Nodes')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step6: Backpropagation Demo
Step7: Then train the net using the controls here
Step8: After the network is trained, use it to classify test images.
Step9: Let's plot the output node representation in 2D
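The network's activation is LeCun's scaled tanh, f(x) = 1.7159 * tanh(2x/3), chosen so that f(1) is close to 1; a standalone illustrative sketch, not from the notebook itself:

```python
import numpy as np

def scaled_tanh(x):
    # same formula as NeuralNetwork._activation, written with np.tanh
    return 1.7159 * np.tanh(x * 2.0 / 3.0)

def scaled_tanh_deriv(fx):
    # derivative expressed in terms of the activation value, as in _da
    return (1.7159 - fx * fx / 1.7159) * 2.0 / 3.0

print(scaled_tanh(0.0))  # -> 0.0
```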
|
9,057
|
<ASSISTANT_TASK:>
Python Code:
# Load a useful Python libraries for handling data
import pandas as pd
from IPython.display import Markdown, display
# Read the data
data_filename = r'gapminder.csv'
data = pd.read_csv(data_filename, low_memory=False)
data = data.set_index('country')
display(Markdown("General information on the Gapminder data"))
display(Markdown("Number of countries: {0}".format(len(data))))
display(Markdown("Number of variables: {0}".format(len(data.columns))))
display(Markdown("The first records of the data."))
data.head()
# Convert interesting variables in numeric format
for variable in ('internetuserate', 'suicideper100th', 'employrate'):
data[variable] = pd.to_numeric(data[variable], errors='coerce')
data['internetuserate'].value_counts(sort=False, dropna=False)
import numpy as np
display(Markdown("Internet Use Rate (min, max) = ({0:.2f}, {1:.2f})".format(data['internetuserate'].min(), data['internetuserate'].max())))
internetuserate_bins = pd.cut(data['internetuserate'], bins=np.linspace(0, 100., num=21))
internetuserate_bins.value_counts(sort=False, dropna=False)
internetuserate_bins.value_counts(sort=False, normalize=True, dropna=False)
display(Markdown("Cumulative sum for Internet use rate percentages"))
internetuserate_bins.value_counts(sort=False, normalize=True, dropna=False).cumsum()
display(Markdown("Suicide rate (min, max) = ({0:.2f}, {1:.2f})".format(data['suicideper100th'].min(), data['suicideper100th'].max())))
suiciderate_bins = pd.cut(data['suicideper100th'], bins=np.linspace(0, 40., num=21))
suiciderate_bins.value_counts(sort=False, dropna=False)
suiciderate_bins.value_counts(sort=False, normalize=True, dropna=False)
display(Markdown("Employment rate (min, max) = ({0:.2f}, {1:.2f})".format(data['employrate'].min(), data['employrate'].max())))
employment_bins = pd.cut(data['employrate'], bins=np.linspace(0, 100., num=21))
employment_bins.value_counts(sort=False, dropna=False)
employment_bins.value_counts(sort=False, normalize=True, dropna=False)
high_employment_set = data[data['employrate'] > 60.0]
high_employment_set.head()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We will now have a look at the frequencies of the variables.
Step2: This is not informative as the variable does not take discrete values. So before computing the frequency counts on the data, I will group the data in intervals of 5% using the cut function.
Step3: Counts of Internet Use Rate
Step4: Percentages of Internet Use Rate
Step5: Suicide rate per 100,000 frequencies
Step6: Counts of Suicide Rate
Step7: Percentages of Suicide Rate
Step8: Employment rate frequencies
Step9: Counts of Employment Rate
Step10: Percentages of Employment Rate
Step11: Summary
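The binning pattern used throughout this notebook (pd.cut over evenly spaced edges, then value_counts) can be sketched on toy data — a minimal example with made-up values, not the actual Gapminder columns:

```python
import numpy as np
import pandas as pd

# Toy "rate" variable on a 0-100 scale, mirroring the Gapminder variables above
rates = pd.Series([3.2, 12.5, 47.0, 88.1, 55.5, np.nan])

# Cut into five 20-point-wide bins; NaN values fall into their own bucket
bins = pd.cut(rates, bins=np.linspace(0, 100., num=6))

# Frequency counts per bin (dropna=False keeps the NaN bucket visible)
counts = bins.value_counts(sort=False, dropna=False)
```

Passing normalize=True to value_counts would give the percentage view used above instead of raw counts.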
|
9,058
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
%pylab inline
pylab.style.use('ggplot')
import tensorflow as tf
X_val = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_val = np.atleast_2d(np.array([0, 0, 0, 1])).T
X_val
y_val
tf.reset_default_graph()
n_iter = 500
threshold = 0.5
with tf.variable_scope('inputs'):
X = tf.placeholder(name='X', shape=(4, 2), dtype=np.float64)
y = tf.placeholder(name='y', shape=y_val.shape, dtype=np.float64)
with tf.variable_scope('weights'):
w = tf.get_variable(name='w', shape=(2, 1), dtype=np.float64, initializer=tf.truncated_normal_initializer())
b = tf.get_variable(name='b', shape=(1, 1), dtype=np.float64, initializer=tf.constant_initializer(1.0))
with tf.variable_scope('train'):
output = tf.matmul(X, w) + b
loss_func = tf.reduce_mean(tf.squared_difference(y, output))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.2)
train_op = optimizer.minimize(loss_func)
init_op = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init_op)
feed_dict = {X: X_val, y: y_val}
for i in range(1, n_iter+1):
_, out_val = sess.run([train_op, output], feed_dict=feed_dict)
out_val = np.where(out_val > threshold, 1, 0)
if i % 50 == 0:
result = np.column_stack([X_val, y_val, out_val])
result_df = pd.DataFrame(result, columns=['x1', 'x2', 'x1 and x2', 'output'])
print('loss_function: {}'.format(loss_func.eval(session=sess, feed_dict=feed_dict)))
print('iteration {}\n{}'.format(i, result_df))
X_val = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_val = np.atleast_2d(np.array([0, 1, 1, 0])).T
X_val
y_val
tf.reset_default_graph()
n_iter = 500
threshold = 0.5
def make_layer(name, x):
with tf.variable_scope(name, reuse=None):
if name == 'hidden':
w_shape = (2, 3)
b_shape = (4, 3)
elif name == 'output':
w_shape = (3, 1)
b_shape = (1, 1)
else:
assert False
w = tf.get_variable(name='w', shape=w_shape, dtype=np.float64, initializer=tf.truncated_normal_initializer())
b = tf.get_variable(name='b', shape=b_shape, dtype=np.float64, initializer=tf.constant_initializer(1.0))
mm = tf.matmul(x, w) + b
return tf.sigmoid(mm) if name == 'hidden' else mm
with tf.variable_scope('inputs'):
X = tf.placeholder(name='X', shape=(4, 2), dtype=np.float64)
y = tf.placeholder(name='y', shape=(4, 1), dtype=np.float64)
hidden = make_layer('hidden', X)
output = make_layer('output', hidden)
with tf.variable_scope('train'):
loss_func = tf.reduce_mean(tf.squared_difference(y, output))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.2)
train_op = optimizer.minimize(loss_func)
init_op = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init_op)
feed_dict = {X: X_val, y: y_val}
for i in range(1, n_iter+1):
_, out_val = sess.run([train_op, output], feed_dict=feed_dict)
out_val = np.where(out_val > threshold, 1, 0)
if i % 50 == 0:
result = np.column_stack([X_val, y_val, out_val])
result_df = pd.DataFrame(result, columns=['x1', 'x2', 'x1 XOR x2', 'output'])
print('loss_function: {}'.format(loss_func.eval(session=sess, feed_dict=feed_dict)))
print('iteration {}\n{}'.format(i, result_df))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The idea of variable scoping in TensorFlow is to be able to organize the names and initializations of variables that play the same role in a multilayer network. For example, consider an ANN with multiple hidden layers. All of them have a weight matrix $w$. Using variable scoping allows us to structure and initialize them in a systematic way.
Step2: AND Gate with TensorFlow
Step3: XOR Gate with TensorFlow
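The reason XOR needs the extra hidden layer while AND does not is that a single linear unit can only learn linearly separable functions. This can be checked with a brute-force search in plain NumPy (a sketch; the function and variable names are illustrative, not part of the notebook):

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_and = np.array([0, 0, 0, 1])
y_xor = np.array([0, 1, 1, 0])

def linearly_separable(X, y):
    # Brute-force search for a line w.x + b whose sign pattern matches y
    grid = np.arange(-2, 2.5, 0.5)
    for w1 in grid:
        for w2 in grid:
            for b in grid:
                pred = (X @ np.array([w1, w2]) + b > 0).astype(int)
                if np.array_equal(pred, y):
                    return True
    return False

and_ok = linearly_separable(X, y_and)   # a single unit suffices
xor_ok = linearly_separable(X, y_xor)   # no line works -> a hidden layer is needed
```

For AND, weights like (1, 1) with bias -1.5 separate the classes; for XOR no such line exists, which is why the second network routes the inputs through a sigmoid hidden layer first.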
|
9,059
|
<ASSISTANT_TASK:>
Python Code:
from fretbursts import *
sns = init_notebook(apionly=True)
print('seaborn version: ', sns.__version__)
# Tweak here matplotlib style
import matplotlib as mpl
mpl.rcParams['font.sans-serif'].insert(0, 'Arial')
mpl.rcParams['font.size'] = 12
%config InlineBackend.figure_format = 'retina'
url = 'http://files.figshare.com/2182601/0023uLRpitc_NTP_20dT_0.5GndCl.hdf5'
download_file(url, save_dir='./data')
full_fname = "./data/0023uLRpitc_NTP_20dT_0.5GndCl.hdf5"
d = loader.photon_hdf5(full_fname)
loader.alex_apply_period(d)
d.calc_bg(bg.exp_fit, time_s=1000, tail_min_us=(800, 4000, 1500, 1000, 3000))
d.burst_search(L=10, m=10, F=6)
ds = d.select_bursts(select_bursts.size, add_naa=True, th1=30)
alex_jointplot(ds)
alex_jointplot(ds, vmax_fret=False)
alex_jointplot(ds, vmax_fret=False, marginal_color=8)
alex_jointplot(ds, vmax_fret=False, marginal_color=7)
alex_jointplot(ds, kind='kde')
alex_jointplot(ds, vmax=40)
alex_jointplot(ds, cmap='plasma')
alex_jointplot(ds, cmap='plasma', marginal_color=83)
alex_jointplot(ds, kind='scatter')
alex_jointplot(ds, kind='kde')
dsf = ds.select_bursts(select_bursts.naa, th1=40)
alex_jointplot(dsf, kind='kde',
joint_kws={'shade': False, 'n_levels': 12, 'bw': 0.04})
plt.figure(figsize=(5,5))
hexbin_alex(ds)
g = alex_jointplot(ds)
g.ax_marg_x.grid(False)
g.ax_marg_y.grid(False)
g.ax_joint.set_xlim(-0.1, 1.1)
g.ax_joint.set_ylim(-0.1, 1.1)
g = alex_jointplot(ds)
g.ax_marg_x.grid(False)
g.ax_marg_y.grid(False)
g.ax_joint.set_xlim(-0.19, 1.19)
g.ax_joint.set_ylim(-0.19, 1.19)
plt.subplots_adjust(wspace=0, hspace=0)
g.ax_marg_y.spines['bottom'].set_visible(True)
g.ax_marg_x.spines['left'].set_visible(True)
g.ax_marg_y.tick_params(reset=True, bottom=True, top=False, right=False, labelleft=False)
g.ax_marg_x.tick_params(reset=True, left=True, top=False, right=False, labelbottom=False)
g = alex_jointplot(ds)
g.ax_marg_x.grid(False)
g.ax_marg_y.grid(False)
g.ax_joint.set_xlim(-0.19, 1.19)
g.ax_joint.set_ylim(-0.19, 1.19)
plt.subplots_adjust(wspace=0, hspace=0)
g.ax_marg_y.tick_params(reset=True, bottom=True, top=False, right=False, labelleft=False)
g.ax_marg_x.tick_params(reset=True, left=True, top=False, right=False, labelbottom=False)
g = alex_jointplot(ds)
g.ax_marg_x.grid(False, axis='x')
g.ax_marg_y.grid(False, axis='y')
g.ax_joint.set_xlim(-0.19, 1.19)
g.ax_joint.set_ylim(-0.19, 1.19)
plt.subplots_adjust(wspace=0, hspace=0)
alex_jointplot(ds, marginal_kws={'show_kde': False})
from ipywidgets import widgets, interact, interactive, fixed
from IPython.display import display, display_png, display_svg, clear_output
from IPython.core.pylabtools import print_figure
cmaps = ['viridis', 'plasma', 'inferno', 'magma',
'afmhot', 'Blues', 'BuGn', 'BuPu', 'GnBu', 'YlGnBu',
'coolwarm', 'RdYlBu', 'RdYlGn', 'Spectral',]# 'icefire'] uncomment if using seaborn 0.8
@interact(overlay = widgets.RadioButtons(options=['fit model', 'KDE'], value='KDE'),
binwidth = widgets.FloatText(value=0.03, min=0.01, max=1),
bandwidth = widgets.FloatText(value=0.03, min=0.01, max=1),
gridsize = (10, 100),
min_size=(10, 500, 5),
cmap=widgets.Dropdown(value='Spectral', options=cmaps),
reverse_cmap = True,
vmax_fret = True,
)
def plot_(min_size=50, overlay='KDE', binwidth=0.03, bandwidth=0.03,
gridsize=50, cmap='Spectral', reverse_cmap=False,
vmax_fret=True):
dx = d.select_bursts(select_bursts.size, add_naa=True, th1=min_size)
bext.bursts_fitter(dx, 'E', binwidth=binwidth, bandwidth=bandwidth,
model=mfit.factory_three_gaussians())
bext.bursts_fitter(dx, 'S', binwidth=binwidth, bandwidth=bandwidth,
model=mfit.factory_two_gaussians())
if reverse_cmap: cmap += '_r'
if binwidth < 0.01: binwidth = 0.01
if bandwidth < 0.01: bandwidth = 0.01
if overlay == 'fit model':
marginal_kws = dict(binwidth=binwidth, show_model=True, pdf=True,
show_kde=False)
else:
marginal_kws = dict(binwidth=binwidth, show_kde=True,
bandwidth=bandwidth)
alex_jointplot(dx, cmap=cmap, gridsize=gridsize, vmax_fret=vmax_fret,
marginal_kws=marginal_kws,)
fig = gcf()
plt.close()
display(fig)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Get and process data
Step2: ALEX joint plot
Step3: The inner plot is a hexbin plot, basically a 2D histogram with hexagonal bins.
Step4: Or you can manually choose the max value mapped by the colormap (vmax)
Step5: Changing the colormap will affect both inner and marginal plots
Step6: To pick a different color from the colormap for the marginal histograms use marginal_color
Step7: Kinds of joint-plots
Step8: No marginals
Step9: Figure layout
Step10: alex_jointplot returns g which contains the axis handles (g.ax_joint, g.ax_marg_x, g.ax_marg_y).
Step11: Arguments of inner plots
Step12: Interactive plot
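The hexbin joint plot above is essentially a 2D histogram over the (E, S) burst coordinates. A minimal square-bin analogue with plain NumPy (synthetic data, not the FRETBursts API):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic FRET efficiency (E) and stoichiometry (S) values in [0, 1]
E = rng.uniform(0, 1, size=500)
S = rng.uniform(0, 1, size=500)

# 2D histogram over a 20x20 grid -- the square-bin analogue of the hexbin inner plot
H, e_edges, s_edges = np.histogram2d(E, S, bins=20, range=[[0, 1], [0, 1]])
```

Every burst lands in exactly one bin, so the counts sum to the number of bursts; vmax in alex_jointplot simply caps how large a count the colormap is stretched over.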
|
9,060
|
<ASSISTANT_TASK:>
Python Code:
%load_ext watermark
%watermark -a 'Sebastian Raschka' -v -p tensorflow
import tensorflow as tf
##########################
### WRAPPER FUNCTIONS
##########################
def fc_layer(input_tensor, n_output_units, name,
activation_fn=None, seed=None,
weight_params=None, bias_params=None):
with tf.variable_scope(name):
if weight_params is not None:
weights = tf.constant(weight_params, name='weights',
dtype=tf.float32)
else:
weights = tf.Variable(tf.truncated_normal(
shape=[input_tensor.get_shape().as_list()[-1], n_output_units],
mean=0.0,
stddev=0.1,
dtype=tf.float32,
seed=seed),
name='weights',)
if bias_params is not None:
biases = tf.constant(bias_params, name='biases',
dtype=tf.float32)
else:
biases = tf.Variable(tf.zeros(shape=[n_output_units]),
name='biases',
dtype=tf.float32)
act = tf.matmul(input_tensor, weights) + biases
if activation_fn is not None:
act = activation_fn(act)
return act
def mlp_graph(n_input=784, n_classes=10, n_hidden_1=128, n_hidden_2=256,
learning_rate=0.1,
fixed_params=None):
# fixed_params to allow loading weights & biases
# from NumPy npz archives and defining a fixed, non-trainable
# TensorFlow classifier
if not fixed_params:
var_names = ['fc1/weights:0', 'fc1/biases:0',
'fc2/weights:0', 'fc2/biases:0',
'logits/weights:0', 'logits/biases:0',]
fixed_params = {v: None for v in var_names}
found_params = False
else:
found_params = True
# Input data
tf_x = tf.placeholder(tf.float32, [None, n_input], name='features')
tf_y = tf.placeholder(tf.int32, [None], name='targets')
tf_y_onehot = tf.one_hot(tf_y, depth=n_classes, name='onehot_targets')
# Multilayer perceptron
fc1 = fc_layer(input_tensor=tf_x,
n_output_units=n_hidden_1,
name='fc1',
weight_params=fixed_params['fc1/weights:0'],
bias_params=fixed_params['fc1/biases:0'],
activation_fn=tf.nn.relu)
fc2 = fc_layer(input_tensor=fc1,
n_output_units=n_hidden_2,
name='fc2',
weight_params=fixed_params['fc2/weights:0'],
bias_params=fixed_params['fc2/biases:0'],
activation_fn=tf.nn.relu)
logits = fc_layer(input_tensor=fc2,
n_output_units=n_classes,
name='logits',
weight_params=fixed_params['logits/weights:0'],
bias_params=fixed_params['logits/biases:0'],
activation_fn=tf.nn.relu)
# Loss and optimizer
### Only necessary if no existing params are found
### and a trainable graph has to be initialized
if not found_params:
loss = tf.nn.softmax_cross_entropy_with_logits(
logits=logits, labels=tf_y_onehot)
cost = tf.reduce_mean(loss, name='cost')
optimizer = tf.train.GradientDescentOptimizer(
learning_rate=learning_rate)
train = optimizer.minimize(cost, name='train')
# Prediction
probabilities = tf.nn.softmax(logits, name='probabilities')
labels = tf.cast(tf.argmax(logits, 1), tf.int32, name='labels')
correct_prediction = tf.equal(labels,
tf_y, name='correct_predictions')
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32),
name='accuracy')
from tensorflow.examples.tutorials.mnist import input_data
##########################
### SETTINGS
##########################
# Hyperparameters
learning_rate = 0.1
training_epochs = 10
batch_size = 64
##########################
### GRAPH DEFINITION
##########################
g = tf.Graph()
with g.as_default():
mlp_graph()
##########################
### DATASET
##########################
mnist = input_data.read_data_sets("./", one_hot=False)
##########################
### TRAINING & EVALUATION
##########################
with tf.Session(graph=g) as sess:
sess.run(tf.global_variables_initializer())
saver0 = tf.train.Saver()
for epoch in range(training_epochs):
avg_cost = 0.
total_batch = mnist.train.num_examples // batch_size
for i in range(total_batch):
batch_x, batch_y = mnist.train.next_batch(batch_size)
_, c = sess.run(['train', 'cost:0'], feed_dict={'features:0': batch_x,
'targets:0': batch_y})
avg_cost += c
train_acc = sess.run('accuracy:0', feed_dict={'features:0': mnist.train.images,
'targets:0': mnist.train.labels})
valid_acc = sess.run('accuracy:0', feed_dict={'features:0': mnist.validation.images,
'targets:0': mnist.validation.labels})
print("Epoch: %03d | AvgCost: %.3f" % (epoch + 1, avg_cost / (i + 1)), end="")
print(" | Train/Valid ACC: %.3f/%.3f" % (train_acc, valid_acc))
test_acc = sess.run('accuracy:0', feed_dict={'features:0': mnist.test.images,
'targets:0': mnist.test.labels})
print('Test ACC: %.3f' % test_acc)
##########################
### SAVE TRAINED MODEL
##########################
saver0.save(sess, save_path='./mlp')
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("./", one_hot=False)
with tf.Session() as sess:
saver1 = tf.train.import_meta_graph('./mlp.meta')
saver1.restore(sess, save_path='./mlp')
test_acc = sess.run('accuracy:0', feed_dict={'features:0': mnist.test.images,
'targets:0': mnist.test.labels})
print('Test ACC: %.3f' % test_acc)
import tensorflow as tf
import numpy as np
tf.reset_default_graph()
with tf.Session() as sess:
saver1 = tf.train.import_meta_graph('./mlp.meta')
saver1.restore(sess, save_path='./mlp')
var_names = [v.name for v in
tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)]
params = {}
print('Found variables:')
for v in var_names:
print(v)
ary = sess.run(v)
params[v] = ary
np.savez('mlp', **params)
import numpy as np
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
###########################
### LOAD DATA AND PARAMS
###########################
mnist = input_data.read_data_sets("./", one_hot=False)
param_dict = np.load('mlp.npz')
##########################
### GRAPH DEFINITION
##########################
g = tf.Graph()
with g.as_default():
# here: constructs a non-trainable graph
# due to the provided fixed_params argument
mlp_graph(fixed_params=param_dict)
with tf.Session(graph=g) as sess:
test_acc = sess.run('accuracy:0', feed_dict={'features:0': mnist.test.images,
'targets:0': mnist.test.labels})
print('Test ACC: %.3f' % test_acc)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Model Zoo -- Saving and Loading Trained Models
Step2: Train and Save Multilayer Perceptron
Step3: Reload Model from Meta and Checkpoint Files
Step4: Working with NumPy Archive Files and Creating Non-Trainable Graphs
Step5: Load NumPy .npz files into the mlp_graph
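The save-to-npz / reload round trip used above can be sketched without TensorFlow — a minimal example with hypothetical layer parameters that mirror the 'fc1/weights:0' naming convention:

```python
import os
import tempfile

import numpy as np

# Hypothetical layer parameters, named like the TensorFlow variables above
params = {'fc1/weights:0': np.ones((2, 3)),
          'fc1/biases:0': np.zeros(3)}

# np.savez accepts the slash/colon keys because they arrive via **kwds
path = os.path.join(tempfile.mkdtemp(), 'mlp.npz')
np.savez(path, **params)

# np.load returns a lazy NpzFile; .files lists the stored array names
loaded = np.load(path)
restored = {k: loaded[k] for k in loaded.files}
```

The restored dict plays the same role as param_dict in the cell above: each array can be wrapped in tf.constant to build a fixed, non-trainable graph.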
|
9,061
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import division
import graphlab
products = graphlab.SFrame('amazon_baby_subset.gl/')
import json
with open('important_words.json', 'r') as f:
important_words = json.load(f)
important_words = [str(s) for s in important_words]
# Remove punctuation
def remove_punctuation(text):
import string
return text.translate(None, string.punctuation)
products['review_clean'] = products['review'].apply(remove_punctuation)
# Split out the words into individual columns
for word in important_words:
products[word] = products['review_clean'].apply(lambda s : s.split().count(word))
products
train_data, validation_data = products.random_split(.9, seed=1)
print 'Training set : %d data points' % len(train_data)
print 'Validation set: %d data points' % len(validation_data)
import numpy as np
def get_numpy_data(data_sframe, features, label):
data_sframe['intercept'] = 1
features = ['intercept'] + features
features_sframe = data_sframe[features]
feature_matrix = features_sframe.to_numpy()
label_sarray = data_sframe[label]
label_array = label_sarray.to_numpy()
return(feature_matrix, label_array)
feature_matrix_train, sentiment_train = get_numpy_data(train_data, important_words, 'sentiment')
feature_matrix_valid, sentiment_valid = get_numpy_data(validation_data, important_words, 'sentiment')
'''
produces a probabilistic estimate for P(y_i = +1 | x_i, w).
estimate ranges between 0 and 1.
'''
def predict_probability(feature_matrix, coefficients):
# Take dot product of feature_matrix and coefficients
score = np.dot(feature_matrix, coefficients)
# Compute P(y_i = +1 | x_i, w) using the link function
predictions = 1. / (1.+np.exp(-score))
return predictions
def feature_derivative(errors, feature):
# Compute the dot product of errors and feature
## YOUR CODE HERE
derivative = np.dot(errors, feature)
return derivative
def compute_avg_log_likelihood(feature_matrix, sentiment, coefficients):
indicator = (sentiment==+1)
scores = np.dot(feature_matrix, coefficients)
logexp = np.log(1. + np.exp(-scores))
# Simple check to prevent overflow
mask = np.isinf(logexp)
logexp[mask] = -scores[mask]
lp = np.sum((indicator-1)*scores - logexp)/len(feature_matrix)
return lp
j = 1 # Feature number
i = 10 # Data point number
coefficients = np.zeros(194) # A point w at which we are computing the gradient.
predictions = predict_probability(feature_matrix_train[i:i+1,:], coefficients)
indicator = (sentiment_train[i:i+1]==+1)
errors = indicator - predictions
gradient_single_data_point = feature_derivative(errors, feature_matrix_train[i:i+1,j])
print "Gradient single data point: %s" % gradient_single_data_point
print " --> Should print 0.0"
j = 1 # Feature number
i = 10 # Data point start
B = 10 # Mini-batch size
coefficients = np.zeros(194) # A point w at which we are computing the gradient.
predictions = predict_probability(feature_matrix_train[i:i+B,:], coefficients)
indicator = (sentiment_train[i:i+B]==+1)
errors = indicator - predictions
gradient_mini_batch = feature_derivative(errors, feature_matrix_train[i:i+B,j])
print "Gradient mini-batch data points: %s" % gradient_mini_batch
print " --> Should print 1.0"
print len(sentiment_train)
from math import sqrt
def logistic_regression_SG(feature_matrix, sentiment, initial_coefficients, step_size, batch_size, max_iter):
log_likelihood_all = []
# make sure it's a numpy array
coefficients = np.array(initial_coefficients)
# set seed=1 to produce consistent results
np.random.seed(seed=1)
# Shuffle the data before starting
permutation = np.random.permutation(len(feature_matrix))
feature_matrix = feature_matrix[permutation,:]
sentiment = sentiment[permutation]
i = 0 # index of current batch
# Do a linear scan over data
for itr in xrange(max_iter):
# Predict P(y_i = +1|x_i,w) using your predict_probability() function
# Make sure to slice the i-th row of feature_matrix with [i:i+batch_size,:]
### YOUR CODE HERE
predictions = predict_probability(feature_matrix[i:i+batch_size,:], coefficients)
# Compute indicator value for (y_i = +1)
# Make sure to slice the i-th entry with [i:i+batch_size]
### YOUR CODE HERE
indicator = (sentiment[i:i+batch_size]==+1)
# Compute the errors as indicator - predictions
errors = indicator - predictions
for j in xrange(len(coefficients)): # loop over each coefficient
# Recall that feature_matrix[:,j] is the feature column associated with coefficients[j]
# Compute the derivative for coefficients[j] and save it to derivative.
# Make sure to slice the i-th row of feature_matrix with [i:i+batch_size,j]
### YOUR CODE HERE
derivative = feature_derivative(errors, feature_matrix[i:i+batch_size,j])
# compute the product of the step size, the derivative, and the **normalization constant** (1./batch_size)
### YOUR CODE HERE
coefficients[j] += (1./batch_size)*(step_size * derivative)
# Checking whether log likelihood is increasing
# Print the log likelihood over the *current batch*
lp = compute_avg_log_likelihood(feature_matrix[i:i+batch_size,:], sentiment[i:i+batch_size],
coefficients)
log_likelihood_all.append(lp)
if itr <= 15 or (itr <= 1000 and itr % 100 == 0) or (itr <= 10000 and itr % 1000 == 0) \
or itr % 10000 == 0 or itr == max_iter-1:
data_size = len(feature_matrix)
print 'Iteration %*d: Average log likelihood (of data points in batch [%0*d:%0*d]) = %.8f' % \
(int(np.ceil(np.log10(max_iter))), itr, \
int(np.ceil(np.log10(data_size))), i, \
int(np.ceil(np.log10(data_size))), i+batch_size, lp)
# if we made a complete pass over data, shuffle and restart
i += batch_size
if i+batch_size > len(feature_matrix):
permutation = np.random.permutation(len(feature_matrix))
feature_matrix = feature_matrix[permutation,:]
sentiment = sentiment[permutation]
i = 0
# We return the list of log likelihoods for plotting purposes.
return coefficients, log_likelihood_all
sample_feature_matrix = np.array([[1.,2.,-1.], [1.,0.,1.]])
sample_sentiment = np.array([+1, -1])
coefficients, log_likelihood = logistic_regression_SG(sample_feature_matrix, sample_sentiment, np.zeros(3),
step_size=1., batch_size=2, max_iter=2)
print '-------------------------------------------------------------------------------------'
print 'Coefficients learned :', coefficients
print 'Average log likelihood per-iteration :', log_likelihood
if np.allclose(coefficients, np.array([-0.09755757, 0.68242552, -0.7799831]), atol=1e-3)\
and np.allclose(log_likelihood, np.array([-0.33774513108142956, -0.2345530939410341])):
# pass if elements match within 1e-3
print '-------------------------------------------------------------------------------------'
print 'Test passed!'
else:
print '-------------------------------------------------------------------------------------'
print 'Test failed'
coefficients, log_likelihood = logistic_regression_SG(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-1, batch_size=1, max_iter=10)
# YOUR CODE HERE
coefficients_batch, log_likelihood_batch = logistic_regression_SG(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-1,
batch_size = len(feature_matrix_train),
max_iter=200)
# A "pass" is one complete sweep over the dataset.
# Each mini-batch yields one gradient update, so 2 passes over
# 50,000 points with batch size 100 correspond to this many iterations:
2*(50000/100)
step_size = 1e-1
batch_size = 100
num_passes = 10
num_iterations = num_passes * int(len(feature_matrix_train)/batch_size)
coefficients_sgd, log_likelihood_sgd = logistic_regression_SG(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=1e-1, batch_size=100, max_iter=num_iterations)
import matplotlib.pyplot as plt
%matplotlib inline
def make_plot(log_likelihood_all, len_data, batch_size, smoothing_window=1, label=''):
plt.rcParams.update({'figure.figsize': (9,5)})
log_likelihood_all_ma = np.convolve(np.array(log_likelihood_all), \
np.ones((smoothing_window,))/smoothing_window, mode='valid')
plt.plot(np.array(range(smoothing_window-1, len(log_likelihood_all)))*float(batch_size)/len_data,
log_likelihood_all_ma, linewidth=4.0, label=label)
plt.rcParams.update({'font.size': 16})
plt.tight_layout()
plt.xlabel('# of passes over data')
plt.ylabel('Average log likelihood per data point')
plt.legend(loc='lower right', prop={'size':14})
make_plot(log_likelihood_sgd, len_data=len(feature_matrix_train), batch_size=100,
label='stochastic gradient, step_size=1e-1')
make_plot(log_likelihood_sgd, len_data=len(feature_matrix_train), batch_size=100,
smoothing_window=30, label='stochastic gradient, step_size=1e-1')
step_size = 1e-1
batch_size = 100
num_passes = 200
num_iterations = num_passes * int(len(feature_matrix_train)/batch_size)
## YOUR CODE HERE
coefficients_sgd, log_likelihood_sgd = logistic_regression_SG(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=step_size, batch_size=batch_size, max_iter=num_iterations)
make_plot(log_likelihood_sgd, len_data=len(feature_matrix_train), batch_size=100,
smoothing_window=30, label='stochastic, step_size=1e-1')
make_plot(log_likelihood_batch, len_data=len(feature_matrix_train), batch_size=len(feature_matrix_train),
smoothing_window=1, label='batch, step_size=5e-1')
batch_size = 100
num_passes = 10
num_iterations = num_passes * int(len(feature_matrix_train)/batch_size)
coefficients_sgd = {}
log_likelihood_sgd = {}
for step_size in np.logspace(-4, 2, num=7):
coefficients_sgd[step_size], log_likelihood_sgd[step_size] = logistic_regression_SG(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=step_size, batch_size=batch_size, max_iter=num_iterations)
for step_size in np.logspace(-4, 2, num=7):
make_plot(log_likelihood_sgd[step_size], len_data=len(train_data), batch_size=100,
smoothing_window=30, label='step_size=%.1e'%step_size)
for step_size in np.logspace(-4, 2, num=7)[0:6]:
make_plot(log_likelihood_sgd[step_size], len_data=len(train_data), batch_size=100,
smoothing_window=30, label='step_size=%.1e'%step_size)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load and process review dataset
Step2: Just like we did previously, we will work with a hand-curated list of important words extracted from the review data. We will also perform 2 simple data transformations
Step3: The SFrame products now contains one column for each of the 193 important_words.
Step4: Split data into training and validation sets
Step5: Convert SFrame to NumPy array
Step6: Note that we convert both the training and validation sets into NumPy arrays.
Step7: Are you running this notebook on an Amazon EC2 t2.micro instance? (If you are using your own machine, please skip this section)
Step8: Derivative of log likelihood with respect to a single coefficient
Step9: Note. We are not using regularization in this assignment, but, as discussed in the optional video, stochastic gradient can also be used for regularized logistic regression.
Step10: Quiz Question
Step11: Quiz Question
Step12: Quiz Question
Step13: Averaging the gradient across a batch
Step14: Note. In practice, the final set of coefficients is rarely used; it is better to use the average of the last K sets of coefficients instead, where K should be adjusted depending on how fast the log likelihood oscillates around the optimum.
Step15: Compare convergence behavior of stochastic gradient ascent
Step16: Quiz Question. When you set batch_size = 1, as each iteration passes, how does the average log likelihood in the batch change?
Step17: Quiz Question. When you set batch_size = len(train_data), as each iteration passes, how does the average log likelihood in the batch change?
Step18: Log likelihood plots for stochastic gradient ascent
Step19: We provide you with a utility function to plot the average log likelihood as a function of the number of passes.
Step20: Smoothing the stochastic gradient ascent curve
Step21: Checkpoint
Step22: We compare the convergence of stochastic gradient ascent and batch gradient ascent in the following cell. Note that we apply smoothing with smoothing_window=30.
Step23: Quiz Question
Step24: Plotting the log likelihood as a function of passes for each step size
Step25: Now, let us remove the step size step_size = 1e2 and plot the rest of the curves.
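The smoothing inside make_plot is a plain moving average implemented with np.convolve; the same idea on a tiny toy trace (illustrative data, not the actual log likelihoods):

```python
import numpy as np

# A short, noisy "log likelihood" trace (toy data)
ll = np.array([1.0, 3.0, 2.0, 4.0, 3.0, 5.0])

window = 3
# mode='valid' keeps only positions where the window fully overlaps the data,
# so the output has len(ll) - window + 1 points
smoothed = np.convolve(ll, np.ones(window) / window, mode='valid')
```

Each output point is the mean of `window` consecutive inputs, which is why larger smoothing_window values flatten the oscillations of small-batch SGD so much.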
|
9,062
|
<ASSISTANT_TASK:>
Python Code:
# setup
import numpy as np
import sympy as sp
import pandas as pd
import scipy
from pprint import pprint
sp.init_printing(use_latex='mathjax')
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (12, 8) # (width, height)
plt.rcParams['font.size'] = 14
plt.rcParams['legend.fontsize'] = 16
from matplotlib import patches
get_ipython().magic('matplotlib') # seperate window
get_ipython().magic('matplotlib inline') # inline plotting
import glob as gb
from matplotlib.pyplot import *
%matplotlib inline
csvdir='./examples/'
e=[]
y=[]
# 'specimen' is assumed to be defined earlier in the notebook as a list of specimen names
for s in specimen:
files = gb.glob(csvdir + '*.csv') # select all csv files
fig, ax = subplots()
title(s)
Pult = []
for f in files:
d1 = pd.read_csv(f, skiprows=1)
d1 = d1[1:] # remove first row of string
d1.columns = ['t', 'load', 'ext'] # rename columns
d1.head()
# remove commas in data
for d in d1.columns:
#d1.dtypes
d1[d] = d1[d].map(lambda x: float(str(x).replace(',','')))
Pult.append(np.max(d1.load))
plot(d1.ext, d1.load)
ylabel('Pult, lbs')
xlabel('extension, in')
e.append(np.std(Pult))
y.append(np.average(Pult) )
show()
# bar chart
barwidth = 0.35 # the width of the bars
fig, ax = subplots()
x = np.arange(len(specimen))
ax.bar(x, y, width=barwidth, yerr=e)
#ax.set_xticks(x)
xticks(x+barwidth/2, specimen, rotation='vertical')
title('Pult with sample average and stdev of n=3')
ylabel('Pult, lbs')
margins(0.05)
show()
f = 'Aluminum_loops.txt'
d1 = pd.read_csv(f, skiprows=4,delimiter='\t')
d1 = d1[1:] # remove first row of string
d1.columns = ['time', 'load', 'cross','ext','strain','stress'] # rename columns
d1.head()
# remove commas in data
for d in d1.columns:
#d1.dtypes
d1[d] = d1[d].map(lambda x: float(str(x).replace(',','')))
plot(d1.ext, d1.load)
ylabel('stress')
xlabel('strain')
d1.head()
f = 'al_MTS_test.csv'
d1 = pd.read_csv(f, skiprows=3,delimiter=',')
d1 = d1[1:] # remove first row of string
d1 = d1[['Time','Axial Force', 'Axial Fine Displacement', 'Axial Length']]
d1.columns = ['time', 'load', 'strain','cross'] # rename columns
# remove commas in data
for d in d1.columns:
#d1.dtypes
d1[d] = d1[d].map(lambda x: float(str(x).replace(',','')))
plot(d1.strain, d1.load)
ylabel('stress')
xlabel('strain')
%matplotlib inline
from scipy import signal
from pylab import plot, xlabel, ylabel, title, rcParams, figure
import numpy as np
pltwidth = 16
pltheight = 8
rcParams['figure.figsize'] = (pltwidth, pltheight)
csv = np.genfromtxt('./stress_strain1.csv', delimiter=",")
disp = csv[:,0]
force = csv[:,1]
print('number of data points = %i' % len(disp))
def moving_average(x, window):
    """Moving average of 'x' with window size 'window'."""
y = np.empty(len(x)-window+1)
for i in range(len(y)):
y[i] = np.sum(x[i:i+window])/window
return y
plt1 = plot(disp, force);
xlabel('displacement');
ylabel('force');
figure()
mywindow = 1000 # the larger the filter window, the more aggressive the filtering
force2 = moving_average(force, mywindow)
x2 = range(len(force2))
plot(x2, force2);
title('Force smoothed with moving average filter');
# Find f' using diff to find the first intersection of the 0
# mvavgforce = mvavgforce[:len(mvavgforce)/2]
force2p = np.diff(force2)
x2p = range(len(force2p))
plot(x2p, force2p);
title('Slope of the smoothed curve')
i = np.argmax(force2p<0)
### or
# i = where(force2p<0)[0][0]
#### or
# for i, f in enumerate(force2p):
# if f < 0:
# break
plot(x2p, force2p, i,force2p[i],'o', markersize=15);
title('find the point at which the slope goes negative, indicating a switch in the slope direction');
plot(x2, force2, i,force2[i],'o',markersize=15);
title('using that index, plot on the force-displacement curve');
#Now, find the next point after the peak whose force is `delta` lower.
delta = 1
i2 = np.argmax(force2[i]-delta > force2[i:])
# If that point does not exist on the immediate downward-sloping path,
# then just choose the max point. In this case, the drop point would exist
# very far away from the peak and not be desirable
if i2 > i:
i2=0
plot(x2, force2, i,force2[i],'o', i2+i, force2[i2+i] ,'*', markersize=15);
disp
# remove nan
disp = disp[~np.isnan(force)]
force = force[~np.isnan(force)]
A = 0.1 # area
stress = force/A / 1e3
strain = disp/25.4 * 1e-3
plt.plot(strain, stress)
stress_range = np.array([5, 15])
PL = 0.0005
E_tan = stress/strain
assert(len(stress)==len(strain))
i = (stress > stress_range[0]) & (stress < stress_range[1])
stress_mod = stress[i]
strain_mod = strain[i]
fit = np.polyfit(strain_mod,stress_mod,1)
fit_fn = np.poly1d(fit)
fit_fn
PLi = np.argmax( (stress - (fit_fn(strain-PL)) < 0) )
PLi
# fit_fn is now a function which takes in x and returns an estimate for y
#plt.text(4,4,fit_fn)
plt.plot(strain ,stress, 'y')
plot(strain, fit_fn(strain-PL) , '--k', strain[PLi], stress[PLi],'o')
plt.xlim(0, np.max(strain))
plt.ylim(0, np.max(stress))
print('ultimate stress %f' % np.max(stress))
print('ultimate strain %f' % np.max(strain))
print('strain at proportional limit %f' % strain[PLi])
print('stress at proportional limit %f' % stress[PLi])
E_tan = E_tan[~np.isinf(E_tan)]
strainE = strain[1:]
plot(strainE, E_tan,'b', strainE[PLi], E_tan[PLi],'o')
plt.ylim([0,25000])
plt.title('Tangent Modulus')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Reading raw test data example 1
Step2: Reading test data - example 2
Step3: another example of plotting data
Step5: Finding the "first" peak and delta-10 threshold limit on force-displacement data of an aluminum coupon
Step6: Modulus
|
9,063
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'svg'
from ipywidgets import interact
from ipywidgets import widgets, FloatSlider, Checkbox, RadioButtons, fixed
from exact_solvers import shallow_water
from exact_solvers import shallow_demos
from IPython.display import IFrame
g = 1.
interact(shallow_water.plot_hugoniot_loci, y_axis=widgets.fixed('hu'),
plot_1=widgets.Checkbox(description='Plot 1-loci',value=True),
plot_2=widgets.Checkbox(description='Plot 2-loci',value=False));
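For reference, a minimal sketch of how such loci can be computed directly (an editor's addition; the relation below is the standard textbook shallow-water Hugoniot formula and is an assumption here, not extracted from the `shallow_water` module):

```python
import numpy as np

g = 1.0                      # gravity, matching the setting above
h_star, u_star = 1.0, 0.0    # example base state (assumed values)
h = np.linspace(1e-3, 3.0, 200)
# States (h, u) connected to (h_star, u_star) by a shock satisfy
#   u = u_star +/- (h - h_star) * sqrt(g/2 * (1/h + 1/h_star)),
# one sign per characteristic family.
u_plus = u_star + (h - h_star) * np.sqrt(g / 2 * (1 / h + 1 / h_star))
u_minus = u_star - (h - h_star) * np.sqrt(g / 2 * (1 / h + 1 / h_star))
# Both branches pass through the base state (h_star, u_star).
```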
interact(shallow_water.plot_hugoniot_loci, y_axis=widgets.fixed('u'),
plot_1=widgets.Checkbox(description='Plot 1-loci',value=True),
plot_2=widgets.Checkbox(description='Plot 2-loci',value=False));
shallow_water.plot_riemann_SW(h_l=2, h_r=2, u_l=1, u_r=-1)
shallow_water.plot_riemann_SW(h_l=4, h_r=1, u_l=0, u_r=0, plot1=True,
force_waves='shock', particle_paths=False);
interact(shallow_demos.plot_int_curves, y_axis=widgets.fixed('hu'),
plot_1=widgets.Checkbox(description='1-wave curves',
value=True),
plot_2=widgets.Checkbox(description='2-wave curves',
value=False));
interact(shallow_demos.plot_int_curves, y_axis=widgets.fixed('u'),
plot_1=widgets.Checkbox(description='1-wave curves',
value=True),
plot_2=widgets.Checkbox(description='2-wave curves',
value=False));
interact(shallow_demos.compare_curves,
wave_family=RadioButtons(options=[1,2],
description='Wave family:'),
y_axis=RadioButtons(options=['u','hu'],
description='Vertical axis:'),
h0=FloatSlider(min=1.e-1,max=3.,value=1.,
description='$h_*$'),
u0=FloatSlider(min=-3,max=3,description='$u_*$'));
shallow_water.plot_riemann_SW(h_l=1, h_r=1, u_l=-1., u_r=1.)
shallow_water.plot_riemann_SW(h_l=4, h_r=1, u_l=0, u_r=0, plot2=True,
force_waves='raref', particle_paths=False)
shallow_water.plot_riemann_SW(h_l=4, h_r=1, u_l=0, u_r=0)
interact(shallow_demos.connect_states,
h_l=widgets.FloatSlider(min=0.001,max=2,value=1),
u_l=widgets.FloatSlider(min=-5,max=5,value=-1),
h_r=widgets.FloatSlider(min=0.001,max=2,value=1),
u_r=widgets.FloatSlider(min=-5,max=5,value=1));
shallow_water.plot_riemann_SW(h_l=0.5, h_r=0.5, u_l=-1.9, u_r=1.9)
shallow_water.plot_riemann_SW(h_l=1, h_r=0, u_l=0, u_r=0, particle_paths=False)
IFrame(src='phase_plane/shallow_water_verysmall.html',
width=600, height=530)
def plot_exact_riemann_solution(h_l=3.,u_l=0.,h_r=1.,u_r=0.,t=0.2, fig=0):
plot_function = shallow_water.make_demo_plot_function(h_l,h_r,u_l,u_r,
hlim=(0,6),ulim=(-3,3))
plot_function(t,fig)
interact(plot_exact_riemann_solution,
h_l=FloatSlider(min=0.,max=5.,step=0.1,value=3.,
description=r'$h_l$'),
u_l=FloatSlider(min=-2.5,max=2.5,step=0.1,value=0.,
description=r'$u_l$'),
h_r=FloatSlider(min=0.,max=5.,step=0.1,value=1.,
description=r'$h_r$'),
u_r=FloatSlider(min=-2.5,max=2.5,step=0.1,value=0.,
description=r'$u_r$'),
t=FloatSlider(min=0., max=0.6, step=0.1,value=0.),
fig=fixed(0));
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: If you wish to examine the Python code for this chapter, see
Step2: The Riemann problem
Step3: The plot above shows the Hugoniot loci in the $h$-$hu$ plane, the natural phase plane in terms of the two conserved quantities. Of course, they all approach the origin as $h \rightarrow 0$. Alternatively, we can plot these same curves in the $h$-$u$ plane
Step4: Note that in this plane the curves in each family are simply vertical translates of one another, and all curves asymptote to $\pm \infty$ as $h \rightarrow 0$. This means that it is impossible to have a shock with an adjacent dry ($h=0$) state.
Step5: In the interactive notebook you can check the boxes to plot the characteristic families, and notice that the 1-characteristics impinge on the 1-shock; the 2-characteristics impinge on the 2-shock. Thus these shocks satisfy the entropy condition. You can also check a box to show the particle paths, which show how the water is decelerated to 0 speed as it goes through each shock.
Step6: Notice that the 1-characteristics (which are plotted as thin lines) don't impinge on the 1-shock; instead, characteristics are diverging away from it. This shock does not satisfy the entropy condition and should be replaced with a rarefaction. The corresponding part of the Hugoniot locus is plotted with a dashed line to remind us that it is unphysical.
Step7: We can also plot the integral curves in the $h$--$u$ plane
Step8: Note that in this plane the integral curves of each family are simply vertical translates of one another due to the form of the functions $w_1$ and $w_2$. Unlike the Hugoniot loci, the integral curves do not asymptote to $\pm\infty$ as $h \rightarrow 0$ and instead each approaches a finite value.
Step9: Near the point of intersection, the curves are very close; indeed, they must be tangent at this point since their direction is parallel to the corresponding eigenvector there (and in fact they also have the same curvature). Far from this point they diverge; for small depths they must diverge greatly, since the Hugoniot locus never reaches $h=0$ at any finite velocity.
Step10: Notice that the segment of each integral curve that connects to states with a smaller depth is plotted as a solid line, while the segment connecting to states with greater depth is plotted with a dashed line. This again is to remind us that states connected by a rarefaction through the solid part are physical (entropy-satisfying), while states connected by the dashed part would be unphysical (entropy-violating).
Step11: Notice that the 2-characteristics (plotted as thin lines) impinge on the 2-rarefaction; in fact they intersect at the left edge of the rarefaction. This means that the solution we constructed is triple-valued and nonsensical as a solution to this one-dimensional conservation law, and so this portion of the solution is omitted in the plots of depth and momentum. In this case a rarefaction wave is not physical and should be replaced with a shock; the corresponding part of the integral curve is hence shown as a dashed line.
Step12: In the interactive notebook you can confirm that 1-characteristics spread out across the rarefaction fan while 2-characteristics converge on the shock. View the particle paths and note that between the two waves the fluid velocity is constant, with the fluid accelerating across both the rarefaction and shock to the same intermediate value $u_m$.
Step13: You should find that by making the initial states flow sufficiently fast away from each other, there is no intersection in the $h-u$ plane. In the $h-hu$ plane, the integral curves always intersect at the origin. The reason for this ambiguity is that for zero depth it isn't meaningful to assign a velocity. Thus in the $h-u$ plane we could think of the entire $u$-axis as being part of every integral curve. That means that we can always connect the left and right states via an intermediate state with depth $h=0$ (a dry state).
Step14: Dry initial states
Step15: Interactive phase plane
Step16: Interactive solution
|
9,064
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from matplotlib.ticker import MultipleLocator
import scipy
from scipy import interpolate
from scipy.interpolate import interp1d
import scipy.io.wavfile as wf
import matplotlib.pyplot as plt
import numpy as np
def wav2file(fname, data, sr):
"""Write wave data to file `fname`. Sample rate is `sr`."""
#Write it to a file.
wf.write(fname, sr, data.astype(np.int16))
def plot_wav(sig, sr=16000, unit_time=False, **kwds):
"""Function for quickly plotting a WAV signal.
Returns the time series used to plot the signal.
"""
num_samples = len(sig)
start = 0
if unit_time:
stop = 1.0
else:
stop = float(num_samples) / float(sr)
t = np.linspace(start, stop, num=num_samples)
figsize = kwds.get('figsize', None)
if figsize is None:
figsize = (float(stop)/3.5*17.0, 5.5)
fig = plt.figure(figsize=figsize)
ax = fig.gca()
ax.plot(t, sig)
ax.set_xlabel("Time (s)")
ax.set_ylabel("Amplitude")
return t
def move_legend(ax=None, rpad=0):
"""Move the legend outside the data area of the axes."""
if ax is None:
ax = plt.gca()
ax.legend(loc="center left", bbox_to_anchor=(1+rpad, 0.5))
def center_ylims(ax):
Set the Y-axis limits so the data is centered vertically.
l, u = ax.get_ylim()
b = max(abs(l), u)
ax.set_ylim(-b, b)
def draw_vlines(ax, xcoords, color='k', linestyle='--', linewidth=1):
"""Draw a series of vertical lines."""
for x in xcoords:
ax.axvline(x, color=color, linestyle=linestyle, linewidth=linewidth)
def draw_tangent(ax, x,y,a, pointcolor='m', linecolor='r'):
"""Plot the line tangent to data series (x, y) at X-position `a`."""
# interpolate the data with a spline
spl = interpolate.splrep(x,y)
small_t = np.arange(a-5, a+5)
fa = interpolate.splev(a,spl,der=0) # f(a)
fprime = interpolate.splev(a,spl,der=1) # f'(a)
tan = fa+fprime*(small_t-a) # tangent
ax.plot(a,fa,'o'+pointcolor)
ax.plot(small_t, tan, '--' + linecolor, label='tangent at %0.3f' % a)
def norm_series(s, max_magnitude=None):
if max_magnitude is None:
max_magnitude = max(abs(s.max()), abs(s.min()))
return s.astype(np.float64) / np.float64(max_magnitude)
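An illustrative check of the normalization above (editor's addition; the sample values are made up):

```python
import numpy as np

s = np.array([2., -4., 1.])
max_magnitude = max(abs(s.max()), abs(s.min()))      # -> 4.0
normed = s.astype(np.float64) / np.float64(max_magnitude)
# normed -> array([ 0.5 , -1.  ,  0.25]); values now lie in [-1, 1]
```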
def abs_series(s):
"""Make series `s` positive by inverting any negative values."""
return np.abs(s)
def signal_to_freq_domain(signal, sr):
"""Take the Fourier Transform of a signal."""
#Take the Fourier Transform of the signal.
sp = np.fft.fft(signal)
#Get frequencies.
freqs = np.fft.fftfreq(signal.shape[-1], d=(1.0 / sr))
#Get the total number of samples in the signal.
n = len(signal)
#Only take the positive half of the frequencies and signals.
freqs = freqs[:int(n / 2)]
signal = sp.real[:int(n / 2)]
signal = np.abs(signal)
return (signal, freqs)
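A self-contained sanity check of the frequency-domain idea (editor's addition with assumed demo values; note it uses the FFT magnitude rather than the real part taken above, since a pure sine has a near-zero real spectrum):

```python
import numpy as np

sr_demo = 1000                              # assumed sample rate for the demo
t = np.arange(sr_demo) / sr_demo            # one second of samples
tone = np.sin(2 * np.pi * 50.0 * t)         # a pure 50 Hz sine
mag = np.abs(np.fft.fft(tone))[:sr_demo // 2]
freqs = np.fft.fftfreq(len(tone), d=1.0 / sr_demo)[:sr_demo // 2]
peak_freq = freqs[np.argmax(mag)]           # -> 50.0
```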
def compute_tangent_slope(x, y, a):
"""Return the slope of the line tangent to the data series (x, y) at X position `a`."""
spl = interpolate.splrep(x,y)
small_t = np.arange(a-5, a+5)
fa = interpolate.splev(a,spl,der=0) # f(a)
fprime = interpolate.splev(a,spl,der=1) # f'(a)
tan = fa+fprime*(small_t-a) # tangent
return (tan[0]-tan[1])/(small_t[0]-small_t[1])
def compute_tangent_slopes(x, series):
"""Return an array of slopes of lines tangent to (x, series) at each position in `x`."""
tans = np.array([compute_tangent_slope(x, series, a) for a in x])
return tans
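As a pure-NumPy cross-check of the spline-based slope above (an editor's sketch with assumed sample data): for smooth data the result should agree with a central finite difference.

```python
import numpy as np

xs = np.linspace(0.0, 9.0, 10)
ys = xs ** 2                                  # f(x) = x^2, so f'(x) = 2x
# Central difference at xs[5] = 5.0; exact for a quadratic.
slope_fd = (ys[6] - ys[4]) / (xs[6] - xs[4])  # -> 10.0
```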
%%bash
sox -c 1 -r 44100 -t alsa default ./wavs/recording.wav silence 1 0.1 5% 1 1.0 5%
#filename = "./wavs/A4-440.wav"
#filename = "./wavs/middle-c.wav"
filename = "./wavs/recording.wav"
sr, signal = wf.read(filename)
signal_norm = norm_series(signal)
with plt.style.context(('seaborn-whitegrid')):
x = plot_wav(signal_norm, sr, figsize=(14, 5))
ax = plt.gca()
rval = ax.set_xlim(0.0, x.max())
center_ylims(ax)
rval = plt.title("Normalized Signal (Time Domain)")
sr, signal = wf.read(filename)
freq_signal, freqs = signal_to_freq_domain(signal, sr)
with plt.style.context(('ggplot')):
fig = plt.figure(figsize=(14, 5))
ax = plt.gca()
rval = ax.plot(freqs, freq_signal)
max_freqs = freqs[freq_signal == freq_signal.max()]
max_freq_str = ', '.join(['{0:.2f}'.format(r) for r in max_freqs])
rval = plt.title("Frequency Domain of Signal - Max. Freq. {0} Hz".format(max_freq_str))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import the required modules
Step3: Functions for working with signals
Step9: Plot signals
Step14: Signal processing functions
Step15: Analyze a Sound Wave
Step16: Plot the signal in the time domain
Step17: Plot the signal in the frequency domain
|
9,065
|
<ASSISTANT_TASK:>
Python Code:
import npfl103
import os
dpath = os.path.join('.', 'tutorial-assignment')
dlist = os.path.join(dpath, 'documents.list')
qlist = os.path.join(dpath, 'topics.list')
from npfl103.io import Collection
coll_docs = Collection(dlist)
from npfl103.io import Topic
coll_queries = Collection(qlist, document_cls=Topic)
from npfl103.io import BinaryVectorizer, TermFrequencyVectorizer
vectorizer = TermFrequencyVectorizer(field='lemma')
vdocs = (vectorizer.transform(d) for d in coll_docs)
cw_vectorizer = TermFrequencyVectorizer(field='lemma', token_filter=lambda t: t.pos in 'NADV')
vdocs = [vectorizer.transform(d) for d in coll_docs] # This actually parses the documents.
cw_docs = [cw_vectorizer.transform(d) for d in coll_docs]
d = vdocs[0]
cw_d = cw_docs[0]
# Print the top 10 most frequent tokens
import pprint, operator
print('All words:')
pprint.pprint({w: n for w, n in sorted(d.items(), key=operator.itemgetter(1), reverse=True)[:10]})
print('----------------------\nContent words:')
pprint.pprint({w: n for w, n in sorted(cw_d.items(), key=operator.itemgetter(1), reverse=True)[:10]})
from npfl103.transform import TransformCorpus
# This is the transformation we want to apply.
def normalize(vec):
doc_length = sum(vec.values())
return {k: v / doc_length for k, v in vec.items()}
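To make the transform concrete, here is what it does to a toy term-frequency vector (an editor's illustration with made-up counts):

```python
vec = {'cat': 2, 'dog': 3}
doc_length = sum(vec.values())                        # -> 5
normed = {k: v / doc_length for k, v in vec.items()}
# normed -> {'cat': 0.4, 'dog': 0.6}; the frequencies now sum to 1
```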
normalized_docs = TransformCorpus(corpus=cw_docs, transform=normalize, name='normalized_docs')
cw_queries = (cw_vectorizer.transform(q) for q in coll_queries) # Generator, again
normalized_queries = TransformCorpus(corpus=cw_queries, transform=normalize, name='normalized_queries')
cw_docs = TransformCorpus(corpus=coll_docs, transform=cw_vectorizer.transform, name='vectorized_docs')
cw_queries = TransformCorpus(corpus=coll_queries, transform=cw_vectorizer.transform, name='vectorized_queries')
normalized_docs = TransformCorpus(corpus=cw_docs, transform=normalize, name='normalized_docs')
normalized_queries = TransformCorpus(corpus=cw_queries, transform=normalize, name='normalized_docs')
from npfl103.similarity import Similarity
# The similarity is initialized with the document corpus.
similarity = Similarity(corpus=normalized_docs, k=10) # Returning the top 10 documents. Use None for all docs.
similarity_corpus = TransformCorpus(corpus=normalized_queries, transform=similarity, name='sim')
import io # The system io, not npfl103.io
hdl = io.StringIO() # Technical workaround, so that the tutorial does not create files at this point.
# This is what writes the output. In practice, you'll probably use "with open(...) as hdl:" to write to a file.
Similarity.write_trec(similarity_corpus, similarity, hdl)
from npfl103.evaluation import do_eval, print_eval
results_file = 'tutorial-assignment/tutorial-output.dat'
with open(results_file, 'w') as outstream:
Similarity.write_trec(similarity_corpus, similarity, outstream)
qrels_file = 'tutorial-assignment/qrels.txt'
print_eval(qrels_file, results_file)
print_eval(qrels_file, results_file, results_by_query=True)
results = do_eval(qrels_file, results_file, results_by_query=True)
pprint.pprint([q for q in results])
pprint.pprint(results['10.2452/401-AH'])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Tutorial data
Step2: Loading documents and queries
Step3: Notice that creating the Collection was fast. This is because the whole pipeline in npfl103 is lazy
Step4: Caching
Step5: The purpose of a Vectorizer is to take a stream of a document's tokens and convert it into one vector representing this document in some vector space. Each token is used as a dimension of the vector space. If your tokens are just part of speech tags (noun, verb, etc.), then your space will have just some 10 dimensions; if your tokens are word forms, then there will be thousands of dimensions.
Step6: The vectorizer provides a transform method that does the processing.
Step7: (Notice that we're still using generator expressions, so nothing really gets computed so far.)
Step8: We can compare the results of the "plain" vectorizer and the content word vectorizer
Step9: We can see that token filtering can make a pretty large difference.
Step10: These "pipeline" components get two parameters
Step11: The corpus is an iterable that contains dictionary-like objects as sparse document vectors.
Step12: Vectorization as transformation
Step13: Chaining transformations
Step14: How would you implement TF-IDF in this system?
Step15: Recapitulation
Step16: Evaluation
Step17: Since trec_eval (which is called inside these functions) needs an input file, not a stream,
Step18: The tutorial assignment has its ground truth file
Step19: You can also break down the results by query, by setting results_by_query=True
Step20: If you want to do further processing with the results, use do_eval().
|
9,066
|
<ASSISTANT_TASK:>
Python Code:
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import numpy as np
import tensorflow as tf
from six.moves import cPickle as pickle
from six.moves import range
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
save = pickle.load(f)
train_dataset = save['train_dataset']
train_labels = save['train_labels']
valid_dataset = save['valid_dataset']
valid_labels = save['valid_labels']
test_dataset = save['test_dataset']
test_labels = save['test_labels']
del save # hint to help gc free up memory
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
image_size = 28
num_labels = 10
def reformat(dataset, labels):
dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)
# Map 0 to [1.0, 0.0, 0.0 ...], 1 to [0.0, 1.0, 0.0 ...]
labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
return dataset, labels
train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
# With gradient descent training, even this much data is prohibitive.
# Subset the training data for faster turnaround.
train_subset = 10000
graph = tf.Graph()
with graph.as_default():
# Input data.
# Load the training, validation and test data into constants that are
# attached to the graph.
tf_train_dataset = tf.constant(train_dataset[:train_subset, :])
tf_train_labels = tf.constant(train_labels[:train_subset])
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
# These are the parameters that we are going to be training. The weight
# matrix will be initialized using random values following a (truncated)
# normal distribution. The biases get initialized to zero.
weights = tf.Variable(
tf.truncated_normal([image_size * image_size, num_labels]))
biases = tf.Variable(tf.zeros([num_labels]))
# Training computation.
# We multiply the inputs with the weight matrix, and add biases. We compute
# the softmax and cross-entropy (it's one operation in TensorFlow, because
# it's very common, and it can be optimized). We take the average of this
# cross-entropy across all training examples: that's our loss.
logits = tf.matmul(tf_train_dataset, weights) + biases
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits))
# Optimizer.
# We are going to find the minimum of this loss using gradient descent.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
# These are not part of training, but merely here so that we can report
# accuracy figures as we train.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(
tf.matmul(tf_valid_dataset, weights) + biases)
test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
num_steps = 1000
def accuracy(predictions, labels):
return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
/ predictions.shape[0])
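A small check of the accuracy helper above with hand-made one-hot data (editor's addition; the arrays are assumptions):

```python
import numpy as np

preds = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]])
labs = np.array([[1., 0.], [0., 1.], [0., 1.]])
# argmax picks the predicted / true class per row; 2 of 3 rows match.
acc = 100.0 * np.sum(np.argmax(preds, 1) == np.argmax(labs, 1)) / preds.shape[0]
# acc -> 66.66...%
```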
with tf.Session(graph=graph) as session:
# This is a one-time operation which ensures the parameters get initialized as
# we described in the graph: random weights for the matrix, zeros for the
# biases.
tf.global_variables_initializer().run()
print('Initialized')
for step in range(num_steps):
# Run the computations. We tell .run() that we want to run the optimizer,
# and get the loss value and the training predictions returned as numpy
# arrays.
_, l, predictions = session.run([optimizer, loss, train_prediction])
if (step % 100 == 0):
print('Loss at step %d: %f' % (step, l))
print('Training accuracy: %.1f%%' % accuracy(
predictions, train_labels[:train_subset, :]))
# Calling .eval() on valid_prediction is basically like calling run(), but
# just to get that one numpy array. Note that it recomputes all its graph
# dependencies.
print('Validation accuracy: %.1f%%' % accuracy(
valid_prediction.eval(), valid_labels))
print('Test accuracy: %.1f%%' % accuracy(test_prediction.eval(), test_labels))
batch_size = 128
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
weights = tf.Variable(
tf.truncated_normal([image_size * image_size, num_labels]))
biases = tf.Variable(tf.zeros([num_labels]))
# Training computation.
logits = tf.matmul(tf_train_dataset, weights) + biases
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits))
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(
tf.matmul(tf_valid_dataset, weights) + biases)
test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
num_steps = 10001
with tf.Session(graph=graph) as session:
tf.global_variables_initializer().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
save = pickle.load(f)
train_dataset = save['train_dataset']
train_labels = save['train_labels']
valid_dataset = save['valid_dataset']
valid_labels = save['valid_labels']
test_dataset = save['test_dataset']
test_labels = save['test_labels']
del save # hint to help gc free up memory
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
image_size = 28
num_labels = 10
def reformat(dataset, labels):
dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)
# Map 0 to [1.0, 0.0, 0.0 ...], 1 to [0.0, 1.0, 0.0 ...]
labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
return dataset, labels
train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
# Network Parameters
n_hidden_1 = 1024 # 1st layer number of features
#n_hidden_2 = 256 # 2nd layer number of features
#n_hidden_3 = 256 # 3nd layer number of features
n_input = 784 # notMNIST data input (img shape: 28*28)
n_classes = 10 # notMNIST total classes (0-9 digits)
# tf Graph input
x = tf.placeholder("float", [None, n_input])
y = tf.placeholder("float", [None, n_classes])
# Create model
def multilayer_perceptron(x, weights, biases):
# Hidden layer with RELU activation
layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
layer_1 = tf.nn.relu(layer_1)
# Hidden layer with RELU activation
# layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2'])
# layer_2 = tf.nn.relu(layer_2)
# Hidden layer with RELU activation
# layer_3 = tf.add(tf.matmul(layer_2, weights['h3']), biases['b3'])
# layer_3 = tf.nn.relu(layer_3)
# Output layer with linear activation
out_layer = tf.matmul(layer_1, weights['out']) + biases['out']
return out_layer
# Parameters
learning_rate = 0.001
# Store layers weight & bias
weights = {
'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
# 'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
# 'h3': tf.Variable(tf.random_normal([n_hidden_2, n_hidden_3])),
'out': tf.Variable(tf.random_normal([n_hidden_1, n_classes]))
}
biases = {
'b1': tf.Variable(tf.random_normal([n_hidden_1])),
# 'b2': tf.Variable(tf.random_normal([n_hidden_2])),
# 'b3': tf.Variable(tf.random_normal([n_hidden_3])),
'out': tf.Variable(tf.random_normal([n_classes]))
}
# Construct model
pred = multilayer_perceptron(x, weights, biases)
# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
# Test model
correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
# Calculate accuracy
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
#Parameters
training_epochs = 20
batch_size = 100
display_step = 1
# Launch the graph
with tf.Session() as sess:
init = tf.global_variables_initializer()
sess.run(init)
# Training cycle
for epoch in range(training_epochs):
avg_cost = 0.
total_batch = int(train_dataset.shape[0]/batch_size)
# Loop over all batches
for i in range(total_batch):
offset = (i * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_x = train_dataset[offset:(offset + batch_size), :]
batch_y = train_labels[offset:(offset + batch_size), :]
# Run optimization op (backprop) and cost op (to get loss value)
_, c = sess.run([optimizer, cost], feed_dict={x: batch_x,
y: batch_y})
# Compute average loss
avg_cost += c / total_batch
# Display logs per epoch step
if epoch % display_step == 0:
print("Epoch %d: cost=%.1f valid_accuracy=%.1f%% train_accuracy=%.1f%%" % (epoch+1, avg_cost, (accuracy.eval({x: valid_dataset, y: valid_labels})*100), (accuracy.eval({x: train_dataset, y: train_labels})*100)))
print("Optimization Finished!")
print("test_accuracy: %.1f%%" % (accuracy.eval({x: test_dataset, y: test_labels})*100))
#pickle_file = 'notMNIST_sanitized.pickle'
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
save = pickle.load(f)
train_dataset = save['train_dataset']
train_labels = save['train_labels']
valid_dataset = save['valid_dataset']
valid_labels = save['valid_labels']
test_dataset = save['test_dataset']
test_labels = save['test_labels']
del save # hint to help gc free up memory
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
image_size = 28
num_labels = 10
def reformat(dataset, labels):
dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)
# Map 0 to [1.0, 0.0, 0.0 ...], 1 to [0.0, 1.0, 0.0 ...]
labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
return dataset, labels
train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
batch_size = 128
hidden_nodes = 1024
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
weights_1 = tf.Variable(
tf.truncated_normal([image_size * image_size, hidden_nodes]))
biases_1 = tf.Variable(tf.zeros([hidden_nodes]))
weights_2 = tf.Variable(
tf.truncated_normal([hidden_nodes, num_labels]))
biases_2 = tf.Variable(tf.zeros([num_labels]))
# Training computation.
def forward_prop(input):
h1 = tf.nn.relu(tf.matmul(input, weights_1) + biases_1)
return tf.matmul(h1, weights_2) + biases_2
def accuracy(predictions, labels):
return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1)) / predictions.shape[0])
logits = forward_prop(tf_train_dataset)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits))
# Optimizer.
optimizer = tf.train.AdamOptimizer(learning_rate=0.002).minimize(loss)
# optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(forward_prop(tf_valid_dataset))
test_prediction = tf.nn.softmax(forward_prop(tf_test_dataset))
num_steps = 5001
with tf.Session(graph=graph) as session:
init = tf.global_variables_initializer()
session.run(init)
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run([optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First reload the data we generated in 1_notmnist.ipynb.
Step2: Reformat into a shape that's more adapted to the models we're going to train
Step3: We're first going to train a multinomial logistic regression using simple gradient descent.
Step4: Let's run this computation and iterate
Step5: Let's now switch to stochastic gradient descent training instead, which is much faster.
Step6: Let's run it
Step7: Problem
Step8: Problem - second version
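As a hedged aside, the one-hot encoding trick used in the reformat step above can be sketched standalone with plain NumPy (an illustration on made-up labels, not the notebook's own cell):

```python
import numpy as np

def one_hot(labels, num_labels):
    # comparing each label against an arange row vector puts a single
    # 1.0 in the matching column of every row
    return (np.arange(num_labels) == labels[:, None]).astype(np.float32)

labels = np.array([0, 3, 9])
encoded = one_hot(labels, 10)
print(encoded.shape)   # (3, 10)
```

Each row sums to 1.0, which is exactly the target format the softmax cross-entropy loss expects.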
|
9,067
|
<ASSISTANT_TASK:>
Python Code:
x = [1,3,5]
x.append(7)
x.insert(0,2)
x.pop(-2)
print(x)
x = [5,3,1]
y = [2,3]
z = x + y
z.sort()
print(z)
x = { 'a' : 'b', 'b' : 2, '2' : 6}
x['b']
x = { 'a' : 'b', 'b' : 2, '2' : 6}
x[2]
students = [
{ 'Name':'bob','GPA':3.4 },
{ 'Name':'sue','GPA':2.8 },
{ 'Name':'kent','GPA':4.0 }
]
#chaining
students[0]['Name']
s = [ { 'a':'bob','b':3.4 },
{ 'a':'sue','b':2.8 },
{ 'a':'kent','b':4.0 } ]
s[0]['c']
s = [ { 'a':'bob','b':3.4 },
{ 'a':'sue','b':2.8 },
{ 'a':'kent','b':4.0 } ]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A. [2, 1, 3, 7]
Step2: A. [1,2,3,3,5]
Step3: A. 2
Step4: A. KeyError (the key '2' is a string, so x[2] fails)
Step5: Watch Me Code 3
Step6: A. 3.4
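The dictionary questions above hinge on the fact that keys are compared by type as well as value; a small standalone sketch of the gotcha:

```python
x = {'a': 'b', 'b': 2, '2': 6}

print(x['2'])              # 6 -- the *string* '2' is a key

result = None
try:
    result = x[2]          # the *integer* 2 is not a key
except KeyError:
    result = 'KeyError'

print(x.get(2, 'missing')) # get() returns a default instead of raising
```

`dict.get` is the usual way to probe a key you are not sure exists.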
|
9,068
|
<ASSISTANT_TASK:>
Python Code:
import os
import discoursegraphs as dg
ddg = dg.corpora.pcc.get_document('maz-6728')
dg.DATA_ROOT_DIR
# dg.corpora.pcc.get_files_by_layer('syntax')
tdg = dg.read_tiger(os.path.join(
dg.DATA_ROOT_DIR,
'potsdam-commentary-corpus-2.0.0/syntax/maz-11766.xml'))
# dg.info(tdg)
# %load_ext gvmagic
# %dotstr dg.print_dot(tdg)
syntax_nodes = set(dg.select_nodes_by_layer(ddg, 'tiger:syntax'))
cat_nodes = set(dg.select_nodes_by_attribute(ddg, 'tiger:cat'))
# token_map = dg.util.TokenMapper(ddg)
# token_map.id2index['rst:rst:9_0']
token_map = dg.util.TokenMapper(ddg)
cat_spans = {}
for cat_node in cat_nodes:
# print ddg.node[cat_node]['tiger:cat'], dg.get_span_offsets(ddg, cat_node)
first_token_index = token_map.id2index[dg.get_span(ddg, cat_node)[0]]
last_token_index = token_map.id2index[dg.get_span(ddg, cat_node)[-1]]
# print ddg.node[cat_node]['tiger:cat'], token_map.id2index[first_token_id], token_map.id2index[last_token_id]
cat_spans[(first_token_index, last_token_index)] = {
'node': cat_node, 'cat': ddg.node[cat_node]['tiger:cat']}
def get_syntax_spans(docgraph):
token_map = dg.util.TokenMapper(docgraph)
cat_nodes = set(dg.select_nodes_by_attribute(docgraph, 'tiger:cat'))
syntax_spans = {}
for cat_node in cat_nodes:
first_token_index = token_map.id2index[dg.get_span(docgraph, cat_node)[0]]
last_token_index = token_map.id2index[dg.get_span(docgraph, cat_node)[-1]]
syntax_spans[(first_token_index, last_token_index)] = {
'node': cat_node, 'cat': docgraph.node[cat_node]['tiger:cat']}
return syntax_spans
# cat_spans
def get_rst_span_map(docgraph):
rst_spans = {}
for rel_id, span_type, rel_type, tok_onset, tok_offset in dg.readwrite.rst.rs3.get_rst_spans(docgraph):
rst_spans[(tok_onset, tok_offset)] = {
'rel_id': rel_id, 'span_type': span_type, 'rel_type': rel_type}
return rst_spans
for doc_id in dg.corpora.pcc.document_ids:
    ddg = dg.corpora.pcc.get_document(doc_id)
    syntax_spans = get_syntax_spans(ddg)
    rst_spans = get_rst_span_map(ddg)  # build the RST span map for this document
    for syntax_span in syntax_spans:
        if syntax_spans[syntax_span]['cat'] not in ('S', 'CS'):
            if syntax_span in rst_spans:
                print(doc_id, syntax_span, syntax_spans[syntax_span]['cat'])
from collections import Counter
cat_counter = Counter()
subord_counter = Counter()
for doc_id in dg.corpora.pcc.document_ids:
    ddg = dg.corpora.pcc.get_document(doc_id)
    syntax_spans = get_syntax_spans(ddg)
    rst_spans = get_rst_span_map(ddg)  # build the RST span map for this document
    subord_nodes = get_subordinate_clauses(ddg)
    for syntax_span in syntax_spans:
        if syntax_span in rst_spans:
            # print(doc_id, syntax_span, syntax_spans[syntax_span]['cat'])
            cat = syntax_spans[syntax_span]['cat']
            cat_counter[cat] += 1
            if syntax_spans[syntax_span]['node'] in subord_nodes:
                subord_counter[cat] += 1
cat_counter.most_common()
subord_counter
def get_subordinate_clauses(tiger_docgraph):
    """Given a document graph of a TIGER syntax tree, return all
    node IDs of nodes representing subordinate clause constituents.

    Parameters
    ----------
    tiger_docgraph : dg.DiscourseDocumentGraph
        document graph from which subordinate clauses will be extracted

    Returns
    -------
    subord_clause_nodes : list(str)
        list of node IDs of nodes directly dominating subordinate clauses
    """
subord_clause_rels = \
dg.select_edges_by_attribute(
tiger_docgraph, attribute='tiger:label',
value=['MO', 'RC', 'SB'])
subord_clause_nodes = []
for src_id, target_id in subord_clause_rels:
src_cat = tiger_docgraph.node[src_id].get('tiger:cat')
if src_cat == 'S' and not dg.istoken(tiger_docgraph, target_id):
subord_clause_nodes.append(target_id)
return subord_clause_nodes
for node_id in get_subordinate_clauses(ddg):
    print(node_id, ddg.node[node_id])
# rst_spans
rst_spans = get_rst_span_map(ddg)  # make sure the RST span map exists for the current document
for cat_span in cat_spans:
    if cat_span in rst_spans:
        print(cat_span, cat_spans[cat_span]['cat'],
              rst_spans[cat_span]['span_type'], rst_spans[cat_span]['rel_type'])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Only very few RST spans match non-S/CS syntax nodes
Step3: Are there any 'S'/'CS' that are not sentence root nodes?
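The overlap check driving both questions boils down to intersecting dictionary keys over (onset, offset) token spans; a minimal standalone sketch with made-up spans (the real maps come from get_syntax_spans() and get_rst_span_map()):

```python
# hypothetical span maps: (first_token_index, last_token_index) -> label
syntax_spans = {(0, 4): 'S', (5, 9): 'NP', (0, 9): 'CS'}
rst_spans = {(0, 4): 'satellite', (10, 12): 'nucleus'}

# spans covered by both a syntax constituent and an RST segment
shared = set(syntax_spans) & set(rst_spans)
matches = {span: (syntax_spans[span], rst_spans[span]) for span in shared}
print(matches)   # {(0, 4): ('S', 'satellite')}
```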
|
9,069
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('ggplot')
from libs import utils
# utils.<tab>
files = utils.get_celeb_files()
img = plt.imread(files[50])
# img.<tab>
print(img)
# If nothing is drawn and you are using notebook, try uncommenting the next line:
#%matplotlib inline
plt.imshow(img)
img.shape
# (218, 178, 3)
plt.imshow(img[:, :, 0], cmap='gray')
plt.imshow(img[:, :, 1], cmap='gray')
plt.imshow(img[:, :, 2], cmap='gray')
imgs = utils.get_celeb_imgs()
plt.imshow(imgs[0])
imgs[0].shape
data = np.array(imgs)
data.shape
mean_img = np.mean(data, axis=0)
plt.imshow(mean_img.astype(np.uint8))
std_img = np.std(data, axis=0)
plt.imshow(std_img.astype(np.uint8))
plt.imshow(np.mean(std_img, axis=2).astype(np.uint8))
flattened = data.ravel()
print(data[:1])
print(flattened[:10])
plt.hist(flattened.ravel(), 255)
plt.hist(mean_img.ravel(), 255)
bins = 20
fig, axs = plt.subplots(1, 3, figsize=(12, 6), sharey=True, sharex=True)
axs[0].hist((data[0]).ravel(), bins)
axs[0].set_title('img distribution')
axs[1].hist((mean_img).ravel(), bins)
axs[1].set_title('mean distribution')
axs[2].hist((data[0] - mean_img).ravel(), bins)
axs[2].set_title('(img - mean) distribution')
fig, axs = plt.subplots(1, 3, figsize=(12, 6), sharey=True, sharex=True)
axs[0].hist((data[0] - mean_img).ravel(), bins)
axs[0].set_title('(img - mean) distribution')
axs[1].hist((std_img).ravel(), bins)
axs[1].set_title('std deviation distribution')
axs[2].hist(((data[0] - mean_img) / std_img).ravel(), bins)
axs[2].set_title('((img - mean) / std_dev) distribution')
axs[2].set_xlim([-150, 150])
axs[2].set_xlim([-100, 100])
axs[2].set_xlim([-50, 50])
axs[2].set_xlim([-10, 10])
axs[2].set_xlim([-5, 5])
import tensorflow as tf
x = np.linspace(-3.0, 3.0, 100)
# Immediately, the result is given to us. An array of 100 numbers equally spaced from -3.0 to 3.0.
print(x)
# We know from numpy arrays that they have a `shape`, in this case a 1-dimensional array of 100 values
print(x.shape)
# and a `dtype`, in this case float64, or 64 bit floating point values.
print(x.dtype)
x = tf.linspace(-3.0, 3.0, 100)
print(x)
g = tf.get_default_graph()
[op.name for op in g.get_operations()]
g.get_tensor_by_name('LinSpace' + ':0')
# We're first going to create a session:
sess = tf.Session()
# Now we tell our session to compute anything we've created in the tensorflow graph.
computed_x = sess.run(x)
print(computed_x)
# Alternatively, we could tell the previous Tensor to evaluate itself using this session:
computed_x = x.eval(session=sess)
print(computed_x)
# We can close the session after we're done like so:
sess.close()
sess = tf.Session(graph=g)
sess.close()
g2 = tf.Graph()
sess = tf.InteractiveSession()
x.eval()
# We can find out the shape of a tensor like so:
print(x.get_shape())
# %% Or in a more friendly format
print(x.get_shape().as_list())
# The 1 dimensional gaussian takes two parameters, the mean value, and the standard deviation, which is commonly denoted by the name sigma.
mean = 0.0
sigma = 1.0
# Don't worry about trying to learn or remember this formula. I always have to refer to textbooks or check online for the exact formula.
z = (tf.exp(tf.negative(tf.pow(x - mean, 2.0) /
(2.0 * tf.pow(sigma, 2.0)))) *
(1.0 / (sigma * tf.sqrt(2.0 * 3.1415))))
res = z.eval()
plt.plot(res)
# if nothing is drawn, and you are using ipython notebook, uncomment the next two lines:
#%matplotlib inline
#plt.plot(res)
# Let's store the number of values in our Gaussian curve.
ksize = z.get_shape().as_list()[0]
# Let's multiply the two to get a 2d gaussian
z_2d = tf.matmul(tf.reshape(z, [ksize, 1]), tf.reshape(z, [1, ksize]))
# Execute the graph
plt.imshow(z_2d.eval())
# Let's first load an image. We're going to need a grayscale image to begin with. skimage has some images we can play with. If you do not have the skimage module, you can load your own image, or get skimage by pip installing "scikit-image".
from skimage import data
img = data.camera().astype(np.float32)
plt.imshow(img, cmap='gray')
print(img.shape)
# We could use the numpy reshape function to reshape our numpy array
img_4d = img.reshape([1, img.shape[0], img.shape[1], 1])
print(img_4d.shape)
# but since we'll be using tensorflow, we can use the tensorflow reshape function:
img_4d = tf.reshape(img, [1, img.shape[0], img.shape[1], 1])
print(img_4d)
print(img_4d.get_shape())
print(img_4d.get_shape().as_list())
# Reshape the 2d kernel to tensorflow's required 4d format: H x W x I x O
z_4d = tf.reshape(z_2d, [ksize, ksize, 1, 1])
print(z_4d.get_shape().as_list())
convolved = tf.nn.conv2d(img_4d, z_4d, strides=[1, 1, 1, 1], padding='SAME')
res = convolved.eval()
print(res.shape)
# Matplotlib cannot handle plotting 4D images! We'll have to convert this back to the original shape. There are a few ways we could do this. We could plot by "squeezing" the singleton dimensions.
plt.imshow(np.squeeze(res), cmap='gray')
# Or we could specify the exact dimensions we want to visualize:
plt.imshow(res[0, :, :, 0], cmap='gray')
xs = tf.linspace(-3.0, 3.0, ksize)
ys = tf.sin(xs)
plt.figure()
plt.plot(ys.eval())
ys = tf.reshape(ys, [ksize, 1])
ones = tf.ones((1, ksize))
wave = tf.matmul(ys, ones)
plt.imshow(wave.eval(), cmap='gray')
gabor = tf.multiply(wave, z_2d)
plt.imshow(gabor.eval(), cmap='gray')
# This is a placeholder which will become part of the tensorflow graph, but
# which we have to later explicitly define whenever we run/evaluate the graph.
# Pretty much everything you do in tensorflow can have a name. If we don't
# specify the name, tensorflow will give a default one, like "Placeholder_0".
# Let's use a more useful name to help us understand what's happening.
img = tf.placeholder(tf.float32, shape=[None, None], name='img')
# We'll reshape the 2d image to a 3-d tensor just like before:
# Except now we'll make use of another tensorflow function, expand dims, which adds a singleton dimension at the axis we specify.
# We use it to reshape our H x W image to include a channel dimension of 1
# our new dimensions will end up being: H x W x 1
img_3d = tf.expand_dims(img, 2)
dims = img_3d.get_shape()
print(dims)
# And again to get: 1 x H x W x 1
img_4d = tf.expand_dims(img_3d, 0)
print(img_4d.get_shape().as_list())
# Let's create another set of placeholders for our Gabor's parameters:
mean = tf.placeholder(tf.float32, name='mean')
sigma = tf.placeholder(tf.float32, name='sigma')
ksize = tf.placeholder(tf.int32, name='ksize')
# Then finally redo the entire set of operations we've done to convolve our
# image, except with our placeholders
x = tf.linspace(-3.0, 3.0, ksize)
z = (tf.exp(tf.negative(tf.pow(x - mean, 2.0) /
(2.0 * tf.pow(sigma, 2.0)))) *
(1.0 / (sigma * tf.sqrt(2.0 * 3.1415))))
z_2d = tf.matmul(
tf.reshape(z, tf.stack([ksize, 1])),
tf.reshape(z, tf.stack([1, ksize])))
ys = tf.sin(x)
ys = tf.reshape(ys, tf.stack([ksize, 1]))
ones = tf.ones(tf.stack([1, ksize]))
wave = tf.matmul(ys, ones)
gabor = tf.multiply(wave, z_2d)
gabor_4d = tf.reshape(gabor, tf.stack([ksize, ksize, 1, 1]))
# And finally, convolve the two:
convolved = tf.nn.conv2d(img_4d, gabor_4d, strides=[1, 1, 1, 1], padding='SAME', name='convolved')
convolved_img = convolved[0, :, :, 0]
# Evaluating now raises an error: the img placeholder has not been fed a value yet.
# convolved_img.eval()
# Feeding img alone still fails, because mean, sigma and ksize are placeholders too:
# convolved_img.eval(feed_dict={img: data.camera()})
res = convolved_img.eval(feed_dict={
img: data.camera(), mean:0.0, sigma:1.0, ksize:100})
plt.imshow(res, cmap='gray')
res = convolved_img.eval(feed_dict={
img: data.camera(),
mean: 0.0,
sigma: 0.5,
ksize: 32
})
plt.imshow(res, cmap='gray')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: I'll be using a popular image dataset for faces called the CelebFaces dataset. I've provided some helper functions, which you can find on the resources page; they will help us with manipulating images and loading this dataset.
Step2: Let's get the 50th image in this list of files, and then read the file at that location as an image, setting the result to a variable, img, and inspect a bit further what's going on
Step3: When I print out this image, I can see all the numbers that represent this image. We can use the function imshow to see this
Step4: <a name="understanding-image-shapes"></a>
Step5: This means that the image has 218 rows, 178 columns, and 3 color channels corresponding to the Red, Green, and Blue channels of the image, or RGB. Let's try looking at just one of the color channels.
Step6: We use the special colon operator to say take every value in this dimension. This is saying, give me every row, every column, and the 0th dimension of the color channels. What we're seeing is the amount of Red, Green, or Blue contributing to the overall color image.
Step7: We now have a list containing our images. Each index of the imgs list is another image which we can access using the square brackets
Step8: <a name="the-batch-dimension"></a>
Step9: It turns out we'll often use another convention for storing many images in an array using a new dimension called the batch dimension. The resulting image shape will be exactly the same, except we'll stick on a new dimension on the beginning... giving us number of images x the height x the width x the number of color channels.
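The batch convention can be sketched standalone with fake images (an illustration with assumed sizes, not the CelebFaces data):

```python
import numpy as np

# three fake equally-sized "images": H x W x C = 4 x 5 x 3
imgs = [np.zeros((4, 5, 3)) for _ in range(3)]

# stacking a list of equally-sized arrays prepends the batch dimension
data = np.array(imgs)
print(data.shape)   # (3, 4, 5, 3) -> N x H x W x C
```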
Step10: This will only work if every image in our list is exactly the same size. So if you have a wide image, short image, long image, forget about it. You'll need them all to be the same size. If you are unsure of how to get all of your images to the same size, then please refer to the online resources for the notebook I've provided, which shows you exactly how to take a bunch of images of different sizes, and crop and resize them the best we can to make them all the same size.
Step11: This is the first step towards building our robot overlords. We've reduced down our entire dataset to a single representation which describes what most of our dataset looks like. There is one other very useful statistic which we can look at very easily
Step12: So this is incredibly cool. We've just shown where changes are likely to be in our dataset of images. Or put another way, we're showing where and how much variance there is in our previous mean image representation.
Step13: This is showing us on average, how every color channel will vary as a heatmap. The more red, the more likely that our mean image is not the best representation. The more blue, the less likely that our mean image is far off from any other possible image.
Step14: We first convert our N x H x W x C dimensional array into a 1 dimensional array. The values of this array will be based on the last dimensions order. So we'll have
Step15: The last line is saying give me a histogram of every value in the vector, and use 255 bins. Each bin is grouping a range of values. The bars of each bin describe the frequency, or how many times anything within that range of values appears. In other words, it is telling us if there is something that seems to happen more than anything else. If there is, it is likely that a neural network will take advantage of that.
Step16: When we subtract an image by our mean image, we remove all of this information from it. And that means that the rest of the information is really what is important for describing what is unique about it.
Step17: What we can see from the histograms is the original image's distribution of values from 0 - 255. The mean image's data distribution is mostly centered around the value 100. When we look at the difference of the original image and the mean image as a histogram, we can see that the distribution is now centered around 0. What we are seeing is the distribution of values that were above the mean image's intensity, and which were below it. Let's take it one step further and complete the normalization by dividing by the standard deviation of our dataset
Step18: Now our data has been squished into a peak! We'll have to look at it on a different scale to see what's going on
Step19: What we can see is that the data is in the range of -3 to 3, with the bulk of the data centered around -1 to 1. This is the effect of normalizing our data
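The full normalization can be sketched standalone on synthetic data (a hedged illustration with an assumed mean of 100 and spread of 15, not the image dataset):

```python
import numpy as np

rng = np.random.default_rng(0)
values = rng.normal(loc=100.0, scale=15.0, size=10_000)

# subtract the mean, then divide by the standard deviation (z-scoring)
normalized = (values - values.mean()) / values.std()

# the result is centred at 0 with unit spread, bulk within [-3, 3]
print(round(float(normalized.mean()), 6), round(float(normalized.std()), 6))
```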
Step20: Let's take a look at how we might create a range of numbers. Using numpy, we could for instance use the linear space function
Step21: <a name="tensors"></a>
Step22: Instead of a numpy.array, we are returned a tf.Tensor. The name of it is "LinSpace
Step23: <a name="operations"></a>
Step24: So Tensorflow has named each of our operations to generally reflect what they are doing. There are a few parameters that are all prefixed by LinSpace, and then the last one which is the operation which takes all of the parameters and creates an output for the linspace.
Step25: What I've done is asked for the tf.Tensor that comes from the operation "LinSpace". So remember, the result of a tf.Operation is a tf.Tensor. Remember that was the same name as the tensor x we created before.
Step26: We could also explicitly tell the session which graph we want to manage
Step27: By default, it grabs the default graph. But we could have created a new graph like so
Step28: And then used this graph only in our session.
Step29: Now we didn't have to explicitly tell the eval function about our session. We'll leave this session open for the rest of the lecture.
Step30: <a name="many-operations"></a>
Step31: Just like before, amazingly, we haven't actually computed anything. We have just added a bunch of operations to Tensorflow's graph. Whenever we want the value or output of this operation, we'll have to explicitly ask for the part of the graph we're interested in before we can see its result. Since we've created an interactive session, we should just be able to say the name of the Tensor that we're interested in, and call the eval function
Step32: <a name="convolution"></a>
Step33: <a name="convolving-an-image-with-a-gaussian"></a>
Step34: Notice our img shape is 2-dimensional. For image convolution in Tensorflow, we need our images to be 4 dimensional. Remember that when we load many images and combine them in a single numpy array, the resulting shape has the number of images first.
Step35: Instead of getting a numpy array back, we get a tensorflow tensor. This means we can't access the shape parameter like we did with the numpy array. But instead, we can use get_shape(), and get_shape().as_list()
Step36: The H x W image is now part of a 4 dimensional array, where the other dimensions of N and C are 1. So there is only 1 image and only 1 channel.
Step37: <a name="convolvefilter-an-image-using-a-gaussian-kernel"></a>
Step38: There are two new parameters here
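The effect of strides and padding on the output size can be sketched with the standard output-length formulas (an illustration of the arithmetic, not TensorFlow's own code; sizes are assumed for the example):

```python
import math

def conv_output_length(input_length, ksize, stride, padding):
    # 'SAME' pads the input so every position produces an output;
    # 'VALID' only places the kernel where it fully fits
    if padding == 'SAME':
        return math.ceil(input_length / stride)
    return math.ceil((input_length - ksize + 1) / stride)

print(conv_output_length(512, 100, 1, 'SAME'))    # 512 -- size preserved
print(conv_output_length(512, 100, 1, 'VALID'))   # 413 -- shrinks by ksize - 1
print(conv_output_length(512, 100, 2, 'SAME'))    # 256 -- halved by the stride
```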
Step39: <a name="modulating-the-gaussian-with-a-sine-wave-to-create-gabor-kernel"></a>
Step40: We then calculate the sine of these values, which should give us a nice wave
Step41: And for multiplication, we'll need to convert this 1-dimensional vector to a matrix
Step42: We then repeat this wave across the matrix by using a multiplication of ones
Step43: We can directly multiply our old Gaussian kernel by this wave and get a gabor kernel
Step44: <a name="manipulating-an-image-with-this-gabor"></a>
Step45: What we've done is create an entire graph from our placeholders which is capable of convolving an image with a gabor kernel. In order to compute it, we have to specify all of the placeholders required for its computation.
Step46: It's saying that we didn't specify our placeholder for img. In order to "feed a value", we use the feed_dict parameter like so
Step47: But that's not the only placeholder in our graph! We also have placeholders for mean, sigma, and ksize. Once we specify all of them, we'll have our result
Step48: Now, instead of having to rewrite the entire graph, we can just specify the different placeholders.
|
9,070
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
incomes = np.random.normal(27000, 15000, 10000) # (center around, stdev, population )
np.mean(incomes) # calculate mean
%matplotlib inline
import matplotlib.pyplot as plt
# segment the income data into 50 buckets and plot as a histogram
plt.hist(incomes, 50)
plt.show()
np.median(incomes)
# append the value to the end of numpy array
incomes = np.append(incomes, [1000000000])
np.median(incomes)
np.mean(incomes) # mean has shifted to the rights
ages = np.random.randint(18, high=90, size=500)
ages
from scipy import stats
stats.mode(ages)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We can segment the income data into 50 buckets, and plot it as a histogram
Step2: Now compute the median - since we have a nice, even distribution it too should be close to 27,000
Step3: Now we'll add Donald Trump into the mix. Darn income inequality!
Step4: The median won't change much, but the mean does
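The robustness of the median can be sketched with a tiny hand-made sample using only the standard library (values are made up for the illustration):

```python
import statistics

incomes = [25_000, 27_000, 30_000, 32_000, 35_000]
print(statistics.mean(incomes), statistics.median(incomes))   # 29800 30000

incomes.append(1_000_000_000)   # one extreme outlier walks in

print(statistics.median(incomes))   # 31000 -- barely moves
print(statistics.mean(incomes))     # jumps past 166 million
```

This is why the median is usually the better summary for skewed quantities like income.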
Step5: Mode
|
9,071
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
import os
import sys
import pandas as pd
import numpy as np
%matplotlib inline
from matplotlib import pyplot as plt
import seaborn as sns
import datetime
#set current working directory
os.chdir('D:/Practical Time Series')
#Read the dataset into a pandas.DataFrame
df = pd.read_csv('datasets/PRSA_data_2010.1.1-2014.12.31.csv')
print('Shape of the dataframe:', df.shape)
#Let's see the first five rows of the DataFrame
df.head()
# Rows having NaN values in column pm2.5 are dropped.
df.dropna(subset=['pm2.5'], axis=0, inplace=True)
df.reset_index(drop=True, inplace=True)
df['datetime'] = df[['year', 'month', 'day', 'hour']].apply(lambda row: datetime.datetime(year=row['year'], month=row['month'], day=row['day'],
hour=row['hour']), axis=1)
df.sort_values('datetime', ascending=True, inplace=True)
#Let us draw a box plot to visualize the central tendency and dispersion of PRES
plt.figure(figsize=(5.5, 5.5))
g = sns.boxplot(df['pm2.5'])
g.set_title('Box plot of pm2.5')
plt.figure(figsize=(5.5, 5.5))
g = sns.tsplot(df['pm2.5'])
g.set_title('Time series of pm2.5')
g.set_xlabel('Index')
g.set_ylabel('pm2.5 readings')
#Let's plot the series for six months to check if any pattern apparently exists.
plt.figure(figsize=(5.5, 5.5))
g = sns.tsplot(df['pm2.5'].loc[df['datetime']<=datetime.datetime(year=2010,month=6,day=30)], color='g')
g.set_title('pm2.5 during 2010')
g.set_xlabel('Index')
g.set_ylabel('pm2.5 readings')
#Let's zoom in on one month.
plt.figure(figsize=(5.5, 5.5))
g = sns.tsplot(df['pm2.5'].loc[df['datetime']<=datetime.datetime(year=2010,month=1,day=31)], color='g')
g.set_title('pm2.5 during Jan 2010')
g.set_xlabel('Index')
g.set_ylabel('pm2.5 readings')
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler(feature_range=(0, 1))
df['scaled_pm2.5'] = scaler.fit_transform(np.array(df['pm2.5']).reshape(-1, 1))
# Let's start by splitting the dataset into train and validation. The dataset's
# time period is from Jan 1st, 2010 to Dec 31st, 2014. The first four years -
# 2010 to 2013 - are used as train and 2014 is kept for validation.
split_date = datetime.datetime(year=2014, month=1, day=1, hour=0)
df_train = df.loc[df['datetime']<split_date]
df_val = df.loc[df['datetime']>=split_date]
print('Shape of train:', df_train.shape)
print('Shape of test:', df_val.shape)
#First five rows of train
df_train.head()
#First five rows of validation
df_val.head()
#Reset the indices of the validation set
df_val.reset_index(drop=True, inplace=True)
# The train and validation time series of scaled pm2.5 are also plotted.
plt.figure(figsize=(5.5, 5.5))
g = sns.tsplot(df_train['scaled_pm2.5'], color='b')
g.set_title('Time series of scaled pm2.5 in train set')
g.set_xlabel('Index')
g.set_ylabel('Scaled pm2.5 readings')
plt.figure(figsize=(5.5, 5.5))
g = sns.tsplot(df_val['scaled_pm2.5'], color='r')
g.set_title('Time series of scaled pm2.5 in validation set')
g.set_xlabel('Index')
g.set_ylabel('Scaled pm2.5 readings')
def makeXy(ts, nb_timesteps):
    """
    Input:
        ts: original time series
        nb_timesteps: number of time steps in the regressors
    Output:
        X: 2-D array of regressors
        y: 1-D array of target
    """
X = []
y = []
for i in range(nb_timesteps, ts.shape[0]):
X.append(list(ts.loc[i-nb_timesteps:i-1]))
y.append(ts.loc[i])
X, y = np.array(X), np.array(y)
return X, y
X_train, y_train = makeXy(df_train['scaled_pm2.5'], 7)
print('Shape of train arrays:', X_train.shape, y_train.shape)
X_val, y_val = makeXy(df_val['scaled_pm2.5'], 7)
print('Shape of validation arrays:', X_val.shape, y_val.shape)
#X_train and X_val are reshaped to 3D arrays
X_train, X_val = X_train.reshape((X_train.shape[0], X_train.shape[1], 1)),\
X_val.reshape((X_val.shape[0], X_val.shape[1], 1))
print('Shape of arrays after reshaping:', X_train.shape, X_val.shape)
from keras.layers import Dense
from keras.layers import Input
from keras.layers import Dropout
from keras.layers import Flatten
from keras.layers.convolutional import ZeroPadding1D
from keras.layers.convolutional import Conv1D
from keras.layers.pooling import AveragePooling1D
from keras.optimizers import SGD
from keras.models import Model
from keras.models import load_model
from keras.callbacks import ModelCheckpoint
#Define input layer which has shape (None, 7) and of type float32. None indicates the number of instances
input_layer = Input(shape=(7,1), dtype='float32')
#Add zero padding
zeropadding_layer = ZeroPadding1D(padding=1)(input_layer)
#Add 1D convolution layers
conv1D_layer1 = Conv1D(64, 3, strides=1, use_bias=True)(zeropadding_layer)
conv1D_layer2 = Conv1D(32, 3, strides=1, use_bias=True)(conv1D_layer1)
#Add AveragePooling1D layer
avgpooling_layer = AveragePooling1D(pool_size=3, strides=1)(conv1D_layer2)
#Add Flatten layer
flatten_layer = Flatten()(avgpooling_layer)
#A couple of Dense layers are also added. Note that these Dense layers are
#defined on the pooling output but are not connected to the final output below,
#which is built from dropout_layer on the flattened features instead.
dense_layer1 = Dense(32)(avgpooling_layer)
dense_layer2 = Dense(16)(dense_layer1)
dropout_layer = Dropout(0.2)(flatten_layer)
#Finally the output layer gives prediction for the next day's air pressure.
output_layer = Dense(1, activation='linear')(dropout_layer)
ts_model = Model(inputs=input_layer, outputs=output_layer)
ts_model.compile(loss='mean_absolute_error', optimizer='adam')#SGD(lr=0.001, decay=1e-5))
ts_model.summary()
save_weights_at = os.path.join('keras_models', 'PRSA_data_PM2.5_1DConv_weights.{epoch:02d}-{val_loss:.4f}.hdf5')
save_best = ModelCheckpoint(save_weights_at, monitor='val_loss', verbose=0,
save_best_only=True, save_weights_only=False, mode='min',
period=1)
ts_model.fit(x=X_train, y=y_train, batch_size=16, epochs=20,
verbose=1, callbacks=[save_best], validation_data=(X_val, y_val),
shuffle=True)
best_model = load_model(os.path.join('keras_models', 'PRSA_data_PM2.5_1DConv_weights.18-0.0128.hdf5'))
preds = best_model.predict(X_val)
pred_pm25 = scaler.inverse_transform(preds)
pred_pm25 = np.squeeze(pred_pm25)
from sklearn.metrics import mean_absolute_error
mae = mean_absolute_error(df_val['pm2.5'].loc[7:], pred_pm25)
print('MAE for the validation set:', round(mae, 4))
#Let's plot the first 50 actual and predicted values of pm2.5.
plt.figure(figsize=(5.5, 5.5))
plt.plot(range(50), df_val['pm2.5'].loc[7:56], linestyle='-', marker='*', color='r')
plt.plot(range(50), pred_pm25[:50], linestyle='-', marker='.', color='b')
plt.legend(['Actual','Predicted'], loc=2)
plt.title('Actual vs Predicted pm2.5')
plt.ylabel('pm2.5')
plt.xlabel('Index')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In this notebook, we will use a one-dimensional convolutional neural network to develop time series forecasting models.
Step2: To make sure that the rows are in the right order of date and time of observations, a datetime column is created and the DataFrame is sorted by it in ascending order.
Step3: Gradient descent algorithms perform better (for example, converge faster) if the variables are within the range [-1, 1]. Many sources relax the boundary to even [-3, 3]. The pm2.5 variable is min-max scaled to bound the transformed variable within [0, 1].
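Min-max scaling can be sketched by hand on a tiny array (a hedged illustration of the formula scikit-learn's MinMaxScaler applies, with made-up values):

```python
import numpy as np

values = np.array([2.0, 5.0, 11.0, 20.0])

# min-max scaling maps the smallest value to 0 and the largest to 1
scaled = (values - values.min()) / (values.max() - values.min())
print(scaled)   # [0.         0.16666667 0.5        1.        ]
```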
Step6: Before training the model, the dataset is split into two parts - a train set and a validation set.
Step8: Now we need to generate regressors (X) and target variable (y) for train and validation. A 2-D array of regressors and a 1-D array of targets are created from the original 1-D array of the column scaled_pm2.5 in the DataFrames. For the time series forecasting model, the past seven days of observations are used to predict the next day. This is equivalent to an AR(7) model. We define a function which takes the original time series and the number of timesteps in regressors as input to generate the arrays of X and y.
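The sliding-window construction can be sketched standalone (a hedged, simplified version of the notebook's makeXy, shown on a toy ramp series rather than pm2.5):

```python
import numpy as np

def make_xy(series, nb_timesteps):
    # each row of X holds nb_timesteps consecutive past values;
    # y holds the value immediately following that window
    X = [series[i - nb_timesteps:i] for i in range(nb_timesteps, len(series))]
    y = [series[i] for i in range(nb_timesteps, len(series))]
    return np.array(X), np.array(y)

ts = np.arange(10.0)        # 0.0, 1.0, ..., 9.0
X, y = make_xy(ts, 7)
print(X.shape, y.shape)     # (3, 7) (3,)
```

A series of length n yields n - nb_timesteps training pairs.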
Step9: The input to convolution layers must be of shape (number of samples, number of timesteps, number of features per timestep). In this case we are modeling only pm2.5 hence number of features per timestep is one. Number of timesteps is seven and number of samples is same as the number of samples in X_train and X_val, which are reshaped to 3D arrays.
Step10: Now we define the network using the Keras Functional API. In this approach a layer can be declared as the input of the following layer at the time of defining the next layer.
Step11: A ZeroPadding1D layer is added next to add zeros at the beginning and end of each series. Zero padding ensures that the downstream convolution layer does not reduce the dimension of the output sequences. A pooling layer, added after the convolution layer, is used to downsample the input.
Step12: The first argument of Conv1D is the number of filters, which determines the number of features in the output. The second argument indicates the length of the 1D convolution window. The third argument is strides and represents the number of places to shift the convolution window. Lastly, setting use_bias to True adds a bias value during computation of an output feature. Here, the 1D convolution can be thought of as generating local AR models over a rolling window of three time units.
Step13: AveragePooling1D is added next to downsample the input by taking the average over a pool size of three with a stride of one timestep. The average pooling in this case can be thought of as taking moving averages over a rolling window of three time units. We have used average pooling instead of max pooling to generate the moving averages.
Step14: The preceding pooling layer returns a 3D output. Hence, before passing to the output layer, a Flatten layer is added. The Flatten layer reshapes the input to (number of samples, number of timesteps*number of features per timestep), which is then fed to the output layer
Step15: The input, dense and output layers will now be packed inside a Model, which is a wrapper class for training and making
Step16: The model is trained by calling the fit function on the model object and passing the X_train and y_train. The training
Step17: Predictions are made for the pm2.5 from the best saved model. The model's predictions, which are on the standardized pm2.5, are inverse transformed to get predictions of the original pm2.5.
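The windowing step described above (past seven observations as regressors, the next observation as target) can be sketched in plain Python. The function name make_xy and the use of plain lists instead of numpy arrays are illustrative assumptions, not the notebook's actual code:

```python
def make_xy(series, nb_timesteps):
    # Build one regressor row per target: the `nb_timesteps` values
    # preceding index i predict the value at index i (an AR(nb_timesteps) setup).
    X, y = [], []
    for i in range(nb_timesteps, len(series)):
        X.append(series[i - nb_timesteps:i])
        y.append(series[i])
    return X, y
```

For a series of length 100 and seven timesteps this yields 93 (X, y) pairs, matching the AR(7) framing above.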
|
9,072
|
<ASSISTANT_TASK:>
Python Code:
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
vgg_dir = 'tensorflow_vgg/'
# Make sure vgg exists
if not isdir(vgg_dir):
raise Exception("VGG directory doesn't exist!")
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(vgg_dir + "vgg16.npy"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='VGG16 Parameters') as pbar:
urlretrieve(
'https://s3.amazonaws.com/content.udacity-data.com/nd101/vgg16.npy',
vgg_dir + 'vgg16.npy',
pbar.hook)
else:
print("Parameter file already exists!")
import tarfile
dataset_folder_path = 'flower_photos'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile('flower_photos.tar.gz'):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Flowers Dataset') as pbar:
urlretrieve(
'http://download.tensorflow.org/example_images/flower_photos.tgz',
'flower_photos.tar.gz',
pbar.hook)
if not isdir(dataset_folder_path):
with tarfile.open('flower_photos.tar.gz') as tar:
tar.extractall()
tar.close()
import os
import numpy as np
import tensorflow as tf
from tensorflow_vgg import vgg16
from tensorflow_vgg import utils
data_dir = 'flower_photos/'
contents = os.listdir(data_dir)
classes = [each for each in contents if os.path.isdir(data_dir + each)]
# Set the batch size higher if you can fit in in your GPU memory
batch_size = 10
codes_list = []
labels = []
batch = []
codes = None
with tf.Session() as sess:
# TODO: Build the vgg network here
for each in classes:
print("Starting {} images".format(each))
class_path = data_dir + each
files = os.listdir(class_path)
for ii, file in enumerate(files, 1):
# Add images to the current batch
# utils.load_image crops the input images for us, from the center
img = utils.load_image(os.path.join(class_path, file))
batch.append(img.reshape((1, 224, 224, 3)))
labels.append(each)
# Running the batch through the network to get the codes
if ii % batch_size == 0 or ii == len(files):
# Image batch to pass to VGG network
images = np.concatenate(batch)
# TODO: Get the values from the relu6 layer of the VGG network
codes_batch =
# Here I'm building an array of the codes
if codes is None:
codes = codes_batch
else:
codes = np.concatenate((codes, codes_batch))
# Reset to start building the next batch
batch = []
print('{} images processed'.format(ii))
# write codes to file
with open('codes', 'w') as f:
codes.tofile(f)
# write labels to file
import csv
with open('labels', 'w') as f:
writer = csv.writer(f, delimiter='\n')
writer.writerow(labels)
# read codes and labels from file
import csv
with open('labels') as f:
reader = csv.reader(f, delimiter='\n')
labels = np.array([each for each in reader if len(each) > 0]).squeeze()
with open('codes') as f:
codes = np.fromfile(f, dtype=np.float32)
codes = codes.reshape((len(labels), -1))
labels_vecs = # Your one-hot encoded labels array here
train_x, train_y =
val_x, val_y =
test_x, test_y =
print("Train shapes (x, y):", train_x.shape, train_y.shape)
print("Validation shapes (x, y):", val_x.shape, val_y.shape)
print("Test shapes (x, y):", test_x.shape, test_y.shape)
inputs_ = tf.placeholder(tf.float32, shape=[None, codes.shape[1]])
labels_ = tf.placeholder(tf.int64, shape=[None, labels_vecs.shape[1]])
# TODO: Classifier layers and operations
logits = # output layer logits
cost = # cross entropy loss
optimizer = # training optimizer
# Operations for validation/test accuracy
predicted = tf.nn.softmax(logits)
correct_pred = tf.equal(tf.argmax(predicted, 1), tf.argmax(labels_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
def get_batches(x, y, n_batches=10):
    """Return a generator that yields batches from arrays x and y."""
batch_size = len(x)//n_batches
for ii in range(0, n_batches*batch_size, batch_size):
# If we're not on the last batch, grab data with size batch_size
if ii != (n_batches-1)*batch_size:
X, Y = x[ii: ii+batch_size], y[ii: ii+batch_size]
# On the last batch, grab the rest of the data
else:
X, Y = x[ii:], y[ii:]
# I love generators
yield X, Y
saver = tf.train.Saver()
with tf.Session() as sess:
# TODO: Your training code here
saver.save(sess, "checkpoints/flowers.ckpt")
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: test_x,
labels_: test_y}
test_acc = sess.run(accuracy, feed_dict=feed)
print("Test accuracy: {:.4f}".format(test_acc))
%matplotlib inline
import matplotlib.pyplot as plt
from scipy.ndimage import imread
test_img_path = 'flower_photos/roses/10894627425_ec76bbc757_n.jpg'
test_img = imread(test_img_path)
plt.imshow(test_img)
# Run this cell if you don't have a vgg graph built
if 'vgg' in globals():
print('"vgg" object already exists. Will not create again.')
else:
#create vgg
with tf.Session() as sess:
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
vgg = vgg16.Vgg16()
vgg.build(input_)
with tf.Session() as sess:
img = utils.load_image(test_img_path)
img = img.reshape((1, 224, 224, 3))
feed_dict = {input_: img}
code = sess.run(vgg.relu6, feed_dict=feed_dict)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: code}
prediction = sess.run(predicted, feed_dict=feed).squeeze()
plt.imshow(test_img)
plt.barh(np.arange(5), prediction)
_ = plt.yticks(np.arange(5), lb.classes_)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Flower power
Step2: ConvNet Codes
Step3: Below I'm running images through the VGG network in batches.
Step4: Building the Classifier
Step5: Data prep
Step6: Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same distribution of classes as the whole data set. The easiest way to accomplish both these goals is to use StratifiedShuffleSplit from scikit-learn.
Step7: If you did it right, you should see these sizes for the training sets
Step9: Batches!
Step10: Training
Step11: Testing
Step12: Below, feel free to choose images and see how the trained classifier predicts the flowers in them.
|
9,073
|
<ASSISTANT_TASK:>
Python Code:
import graphlab
sales = graphlab.SFrame.read_csv('Philadelphia_Crime_Rate_noNA.csv/')
sales
graphlab.canvas.set_target('ipynb')
sales.show(view="Scatter Plot", x="CrimeRate", y="HousePrice")
crime_model = graphlab.linear_regression.create(sales, target='HousePrice', features=['CrimeRate'],validation_set=None,verbose=False)
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(sales['CrimeRate'],sales['HousePrice'],'.',
sales['CrimeRate'],crime_model.predict(sales),'-')
sales_noCC = sales[sales['MilesPhila'] != 0.0]
sales_noCC.show(view="Scatter Plot", x="CrimeRate", y="HousePrice")
crime_model_noCC = graphlab.linear_regression.create(sales_noCC, target='HousePrice', features=['CrimeRate'],validation_set=None, verbose=False)
plt.plot(sales_noCC['CrimeRate'],sales_noCC['HousePrice'],'.',
sales_noCC['CrimeRate'],crime_model.predict(sales_noCC),'-')
crime_model.get('coefficients')
crime_model_noCC.get('coefficients')
sales_nohighend = sales_noCC[sales_noCC['HousePrice'] < 350000]
crime_model_nohighend = graphlab.linear_regression.create(sales_nohighend, target='HousePrice', features=['CrimeRate'],validation_set=None, verbose=False)
crime_model_noCC.get('coefficients')
crime_model_nohighend.get('coefficients')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load some house value vs. crime rate data
Step2: Exploring the data
Step3: Fit the regression model using crime as the feature
Step4: Let's see what our fit looks like
Step5: Above
Step6: Refit our simple regression model on this modified dataset
Step7: Look at the fit
Step8: Compare coefficients for full-data fit versus no-Center-City fit
Step9: Above
Step10: Do the coefficients change much?
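The simple regression being refit and compared here boils down to ordinary least squares with a single feature. A minimal sketch (a hypothetical helper, not GraphLab's implementation):

```python
def fit_line(xs, ys):
    # Ordinary least squares for one feature: returns (intercept, slope),
    # i.e. the coefficients reported by get('coefficients') above.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return my - slope * mx, slope
```

Comparing the slope from the full data against the slope with outliers removed is exactly the coefficient comparison the steps above perform.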
|
9,074
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import math
def velocity(radius, model='galaxy'):
describe the streaming velocity as function of radius in or around an object
such as a star or a galaxy. We usually define the velocity to be 1 at a radius of 1.
if model == 'star':
# A star has a keplerian rotation curve. The planets around our sun obey this law.
if radius == 0.0:
return 0.0
else:
return 1.0/np.sqrt(radius)
elif model == 'galaxy':
# Most disk galaxies have a flat rotation curve with a linear slope in the center.
if radius > 1.0:
# flat rotation curve outside radius 1.0
return 1.0
else:
# solid body inside radius 1.0, linearly rising rotation curve
return radius
elif model == 'plummer':
# A plummer sphere was an early 1900s description of clusters, and is also not
# a bad description for the inner portions of a galaxy. You can also view it
# as a hybrid and softened version of the 'star' and 'galaxy' described above.
# Note: not quite 1 at 1 yet
# return radius / (1+radius*radius)**0.75
return radius / (0.5+0.5*radius*radius)**0.75
else:
return 0.0
#model = 'star'
#model = 'galaxy'
model = 'plummer'
rad = np.arange(0.0,4.0,0.05)
vel = np.zeros(len(rad)) # this also works: vel = rad * 0.0
for i in range(len(rad)):
vel[i] = velocity(rad[i],model)
print("First, peak and Last value:",vel[0],vel.max(),vel[-1])
plt.plot(rad,vel)
plt.xlabel("Radius")
plt.ylabel("Velocity")
plt.title("Rotation Curve (%s)" % model);
# set the inclination of the disk with the line of sight
inc = 60 # (0 means face-on, 90 means edge-on)
# some helper variables
cosi = math.cos(inc*math.pi/180.0)
sini = math.sin(inc*math.pi/180.0)
# radius of the disk, and steps in radius
r0 = 4.0
dr = 0.1
dr = 0.5
x = np.arange(-r0,r0,dr)
y = np.arange(-r0,r0,dr)
xx,yy = np.meshgrid(x,y) # helper variables for interpolations
rr = np.sqrt(xx*xx+(yy/cosi)**2)
if r0/dr < 20:
plt.scatter(xx,yy)
else:
print("not plotting too many gridpoints/dimension",r0/dr)
def velocity2d(rad2d, model):
    """Convenient helper function to take a 2D array of radii
    and return the same-shaped velocities."""
(ny,nx) = rad2d.shape
vel2d = rad2d.copy() # could also do np.zeros(nx*ny).reshape(ny,nx)
for y in range(ny):
for x in range(nx):
vel2d[y,x] = velocity(rad2d[y,x],model)
return vel2d
vv = velocity2d(rr,model)
vvmasked = np.ma.masked_where(rr>r0,vv)
vobs = vvmasked * xx / rr * sini
print("V_max:",vobs.max())
vmax = 1
vmax = vobs.max()
if vmax > 0:
plt.imshow(vobs,origin=['Lower'],vmin=-vmax, vmax=vmax)
#plt.matshow(vobs,origin=['Lower'],vmin=-vmax, vmax=vmax)
else:
plt.imshow(vobs,origin=['Lower'])
plt.colorbar()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Initialize the data
Step3: Plotting the Rotation Curve
Step4: This curve of velocity as a function of radius is called a Rotation Curve, and extracting such a curve from an observation is crucial to understanding the mass distribution within a galaxy, or the mass of the young star at the center of the disk. We are assuming the gas is on circular orbits, which it turns out is not always correct for galaxies. However, for this experiment we will keep that assumption.
Step5: Backwards Projection
Step7: Although we have defined a function velocity to compute the rotation velocity at any radius, this function cannot easily operate on a numpy array, such as the one we just created on a grid on the sky. Thus we need a convenience function to do just that.
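The projection used in the code (deproject sky coordinates to an in-disk radius, then take the line-of-sight component v(r)*x/r*sin i) can be captured in a small helper. The name observed_velocity is illustrative:

```python
import math

def observed_velocity(x, y, inc_deg, vmodel):
    # Line-of-sight velocity at sky position (x, y) for a disk inclined
    # by inc_deg degrees; vmodel(r) returns the circular speed at radius r.
    cosi = math.cos(math.radians(inc_deg))
    sini = math.sin(math.radians(inc_deg))
    r = math.hypot(x, y / cosi)  # deproject to the in-disk radius
    return vmodel(r) * (x / r) * sini if r > 0 else 0.0
```

With a flat rotation curve (vmodel always 1) the maximum observed velocity along the major axis is just sin i, which matches the vobs expression in the grid code.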
|
9,075
|
<ASSISTANT_TASK:>
Python Code:
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
import tensorflow as tf
# Creates a graph.
a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
c = tf.matmul(a, b)
# Creates a session with log_device_placement set to True.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
# Runs the op.
print(sess.run(c))
# Creates a graph.
with tf.device('/cpu:0'):
a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
c = tf.matmul(a, b)
# Creates a session with log_device_placement set to True.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
# Runs the op.
print(sess.run(c))
with tf.device('/gpu:0'):
a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
c = tf.matmul(a, b)
# Creates a session with log_device_placement set to True.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
with tf.Session() as sess:
print (sess.run(c))
import time
import tensorflow as tf
def tf_benchmark(a=None, shape_a=None, b=None, shape_b=None, enable_GPU = False):
device = 'cpu'
if enable_GPU:
device = 'gpu'
start_time = time.time()
with tf.device('/{}:0'.format(device)):
a = tf.constant(a, shape=shape_a, name = 'a')
b = tf.constant(b, shape=shape_b, name='b')
c = tf.matmul(a, b)
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
output = sess.run(c)
execution_time = time.time()-start_time
return {'output': output, 'execution time': execution_time}
a=[1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
b=[1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
shape_a=[2, 3]
shape_b=[3,2]
CPU_exec_time = tf_benchmark(a=a,
b=b,
shape_a=shape_a,
shape_b=shape_b,
enable_GPU=False)
GPU_exec_time = tf_benchmark(a=a,
b=b,
shape_a=shape_a,
shape_b=shape_b,
enable_GPU=True)
print ("CPU execution time: {}".format(CPU_exec_time['execution time']))
print ("GPU execution time: {}".format(GPU_exec_time['execution time']))
print ("GPU vs CPU execution time delta: {}".format(GPU_exec_time['execution time'] - CPU_exec_time['execution time']))
print ("GPU acceleration factor: {}".format(CPU_exec_time['execution time'] / GPU_exec_time['execution time']))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Part 02 -- Manually specifying devices for running Tensorflow code
Step2: Setting up Tensorflow to run on CPU
Step3: Setting up Tensorflow to run on GPU
Step4: Part 03 -- Benchmarking Tensorflow GPU vs CPU
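The benchmark's timing pattern is just wall-clock bracketing of a call. A minimal sketch using time.perf_counter, a higher-resolution choice than the time.time used in tf_benchmark above:

```python
import time

def time_call(fn, *args):
    # Wall-clock a single call; returns (result, elapsed_seconds).
    start = time.perf_counter()
    result = fn(*args)
    return result, time.perf_counter() - start
```

Note that for a tiny 2x3 matmul like the one benchmarked here, session setup dominates the measurement, so the GPU "acceleration factor" mostly reflects overhead rather than compute speed.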
|
9,076
|
<ASSISTANT_TASK:>
Python Code:
database_path = os.path.join('..', 'data', 'Ana', 'database', 'Garmin-Ana-180226-1.csv')
print(os.path.abspath(database_path))
data = pd.read_csv(database_path)
data.columns = [s.replace('_', ' ') for s in data.columns]
data = data.set_index('file name')
data.head()
data.shape
data.groupby('sport').agg({'activity':'count'})
data = data[data['sport'].isin(['cycling', 'walking', 'running', 'hiking', 'driving', 'train', 'squash', 'yoga'])].copy()
data.groupby('sport').agg({'activity':'count'})
# these are the columns we want to use for the machine learning model
columns = ['avg speed', 'total distance', 'total elapsed time', 'avg heart rate', 'max heart rate',
'avg cadence', 'max cadence', 'total calories']
# for column in ['avg cadence', 'max cadence']:
# data[column] = data[column].fillna(data[column].mean()) # fill cadence NaNs with the average of that column
# data[column] = data[column].fillna(0) # fill NaN values with 0 (this is not ideal as cadence values are typically 50-60)
data = data.dropna(axis=0, subset=['avg cadence', 'max cadence']) # drop rows in which the cadence is NaN
# data = data[data['avg cadence'].isna()] # keep rows in which the cadence is NaN
# data = data.dropna(axis=1) # remove columns with NaN values (should be just the cadence columns)
data.groupby('sport').agg({'activity':'count'})
features = data[list(set(columns) & set(data.columns))].copy()
features.shape
features_scaler = preprocessing.MinMaxScaler()
# features_scaler = preprocessing.StandardScaler()
features_scaler.fit(features)
sports = data['sport'].copy()
sports_encoder = preprocessing.LabelEncoder()
sports_encoder.fit(sports)
features_train, features_test, sports_train, sports_test = model_selection.train_test_split(features, sports)
# sport_classifier = tree.DecisionTreeClassifier()
sport_classifier = ensemble.RandomForestClassifier(n_estimators=10)
sport_classifier.fit(features_scaler.transform(features_train), sports_encoder.transform(sports_train))
predictions = features_test.copy()
predictions['real sport'] = sports_test
predictions['predicted sport'] = sports_encoder.inverse_transform(
sport_classifier.predict(features_scaler.transform(features_test)))
# add columns indicating the probabilities for each sport
probablities = sport_classifier.predict_proba(features_scaler.transform(features_test))
for i, column in enumerate(['%s probability' % s for s in sports_encoder.classes_]):
predictions[column] = probablities[:, i]
# only keep rows in which the prediction was wrong
predictions = predictions[predictions['real sport'] != predictions['predicted sport']]
# predictions = predictions.sort_values('cycling probability', ascending=False)
# predictions = predictions[predictions['cycling probability'] < 1]
predictions.head(20)
x = 'avg speed'
y = 'avg heart rate'
sport_colours = {'walking': 'g', 'cycling': 'b', 'driving': 'k', 'yoga': 'm', 'mixed':'y',
'hiking': 'orange', 'running': 'r', 'train': 'c', 'other': 'purple', 'squash': 'grey',
'water': 'pink'}
fig, ax = plt.subplots(1, 1, figsize=[15,10])
for sport in sorted(predictions['real sport'].unique()):
ax.scatter(predictions[predictions['real sport'] == sport][x],
predictions[predictions['real sport'] == sport][y],
marker='o', s=100, alpha=0.5,
color=sport_colours[sport],
label="REAL - %s" % sport)
for sport in sorted(predictions['predicted sport'].unique()):
ax.scatter(predictions[predictions['predicted sport'] == sport][x],
predictions[predictions['predicted sport'] == sport][y],
marker='o', s=500, alpha=0.5, linewidth=5,
color=sport_colours[sport], facecolors='none',
label="PREDICTED - %s" % sport)
ax.set_xlabel(x)
ax.set_ylabel(y)
# ax.set_xlim([0,5])
# ax.set_ylim([0,5000])
ax.legend()
ax.grid()
sport_classifier.score(features_scaler.transform(features_test), sports_encoder.transform(sports_test))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Filter the data
Step2: Scale and encode the data
Step3: Split the data
Step4: Train the model
Step5: Test the model
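Step5's model test reduces to the usual accuracy metric, the fraction of matching predictions, which the classifier's score call computes internally. A plain-Python sketch:

```python
def accuracy(y_true, y_pred):
    # Fraction of predictions that match the ground-truth labels.
    assert len(y_true) == len(y_pred)
    hits = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return hits / len(y_true)
```

For the misclassification analysis in the notebook, the complementary set (rows where the two columns differ) is what gets plotted.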
|
9,077
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
invocations = pd.read_csv("datasets/test_code_invocations.csv", sep=";")
invocations.head()
invocation_matrix = invocations.pivot_table(
index=['test_type', 'test_method'],
columns=['prod_type', 'prod_method'],
values='invocations',
fill_value=0
)
# show interesting parts of results
invocation_matrix.iloc[4:8,4:6]
from sklearn.metrics.pairwise import cosine_distances
distance_matrix = cosine_distances(invocation_matrix)
# show some interesting parts of results
distance_matrix[81:85,60:62]
distance_df = pd.DataFrame(distance_matrix, index=invocation_matrix.index, columns=invocation_matrix.index)
# show some interesting parts of results
distance_df.iloc[81:85,60:62]
invocations[
(invocations.test_method == "void readRoundtripWorksWithFullData()") |
(invocations.test_method == "void postCommentActuallyCreatesComment()")]
invocations[
(invocations.test_method == "void readRoundtripWorksWithFullData()") |
(invocations.test_method == "void postTwiceCreatesTwoElements()")]
from sklearn.manifold import MDS
model = MDS(dissimilarity='precomputed', random_state=10)
distance_df_2d = model.fit_transform(distance_df)
distance_df_2d[:5]
%matplotlib inline
from matplotlib import cm
import matplotlib.pyplot as plt
relative_index = distance_df.index.labels[0].values() / distance_df.index.labels[0].max()
colors = [x for x in cm.hsv(relative_index)]
plt.figure(figsize=(8,8))
x = distance_df_2d[:,0]
y = distance_df_2d[:,1]
plt.scatter(x, y, c=colors)
from sklearn.cluster import DBSCAN
dbscan = DBSCAN(eps=0.08, min_samples=10)
clustering_results = dbscan.fit(distance_df_2d)
plt.figure(figsize=(8,8))
cluster_members = clustering_results.components_
# plot all data points
plt.scatter(x, y, c='k', alpha=0.2)
# plot cluster members
plt.scatter(
cluster_members[:,0],
cluster_members[:,1],
c='r', s=100, alpha=0.1)
tests = pd.DataFrame(index=distance_df.index)
tests['cluster'] = clustering_results.labels_
cohesive_tests = tests[tests.cluster != -1]
cohesive_tests.head()
test_measures = cohesive_tests.reset_index().groupby("cluster").test_type.agg({"nunique", "count"})
test_measures
test_list = cohesive_tests.reset_index().groupby("cluster").test_type.apply(set)
test_list
test_analysis_result = test_measures.join(test_list)
test_analysis_result
test_analysis_result.iloc[0].test_type
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: What we've got here are all names of our test types (test_type) and production types (prod_type) as well as the signatures of the test methods (test_method) and production methods (prod_method). We also have the amount of calls from the test methods to the production methods (invocations).
Step2: What we've got now is the information for each invocation (or non-invocation) of test methods to production methods. In mathematical terms, we now have an n-dimensional vector for each test method, where n is the number of tested production methods in our code base! That means we've just transformed our software data to a representation that we can now work on with standard Data Science tools
Step3: From this data, we create a DataFrame to get a better representation. You can find the complete DataFrame here as an Excel file as well.
Step4: Visualization
Step5: Next, we plot the now two-dimensional matrix with matplotlib. We colorize all data points according to the name of the test types. We can achieve this by assigning each type a number between 0 and 1 (relative_index) and drawing a color from a predefined color spectrum (cm.hsv) for each type. With this, each test class gets its own color. This enables us to quickly reason about test classes that belong together.
Step6: We now have the visual information about which test methods call similar production code! Let's discuss this plot
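The cosine distance that drives the whole analysis can be written out by hand for two invocation vectors; this sketch mirrors what cosine_distances computes pairwise:

```python
import math

def cosine_distance(a, b):
    # 1 - cosine similarity of two equal-length vectors; 0 means the
    # test methods call production code in identical proportions,
    # 1 means they share no called methods at all.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (norm_a * norm_b)
```

Because only the direction of the invocation vector matters, a test calling a method once and another calling it ten times still come out as identical (distance 0), which is the behavior we want when grouping tests by what they exercise rather than how often.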
|
9,078
|
<ASSISTANT_TASK:>
Python Code:
from sklearn.tree import DecisionTreeClassifier as dtc
X = [[0, 0], [1, 1]]
Y = [0, 1]
clf = dtc()
clf = clf.fit(X, Y)
clf.predict([[2., 2.]])
clf.predict_proba([[2., 2.]])
from sklearn.datasets import load_iris
from sklearn import tree
iris = load_iris()
print(iris.DESCR)
print(iris.data)
clf = tree.DecisionTreeClassifier()
clf = clf.fit(iris.data, iris.target)
import graphviz
dot_data = tree.export_graphviz(clf, out_file=None)
graph = graphviz.Source(dot_data)
graph.render("iris")
dot_data = tree.export_graphviz(clf, out_file=None,
feature_names=iris.feature_names,
class_names=iris.target_names,
filled=True, rounded=True,
special_characters=True)
graph = graphviz.Source(dot_data)
graph
clf.predict(iris.data[:1, :])
clf.predict_proba(iris.data[:1, :])
print(__doc__)
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
# Parameters
n_classes = 3
plot_colors = "ryb"
plot_step = 0.02
# Load data
iris = load_iris()
# permutation of 2 features from 4 features
for pairidx, pair in enumerate([[0, 1], [0, 2], [0, 3],
[1, 2], [1, 3], [2, 3]]):
# We only take the two corresponding features
print(pairidx, pair)
X = iris.data[:, pair]
y = iris.target
print(len(X), len(y))
print(X[1])
# Train
clf = DecisionTreeClassifier().fit(X, y)
# Plot the decision boundary
plt.subplot(2, 3, pairidx + 1)
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, plot_step),
np.arange(y_min, y_max, plot_step))
plt.tight_layout(h_pad=0.5, w_pad=0.5, pad=2.5)
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
cs = plt.contourf(xx, yy, Z, cmap=plt.cm.RdYlBu)
plt.xlabel(iris.feature_names[pair[0]])
plt.ylabel(iris.feature_names[pair[1]])
# Plot the training points
for i, color in zip(range(n_classes), plot_colors):
idx = np.where(y == i)
plt.scatter(X[idx, 0], X[idx, 1], c=color, label=iris.target_names[i],
cmap=plt.cm.RdYlBu, edgecolor='black', s=15)
plt.suptitle("Decision surface of a decision tree using paired features")
plt.legend(loc='lower right', borderpad=0, handletextpad=0)
plt.axis("tight")
plt.show()
from sklearn import tree
X = [[0, 0], [2, 2]]
y = [0.5, 2.5]
clf = tree.DecisionTreeRegressor()
clf = clf.fit(X, y)
clf.predict([[1, 1]])
print(__doc__)
# Import the necessary modules and libraries
import numpy as np
from sklearn.tree import DecisionTreeRegressor
import matplotlib.pyplot as plt
# Create a random dataset
rng = np.random.RandomState(1)
X = np.sort(5 * rng.rand(80, 1), axis=0)
y = np.sin(X).ravel()
y[::5] += 3 * (0.5 - rng.rand(16))
# Fit regression model
regr_1 = DecisionTreeRegressor(max_depth=3)
regr_2 = DecisionTreeRegressor(max_depth=9)
regr_1.fit(X, y)
regr_2.fit(X, y)
# Predict
X_test = np.arange(0.0, 5.0, 0.01)[:, np.newaxis]
y_1 = regr_1.predict(X_test)
y_2 = regr_2.predict(X_test)
# Plot the results
plt.figure()
plt.scatter(X, y, s=20, edgecolor="black",
c="darkorange", label="data")
plt.plot(X_test, y_1, color="cornflowerblue",
         label="max_depth=3", linewidth=2)
plt.plot(X_test, y_2, color="yellowgreen", label="max_depth=9", linewidth=2)
plt.xlabel("data")
plt.ylabel("target")
plt.title("Decision Tree Regression")
plt.legend()
plt.show()
# Quadratic training set + noise
np.random.seed(42)
m = 200
X = np.random.rand(m, 1)
y = 4 * (X - 0.5) ** 2
y = y + np.random.randn(m, 1) / 10
# To support both python 2 and python 3
from __future__ import division, print_function, unicode_literals
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "decision_trees"
def image_path(fig_id):
return os.path.join("files", fig_id)
def save_fig(fig_id, tight_layout=True):
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(image_path(fig_id) + ".png", format='png', dpi=300)
from sklearn.tree import DecisionTreeRegressor
tree_reg = DecisionTreeRegressor(max_depth=2, random_state=42)
tree_reg.fit(X, y)
from sklearn.tree import DecisionTreeRegressor
tree_reg1 = DecisionTreeRegressor(random_state=42, max_depth=2)
tree_reg2 = DecisionTreeRegressor(random_state=42, max_depth=3)
tree_reg1.fit(X, y)
tree_reg2.fit(X, y)
def plot_regression_predictions(tree_reg, X, y, axes=[0, 1, -0.2, 1], ylabel="$y$"):
x1 = np.linspace(axes[0], axes[1], 500).reshape(-1, 1)
y_pred = tree_reg.predict(x1)
plt.axis(axes)
plt.xlabel("$x_1$", fontsize=18)
if ylabel:
plt.ylabel(ylabel, fontsize=18, rotation=0)
plt.plot(X, y, "b.")
plt.plot(x1, y_pred, "r.-", linewidth=2, label=r"$\hat{y}$")
plt.figure(figsize=(11, 4))
plt.subplot(121)
plot_regression_predictions(tree_reg1, X, y)
for split, style in ((0.1973, "k-"), (0.0917, "k--"), (0.7718, "k--")):
plt.plot([split, split], [-0.2, 1], style, linewidth=2)
plt.text(0.21, 0.65, "Depth=0", fontsize=15)
plt.text(0.01, 0.2, "Depth=1", fontsize=13)
plt.text(0.65, 0.8, "Depth=1", fontsize=13)
plt.legend(loc="upper center", fontsize=18)
plt.title("max_depth=2", fontsize=14)
plt.subplot(122)
plot_regression_predictions(tree_reg2, X, y, ylabel=None)
for split, style in ((0.1973, "k-"), (0.0917, "k--"), (0.7718, "k--")):
plt.plot([split, split], [-0.2, 1], style, linewidth=2)
for split in (0.0458, 0.1298, 0.2873, 0.9040):
plt.plot([split, split], [-0.2, 1], "k:", linewidth=1)
plt.text(0.3, 0.5, "Depth=2", fontsize=13)
plt.title("max_depth=3", fontsize=14)
save_fig("tree_regression_plot")
plt.show()
from sklearn.tree import export_graphviz
export_graphviz(
tree_reg1,
out_file=image_path("regression_tree.dot"),
feature_names=["x1"],
rounded=True,
filled=True
)
tree_reg1 = DecisionTreeRegressor(random_state=42)
tree_reg2 = DecisionTreeRegressor(random_state=42, min_samples_leaf=10)
tree_reg1.fit(X, y)
tree_reg2.fit(X, y)
x1 = np.linspace(0, 1, 500).reshape(-1, 1)
y_pred1 = tree_reg1.predict(x1)
y_pred2 = tree_reg2.predict(x1)
plt.figure(figsize=(11, 4))
plt.subplot(121)
plt.plot(X, y, "b.")
plt.plot(x1, y_pred1, "r.-", linewidth=2, label=r"$\hat{y}$")
plt.axis([0, 1, -0.2, 1.1])
plt.xlabel("$x_1$", fontsize=18)
plt.ylabel("$y$", fontsize=18, rotation=0)
plt.legend(loc="upper center", fontsize=18)
plt.title("No restrictions", fontsize=14)
plt.subplot(122)
plt.plot(X, y, "b.")
plt.plot(x1, y_pred2, "r.-", linewidth=2, label=r"$\hat{y}$")
plt.axis([0, 1, -0.2, 1.1])
plt.xlabel("$x_1$", fontsize=18)
plt.title("min_samples_leaf={}".format(tree_reg2.min_samples_leaf), fontsize=14)
save_fig("tree_regression_regularization_plot")
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: After being fitted, the model can then be used to predict the class of samples
Step2: Alternatively, the probability of each class can be predicted, which is the fraction of training samples of the same class in a leaf
Step3: DecisionTreeClassifier is capable of both binary (where the labels are [-1, 1]) classification and multiclass (where the labels are [0, …, K-1]) classification.
Step4: After being fitted, the model can then be used to predict the class of samples
Step5: Example
Step6: Regression
|
9,079
|
<ASSISTANT_TASK:>
Python Code:
!pip install git+https://github.com/google/starthinker
from starthinker.util.configuration import Configuration
CONFIG = Configuration(
project="",
client={},
service={},
user="/content/user.json",
verbose=True
)
FIELDS = {
'auth_read':'user', # Credentials used for reading data.
'query':'', # SQL with newlines and all.
'dataset':'', # Existing BigQuery dataset.
'view':'', # View to create from this query.
'legacy':True, # Query type must match source tables.
}
print("Parameters Set To: %s" % FIELDS)
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
TASKS = [
{
'bigquery':{
'auth':{'field':{'name':'auth_read','kind':'authentication','order':1,'default':'user','description':'Credentials used for reading data.'}},
'from':{
'query':{'field':{'name':'query','kind':'text','order':1,'default':'','description':'SQL with newlines and all.'}},
'legacy':{'field':{'name':'legacy','kind':'boolean','order':4,'default':True,'description':'Query type must match source tables.'}}
},
'to':{
'dataset':{'field':{'name':'dataset','kind':'string','order':2,'default':'','description':'Existing BigQuery dataset.'}},
'view':{'field':{'name':'view','kind':'string','order':3,'default':'','description':'View to create from this query.'}}
}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(CONFIG, TASKS, force=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Set Configuration
Step2: 3. Enter BigQuery Query To View Recipe Parameters
Step3: 4. Execute BigQuery Query To View
|
9,080
|
<ASSISTANT_TASK:>
Python Code:
strat_train_set_copy = strat_train_set.copy()
housing.plot(kind="scatter", x='longitude', y='latitude')
housing.plot(kind="scatter", x='longitude', y='latitude', alpha=0.1)
strat_train_set_copy.plot(kind='scatter', x='longitude', y='latitude', alpha=0.4,
s=strat_train_set_copy.population/100,
c=strat_train_set_copy.median_house_value,
cmap=plt.get_cmap("jet"),
label="population", figsize=(15, 15),
colorbar=True)
plt.legend()
corr_matrix = strat_train_set_copy.corr()
corr_matrix.median_house_value.sort_values(ascending=False)
from pandas.plotting import scatter_matrix
attributes = ["median_house_value", "median_income", "total_rooms",
"housing_median_age"]
scatter_matrix(housing[attributes], figsize=(12, 8))
strat_train_set_copy.plot.scatter(x="median_income", y="median_house_value", alpha=0.1)
housing["rooms_per_household"] = housing["total_rooms"] / housing["households"]
housing["bedrooms_per_room"] = housing["total_bedrooms"]/housing["total_rooms"]
housing["population_per_household"]=housing["population"]/housing["households"]
housing.info()
corr_matrix = housing.corr()
corr_matrix['median_house_value'].sort_values(ascending=False)
housing = strat_train_set.drop('median_house_value', axis=1)
housing_labels = strat_train_set['median_house_value'].copy()
housing.info()
housing.dropna(subset=['total_bedrooms']).info()
housing.drop('total_bedrooms', axis=1).info()
housing['total_bedrooms'].fillna(housing['total_bedrooms'].median()).describe()
from sklearn.impute import SimpleImputer
imputer = SimpleImputer(strategy='median')
housing_num = housing.drop("ocean_proximity", axis=1)
imputer.fit(housing_num)
imputer.statistics_
imputer.strategy
housing.drop("ocean_proximity", axis=1).median().values
X = imputer.transform(housing_num)
X
housing_tr = pd.DataFrame(X, columns=housing_num.columns)
housing_tr.head()
from sklearn.preprocessing import LabelEncoder
encoder = LabelEncoder()
housing_cat = housing.ocean_proximity
housing_cat.describe()
housing_cat.value_counts()
housing_cat_encoded = encoder.fit_transform(housing_cat)
housing_cat_encoded
type(housing_cat_encoded)
print(encoder.classes_)
from sklearn.preprocessing import OneHotEncoder
encoder = OneHotEncoder()
print(housing_cat_encoded.shape)
print(type(housing_cat_encoded))
(housing_cat_encoded.reshape(-1, 1)).shape
housing_cat_1hot = encoder.fit_transform(housing_cat_encoded.reshape(-1, 1))
housing_cat_1hot
type(housing_cat_1hot)
housing_cat_1hot.toarray()
from sklearn.preprocessing import LabelBinarizer
encoder = LabelBinarizer(sparse_output=False)
housing_cat_1hot = encoder.fit_transform(housing_cat)
housing_cat_1hot
type(housing_cat_1hot)
rooms_ix, bedrooms_ix, population_ix, households_ix = 3, 4, 5, 6
housing.head()
housing.iloc[:, 3]
X = housing.values
# This can be achieved with iloc, using .values
housing.iloc[:, [rooms_ix, bedrooms_ix, households_ix, population_ix]].head()
rooms_per_household = X[:, rooms_ix] / X[:, households_ix]
population_per_household = X[:, population_ix] / X[:, households_ix]
bedrooms_per_room = X[:, bedrooms_ix] / X[:, rooms_ix]
np.c_[X, rooms_per_household, population_per_household]
np.c_[X, rooms_per_household, population_per_household, bedrooms_per_room]
from sklearn.base import BaseEstimator, TransformerMixin
rooms_ix, bedrooms_ix, population_ix, households_ix = 3, 4, 5, 6
class CombinedAttributesAdder(BaseEstimator, TransformerMixin):
def __init__(self, add_bedrooms_per_room=False):
self.add_bedrooms_per_room = add_bedrooms_per_room
def fit(self, X, y=None):
return self
def transform(self, X, y=None):
rooms_per_household = X[:, rooms_ix] / X[:, households_ix]
population_per_household = X[:, population_ix] / X[:, households_ix]
if self.add_bedrooms_per_room:
bedrooms_per_room = X[:, bedrooms_ix] / X[:, rooms_ix]
return np.c_[X, rooms_per_household, population_per_household, bedrooms_per_room]
else:
return np.c_[X, rooms_per_household, population_per_household]
attr_adder = CombinedAttributesAdder(add_bedrooms_per_room=False)
housing_extra_attribs = attr_adder.transform(X)
print(housing_extra_attribs.shape)
print(housing.shape)
# Convert back to data frame -- My way
new_columns = housing.columns.append(
pd.Index(['rooms_per_household', 'population_per_household'])
)
new_columns
housing_extra_attribs_df = pd.DataFrame(housing_extra_attribs, columns=new_columns)
housing_extra_attribs_df.head()
housing.describe()
housing.total_rooms.describe()
from sklearn.preprocessing import MinMaxScaler
scalar = MinMaxScaler()
scalar.fit(housing["total_rooms"].values.reshape(-1, 1))
pd.DataFrame(scalar.transform(housing["total_rooms"].values.reshape(-1, 1)), columns=["total_rooms"])["total_rooms"].describe()
from sklearn.preprocessing import StandardScaler
scalar = StandardScaler()
scalar.fit(housing["total_rooms"].values.reshape(-1, 1))
pd.DataFrame(scalar.transform(housing["total_rooms"].values.reshape(-1, 1)), columns=["total_rooms"])["total_rooms"].describe()
from sklearn.pipeline import Pipeline
num_pipeline = Pipeline([
('imputer', SimpleImputer(strategy="median")),
('attr_adder', CombinedAttributesAdder()),
('std_scaler', StandardScaler())
])
# Verify that the pipelined version
# does the same thing as the separated steps
num_pipeline_stage1 = Pipeline([
('imputer', SimpleImputer(strategy="median")),
])
X_pipeline = num_pipeline_stage1.fit_transform(housing_num)
X = imputer.transform(housing_num)
X_pipeline
np.array_equal(X, X_pipeline)
num_pipeline_stage2 = Pipeline([
('imputer', SimpleImputer(strategy="median")),
('attr_adder', CombinedAttributesAdder()),
])
Y = attr_adder.fit_transform(X)
Y_pipeline = num_pipeline_stage2.fit_transform(housing_num)
np.array_equal(Y, Y_pipeline)
num_pipeline_stage3 = Pipeline([
('imputer', SimpleImputer(strategy="median")),
('attr_adder', CombinedAttributesAdder()),
('std_scaler', StandardScaler())
])
Z = scalar.fit_transform(Y)
Z.std(), Z.mean()
Z_pipeline = num_pipeline_stage3.fit_transform(housing_num)
np.array_equal(Z, Z_pipeline)
from sklearn.base import BaseEstimator, TransformerMixin
class DataFrameSelector(BaseEstimator, TransformerMixin):
def __init__(self, attribute_names):
self.attribute_names = attribute_names
def fit(self, X, y=None):
return self
def transform(self, X):
return X[self.attribute_names].values
class CustomizedLabelBinarizer(BaseEstimator, TransformerMixin):
def __init__(self, sparse_output=False):
self.encode = LabelBinarizer(sparse_output = sparse_output)
    def fit(self, X, y=None):
        self.encode.fit(X)
        return self  # sklearn convention: fit returns the transformer itself
def transform(self, X):
return self.encode.transform(X)
num_attribs = list(housing_num)
cat_attribs = ["ocean_proximity"]
num_pipeline = Pipeline([
('selector', DataFrameSelector(num_attribs)),
('imputer', SimpleImputer(strategy="median")),
('attr_adder', CombinedAttributesAdder()),
('std_scaler', StandardScaler()),
]
)
cat_pipeline = Pipeline([
('selector', DataFrameSelector(cat_attribs)),
('label_binarizer', CustomizedLabelBinarizer()),
]
)
# LabelBinarizer().fit_transform(DataFrameSelector(cat_attribs).fit_transform(housing))
# num_pipeline.fit_transform(housing)
# cat_pipeline.fit_transform(housing)
from sklearn.pipeline import FeatureUnion
full_pipeline = FeatureUnion(transformer_list=[
('num_pipeline', num_pipeline),
('cat_pipeline', cat_pipeline),
])
housing_prepared = full_pipeline.fit_transform(housing)
print(housing_prepared.shape)
housing_prepared
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit(housing_prepared, housing_labels)
some_data = housing[:5]
some_data
some_labels = housing_labels[:5]
some_labels
some_data_prepared = full_pipeline.transform(some_data)
some_data_prepared
print(f'Prediction:\t{lin_reg.predict(some_data_prepared)}')
print(f'Labels:\t\t{list(some_labels)}')
from sklearn.metrics import mean_squared_error
housing_prediction = lin_reg.predict(housing_prepared)
lin_mse = mean_squared_error(housing_prediction, housing_labels)
lin_rmse = np.sqrt(lin_mse)
lin_rmse
from sklearn.tree import DecisionTreeRegressor
tree_reg = DecisionTreeRegressor()
tree_reg.fit(housing_prepared, housing_labels)
tree_predictions = tree_reg.predict(housing_prepared)
tree_mse = mean_squared_error(tree_predictions, housing_labels)
tree_rmse = np.sqrt(tree_mse)
tree_rmse
from sklearn.model_selection import cross_val_score
scores = cross_val_score(tree_reg, housing_prepared, housing_labels, scoring="neg_mean_squared_error", cv=10)
rmse_scores = np.sqrt(-scores)
rmse_scores
def display_scores(scores):
print(f'Scores: {scores}')
print(f'Mean: {scores.mean()}')
print(f'STD: {scores.std()}')
display_scores(rmse_scores)
from sklearn.ensemble import RandomForestRegressor
forest_reg = RandomForestRegressor()
forest_reg.fit(housing_prepared, housing_labels)
forest_scores = cross_val_score(forest_reg, housing_prepared, housing_labels, scoring="neg_mean_squared_error", cv=10)
forest_rmse_scores = np.sqrt(-forest_scores)
display_scores(forest_rmse_scores)
forest_prediction = forest_reg.predict(housing_prepared)
forest_rmse = np.sqrt(mean_squared_error(forest_prediction, housing_labels))
forest_rmse
# Follow the example here: https://scikit-learn.org/stable/auto_examples/plot_kernel_ridge_regression.html
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV
param_grid = [
{'kernel': ['linear'], 'C': [0.1, 1.0, 10.0]},
{'kernel': ['rbf'], 'C': [0.1, 1.0, 10.0], 'gamma': np.logspace(-2, 2, 5)},
]
# Narrow the search to the rbf kernel only (this overrides the grid defined above)
param_grid = [
    {'kernel': ['rbf'], 'C': [0.1, 1.0, 10.0], 'gamma': np.logspace(-2, 2, 5)},
]
svm_reg = SVR()
grid_search = GridSearchCV(svm_reg, param_grid, cv=5, scoring="neg_mean_squared_error")
grid_search.fit(housing_prepared, housing_labels)
cvres = grid_search.cv_results_
for mean_score, params in zip(cvres["mean_test_score"], cvres["params"]):
print(np.sqrt(-mean_score), params)
# from sklearn.externals import joblib
# joblib.dump(forest_reg, 'forest_reg.pkl')
# forest_reg_loaded = joblib.load('forest_reg.pkl')
# np.sqrt(mean_squared_error(forest_reg_loaded.predict(housing_prepared), housing_labels))
# from sklearn.model_selection import GridSearchCV
# param_grid = [
# {'n_estimators': [3, 10, 30], 'max_features': [2,4,6,8]},
# {'bootstrap': [False], 'n_estimators': [3, 10, 30], 'max_features': [2,4,6,8]}
# ]
# forest_reg = RandomForestRegressor()
# grid_search = GridSearchCV(forest_reg, param_grid, cv=5, scoring="neg_mean_squared_error")
# grid_search.fit(housing_prepared, housing_labels)
# grid_search.best_params_
# grid_search.best_estimator_
# cvres = grid_search.cv_results_
# for mean_score, params in zip(cvres["mean_test_score"], cvres["params"]):
# print(np.sqrt(-mean_score), params)
# feature_importances = grid_search.best_estimator_.feature_importances_
# feature_importances
# extra_attribs = ['rooms_per_hhold', 'pop_per_hhold']
# cat_one_hot_attribs = list(encoder.classes_)
# cat_one_hot_attribs
# attributes = num_attribs + extra_attribs + cat_one_hot_attribs
# attributes, len(attributes)
# sorted(zip(feature_importances, attributes), reverse=True)
# final_model = grid_search.best_estimator_
# X_test = strat_test_set.drop("median_house_value", axis=1)
# y_test = strat_test_set.median_house_value.copy()
# X_test_prepared = full_pipeline.transform(X_test)
# final_predictions = final_model.predict(X_test_prepared)
# final_mse = mean_squared_error(final_predictions, y_test)
# final_rmse = np.sqrt(final_mse)
# final_rmse
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Experimenting with Attribute Combinations
Step2: 2.5 Prepare the Data for Machine Learning Algorithms
Step3: Handling Text and Categorical Attributes
Step4: One hot encoding
Step5: Combine
Step6: Custom Transformers
Step7: 2.5.4 Feature Scaling
Step8: 2.5.5 Transformation Pipeline
Step9: 2.6.1 Training and Evaluating on the Training Set
Step10: Tree model
Step11: 2.6.2 Better Evaluation Using Cross-Validation
Step12: Random Forest
Step13: Ex01
Step14: From the above results, we can see it doesn't do a very good job.
Step15: 2.7.1 Grid Search
Step16: 2.7.4 Analyze the best models and their errors
Step17: 2.7.5 Evaluate Your System on the Test Set
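The feature-scaling step above can be sketched without sklearn: min-max scaling maps each value to (x - min) / (max - min), squeezing a column into [0, 1], which is exactly what `MinMaxScaler` fits and applies. A minimal pure-Python version (the sample values are illustrative, not from the housing data):

```python
def minmax_fit(values):
    """Learn the column's min and max (the 'fit' step)."""
    return min(values), max(values)

def minmax_transform(params, values):
    """Apply (x - min) / (max - min) with the learned parameters."""
    lo, hi = params
    span = hi - lo
    return [(v - lo) / span for v in values]

rooms = [2, 4, 6, 10]           # illustrative feature column
params = minmax_fit(rooms)
scaled = minmax_transform(params, rooms)
```

Keeping fit and transform separate mirrors the sklearn convention: the pipeline fits the scaler on training data only and reuses those parameters on validation and test data.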
|
9,081
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import datetime
from matplotlib import pyplot as plt
import seaborn as sns
from sklearn.preprocessing import MinMaxScaler
df = pd.read_csv('data/pm25.csv')
print(df.shape)
df.head()
df.isnull().sum()*100/df.shape[0]
df.dropna(subset=['pm2.5'], axis=0, inplace=True)
df.reset_index(drop=True, inplace=True)
df['datetime'] = df[['year', 'month', 'day', 'hour']].apply(
lambda row: datetime.datetime(year=row['year'],
month=row['month'], day=row['day'],hour=row['hour']), axis=1)
df.sort_values('datetime', ascending=True, inplace=True)
df.head()
df['year'].value_counts()
plt.figure(figsize=(5.5, 5.5))
g = sns.lineplot(data=df['pm2.5'], color='g')
g.set_title('pm2.5 between 2010 and 2014')
g.set_xlabel('Index')
g.set_ylabel('pm2.5 readings')
scaler = MinMaxScaler(feature_range=(0, 1))
df['scaled_pm2.5'] = scaler.fit_transform(np.array(df['pm2.5']).reshape(-1, 1))
df.head()
plt.figure(figsize=(5.5, 5.5))
g = sns.lineplot(data=df['scaled_pm2.5'], color='purple')
g.set_title('Scaled pm2.5 between 2010 and 2014')
g.set_xlabel('Index')
g.set_ylabel('scaled_pm2.5 readings')
# 2014 data as validation data, before 2014 as training data
split_date = datetime.datetime(year=2014, month=1, day=1, hour=0)
df_train = df.loc[df['datetime']<split_date]
df_val = df.loc[df['datetime']>=split_date]
print('Shape of train:', df_train.shape)
print('Shape of test:', df_val.shape)
df_val.reset_index(drop=True, inplace=True)
df_val.head()
# The way this works: each X holds nb_timesteps consecutive observations and the next
## observation is the target, collected with a stride-1 rolling window.
def makeXy(ts, nb_timesteps):
    """
    Input:
        ts: original time series
        nb_timesteps: number of time steps in the regressors
    Output:
        X: 2-D array of regressors
        y: 1-D array of target
    """
X = []
y = []
for i in range(nb_timesteps, ts.shape[0]):
X.append(list(ts.loc[i-nb_timesteps:i-1]))
y.append(ts.loc[i])
X, y = np.array(X), np.array(y)
return X, y
X_train, y_train = makeXy(df_train['scaled_pm2.5'], 7)
print('Shape of train arrays:', X_train.shape, y_train.shape)
print(X_train[0], y_train[0])
print(X_train[1], y_train[1])
X_val, y_val = makeXy(df_val['scaled_pm2.5'], 7)
print('Shape of validation arrays:', X_val.shape, y_val.shape)
print(X_val[0], y_val[0])
print(X_val[1], y_val[1])
X_train = X_train.reshape((X_train.shape[0], X_train.shape[1], 1))
X_val = X_val.reshape((X_val.shape[0], X_val.shape[1], 1))
print('Shape of arrays after reshaping:', X_train.shape, X_val.shape)
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import SimpleRNN
from tensorflow.keras.layers import Dense, Dropout, Input
from tensorflow.keras.models import load_model
from tensorflow.keras.callbacks import ModelCheckpoint
from sklearn.metrics import mean_absolute_error
tf.random.set_seed(10)
model = Sequential()
model.add(SimpleRNN(32, input_shape=(X_train.shape[1:])))
model.add(Dropout(0.2))
model.add(Dense(1, activation='linear'))
model.compile(optimizer='rmsprop', loss='mean_absolute_error', metrics=['mae'])
model.summary()
save_weights_at = 'basic_rnn_model'
save_best = ModelCheckpoint(save_weights_at, monitor='val_loss', verbose=0,
save_best_only=True, save_weights_only=False, mode='min',
save_freq='epoch')
history = model.fit(x=X_train, y=y_train, batch_size=16, epochs=20,
verbose=1, callbacks=[save_best], validation_data=(X_val, y_val),
shuffle=True)
# load the best model
best_model = load_model('basic_rnn_model')
# Compare the prediction with y_true
preds = best_model.predict(X_val)
pred_pm25 = scaler.inverse_transform(preds)
pred_pm25 = np.squeeze(pred_pm25)
# Measure MAE of y_pred and y_true
mae = mean_absolute_error(df_val['pm2.5'].loc[7:], pred_pm25)
print('MAE for the validation set:', round(mae, 4))
mae = mean_absolute_error(df_val['scaled_pm2.5'].loc[7:], preds)
print('MAE for the scaled validation set:', round(mae, 4))
# Check the metrics and loss of each epoch
mae = history.history['mae']
val_mae = history.history['val_mae']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(mae))
plt.plot(epochs, mae, 'bo', label='Training MAE')
plt.plot(epochs, val_mae, 'b', label='Validation MAE')
plt.title('Training and Validation MAE')
plt.legend()
plt.figure()
# Here I was using MAE as loss too, which is why they looked almost the same...
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and Validation loss')
plt.legend()
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Note
Step3: Note
|
9,082
|
<ASSISTANT_TASK:>
Python Code:
import scipy
import numpy as np
a = np.array([[26, 3, 0], [3, 195, 1], [0, 1, 17]])
a = 1-np.sign(a)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
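The snippet above turns a count matrix into a binary mask: `np.sign` sends positive entries to 1 and zeros to 0, so `1 - np.sign(a)` is 1 exactly where an entry is zero (this assumes nonnegative entries, as in a confusion matrix). A quick check using the same matrix:

```python
import numpy as np

a = np.array([[26, 3, 0],
              [3, 195, 1],
              [0, 1, 17]])
mask = 1 - np.sign(a)  # 1 marks the zero entries, 0 marks the nonzero ones
```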
|
9,083
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function, division
%matplotlib inline
import matplotlib
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# use matplotlib style sheet
plt.style.use('ggplot')
# import the t-distribution from scipy.stats
from scipy.stats import t
y = np.array([35,34,38,35,37])
y
n = len(y)
n
estimate = np.mean(y)
estimate
se = np.std(y, ddof=1)/np.sqrt(n)
se
int50 = estimate + t.ppf([0.25, 0.75], n-1)*se
int50
int95 = estimate + t.ppf([0.025, 0.975], n-1)*se
int95
from scipy.stats import norm
y = 700
y
n = 1000
n
estimate = y/n
estimate
se = np.sqrt(estimate*(1-estimate)/n)
se
int95 = estimate + norm.ppf([.025,0.975])*se
int95
y = np.repeat([0,1,2,3,4], [600,300, 50, 30, 20])
y
n = len(y)
n
estimate = np.mean(y)
estimate
se = np.std(y, ddof=1)/np.sqrt(n)
se
int50 = estimate + t.ppf([0.25, 0.75], n-1)*se
int50
int95 = estimate + t.ppf([0.025, 0.975], n-1)*se
int95
%%bash
head ../../ARM_Data/death.polls/polls.dat
# Data is available in death.polls directory of ARM_Data
data = []
temp = []
ncols = 5
with open("../../ARM_Data/death.polls/polls.dat") as f:
for line in f.readlines():
for d in line.strip().split(' '):
temp.append(float(d))
if (len(temp) == ncols):
data.append(temp)
temp = []
polls = pd.DataFrame(data, columns=[u'year', u'month', u'perc for',
u'perc against', u'perc no opinion'])
polls.head()
# --Note: this gives the (percent) support for those that have an opinion
# --The percentages with no opinion are ignored
# --This results in differences between our plot (below) and the Gallup plot (link above)
polls[u'support'] = polls[u'perc for']/(polls[u'perc for']+polls[u'perc against'])
polls.head()
polls[u'year_float'] = polls[u'year'] + (polls[u'month']-6)/12
polls.head()
# add error column -- symmetric so only add one column
# assumes sample size N=1000
# uses +/- 1 standard error, resulting in 68% confidence
polls[u'support_error'] = np.sqrt(polls[u'support']*(1-polls[u'support'])/1000)
polls.head()
fig, ax = plt.subplots(figsize=(8, 6))
plt.errorbar(polls[u'year_float'], 100*polls[u'support'],
yerr=100*polls[u'support_error'], fmt='ko',
ms=4, capsize=0)
plt.ylabel(u'Percentage support for the death penalty')
plt.xlabel(u'Year')
# you can adjust y-limits with command like below
# I will leave the default behavior
#plt.ylim(np.min(100*polls[u'support'])-2, np.max(100*polls[u'support']+2))
N = np.array([66030000, 81083600, 60788845])
p = np.array([0.55, 0.61, 0.38])
se = np.array([0.02, 0.03, 0.03])
w_avg = np.sum(N*p)/np.sum(N)
w_avg
se_w_avg = np.sqrt(np.sum((N*se/np.sum(N))**2))
se_w_avg
# this uses +/- 2 std devs
int_95 = w_avg + np.array([-2,2])*se_w_avg
int_95
# import the normal from scipy.stats
# repeated to make sure that it is clear that it is needed for this section
from scipy.stats import norm
# also need this for estimating CI from samples
from scipy.stats.mstats import mquantiles
n_men = 500
n_men
p_hat_men = 0.75
p_hat_men
se_men = np.sqrt(p_hat_men*(1.-p_hat_men)/n_men)
se_men
n_women = 500
n_women
p_hat_women = 0.65
p_hat_women
se_women = np.sqrt(p_hat_women*(1.-p_hat_women)/n_women)
se_women
n_sims = 10000
n_sims
p_men = norm.rvs(size=n_sims, loc=p_hat_men, scale=se_men)
p_men[:10] # show first ten
p_women = norm.rvs(size=n_sims, loc=p_hat_women, scale=se_women)
p_women[:10] # show first ten
ratio = p_men/p_women
ratio[:10] # show first ten
# the values of alphap and betap replicate the R default behavior
# see http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.mstats.mquantiles.html
int95 = mquantiles(ratio, prob=[0.025,0.975], alphap=1., betap=1.)
int95
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: CI for continuous data, Pg 18
Step2: Numpy uses a denominator of N in the standard deviation calculation by default
Step3: CI for proportions, Pg 18
Step4: CI for discrete data, Pg 18
Step5: See the note above about the difference caused by different defaults for the standard deviation calculation
Step6: Plot Figure 2.3, Pg 19
Step7: Using knowledge of the file layout we can read in the file and pre-process into
Step8: Weighted averages, Pg 19
Step9: CI using simulations, Pg 20
|
9,084
|
<ASSISTANT_TASK:>
Python Code:
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
# visualize all four filters
fig = plt.figure(figsize=(10, 5))
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
width, height = filters[i].shape
for x in range(width):
for y in range(height):
ax.annotate(str(filters[i][x][y]), xy=(y,x),
horizontalalignment='center',
verticalalignment='center',
color='white' if filters[i][x][y]<0 else 'black')
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a single convolutional layer with four filters
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# returns both layers
return conv_x, activated_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1, xticks=[], yticks=[])
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get the convolutional layer (pre and post activation)
conv_layer, activated_layer = model(gray_img_tensor)
# visualize the output of a conv layer
viz_layer(conv_layer)
# after a ReLu is applied
# visualize the output of an activated conv layer
viz_layer(activated_layer)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Define and visualize the filters
Step2: Define a convolutional layer
Step3: Visualize the output of each filter
Step4: Let's look at the output of a convolutional layer, before and after a ReLu activation function is applied.
Step5: ReLu activation
|
9,085
|
<ASSISTANT_TASK:>
Python Code:
labVersion = 'cs190.1x-lab3-1.0.4'
print labVersion
# load testing library
from test_helper import Test
import os.path
baseDir = os.path.join('mnt', 'spark-mooc')
inputPath = os.path.join('cs190', 'millionsong.txt')
fileName = os.path.join(baseDir, inputPath)
numPartitions = 2
rawData = sc.textFile(fileName, numPartitions)
# TODO: Replace <FILL IN> with appropriate code
numPoints = rawData.count()
print numPoints
samplePoints = rawData.take(5)
print samplePoints
# TEST Load and check the data (1a)
Test.assertEquals(numPoints, 6724, 'incorrect value for numPoints')
Test.assertEquals(len(samplePoints), 5, 'incorrect length for samplePoints')
len(samplePoints)
from pyspark.mllib.regression import LabeledPoint
import numpy as np
# Here is a sample raw data point:
# '2001.0,0.884,0.610,0.600,0.474,0.247,0.357,0.344,0.33,0.600,0.425,0.60,0.419'
# In this raw data point, 2001.0 is the label, and the remaining values are features
# TODO: Replace <FILL IN> with appropriate code
def parsePoint(line):
    """
    Converts a comma separated unicode string into a `LabeledPoint`.
    Args:
        line (unicode): Comma separated unicode string where the first element is the label and the
            remaining elements are features.
    Returns:
        LabeledPoint: The line is converted into a `LabeledPoint`, which consists of a label and
            features.
    """
    values = line.split(',')  # avoid shadowing the built-in name `list`
    return LabeledPoint(values[0], values[1:])
parsedSamplePoints = map(parsePoint, samplePoints)
firstPointFeatures = parsedSamplePoints[0].features
firstPointLabel = parsedSamplePoints[0].label
print firstPointFeatures, firstPointLabel
d = len(firstPointFeatures)
print d
# TEST Using LabeledPoint (1b)
Test.assertTrue(isinstance(firstPointLabel, float), 'label must be a float')
expectedX0 = [0.8841,0.6105,0.6005,0.4747,0.2472,0.3573,0.3441,0.3396,0.6009,0.4257,0.6049,0.4192]
Test.assertTrue(np.allclose(expectedX0, firstPointFeatures, 1e-4, 1e-4),
'incorrect features for firstPointFeatures')
Test.assertTrue(np.allclose(2001.0, firstPointLabel), 'incorrect label for firstPointLabel')
Test.assertTrue(d == 12, 'incorrect number of features')
import matplotlib.pyplot as plt
import matplotlib.cm as cm
sampleMorePoints = rawData.take(50)
# You can uncomment the line below to see randomly selected features. These will be randomly
# selected each time you run the cell. Note that you should run this cell with the line commented
# out when answering the lab quiz questions.
# sampleMorePoints = rawData.takeSample(False, 50)
parsedSampleMorePoints = map(parsePoint, sampleMorePoints)
dataValues = map(lambda lp: lp.features.toArray(), parsedSampleMorePoints)
def preparePlot(xticks, yticks, figsize=(10.5, 6), hideLabels=False, gridColor='#999999',
gridWidth=1.0):
    """Template for generating the plot layout."""
plt.close()
fig, ax = plt.subplots(figsize=figsize, facecolor='white', edgecolor='white')
ax.axes.tick_params(labelcolor='#999999', labelsize='10')
for axis, ticks in [(ax.get_xaxis(), xticks), (ax.get_yaxis(), yticks)]:
axis.set_ticks_position('none')
axis.set_ticks(ticks)
axis.label.set_color('#999999')
if hideLabels: axis.set_ticklabels([])
plt.grid(color=gridColor, linewidth=gridWidth, linestyle='-')
map(lambda position: ax.spines[position].set_visible(False), ['bottom', 'top', 'left', 'right'])
return fig, ax
# generate layout and plot
fig, ax = preparePlot(np.arange(.5, 11, 1), np.arange(.5, 49, 1), figsize=(8,7), hideLabels=True,
gridColor='#eeeeee', gridWidth=1.1)
image = plt.imshow(dataValues,interpolation='nearest', aspect='auto', cmap=cm.Greys)
for x, y, s in zip(np.arange(-.125, 12, 1), np.repeat(-.75, 12), [str(x) for x in range(12)]):
plt.text(x, y, s, color='#999999', size='10')
plt.text(4.7, -3, 'Feature', color='#999999', size='11'), ax.set_ylabel('Observation')
display(fig)
pass
# TODO: Replace <FILL IN> with appropriate code
parsedDataInit = rawData.map(parsePoint)
onlyLabels = parsedDataInit.map(lambda p: p.label)
minYear = onlyLabels.min()
maxYear = onlyLabels.max()
print maxYear, minYear
# TEST Find the range (1c)
Test.assertEquals(len(parsedDataInit.take(1)[0].features), 12,
'unexpected number of features in sample point')
sumFeatTwo = parsedDataInit.map(lambda lp: lp.features[2]).sum()
Test.assertTrue(np.allclose(sumFeatTwo, 3158.96224351), 'parsedDataInit has unexpected values')
yearRange = maxYear - minYear
Test.assertTrue(yearRange == 89, 'incorrect range for minYear to maxYear')
# TODO: Replace <FILL IN> with appropriate code
parsedData = parsedDataInit.map(lambda p: LabeledPoint(p.label-1922,p.features))
# Should be a LabeledPoint
print type(parsedData.take(1)[0])
# View the first point
print '\n{0}'.format(parsedData.take(1))
# TEST Shift labels (1d)
oldSampleFeatures = parsedDataInit.take(1)[0].features
newSampleFeatures = parsedData.take(1)[0].features
Test.assertTrue(np.allclose(oldSampleFeatures, newSampleFeatures),
'new features do not match old features')
sumFeatTwo = parsedData.map(lambda lp: lp.features[2]).sum()
Test.assertTrue(np.allclose(sumFeatTwo, 3158.96224351), 'parsedData has unexpected values')
minYearNew = parsedData.map(lambda lp: lp.label).min()
maxYearNew = parsedData.map(lambda lp: lp.label).max()
Test.assertTrue(minYearNew == 0, 'incorrect min year in shifted data')
Test.assertTrue(maxYearNew == 89, 'incorrect max year in shifted data')
# get data for plot
oldData = (parsedDataInit
.map(lambda lp: (lp.label, 1))
.reduceByKey(lambda x, y: x + y)
.collect())
x, y = zip(*oldData)
# generate layout and plot data
fig, ax = preparePlot(np.arange(1920, 2050, 20), np.arange(0, 150, 20))
plt.scatter(x, y, s=14**2, c='#d6ebf2', edgecolors='#8cbfd0', alpha=0.75)
ax.set_xlabel('Year'), ax.set_ylabel('Count')
display(fig)
pass
# get data for plot
newData = (parsedData
.map(lambda lp: (lp.label, 1))
.reduceByKey(lambda x, y: x + y)
.collect())
x, y = zip(*newData)
# generate layout and plot data
fig, ax = preparePlot(np.arange(0, 120, 20), np.arange(0, 120, 20))
plt.scatter(x, y, s=14**2, c='#d6ebf2', edgecolors='#8cbfd0', alpha=0.75)
ax.set_xlabel('Year (shifted)'), ax.set_ylabel('Count')
display(fig)
pass
# TODO: Replace <FILL IN> with appropriate code
weights = [.8, .1, .1]
seed = 42
parsedTrainData, parsedValData, parsedTestData = parsedData.randomSplit(weights,seed)
parsedTrainData.cache()
parsedValData.cache()
parsedTestData.cache()
nTrain = parsedTrainData.count()
nVal = parsedValData.count()
nTest = parsedTestData.count()
print nTrain, nVal, nTest, nTrain + nVal + nTest
print parsedData.count()
# TEST Training, validation, and test sets (1e)
Test.assertEquals(parsedTrainData.getNumPartitions(), numPartitions,
'parsedTrainData has wrong number of partitions')
Test.assertEquals(parsedValData.getNumPartitions(), numPartitions,
'parsedValData has wrong number of partitions')
Test.assertEquals(parsedTestData.getNumPartitions(), numPartitions,
'parsedTestData has wrong number of partitions')
Test.assertEquals(len(parsedTrainData.take(1)[0].features), 12,
'parsedTrainData has wrong number of features')
sumFeatTwo = (parsedTrainData
.map(lambda lp: lp.features[2])
.sum())
sumFeatThree = (parsedValData
.map(lambda lp: lp.features[3])
.reduce(lambda x, y: x + y))
sumFeatFour = (parsedTestData
.map(lambda lp: lp.features[4])
.reduce(lambda x, y: x + y))
Test.assertTrue(np.allclose([sumFeatTwo, sumFeatThree, sumFeatFour],
                            [2526.87757656, 297.340394298, 184.235876654]),
                'parsed Train, Val, Test data has unexpected values')
Test.assertTrue(nTrain + nVal + nTest == 6724, 'unexpected Train, Val, Test data set size')
Test.assertEquals(nTrain, 5371, 'unexpected value for nTrain')
Test.assertEquals(nVal, 682, 'unexpected value for nVal')
Test.assertEquals(nTest, 671, 'unexpected value for nTest')
# TODO: Replace <FILL IN> with appropriate code
averageTrainYear = parsedTrainData.map(lambda p: p.label).mean()
print averageTrainYear
print type(averageTrainYear)
# TEST Average label (2a)
Test.assertTrue(np.allclose(averageTrainYear, 53.9316700801),
'incorrect value for averageTrainYear')
# TODO: Replace <FILL IN> with appropriate code
def squaredError(label, prediction):
"""Calculates the squared error for a single prediction.
Args:
    label (float): The correct value for this observation.
    prediction (float): The predicted value for this observation.
Returns:
    float: The difference between the `label` and `prediction` squared.
"""
return (label-prediction)**2
def calcRMSE(labelsAndPreds):
"""Calculates the root mean squared error for an `RDD` of (label, prediction) tuples.
Args:
    labelsAndPreds (RDD of (float, float)): An `RDD` consisting of (label, prediction) tuples.
Returns:
    float: The square root of the mean of the squared errors.
"""
return np.sqrt(labelsAndPreds.map(lambda r: squaredError(r[0],r[1])).mean())
labelsAndPreds = sc.parallelize([(3., 1.), (1., 2.), (2., 2.)])
# RMSE = sqrt[((3-1)^2 + (1-2)^2 + (2-2)^2) / 3] = 1.291
exampleRMSE = calcRMSE(labelsAndPreds)
print exampleRMSE
# TEST Root mean squared error (2b)
Test.assertTrue(np.allclose(squaredError(3, 1), 4.), 'incorrect definition of squaredError')
Test.assertTrue(np.allclose(exampleRMSE, 1.29099444874), 'incorrect value for exampleRMSE')
# TODO: Replace <FILL IN> with appropriate code
labelsAndPredsTrain = parsedTrainData.map(lambda p: (p.label,averageTrainYear))
rmseTrainBase = calcRMSE(labelsAndPredsTrain)
labelsAndPredsVal = parsedValData.map(lambda p: (p.label,averageTrainYear))
rmseValBase = calcRMSE(labelsAndPredsVal)
labelsAndPredsTest = parsedTestData.map(lambda p: (p.label,averageTrainYear))
rmseTestBase = calcRMSE(labelsAndPredsTest)
print 'Baseline Train RMSE = {0:.3f}'.format(rmseTrainBase)
print 'Baseline Validation RMSE = {0:.3f}'.format(rmseValBase)
print 'Baseline Test RMSE = {0:.3f}'.format(rmseTestBase)
# TEST Training, validation and test RMSE (2c)
Test.assertTrue(np.allclose([rmseTrainBase, rmseValBase, rmseTestBase],
[21.305869, 21.586452, 22.136957]), 'incorrect RMSE value')
from matplotlib.colors import ListedColormap, Normalize
from matplotlib.cm import get_cmap
cmap = get_cmap('YlOrRd')
norm = Normalize()
actual = np.asarray(parsedValData
.map(lambda lp: lp.label)
.collect())
error = np.asarray(parsedValData
.map(lambda lp: (lp.label, lp.label))
.map(lambda (l, p): squaredError(l, p))
.collect())
clrs = cmap(np.asarray(norm(error)))[:,0:3]
fig, ax = preparePlot(np.arange(0, 100, 20), np.arange(0, 100, 20))
plt.scatter(actual, actual, s=14**2, c=clrs, edgecolors='#888888', alpha=0.75, linewidths=0.5)
ax.set_xlabel('Predicted'), ax.set_ylabel('Actual')
display(fig)
pass
predictions = np.asarray(parsedValData
.map(lambda lp: averageTrainYear)
.collect())
error = np.asarray(parsedValData
.map(lambda lp: (lp.label, averageTrainYear))
.map(lambda (l, p): squaredError(l, p))
.collect())
norm = Normalize()
clrs = cmap(np.asarray(norm(error)))[:,0:3]
fig, ax = preparePlot(np.arange(53.0, 55.0, 0.5), np.arange(0, 100, 20))
ax.set_xlim(53, 55)
plt.scatter(predictions, actual, s=14**2, c=clrs, edgecolors='#888888', alpha=0.75, linewidths=0.3)
ax.set_xlabel('Predicted'), ax.set_ylabel('Actual')
display(fig)
from pyspark.mllib.linalg import DenseVector
# TODO: Replace <FILL IN> with appropriate code
def gradientSummand(weights, lp):
"""Calculates the gradient summand for a given weight and `LabeledPoint`.
Note:
    `DenseVector` behaves similarly to a `numpy.ndarray` and they can be used interchangeably
    within this function. For example, they both implement the `dot` method.
Args:
    weights (DenseVector): An array of model weights (betas).
    lp (LabeledPoint): The `LabeledPoint` for a single observation.
Returns:
    DenseVector: An array of values the same length as `weights`. The gradient summand.
"""
return (weights.dot(lp.features) - lp.label) * lp.features
exampleW = DenseVector([1, 1, 1])
exampleLP = LabeledPoint(2.0, [3, 1, 4])
# gradientSummand = (dot([1 1 1], [3 1 4]) - 2) * [3 1 4] = (8 - 2) * [3 1 4] = [18 6 24]
summandOne = gradientSummand(exampleW, exampleLP)
print 'SummandOne ...'
print summandOne
exampleW = DenseVector([.24, 1.2, -1.4])
exampleLP = LabeledPoint(3.0, [-1.4, 4.2, 2.1])
summandTwo = gradientSummand(exampleW, exampleLP)
print 'SummandTwo ...'
print summandTwo
print type(summandTwo)
print summandOne + summandTwo
# TEST Gradient summand (3a)
Test.assertTrue(np.allclose(summandOne, [18., 6., 24.]), 'incorrect value for summandOne')
Test.assertTrue(np.allclose(summandTwo, [1.7304,-5.1912,-2.5956]), 'incorrect value for summandTwo')
# TODO: Replace <FILL IN> with appropriate code
def getLabeledPrediction(weights, observation):
"""Calculates predictions and returns a (label, prediction) tuple.
Note:
    The labels should remain unchanged as we'll use this information to calculate prediction
    error later.
Args:
    weights (np.ndarray): An array with one weight for each feature in `trainData`.
    observation (LabeledPoint): A `LabeledPoint` that contains the correct label and the
    features for the data point.
Returns:
    tuple: A (label, prediction) tuple.
"""
return ( observation.label, weights.dot(observation.features) )
weights = np.array([1.0, 1.5])
predictionExample = sc.parallelize([LabeledPoint(2, np.array([1.0, .5])),
LabeledPoint(1.5, np.array([.5, .5]))])
labelsAndPredsExample = predictionExample.map(lambda lp: getLabeledPrediction(weights, lp))
print labelsAndPredsExample.collect()
# TEST Use weights to make predictions (3b)
Test.assertEquals(labelsAndPredsExample.collect(), [(2.0, 1.75), (1.5, 1.25)],
'incorrect definition for getLabeledPredictions')
# TODO: Replace <FILL IN> with appropriate code
def linregGradientDescent(trainData, numIters):
"""Calculates the weights and error for a linear regression model trained with gradient descent.
Note:
    `DenseVector` behaves similarly to a `numpy.ndarray` and they can be used interchangeably
    within this function. For example, they both implement the `dot` method.
Args:
    trainData (RDD of LabeledPoint): The labeled data for use in training the model.
    numIters (int): The number of iterations of gradient descent to perform.
Returns:
    (np.ndarray, np.ndarray): A tuple of (weights, training errors). Weights will be the
    final weights (one weight per feature) for the model, and training errors will contain
    an error (RMSE) for each iteration of the algorithm.
"""
# The length of the training data
n = trainData.count()
# The number of features in the training data
d = len(trainData.take(1)[0].features)
w = np.zeros(d) # 1-dimensional ndarray
alpha = 1.0
# We will compute and store the training error after each iteration
errorTrain = np.zeros(numIters)
for i in range(numIters):
# Use getLabeledPrediction from (3b) with trainData to obtain an RDD of (label, prediction)
# tuples. Note that the weights all equal 0 for the first iteration, so the predictions will
# have large errors to start.
labelsAndPredsTrain = trainData.map(lambda lp: getLabeledPrediction(w,lp))
errorTrain[i] = calcRMSE(labelsAndPredsTrain)
# Calculate the `gradient`. Make use of the `gradientSummand` function you wrote in (3a).
# Note that `gradient` should be a `DenseVector` of length `d`.
# The next three operations represent the complete formula for the Gradient Descent weight update described in 3(a) above
gradient = trainData.map(lambda lp: gradientSummand(w, lp) )
# Update the weights using a step size alpha_i = alpha / (n * sqrt(i+1)) that decays
# with the iteration count -- a standard schedule for gradient descent convergence
alpha_i = alpha / (n * np.sqrt(i+1))
w -= alpha_i * gradient.sum()
return w, errorTrain
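As a sanity check outside Spark, the same decaying-step update rule (alpha_i = alpha / (n * sqrt(i+1))) used above can be run with plain NumPy on a tiny 1-D least-squares problem. This sketch is illustrative only and is not part of the lab; the data and the smaller base step size are made up so the toy problem converges.

```python
import numpy as np

# Toy 1-D least-squares data (made up for illustration): y = 2 * x
x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * x
n = len(x)

w = 0.0       # start from zero weights, as in linregGradientDescent
alpha = 0.1   # smaller base step than the lab's 1.0, chosen so this toy converges
for i in range(100):
    gradient = np.sum((w * x - y) * x)        # sum of per-point gradient summands
    w -= alpha / (n * np.sqrt(i + 1)) * gradient

print(w)  # approaches the true weight 2.0
```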
# create a toy dataset with n = 10, d = 3, and then run 5 iterations of gradient descent
# note: the resulting model will not be useful; the goal here is to verify that
# linregGradientDescent is working properly
exampleN = 10
exampleD = 3
exampleData = (sc
.parallelize(parsedTrainData.take(exampleN))
.map(lambda lp: LabeledPoint(lp.label, lp.features[0:exampleD])))
print 'Two sample points ...'
print exampleData.take(2)
exampleNumIters = 5
exampleWeights, exampleErrorTrain = linregGradientDescent(exampleData, exampleNumIters)
print 'Weights ...'
print exampleWeights
print type(exampleWeights)
# TEST Gradient descent (3c)
expectedOutput = [48.88110449, 36.01144093, 30.25350092]
Test.assertTrue(np.allclose(exampleWeights, expectedOutput), 'value of exampleWeights is incorrect')
expectedError = [79.72013547, 30.27835699, 9.27842641, 9.20967856, 9.19446483]
Test.assertTrue(np.allclose(exampleErrorTrain, expectedError),
'value of exampleErrorTrain is incorrect')
# TODO: Replace <FILL IN> with appropriate code
numIters = 50
print 'Rows in parsedTrainData: %d' % parsedTrainData.count()
weightsLR0, errorTrainLR0 = linregGradientDescent(parsedTrainData, numIters)
print 'Weights ...'
print weightsLR0
print len(weightsLR0)
print 'Rows in parsedValData: %d' % parsedValData.count()
# labelsAndPreds = parsedValData.<FILL IN>
labelsAndPreds = parsedValData.map(lambda lp: getLabeledPrediction(weightsLR0,lp) )
print 'Rows in labelsAndPreds: %d' % labelsAndPreds.count()
rmseValLR0 = calcRMSE(labelsAndPreds)
print 'Validation RMSE:\n\tBaseline = {0:.3f}\n\tLR0 = {1:.3f}'.format(rmseValBase, rmseValLR0)
# TEST Train the model (3d)
expectedOutput = [22.64535883, 20.064699, -0.05341901, 8.2931319, 5.79155768, -4.51008084,
15.23075467, 3.8465554, 9.91992022, 5.97465933, 11.36849033, 3.86452361]
Test.assertTrue(np.allclose(weightsLR0, expectedOutput), 'incorrect value for weightsLR0')
norm = Normalize()
clrs = cmap(np.asarray(norm(np.log(errorTrainLR0))))[:,0:3]
fig, ax = preparePlot(np.arange(0, 60, 10), np.arange(2, 6, 1))
ax.set_ylim(2, 6)
plt.scatter(range(0, numIters), np.log(errorTrainLR0), s=14**2, c=clrs, edgecolors='#888888', alpha=0.75)
ax.set_xlabel('Iteration'), ax.set_ylabel(r'$\log_e(errorTrainLR0)$')
display(fig)
pass
norm = Normalize()
clrs = cmap(np.asarray(norm(errorTrainLR0[6:])))[:,0:3]
fig, ax = preparePlot(np.arange(0, 60, 10), np.arange(17, 22, 1))
ax.set_ylim(17.8, 21.2)
plt.scatter(range(0, numIters-6), errorTrainLR0[6:], s=14**2, c=clrs, edgecolors='#888888', alpha=0.75)
ax.set_xticklabels(map(str, range(6, 66, 10)))
ax.set_xlabel('Iteration'), ax.set_ylabel(r'Training Error')
display(fig)
pass
from pyspark.mllib.regression import LinearRegressionWithSGD
# Values to use when training the linear regression model
numIters = 500 # iterations
alpha = 1.0 # step
miniBatchFrac = 1.0 # miniBatchFraction
reg = 1e-1 # regParam
regType = 'l2' # regType
useIntercept = True # intercept
# TODO: Replace <FILL IN> with appropriate code
firstModel = LinearRegressionWithSGD.train(data=parsedTrainData,
iterations=numIters,
intercept=useIntercept,
regType=regType,
regParam=reg,
miniBatchFraction=miniBatchFrac)
# weightsLR1 stores the model weights; interceptLR1 stores the model intercept
weightsLR1 = firstModel.weights
interceptLR1 = firstModel.intercept
print weightsLR1, interceptLR1
# TEST LinearRegressionWithSGD (4a)
expectedIntercept = 13.3763009811
expectedInterceptE = 13.3335907631
expectedWeights = [15.9789216525, 13.923582484, 0.781551054803, 6.09257051566, 3.91814791179, -2.30347707767,
10.3002026917, 3.04565129011, 7.23175674717, 4.65796458476, 7.98875075855, 3.1782463856]
expectedWeightsE = [16.682292427, 14.7439059559, -0.0935105608897, 6.22080088829, 4.01454261926, -3.30214858535,
11.0403027232, 2.67190962854, 7.18925791279, 4.46093254586, 8.14950409475, 2.75135810882]
Test.assertTrue(np.allclose(interceptLR1, expectedIntercept) or np.allclose(interceptLR1, expectedInterceptE),
'incorrect value for interceptLR1')
Test.assertTrue(np.allclose(weightsLR1, expectedWeights) or np.allclose(weightsLR1, expectedWeightsE),
'incorrect value for weightsLR1')
# TODO: Replace <FILL IN> with appropriate code
samplePoint = parsedTrainData.take(1)[0]
samplePrediction = firstModel.predict(samplePoint.features)
print samplePoint.label
print samplePrediction
# TEST Predict (4b)
Test.assertTrue(np.allclose(samplePrediction, 56.5823796609) or np.allclose(samplePrediction, 56.8013380112),
'incorrect value for samplePrediction')
# TODO: Replace <FILL IN> with appropriate code
labelsAndPreds = parsedValData.map(lambda lp: (lp.label, firstModel.predict(lp.features) ) )
rmseValLR1 = calcRMSE(labelsAndPreds)
print ('Validation RMSE:\n\tBaseline = {0:.3f}\n\tLR0 = {1:.3f}' + '\n\tLR1 = {2:.3f}').format(rmseValBase, rmseValLR0, rmseValLR1)
# TEST Evaluate RMSE (4c)
Test.assertTrue(np.allclose(rmseValLR1, 19.8730701066) or np.allclose(rmseValLR1, 19.6912473416),
'incorrect value for rmseValLR1')
# TODO: Replace <FILL IN> with appropriate code
bestRMSE = rmseValLR1
bestRegParam = reg
bestModel = firstModel
numIters = 500
alpha = 1.0
miniBatchFrac = 1.0
for reg in [1e-10, 1e-5, 1]:
model = LinearRegressionWithSGD.train(parsedTrainData, numIters, alpha,
miniBatchFrac, regParam=reg,
regType='l2', intercept=True)
labelsAndPreds = parsedValData.map(lambda lp: (lp.label, model.predict(lp.features)))
rmseValGrid = calcRMSE(labelsAndPreds)
print rmseValGrid
if rmseValGrid < bestRMSE:
bestRMSE = rmseValGrid
bestRegParam = reg
bestModel = model
rmseValLRGrid = bestRMSE
print ('Validation RMSE:\n\tBaseline = {0:.3f}\n\tLR0 = {1:.3f}\n\tLR1 = {2:.3f}\n' +
'\tLRGrid = {3:.3f}').format(rmseValBase, rmseValLR0, rmseValLR1, rmseValLRGrid)
# TEST Grid search (4d)
Test.assertTrue(np.allclose(17.4831362704, rmseValLRGrid) or np.allclose(17.0171700716, rmseValLRGrid),
'incorrect value for rmseValLRGrid')
predictions = np.asarray(parsedValData
.map(lambda lp: bestModel.predict(lp.features))
.collect())
actual = np.asarray(parsedValData
.map(lambda lp: lp.label)
.collect())
error = np.asarray(parsedValData
.map(lambda lp: (lp.label, bestModel.predict(lp.features)))
.map(lambda (l, p): squaredError(l, p))
.collect())
norm = Normalize()
clrs = cmap(np.asarray(norm(error)))[:,0:3]
fig, ax = preparePlot(np.arange(0, 120, 20), np.arange(0, 120, 20))
ax.set_xlim(15, 82), ax.set_ylim(-5, 105)
plt.scatter(predictions, actual, s=14**2, c=clrs, edgecolors='#888888', alpha=0.75, linewidths=.5)
ax.set_xlabel('Predicted'), ax.set_ylabel(r'Actual')
display(fig)
pass
# TODO: Replace <FILL IN> with appropriate code
reg = bestRegParam
modelRMSEs = []
for alpha in [1e-5, 10]:
for numIters in [500, 5]:
model = LinearRegressionWithSGD.train(parsedTrainData, numIters, alpha,
miniBatchFrac, regParam=reg,
regType='l2', intercept=True)
labelsAndPreds = parsedValData.map(lambda lp: (lp.label, model.predict(lp.features)))
rmseVal = calcRMSE(labelsAndPreds)
print 'alpha = {0:.0e}, numIters = {1}, RMSE = {2:.3f}'.format(alpha, numIters, rmseVal)
modelRMSEs.append(rmseVal)
# TEST Vary alpha and the number of iterations (4e)
expectedResults = sorted([56.972629385122502, 56.972629385122502, 355124752.22122133])
expectedResultsE = sorted([56.892948663998297, 56.96970493238036, 355124752.22122133])
actualResults = sorted(modelRMSEs)[:3]
Test.assertTrue(np.allclose(actualResults, expectedResults) or np.allclose(actualResults, expectedResultsE),
'incorrect value for modelRMSEs')
from matplotlib.colors import LinearSegmentedColormap
# Saved parameters and results, to save the time required to run 36 models
numItersParams = [10, 50, 100, 250, 500, 1000]
regParams = [1e-8, 1e-6, 1e-4, 1e-2, 1e-1, 1]
rmseVal = np.array([[ 20.36769649, 20.36770128, 20.36818057, 20.41795354, 21.09778437, 301.54258421],
[ 19.04948826, 19.0495 , 19.05067418, 19.16517726, 19.97967727, 23.80077467],
[ 18.40149024, 18.40150998, 18.40348326, 18.59457491, 19.82155716, 23.80077467],
[ 17.5609346 , 17.56096749, 17.56425511, 17.88442127, 19.71577117, 23.80077467],
[ 17.0171705 , 17.01721288, 17.02145207, 17.44510574, 19.69124734, 23.80077467],
[ 16.58074813, 16.58079874, 16.58586512, 17.11466904, 19.6860931 , 23.80077467]])
numRows, numCols = len(numItersParams), len(regParams)
rmseVal = np.array(rmseVal)
rmseVal.shape = (numRows, numCols)
fig, ax = preparePlot(np.arange(0, numCols, 1), np.arange(0, numRows, 1), figsize=(8, 7), hideLabels=True,
gridWidth=0.)
ax.set_xticklabels(regParams), ax.set_yticklabels(numItersParams)
ax.set_xlabel('Regularization Parameter'), ax.set_ylabel('Number of Iterations')
colors = LinearSegmentedColormap.from_list('blue', ['#0022ff', '#000055'], gamma=.2)
image = plt.imshow(rmseVal,interpolation='nearest', aspect='auto',
cmap = colors)
display(fig)
# Zoom into the bottom left
numItersParamsZoom, regParamsZoom = numItersParams[-3:], regParams[:4]
rmseValZoom = rmseVal[-3:, :4]
numRows, numCols = len(numItersParamsZoom), len(regParamsZoom)
fig, ax = preparePlot(np.arange(0, numCols, 1), np.arange(0, numRows, 1), figsize=(8, 7), hideLabels=True,
gridWidth=0.)
ax.set_xticklabels(regParamsZoom), ax.set_yticklabels(numItersParamsZoom)
ax.set_xlabel('Regularization Parameter'), ax.set_ylabel('Number of Iterations')
colors = LinearSegmentedColormap.from_list('blue', ['#0022ff', '#000055'], gamma=.2)
image = plt.imshow(rmseValZoom,interpolation='nearest', aspect='auto',
cmap = colors)
display(fig)
pass
p = LabeledPoint(0.0, [1, 2, 3])
print p
print type(p)
print type(p.features)
np.square(p.features)
# TODO: Replace <FILL IN> with appropriate code
import itertools
def twoWayInteractions(lp):
"""Creates a new `LabeledPoint` that includes two-way interactions.
Note:
    For features [x, y] the two-way interactions would be [x^2, x*y, y*x, y^2] and these
    would be appended to the original [x, y] feature list.
Args:
    lp (LabeledPoint): The label and features for this observation.
Returns:
    LabeledPoint: The new `LabeledPoint` should have the same label as `lp`. Its features
    should include the features from `lp` followed by the two-way interaction features.
"""
prod = np.empty([0])
# sqr = np.square(lp.features)
for i in itertools.product(lp.features, lp.features):
prod = np.append(prod, i[0] * i[1])
return LabeledPoint(lp.label, np.hstack((lp.features,prod)))
p = twoWayInteractions(LabeledPoint(0.0, [1, 2, 3]))
print p
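The double loop over `itertools.product` above computes all d*d pairwise products; `np.outer(...).ravel()` yields the same values in the same row-major order, which can be a more compact alternative. A small illustrative check (feature values are made up):

```python
import itertools
import numpy as np

feats = np.array([1.0, 2.0, 3.0])
# pairwise products via the loop used in twoWayInteractions
loop_products = np.array([a * b for a, b in itertools.product(feats, feats)])
# the same products via an outer product, flattened row-major
outer_products = np.outer(feats, feats).ravel()
print(outer_products)  # [1. 2. 3. 2. 4. 6. 3. 6. 9.]
```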
# Transform the existing train, validation, and test sets to include two-way interactions.
trainDataInteract = parsedTrainData.map(lambda lp: twoWayInteractions(lp))
valDataInteract = parsedValData.map(lambda lp: twoWayInteractions(lp))
testDataInteract = parsedTestData.map(lambda lp: twoWayInteractions(lp))
print 'Count trainDataInteract: %d' % trainDataInteract.count()
print 'Count valDataInteract: %d' % valDataInteract.count()
print 'Count testDataInteract: %d' % testDataInteract.count()
print 'All done.'
# TEST Add two-way interactions (5a)
twoWayExample = twoWayInteractions(LabeledPoint(0.0, [2, 3]))
Test.assertTrue(np.allclose(sorted(twoWayExample.features),
sorted([2.0, 3.0, 4.0, 6.0, 6.0, 9.0])),
                'incorrect features generated by twoWayInteractions')
twoWayPoint = twoWayInteractions(LabeledPoint(1.0, [1, 2, 3]))
Test.assertTrue(np.allclose(sorted(twoWayPoint.features),
sorted([1.0,2.0,3.0,1.0,2.0,3.0,2.0,4.0,6.0,3.0,6.0,9.0])),
'incorrect features generated by twoWayInteractions')
Test.assertEquals(twoWayPoint.label, 1.0, 'incorrect label generated by twoWayInteractions')
Test.assertTrue(np.allclose(sum(trainDataInteract.take(1)[0].features), 40.821870576035529),
'incorrect features in trainDataInteract')
Test.assertTrue(np.allclose(sum(valDataInteract.take(1)[0].features), 45.457719932695696),
'incorrect features in valDataInteract')
Test.assertTrue(np.allclose(sum(testDataInteract.take(1)[0].features), 35.109111632783168),
'incorrect features in testDataInteract')
# TODO: Replace <FILL IN> with appropriate code
numIters = 500
alpha = 1.0
miniBatchFrac = 1.0
reg = 1e-10
modelInteract = LinearRegressionWithSGD.train(trainDataInteract, numIters, alpha,
miniBatchFrac, regParam=reg,
regType='l2', intercept=True)
labelsAndPredsInteract = valDataInteract.map(lambda lp: (lp.label, modelInteract.predict(lp.features)))
rmseValInteract = calcRMSE(labelsAndPredsInteract)
print ('Validation RMSE:\n\tBaseline = {0:.3f}\n\tLR0 = {1:.3f}\n\tLR1 = {2:.3f}\n\tLRGrid = ' +
'{3:.3f}\n\tLRInteract = {4:.3f}').format(rmseValBase, rmseValLR0, rmseValLR1,
rmseValLRGrid, rmseValInteract)
# TEST Build interaction model (5b)
Test.assertTrue(np.allclose(rmseValInteract, 15.9963259256) or np.allclose(rmseValInteract, 15.6894664683),
'incorrect value for rmseValInteract')
# TODO: Replace <FILL IN> with appropriate code
labelsAndPredsTest = testDataInteract.map(lambda lp: (lp.label, modelInteract.predict(lp.features)))
rmseTestInteract = calcRMSE(labelsAndPredsTest)
print ('Test RMSE:\n\tBaseline = {0:.3f}\n\tLRInteract = {1:.3f}'
.format(rmseTestBase, rmseTestInteract))
# TEST Evaluate interaction model on test data (5c)
Test.assertTrue(np.allclose(rmseTestInteract, 16.5251427618) or np.allclose(rmseTestInteract, 16.3272040537),
'incorrect value for rmseTestInteract')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: WARNING
Step3: (1b) Using LabeledPoint
Step5: Visualization 1
Step6: (1c) Find the range
Step7: (1d) Shift labels
Step8: Visualization 2
Step9: (1e) Training, validation, and test sets
Step10: Part 2
Step13: (2b) Root mean squared error
Step14: (2c) Training, validation and test RMSE
Step15: Visualization 3
Step17: Part 3
Step19: (3b) Use weights to make predictions
Step21: (3c) Gradient descent
Step22: (3d) Train the model
Step23: Visualization 4
Step24: Part 4
Step25: (4b) Predict
Step26: (4c) Evaluate RMSE
Step27: (4d) Grid search
Step28: Visualization 5
Step29: (4e) Vary alpha and the number of iterations
Step30: Visualization 6
Step32: Part 5
Step33: (5b) Build interaction model
Step34: (5c) Evaluate interaction model on test data
|
9,086
|
<ASSISTANT_TASK:>
Python Code:
import sys
import math
import numpy as np
import pandas as pd
import scipy.optimize as so
import scipy.integrate as si
import matplotlib.pyplot as plt
import nest
%matplotlib inline
plt.rcParams['figure.figsize'] = (12, 3)
def Vpass(t, V0, gNaL, ENa, gKL, EK, taum, I=0):
tau_eff = taum/(gNaL + gKL)
Vinf = (gNaL*ENa + gKL*EK + I)/(gNaL + gKL)
return V0*np.exp(-t/tau_eff) + Vinf*(1-np.exp(-t/tau_eff))
def theta(t, th0, theq, tauth):
return th0*np.exp(-t/tauth) + theq*(1-np.exp(-t/tauth))
nest.ResetKernel()
nest.SetDefaults('ht_neuron', {'g_peak_NaP': 0., 'g_peak_KNa': 0.,
'g_peak_T': 0., 'g_peak_h': 0.,
'tau_theta': 10.})
hp = nest.GetDefaults('ht_neuron')
V_0 = [-100., -70., -55.]
th_0 = [-65., -51., -10.]
T_sim = 20.
nrns = nest.Create('ht_neuron', n=len(V_0), params={'V_m': V_0, 'theta': th_0})
nest.Simulate(T_sim)
V_th_sim = nrns.get(['V_m', 'theta'])
for (V0, th0, Vsim, thsim) in zip(V_0, th_0, V_th_sim['V_m'], V_th_sim['theta']):
Vex = Vpass(T_sim, V0, hp['g_NaL'], hp['E_Na'], hp['g_KL'], hp['E_K'], hp['tau_m'])
thex = theta(T_sim, th0, hp['theta_eq'], hp['tau_theta'])
print('Vex = {:.3f}, Vsim = {:.3f}, Vex-Vsim = {:.3e}'.format(Vex, Vsim, Vex-Vsim))
print('thex = {:.3f}, thsim = {:.3f}, thex-thsim = {:.3e}'.format(thex, thsim, thex-thsim))
def t_first_spike(gNaL, ENa, gKL, EK, taum, theq, tI, I):
tau_eff = taum/(gNaL + gKL)
Vinf0 = (gNaL*ENa + gKL*EK)/(gNaL + gKL)
VinfI = (gNaL*ENa + gKL*EK + I)/(gNaL + gKL)
return tI - tau_eff * np.log((theq-VinfI) / (Vinf0-VinfI))
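`t_first_spike` inverts the passive solution: it returns the time at which the membrane, relaxing from the resting potential `Vinf0` toward the with-current steady state `VinfI`, crosses the equilibrium threshold `theq`. A self-contained consistency check follows (`Vpass` is restated so the snippet runs on its own; the parameter values are arbitrary illustrations, not the ht_neuron defaults):

```python
import numpy as np

def Vpass(t, V0, gNaL, ENa, gKL, EK, taum, I=0):
    tau_eff = taum / (gNaL + gKL)
    Vinf = (gNaL * ENa + gKL * EK + I) / (gNaL + gKL)
    return V0 * np.exp(-t / tau_eff) + Vinf * (1 - np.exp(-t / tau_eff))

def t_first_spike(gNaL, ENa, gKL, EK, taum, theq, tI, I):
    tau_eff = taum / (gNaL + gKL)
    Vinf0 = (gNaL * ENa + gKL * EK) / (gNaL + gKL)
    VinfI = (gNaL * ENa + gKL * EK + I) / (gNaL + gKL)
    return tI - tau_eff * np.log((theq - VinfI) / (Vinf0 - VinfI))

# arbitrary illustrative parameters (not the ht_neuron defaults)
gNaL, ENa, gKL, EK, taum, theq, tI, I = 0.2, 30.0, 1.0, -90.0, 16.0, -51.0, 2.0, 50.0
Vinf0 = (gNaL * ENa + gKL * EK) / (gNaL + gKL)

tstar = t_first_spike(gNaL, ENa, gKL, EK, taum, theq, tI, I)
# starting from rest at current onset tI, the membrane reaches theq at tstar
V_at_tstar = Vpass(tstar - tI, Vinf0, gNaL, ENa, gKL, EK, taum, I)
print(V_at_tstar)  # ~= theq (-51.0)
```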
nest.ResetKernel()
nest.SetKernelStatus({'resolution': 0.001})
nest.SetDefaults('ht_neuron', {'g_peak_NaP': 0., 'g_peak_KNa': 0.,
'g_peak_T': 0., 'g_peak_h': 0.})
hp = nest.GetDefaults('ht_neuron')
I = [25., 50., 100.]
tI = 1.
delay = 1.
T_sim = 40.
nrns = nest.Create('ht_neuron', n=len(I))
dcgens = nest.Create('dc_generator', n=len(I), params={'amplitude': I, 'start': tI})
sdets = nest.Create('spike_detector', n=len(I))
nest.Connect(dcgens, nrns, 'one_to_one', {'delay': delay})
nest.Connect(nrns, sdets, 'one_to_one')
nest.Simulate(T_sim)
t_first_sim = [t[0] for t in sdets.get('events', 'times')]
for dc, tf_sim in zip(I, t_first_sim):
tf_ex = t_first_spike(hp['g_NaL'], hp['E_Na'], hp['g_KL'], hp['E_K'],
hp['tau_m'], hp['theta_eq'], tI+delay, dc)
print('tex = {:.4f}, tsim = {:.4f}, tex-tsim = {:.4f}'.format(tf_ex,
tf_sim,
tf_ex-tf_sim))
def Vspike(tspk, gNaL, ENa, gKL, EK, taum, tauspk, I=0):
tau_eff = taum/(gNaL + gKL + taum/tauspk)
Vinf = (gNaL*ENa + gKL*EK + I + taum/tauspk*EK)/(gNaL + gKL + taum/tauspk)
return ENa*np.exp(-tspk/tau_eff) + Vinf*(1-np.exp(-tspk/tau_eff))
def thetaspike(tspk, ENa, theq, tauth):
return ENa*np.exp(-tspk/tauth) + theq*(1-np.exp(-tspk/tauth))
def Vpost(t, tspk, gNaL, ENa, gKL, EK, taum, tauspk, I=0):
Vsp = Vspike(tspk, gNaL, ENa, gKL, EK, taum, tauspk, I)
return Vpass(t-tspk, Vsp, gNaL, ENa, gKL, EK, taum, I)
def thetapost(t, tspk, ENa, theq, tauth):
thsp = thetaspike(tspk, ENa, theq, tauth)
return theta(t-tspk, thsp, theq, tauth)
def threshold(t, tspk, gNaL, ENa, gKL, EK, taum, tauspk, I, theq, tauth):
return Vpost(t, tspk, gNaL, ENa, gKL, EK, taum, tauspk, I) - thetapost(t, tspk, ENa, theq, tauth)
nest.ResetKernel()
nest.SetKernelStatus({'resolution': 0.001})
nest.SetDefaults('ht_neuron', {'g_peak_NaP': 0., 'g_peak_KNa': 0.,
'g_peak_T': 0., 'g_peak_h': 0.})
hp = nest.GetDefaults('ht_neuron')
I = [25., 50., 100.]
tI = 1.
delay = 1.
T_sim = 1000.
nrns = nest.Create('ht_neuron', n=len(I))
dcgens = nest.Create('dc_generator', n=len(I), params={'amplitude': I, 'start': tI})
sdets = nest.Create('spike_detector', n=len(I))
nest.Connect(dcgens, nrns, 'one_to_one', {'delay': delay})
nest.Connect(nrns, sdets, 'one_to_one')
nest.Simulate(T_sim)
isi_sim = []
for ev in sdets.events:
t_spk = ev['times']
isi = np.diff(t_spk)
isi_sim.append((np.min(isi), np.mean(isi), np.max(isi)))
for dc, (isi_min, isi_mean, isi_max) in zip(I, isi_sim):
isi_ex = so.bisect(threshold, hp['t_ref'], 50,
args=(hp['t_ref'], hp['g_NaL'], hp['E_Na'], hp['g_KL'], hp['E_K'],
hp['tau_m'], hp['tau_spike'], dc, hp['theta_eq'], hp['tau_theta']))
print('isi_ex = {:.4f}, isi_sim (min, mean, max) = ({:.4f}, {:.4f}, {:.4f})'.format(
isi_ex, isi_min, isi_mean, isi_max))
nest.ResetKernel()
class Channel:
"""Base class for channel models in Python."""
def tau_m(self, V):
raise NotImplementedError()
def tau_h(self, V):
raise NotImplementedError()
def m_inf(self, V):
raise NotImplementedError()
def h_inf(self, V):
raise NotImplementedError()
def D_inf(self, V):
raise NotImplementedError()
def dh(self, h, t, V):
return (self.h_inf(V)-h)/self.tau_h(V)
def dm(self, m, t, V):
return (self.m_inf(V)-m)/self.tau_m(V)
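For a constant clamped voltage, the gating ODE dm/dt = (m_inf - m)/tau_m integrated by `odeint` in the channel classes has the closed-form solution m(t) = m_inf + (m0 - m_inf) * exp(-t / tau_m). A standalone sketch with made-up constants (m_inf = 0.8, tau_m = 5 ms, not taken from any channel model here) confirms the numerical and analytic solutions agree:

```python
import numpy as np
import scipy.integrate as si

m_inf, tau_m = 0.8, 5.0   # made-up gating steady state and time constant

def dm(m, t):
    # first-order relaxation toward m_inf, as in Channel.dm at fixed V
    return (m_inf - m) / tau_m

t = np.linspace(0.0, 50.0, 101)
m0 = 0.1
m_num = si.odeint(dm, m0, t).ravel()                  # numerical solution
m_exact = m_inf + (m0 - m_inf) * np.exp(-t / tau_m)   # closed form
print(np.max(np.abs(m_num - m_exact)))  # tiny integration error
```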
def voltage_clamp(channel, DT_V_seq, nest_dt=0.1):
"Run voltage clamp with voltage V through intervals DT."
# NEST part
nest_g_0 = {'g_peak_h': 0., 'g_peak_T': 0., 'g_peak_NaP': 0., 'g_peak_KNa': 0.}
nest_g_0[channel.nest_g] = 1.
nest.ResetKernel()
nest.SetKernelStatus({'resolution': nest_dt})
nrn = nest.Create('ht_neuron', params=nest_g_0)
mm = nest.Create('multimeter', params={'record_from': ['V_m', 'theta', channel.nest_I],
'interval': nest_dt})
nest.Connect(mm, nrn)
# ensure we start from equilibrated state
nrn.set(V_m=DT_V_seq[0][1], equilibrate=True, voltage_clamp=True)
for DT, V in DT_V_seq:
nrn.set(V_m=V, voltage_clamp=True)
nest.Simulate(DT)
t_end = nest.GetKernelStatus('time')
# simulate a little more so we get all data up to t_end to multimeter
nest.Simulate(2 * nest.GetKernelStatus('min_delay'))
tmp = pd.DataFrame(mm.events)
nest_res = tmp[tmp.times <= t_end]
# Control part
t_old = 0.
try:
m_old = channel.m_inf(DT_V_seq[0][1])
except NotImplementedError:
m_old = None
try:
h_old = channel.h_inf(DT_V_seq[0][1])
except NotImplementedError:
h_old = None
try:
D_old = channel.D_inf(DT_V_seq[0][1])
except NotImplementedError:
D_old = None
t_all, I_all = [], []
if D_old is not None:
D_all = []
for DT, V in DT_V_seq:
t_loc = np.arange(0., DT+0.1*nest_dt, nest_dt)
I_loc = channel.compute_I(t_loc, V, m_old, h_old, D_old)
t_all.extend(t_old + t_loc[1:])
I_all.extend(I_loc[1:])
if D_old is not None:
D_all.extend(channel.D[1:])
m_old = channel.m[-1] if m_old is not None else None
h_old = channel.h[-1] if h_old is not None else None
D_old = channel.D[-1] if D_old is not None else None
t_old = t_all[-1]
if D_old is None:
ctrl_res = pd.DataFrame({'times': t_all, channel.nest_I: I_all})
else:
ctrl_res = pd.DataFrame({'times': t_all, channel.nest_I: I_all, 'D': D_all})
return nest_res, ctrl_res
nest.ResetKernel()
class Ih(Channel):
nest_g = 'g_peak_h'
nest_I = 'I_h'
def __init__(self, ht_params):
self.hp = ht_params
def tau_m(self, V):
return 1/(np.exp(-14.59-0.086*V) + np.exp(-1.87 + 0.0701*V))
def m_inf(self, V):
return 1/(1+np.exp((V+75)/5.5))
def compute_I(self, t, V, m0, h0, D0):
self.m = si.odeint(self.dm, m0, t, args=(V,))
return - self.hp['g_peak_h'] * self.m * (V - self.hp['E_rev_h'])
ih = Ih(nest.GetDefaults('ht_neuron'))
V = np.linspace(-110, 30, 100)
plt.plot(V, ih.tau_m(V));
ax = plt.gca();
ax.set_xlabel('Voltage V [mV]');
ax.set_ylabel('Time constant tau_m [ms]', color='b');
ax2 = ax.twinx()
ax2.plot(V, ih.m_inf(V), 'g');
ax2.set_ylabel('Steady-state m_h^inf', color='g');
ih = Ih(nest.GetDefaults('ht_neuron'))
nr, cr = voltage_clamp(ih, [(500, -65.), (500, -80.), (500, -100.), (500, -90.), (500, -55.)])
plt.subplot(1, 2, 1)
plt.plot(nr.times, nr.I_h, label='NEST');
plt.plot(cr.times, cr.I_h, label='Control');
plt.legend(loc='upper left');
plt.xlabel('Time [ms]');
plt.ylabel('I_h [mV]');
plt.title('I_h current')
plt.subplot(1, 2, 2)
plt.plot(nr.times, (nr.I_h-cr.I_h)/np.abs(cr.I_h));
plt.title('Relative I_h error')
plt.xlabel('Time [ms]');
plt.ylabel('Rel. error (NEST-Control)/|Control|');
nest.ResetKernel()
class IT(Channel):
nest_g = 'g_peak_T'
nest_I = 'I_T'
def __init__(self, ht_params):
self.hp = ht_params
def tau_m(self, V):
return 0.13 + 0.22/(np.exp(-(V+132)/16.7) + np.exp((V+16.8)/18.2))
def tau_h(self, V):
return 8.2 + (56.6 + 0.27 * np.exp((V+115.2)/5.0)) /(1 + np.exp((V+86.0)/3.2))
def m_inf(self, V):
return 1/(1+np.exp(-(V+59.0)/6.2))
def h_inf(self, V):
return 1/(1+np.exp((V+83.0)/4.0))
def compute_I(self, t, V, m0, h0, D0):
self.m = si.odeint(self.dm, m0, t, args=(V,))
self.h = si.odeint(self.dh, h0, t, args=(V,))
return - self.hp['g_peak_T'] * self.m**2 * self.h * (V - self.hp['E_rev_T'])
iT = IT(nest.GetDefaults('ht_neuron'))
V = np.linspace(-110, 30, 100)
plt.plot(V, 10 * iT.tau_m(V), 'b-', label='10 * tau_m');
plt.plot(V, iT.tau_h(V), 'b--', label='tau_h');
ax1 = plt.gca();
ax1.set_xlabel('Voltage V [mV]');
ax1.set_ylabel('Time constants [ms]', color='b');
ax2 = ax1.twinx()
ax2.plot(V, iT.m_inf(V), 'g-', label='m_inf');
ax2.plot(V, iT.h_inf(V), 'g--', label='h_inf');
ax2.set_ylabel('Steady-state', color='g');
ln1, lb1 = ax1.get_legend_handles_labels()
ln2, lb2 = ax2.get_legend_handles_labels()
plt.legend(ln1+ln2, lb1+lb2, loc='upper right');
iT = IT(nest.GetDefaults('ht_neuron'))
nr, cr = voltage_clamp(iT, [(200, -65.), (200, -80.), (200, -100.), (200, -90.), (200, -70.),
(200, -55.)],
nest_dt=0.1)
plt.subplot(1, 2, 1)
plt.plot(nr.times, nr.I_T, label='NEST');
plt.plot(cr.times, cr.I_T, label='Control');
plt.legend(loc='upper left');
plt.xlabel('Time [ms]');
plt.ylabel('I_T [mV]');
plt.title('I_T current')
plt.subplot(1, 2, 2)
plt.plot(nr.times, (nr.I_T-cr.I_T)/np.abs(cr.I_T));
plt.title('Relative I_T error')
plt.xlabel('Time [ms]');
plt.ylabel('Rel. error (NEST-Control)/|Control|');
nest.ResetKernel()
class INaP(Channel):
nest_g = 'g_peak_NaP'
nest_I = 'I_NaP'
def __init__(self, ht_params):
self.hp = ht_params
def m_inf(self, V):
return 1/(1+np.exp(-(V+55.7)/7.7))
def compute_I(self, t, V, m0, h0, D0):
return self.I_V_curve(V * np.ones_like(t))
def I_V_curve(self, V):
self.m = self.m_inf(V)
return - self.hp['g_peak_NaP'] * self.m**3 * (V - self.hp['E_rev_NaP'])
iNaP = INaP(nest.GetDefaults('ht_neuron'))
V = np.arange(-110., 30., 1.)
nr, cr = voltage_clamp(iNaP, [(1, v) for v in V], nest_dt=0.1)
plt.subplot(1, 2, 1)
plt.plot(nr.times, nr.I_NaP, label='NEST');
plt.plot(cr.times, cr.I_NaP, label='Control');
plt.legend(loc='upper left');
plt.xlabel('Time [ms]');
plt.ylabel('I_NaP [mV]');
plt.title('I_NaP current')
plt.subplot(1, 2, 2)
plt.plot(nr.times, (nr.I_NaP-cr.I_NaP));
plt.title('I_NaP error')
plt.xlabel('Time [ms]');
plt.ylabel('Error (NEST-Control)');
nest.ResetKernel()
class IDK(Channel):
nest_g = 'g_peak_KNa'
nest_I = 'I_KNa'
def __init__(self, ht_params):
self.hp = ht_params
def m_DK(self, D):
return 1/(1+(0.25/D)**3.5)
def D_inf(self, V):
return 1250. * self.D_influx(V) + 0.001
def D_influx(self, V):
return 0.025 / ( 1 + np.exp(-(V+10)/5.) )
def dD(self, D, t, V):
return (self.D_inf(V) - D)/1250.
def compute_I(self, t, V, m0, h0, D0):
self.D = si.odeint(self.dD, D0, t, args=(V,))
self.m = self.m_DK(self.D)
return - self.hp['g_peak_KNa'] * self.m * (V - self.hp['E_rev_KNa'])
iDK = IDK(nest.GetDefaults('ht_neuron'))
D=np.linspace(0.01, 1.5,num=200);
V=np.linspace(-110, 30, num=200);
ax1 = plt.subplot2grid((1, 9), (0, 0), colspan=4);
ax2 = ax1.twinx()
ax3 = plt.subplot2grid((1, 9), (0, 6), colspan=3);
ax1.plot(V, -iDK.m_DK(iDK.D_inf(V))*(V - iDK.hp['E_rev_KNa']), 'g');
ax1.set_ylabel('Current I_inf(V)', color='g');
ax2.plot(V, iDK.m_DK(iDK.D_inf(V)), 'b');
ax2.set_ylabel('Activation m_inf(D_inf(V))', color='b');
ax1.set_xlabel('Membrane potential V [mV]');
ax2.set_title('Steady-state activation and current');
ax3.plot(D, iDK.m_DK(D), 'b');
ax3.set_xlabel('D');
ax3.set_ylabel('Activation m_inf(D)', color='b');
ax3.set_title('Activation as function of D');
nr, cr = voltage_clamp(iDK, [(500, -65.), (500, -35.), (500, -25.), (500, 0.), (5000, -70.)],
nest_dt=1.)
ax1 = plt.subplot2grid((1, 9), (0, 0), colspan=4);
ax2 = plt.subplot2grid((1, 9), (0, 6), colspan=3);
ax1.plot(nr.times, nr.I_KNa, label='NEST');
ax1.plot(cr.times, cr.I_KNa, label='Control');
ax1.legend(loc='lower right');
ax1.set_xlabel('Time [ms]');
ax1.set_ylabel('I_DK [mV]');
ax1.set_title('I_DK current');
ax2.plot(nr.times, (nr.I_KNa-cr.I_KNa)/np.abs(cr.I_KNa));
ax2.set_title('Relative I_DK error')
ax2.set_xlabel('Time [ms]');
ax2.set_ylabel('Rel. error (NEST-Control)/|Control|');
nest.ResetKernel()
class SynChannel:
"""Base class for synapse channel models in Python."""
def t_peak(self):
return self.tau_1 * self.tau_2 / (self.tau_2 - self.tau_1) * np.log(self.tau_2/self.tau_1)
def beta(self, t):
val = ( ( np.exp(-t/self.tau_1) - np.exp(-t/self.tau_2) ) /
( np.exp(-self.t_peak()/self.tau_1) - np.exp(-self.t_peak()/self.tau_2) ) )
val[t < 0] = 0
return val
def syn_voltage_clamp(channel, DT_V_seq, nest_dt=0.1):
"Run voltage clamp with voltage V through intervals DT with single spike at time 1"
spike_time = 1.0
delay = 1.0
nest.ResetKernel()
nest.SetKernelStatus({'resolution': nest_dt})
try:
nrn = nest.Create('ht_neuron', params={'theta': 1e6, 'theta_eq': 1e6,
'instant_unblock_NMDA': channel.instantaneous})
except Exception:  # fall back for NEST versions without 'instant_unblock_NMDA'
nrn = nest.Create('ht_neuron', params={'theta': 1e6, 'theta_eq': 1e6})
mm = nest.Create('multimeter',
params={'record_from': ['g_'+channel.receptor],
'interval': nest_dt})
sg = nest.Create('spike_generator', params={'spike_times': [spike_time]})
nest.Connect(mm, nrn)
nest.Connect(sg, nrn, syn_spec={'weight': 1.0, 'delay': delay,
'receptor_type': channel.rec_code})
# ensure we start from equilibrated state
nrn.set(V_m=DT_V_seq[0][1], equilibrate=True, voltage_clamp=True)
for DT, V in DT_V_seq:
nrn.set(V_m=V, voltage_clamp=True)
nest.Simulate(DT)
t_end = nest.GetKernelStatus('time')
# simulate a little more so we get all data up to t_end to multimeter
nest.Simulate(2 * nest.GetKernelStatus('min_delay'))
tmp = pd.DataFrame(mm.get('events'))
nest_res = tmp[tmp.times <= t_end]
# Control part
t_old = 0.
t_all, g_all = [], []
m_fast_old = (channel.m_inf(DT_V_seq[0][1])
if channel.receptor == 'NMDA' and not channel.instantaneous else None)
m_slow_old = (channel.m_inf(DT_V_seq[0][1])
if channel.receptor == 'NMDA' and not channel.instantaneous else None)
for DT, V in DT_V_seq:
t_loc = np.arange(0., DT+0.1*nest_dt, nest_dt)
g_loc = channel.g(t_old+t_loc-(spike_time+delay), V, m_fast_old, m_slow_old)
t_all.extend(t_old + t_loc[1:])
g_all.extend(g_loc[1:])
m_fast_old = channel.m_fast[-1] if m_fast_old is not None else None
m_slow_old = channel.m_slow[-1] if m_slow_old is not None else None
t_old = t_all[-1]
ctrl_res = pd.DataFrame({'times': t_all, 'g_'+channel.receptor: g_all})
return nest_res, ctrl_res
nest.ResetKernel()
class PlainChannel(SynChannel):
def __init__(self, hp, receptor):
self.hp = hp
self.receptor = receptor
self.rec_code = hp['receptor_types'][receptor]
self.tau_1 = hp['tau_rise_'+receptor]
self.tau_2 = hp['tau_decay_'+receptor]
self.g_peak = hp['g_peak_'+receptor]
self.E_rev = hp['E_rev_'+receptor]
def g(self, t, V, mf0, ms0):
return self.g_peak * self.beta(t)
def I(self, t, V):
return - self.g(t) * (V-self.E_rev)
ampa = PlainChannel(nest.GetDefaults('ht_neuron'), 'AMPA')
am_n, am_c = syn_voltage_clamp(ampa, [(25, -70.)], nest_dt=0.1)
plt.subplot(1, 2, 1);
plt.plot(am_n.times, am_n.g_AMPA, label='NEST');
plt.plot(am_c.times, am_c.g_AMPA, label='Control');
plt.xlabel('Time [ms]');
plt.ylabel('g_AMPA');
plt.title('AMPA Channel');
plt.subplot(1, 2, 2);
plt.plot(am_n.times, (am_n.g_AMPA-am_c.g_AMPA)/am_c.g_AMPA);
plt.xlabel('Time [ms]');
plt.ylabel('Rel error');
plt.title('AMPA rel error');
ampa = PlainChannel(nest.GetDefaults('ht_neuron'), 'AMPA')
am_n, am_c = syn_voltage_clamp(ampa, [(25, -70.)], nest_dt=0.001)
plt.subplot(1, 2, 1);
plt.plot(am_n.times, am_n.g_AMPA, label='NEST');
plt.plot(am_c.times, am_c.g_AMPA, label='Control');
plt.xlabel('Time [ms]');
plt.ylabel('g_AMPA');
plt.title('AMPA Channel');
plt.subplot(1, 2, 2);
plt.plot(am_n.times, (am_n.g_AMPA-am_c.g_AMPA)/am_c.g_AMPA);
plt.xlabel('Time [ms]');
plt.ylabel('Rel error');
plt.title('AMPA rel error');
gaba_a = PlainChannel(nest.GetDefaults('ht_neuron'), 'GABA_A')
ga_n, ga_c = syn_voltage_clamp(gaba_a, [(50, -70.)])
plt.subplot(1, 2, 1);
plt.plot(ga_n.times, ga_n.g_GABA_A, label='NEST');
plt.plot(ga_c.times, ga_c.g_GABA_A, label='Control');
plt.xlabel('Time [ms]');
plt.ylabel('g_GABA_A');
plt.title('GABA_A Channel');
plt.subplot(1, 2, 2);
plt.plot(ga_n.times, (ga_n.g_GABA_A-ga_c.g_GABA_A)/ga_c.g_GABA_A);
plt.xlabel('Time [ms]');
plt.ylabel('Rel error');
plt.title('GABA_A rel error');
gaba_b = PlainChannel(nest.GetDefaults('ht_neuron'), 'GABA_B')
gb_n, gb_c = syn_voltage_clamp(gaba_b, [(750, -70.)])
plt.subplot(1, 2, 1);
plt.plot(gb_n.times, gb_n.g_GABA_B, label='NEST');
plt.plot(gb_c.times, gb_c.g_GABA_B, label='Control');
plt.xlabel('Time [ms]');
plt.ylabel('g_GABA_B');
plt.title('GABA_B Channel');
plt.subplot(1, 2, 2);
plt.plot(gb_n.times, (gb_n.g_GABA_B-gb_c.g_GABA_B)/gb_c.g_GABA_B);
plt.xlabel('Time [ms]');
plt.ylabel('Rel error');
plt.title('GABA_B rel error');
class NMDAInstantChannel(SynChannel):
def __init__(self, hp, receptor):
self.hp = hp
self.receptor = receptor
self.rec_code = hp['receptor_types'][receptor]
self.tau_1 = hp['tau_rise_'+receptor]
self.tau_2 = hp['tau_decay_'+receptor]
self.g_peak = hp['g_peak_'+receptor]
self.E_rev = hp['E_rev_'+receptor]
self.S_act = hp['S_act_NMDA']
self.V_act = hp['V_act_NMDA']
self.instantaneous = True
def m_inf(self, V):
return 1. / ( 1. + np.exp(-self.S_act*(V-self.V_act)))
def g(self, t, V, mf0, ms0):
return self.g_peak * self.m_inf(V) * self.beta(t)
def I(self, t, V):
return - self.g(t) * (V-self.E_rev)
nmdai = NMDAInstantChannel(nest.GetDefaults('ht_neuron'), 'NMDA')
ni_n, ni_c = syn_voltage_clamp(nmdai, [(50, -60.), (50, -50.), (50, -20.), (50, 0.), (50, -60.)])
plt.subplot(1, 2, 1);
plt.plot(ni_n.times, ni_n.g_NMDA, label='NEST');
plt.plot(ni_c.times, ni_c.g_NMDA, label='Control');
plt.xlabel('Time [ms]');
plt.ylabel('g_NMDA');
plt.title('NMDA Channel (instant unblock)');
plt.subplot(1, 2, 2);
plt.plot(ni_n.times, (ni_n.g_NMDA-ni_c.g_NMDA)/ni_c.g_NMDA);
plt.xlabel('Time [ms]');
plt.ylabel('Rel error');
plt.title('NMDA (inst) rel error');
class NMDAChannel(SynChannel):
def __init__(self, hp, receptor):
self.hp = hp
self.receptor = receptor
self.rec_code = hp['receptor_types'][receptor]
self.tau_1 = hp['tau_rise_'+receptor]
self.tau_2 = hp['tau_decay_'+receptor]
self.g_peak = hp['g_peak_'+receptor]
self.E_rev = hp['E_rev_'+receptor]
self.S_act = hp['S_act_NMDA']
self.V_act = hp['V_act_NMDA']
self.tau_fast = hp['tau_Mg_fast_NMDA']
self.tau_slow = hp['tau_Mg_slow_NMDA']
self.instantaneous = False
def m_inf(self, V):
return 1. / ( 1. + np.exp(-self.S_act*(V-self.V_act)) )
def dm(self, m, t, V, tau):
return ( self.m_inf(V) - m ) / tau
def g(self, t, V, mf0, ms0):
self.m_fast = si.odeint(self.dm, mf0, t, args=(V, self.tau_fast))
self.m_slow = si.odeint(self.dm, ms0, t, args=(V, self.tau_slow))
a = 0.51 - 0.0028 * V
m_inf = self.m_inf(V)
mfs = self.m_fast.copy()  # .copy(): a [:] slice of an ndarray is a view, and clipping must not mutate the ODE state
mfs[mfs > m_inf] = m_inf
mss = self.m_slow.copy()
mss[mss > m_inf] = m_inf
m = np.squeeze(a * mfs + ( 1 - a ) * mss)
return self.g_peak * m * self.beta(t)
def I(self, t, V):
raise NotImplementedError()
nmda = NMDAChannel(nest.GetDefaults('ht_neuron'), 'NMDA')
nm_n, nm_c = syn_voltage_clamp(nmda, [(50, -70.), (50, -50.), (50, -20.), (50, 0.), (50, -60.)])
plt.subplot(1, 2, 1);
plt.plot(nm_n.times, nm_n.g_NMDA, label='NEST');
plt.plot(nm_c.times, nm_c.g_NMDA, label='Control');
plt.xlabel('Time [ms]');
plt.ylabel('g_NMDA');
plt.title('NMDA Channel');
plt.subplot(1, 2, 2);
plt.plot(nm_n.times, (nm_n.g_NMDA-nm_c.g_NMDA)/nm_c.g_NMDA);
plt.xlabel('Time [ms]');
plt.ylabel('Rel error');
plt.title('NMDA rel error');
nest.ResetKernel()
import math  # used below in the expected-weight recursion
sp = nest.GetDefaults('ht_synapse')
P0 = sp['P']
dP = sp['delta_P']
tP = sp['tau_P']
spike_times = [10., 12., 20., 20.5, 100., 200., 1000.]
expected = [(0., P0, P0)]
for idx, t in enumerate(spike_times):
tlast, Psend, Ppost = expected[idx]
Psend = 1 - (1-Ppost)*math.exp(-(t-tlast)/tP)
expected.append((t, Psend, (1-dP)*Psend))
expected_weights = list(zip(*expected[1:]))[1]
sg = nest.Create('spike_generator', params={'spike_times': spike_times})
n = nest.Create('parrot_neuron', 2)
wr = nest.Create('weight_recorder')
nest.SetDefaults('ht_synapse', {'weight_recorder': wr, 'weight': 1.0})
nest.Connect(sg, n[:1])
nest.Connect(n[:1], n[1:], syn_spec='ht_synapse')
nest.Simulate(1200)
rec_weights = wr.get('events', 'weights')
print('Recorded weights:', rec_weights)
print('Expected weights:', expected_weights)
print('Difference :', np.array(rec_weights) - np.array(expected_weights))
nest.ResetKernel()
nrn = nest.Create('ht_neuron')
ppg = nest.Create('pulsepacket_generator', n=4,
params={'pulse_times': [700., 1700., 2700., 3700.],
'activity': 700, 'sdev': 50.})
pr = nest.Create('parrot_neuron', n=4)
mm = nest.Create('multimeter',
params={'interval': 0.1,
'record_from': ['V_m', 'theta',
'g_AMPA', 'g_NMDA',
'g_GABA_A', 'g_GABA_B',
'I_NaP', 'I_KNa', 'I_T', 'I_h']})
weights = {'AMPA': 25., 'NMDA': 20., 'GABA_A': 10., 'GABA_B': 1.}
receptors = nest.GetDefaults('ht_neuron')['receptor_types']
nest.Connect(ppg, pr, 'one_to_one')
for p, (rec_name, rec_wgt) in zip(pr, weights.items()):
nest.Connect(p, nrn, syn_spec={'synapse_model': 'ht_synapse',
'receptor_type': receptors[rec_name],
'weight': rec_wgt})
nest.Connect(mm, nrn)
nest.Simulate(5000)
data = nest.GetStatus(mm)[0]['events']
t = data['times']
def texify_name(name):
return r'${}_{{\mathrm{{{}}}}}$'.format(*name.split('_'))
fig = plt.figure(figsize=(12,10))
Vax = fig.add_subplot(311)
Vax.plot(t, data['V_m'], 'k', lw=1, label=r'$V_m$')
Vax.plot(t, data['theta'], 'r', alpha=0.5, lw=1, label=r'$\Theta$')
Vax.set_ylabel('Potential [mV]')
Vax.legend(fontsize='small')
Vax.set_title('ht_neuron driven by sinusoidal Poisson processes')
Iax = fig.add_subplot(312)
for iname, color in (('I_h', 'blue'), ('I_KNa', 'green'),
('I_NaP', 'red'), ('I_T', 'cyan')):
Iax.plot(t, data[iname], color=color, lw=1, label=texify_name(iname))
#Iax.set_ylim(-60, 60)
Iax.legend(fontsize='small')
Iax.set_ylabel('Current [mV]')
Gax = fig.add_subplot(313)
for gname, sgn, color in (('g_AMPA', 1, 'green'), ('g_GABA_A', -1, 'red'),
('g_GABA_B', -1, 'cyan'), ('g_NMDA', 1, 'magenta')):
Gax.plot(t, sgn*data[gname], lw=1, label=texify_name(gname), color=color)
#Gax.set_ylim(-150, 150)
Gax.legend(fontsize='small')
Gax.set_ylabel('Conductance')
Gax.set_xlabel('Time [ms]');
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Neuron Model
Step2: Agreement is excellent.
Step3: Agreement is as good as possible
Step5: ISIs are as predicted
Step6: I_h channel
Step7: The time constant is extremely long, up to 1s, for relevant voltages where $I_h$ is perceptible. We thus need long test runs.
Step8: Agreement is very good
Step9: Time constants here are much shorter than for I_h
Step10: Here too, the results are in good agreement and the error appears acceptable.
Step11: Perfect agreement
The step structure appears because $V$ changes only at discrete intervals.
Step12: Properties of I_DK
Step13: Note that the current in steady state is proportional to $-\,m_\infty(D_\infty(V))\,(V - E_{\mathrm{rev,KNa}})$, which is the curve plotted here.
Step15: Looks very fine.
Step16: AMPA, GABA_A, GABA_B channels
Step17: Looks quite good, but the error is maybe a bit larger than one would hope.
Step18: Looks good for all
Step19: Looks good
Step20: Looks fine, too.
Step21: Perfect agreement, synapse model looks fine.
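The conductance time course checked in the synaptic steps above follows the usual difference-of-exponentials ("beta") shape, normalised to peak at 1. A minimal stand-alone NumPy sketch of that kernel (mirroring the `t_peak`/`beta` methods of the control classes, not NEST's internal implementation):

```python
import numpy as np

def beta_conductance(t, tau_1, tau_2):
    # Difference-of-exponentials ("beta") kernel, normalised so that the
    # peak value is 1 at t_peak = tau_1*tau_2/(tau_2 - tau_1) * ln(tau_2/tau_1).
    t_peak = tau_1 * tau_2 / (tau_2 - tau_1) * np.log(tau_2 / tau_1)
    norm = np.exp(-t_peak / tau_1) - np.exp(-t_peak / tau_2)
    val = (np.exp(-t / tau_1) - np.exp(-t / tau_2)) / norm
    return np.where(t < 0, 0.0, val)  # no conductance before the spike arrives
```

Multiplying by the peak conductance (and, for NMDA, by the Mg-unblock factor) gives the control conductances compared against NEST above.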
|
9,087
|
<ASSISTANT_TASK:>
Python Code:
!pip install -I "phoebe>=2.2,<2.3"
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
b['ecc'] = 0.2
b['dperdt'] = 2.0 * u.deg/u.d
b.add_dataset('lc', times=np.linspace(0,1,101), dataset='lc01')
b.add_dataset('lc', times=np.linspace(4,5,101), dataset='lc02')
b.add_dataset('orb', compute_times=np.linspace(0,5,401), dataset='orb01')
b.run_compute(irrad_method='none')
afig, mplfig = b['orb01@model'].plot(y='ws', time=[0,1,2,3,4,5], show=True)
afig, mplfig = b['lc01@model'].plot()
afig, mplfig = b['lc02@model'].plot(time=[0,1,4,5], show=True)
afig, mplfig = b['lc01@model'].plot(x='phases')
afig, mplfig = b['lc02@model'].plot(x='phases', show=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
Step2: In order for apsidal motion to be apparent, we need an eccentric system that is precessing.
Step3: Let's set a very noticeable rate of precession.
Step4: We'll add lc and orb datasets to see how the apsidal motion affects each. We'll need to sample over several orbits of the binary (which has a period of 3 days, by default).
Step5: Influence on Orbits (positions)
Step6: Influence on Light Curves (fluxes)
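Since dperdt is constant, the expected orientation of the orbit at any time can be estimated by hand; the hypothetical helper below (illustrative only, not part of PHOEBE's API) sketches the linear advance of the argument of periastron:

```python
def per0_at(t_days, per0_deg=0.0, dperdt_deg_per_day=2.0):
    # Linear apsidal advance: per0(t) = per0(0) + dperdt * t,
    # with t in days and dperdt in deg/day, wrapped to [0, 360).
    return (per0_deg + dperdt_deg_per_day * t_days) % 360.0
```

Between the lc01 window (around t = 0 d) and the lc02 window (around t = 4 d), periastron has therefore advanced by roughly 8 degrees, which is what shifts the eclipses when the two light curves are phased together.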
|
9,088
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
%matplotlib inline
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
# import all shogun classes
from shogun import *
import shogun as sg
#number of data points.
n=100
#generate a random 2d line(y1 = mx1 + c)
m = random.randint(1,10)
c = random.randint(1,10)
x1 = random.random_integers(-20,20,n)
y1=m*x1+c
#generate the noise.
noise=random.random_sample([n]) * random.random_integers(-35,35,n)
#make the noise orthogonal to the line y=mx+c and add it.
x=x1 + noise*m/sqrt(1+square(m))
y=y1 + noise/sqrt(1+square(m))
twoD_obsmatrix=array([x,y])
#to visualise the data we must plot it.
rcParams['figure.figsize'] = 7, 7
figure,axis=subplots(1,1)
xlim(-50,50)
ylim(-50,50)
axis.plot(twoD_obsmatrix[0,:],twoD_obsmatrix[1,:],'o',color='green',markersize=6)
#the line from which we generated the data is plotted in red
axis.plot(x1[:],y1[:],linewidth=0.3,color='red')
title('One-Dimensional sub-space with noise')
xlabel("x axis")
_=ylabel("y axis")
#convert the observation matrix into dense feature matrix.
train_features = features(twoD_obsmatrix)
#PCA(EVD) is choosen since N=100 and D=2 (N>D).
#However we can also use PCA(AUTO) as it will automagically choose the appropriate method.
preprocessor = sg.transformer('PCA', method='EVD')
#since we are projecting down the 2d data, the target dim is 1. But here the exhaustive method is detailed by
#setting the target dimension to 2 to visualize both the eigen vectors.
#However, in future examples we will get rid of this step by implementing it directly.
preprocessor.put('target_dim', 2)
#Centralise the data by subtracting its mean from it.
preprocessor.fit(train_features)
#get the mean for the respective dimensions.
mean_datapoints=preprocessor.get('mean_vector')
mean_x=mean_datapoints[0]
mean_y=mean_datapoints[1]
#Get the eigenvectors(We will get two of these since we set the target to 2).
E = preprocessor.get('transformation_matrix')
#Get all the eigenvalues returned by PCA.
eig_value=preprocessor.get('eigenvalues_vector')
e1 = E[:,0]
e2 = E[:,1]
eig_value1 = eig_value[0]
eig_value2 = eig_value[1]
#find out the M eigenvectors corresponding to top M number of eigenvalues and store it in E
#Here M=1
#slope of e1 & e2
m1=e1[1]/e1[0]
m2=e2[1]/e2[0]
#generate the two lines
x1=range(-50,50)
x2=x1
y1=multiply(m1,x1)
y2=multiply(m2,x2)
#plot the data along with those two eigenvectors
figure, axis = subplots(1,1)
xlim(-50, 50)
ylim(-50, 50)
axis.plot(x[:], y[:],'o',color='green', markersize=5, label="green")
axis.plot(x1[:], y1[:], linewidth=0.7, color='black')
axis.plot(x2[:], y2[:], linewidth=0.7, color='blue')
p1 = Rectangle((0, 0), 1, 1, fc="black")
p2 = Rectangle((0, 0), 1, 1, fc="blue")
legend([p1,p2],["1st eigenvector","2nd eigenvector"],loc='center left', bbox_to_anchor=(1, 0.5))
title('Eigenvectors selection')
xlabel("x axis")
_=ylabel("y axis")
#The eigenvector corresponding to higher eigenvalue(i.e eig_value2) is choosen (i.e e2).
#E is the feature vector.
E=e2
#transform all 2-dimensional feature matrices to target-dimensional approximations.
yn=preprocessor.transform(train_features).get('feature_matrix')
#Since, here we are manually trying to find the eigenvector corresponding to the top eigenvalue.
#The 2nd row of yn is choosen as it corresponds to the required eigenvector e2.
yn1=yn[1,:]
x_new=(yn1 * E[0]) + tile(mean_x,[n,1]).T[0]
y_new=(yn1 * E[1]) + tile(mean_y,[n,1]).T[0]
figure, axis = subplots(1,1)
xlim(-50, 50)
ylim(-50, 50)
axis.plot(x[:], y[:],'o',color='green', markersize=5, label="green")
axis.plot(x_new, y_new, 'o', color='blue', markersize=5, label="red")
title('PCA Projection of 2D data into 1D subspace')
xlabel("x axis")
ylabel("y axis")
#add some legend for information
p1 = Rectangle((0, 0), 1, 1, fc="r")
p2 = Rectangle((0, 0), 1, 1, fc="g")
p3 = Rectangle((0, 0), 1, 1, fc="b")
legend([p1,p2,p3],["normal projection","2d data","1d projection"],loc='center left', bbox_to_anchor=(1, 0.5))
#plot the projections in red:
for i in range(n):
axis.plot([x[i],x_new[i]],[y[i],y_new[i]] , color='red')
rcParams['figure.figsize'] = 8,8
#number of points
n=100
#generate the data
a=random.randint(1,20)
b=random.randint(1,20)
c=random.randint(1,20)
d=random.randint(1,20)
x1=random.random_integers(-20,20,n)
y1=random.random_integers(-20,20,n)
z1=-(a*x1+b*y1+d)/c
#generate the noise
noise=random.random_sample([n])*random.random_integers(-30,30,n)
#the normal unit vector is [a,b,c]/magnitude
magnitude=sqrt(square(a)+square(b)+square(c))
normal_vec=array([a,b,c]/magnitude)
#add the noise orthogonally
x=x1+noise*normal_vec[0]
y=y1+noise*normal_vec[1]
z=z1+noise*normal_vec[2]
threeD_obsmatrix=array([x,y,z])
#to visualize the data, we must plot it.
from mpl_toolkits.mplot3d import Axes3D
fig = pyplot.figure()
ax=fig.add_subplot(111, projection='3d')
#plot the noisy data generated by distorting a plane
ax.scatter(x, y, z,marker='o', color='g')
ax.set_xlabel('x label')
ax.set_ylabel('y label')
ax.set_zlabel('z label')
legend([p2],["3d data"],loc='center left', bbox_to_anchor=(1, 0.5))
title('Two dimensional subspace with noise')
xx, yy = meshgrid(range(-30,30), range(-30,30))
zz=-(a * xx + b * yy + d) / c
#convert the observation matrix into dense feature matrix.
train_features = features(threeD_obsmatrix)
#PCA(EVD) is choosen since N=100 and D=3 (N>D).
#However we can also use PCA(AUTO) as it will automagically choose the appropriate method.
preprocessor = sg.transformer('PCA', method='EVD')
#If we set the target dimension to 2, Shogun would automagically preserve the required 2 eigenvectors(out of 3) according to their
#eigenvalues.
preprocessor.put('target_dim', 2)
preprocessor.fit(train_features)
#get the mean for the respective dimensions.
mean_datapoints=preprocessor.get('mean_vector')
mean_x=mean_datapoints[0]
mean_y=mean_datapoints[1]
mean_z=mean_datapoints[2]
#get the required eigenvectors corresponding to top 2 eigenvalues.
E = preprocessor.get('transformation_matrix')
#This can be performed by shogun's PCA preprocessor as follows:
yn=preprocessor.transform(train_features).get('feature_matrix')
new_data=dot(E,yn)
x_new=new_data[0,:]+tile(mean_x,[n,1]).T[0]
y_new=new_data[1,:]+tile(mean_y,[n,1]).T[0]
z_new=new_data[2,:]+tile(mean_z,[n,1]).T[0]
#all the above points lie on the same plane. To make it more clear we will plot the projection also.
fig=pyplot.figure()
ax=fig.add_subplot(111, projection='3d')
ax.scatter(x, y, z,marker='o', color='g')
ax.set_xlabel('x label')
ax.set_ylabel('y label')
ax.set_zlabel('z label')
legend([p1,p2,p3],["normal projection","3d data","2d projection"],loc='center left', bbox_to_anchor=(1, 0.5))
title('PCA Projection of 3D data into 2D subspace')
for i in range(100):
ax.scatter(x_new[i], y_new[i], z_new[i],marker='o', color='b')
ax.plot([x[i],x_new[i]],[y[i],y_new[i]],[z[i],z_new[i]],color='r')
rcParams['figure.figsize'] = 10, 10
import os
def get_imlist(path):
"""Return a list of filenames for all pgm images in a directory."""
return [os.path.join(path,f) for f in os.listdir(path) if f.endswith('.pgm')]
#set path of the training images
path_train=os.path.join(SHOGUN_DATA_DIR, 'att_dataset/training/')
#set no. of rows that the images will be resized.
k1=100
#set no. of columns that the images will be resized.
k2=100
filenames = get_imlist(path_train)
filenames = array(filenames)
#n is total number of images that has to be analysed.
n=len(filenames)
# we will be using this often to visualize the images out there.
def showfig(image):
imgplot=imshow(image, cmap='gray')
imgplot.axes.get_xaxis().set_visible(False)
imgplot.axes.get_yaxis().set_visible(False)
from PIL import Image
from scipy import misc
# to get a hang of the data, lets see some part of the dataset images.
fig = pyplot.figure()
title('The Training Dataset')
for i in range(49):
fig.add_subplot(7,7,i+1)
train_img=array(Image.open(filenames[i]).convert('L'))
train_img=misc.imresize(train_img, [k1,k2])
showfig(train_img)
#To form the observation matrix obs_matrix.
#read the 1st image.
train_img = array(Image.open(filenames[0]).convert('L'))
#resize it to k1 rows and k2 columns
train_img=misc.imresize(train_img, [k1,k2])
#since features accepts only data of float64 datatype, we do a type conversion
train_img=array(train_img, dtype='double')
#flatten it to make it a row vector.
train_img=train_img.flatten()
# repeat the above for all images and stack all those vectors together in a matrix
for i in range(1,n):
temp=array(Image.open(filenames[i]).convert('L'))
temp=misc.imresize(temp, [k1,k2])
temp=array(temp, dtype='double')
temp=temp.flatten()
train_img=vstack([train_img,temp])
#form the observation matrix
obs_matrix=train_img.T
train_features = features(obs_matrix)
preprocessor= sg.transformer('PCA', method='AUTO')
preprocessor.put('target_dim', 100)
preprocessor.fit(train_features)
mean=preprocessor.get('mean_vector')
#get the required eigenvectors corresponding to top 100 eigenvalues
E = preprocessor.get('transformation_matrix')
#lets see how these eigenfaces/eigenvectors look like:
fig1 = pyplot.figure()
title('Top 20 Eigenfaces')
for i in range(20):
a = fig1.add_subplot(5,4,i+1)
eigen_faces=E[:,i].reshape([k1,k2])
showfig(eigen_faces)
#we perform the required dot product.
yn=preprocessor.transform(train_features).get('feature_matrix')
re = tile(mean, [n, 1]).T + dot(E, yn)  # add each pixel's own mean back to every reconstructed column
#lets plot the reconstructed images.
fig2 = pyplot.figure()
title('Reconstructed Images from 100 eigenfaces')
for i in range(1,50):
re1 = re[:,i].reshape([k1,k2])
fig2.add_subplot(7,7,i)
showfig(re1)
#set path of the training images
path_train=os.path.join(SHOGUN_DATA_DIR, 'att_dataset/testing/')
test_files=get_imlist(path_train)
test_img=array(Image.open(test_files[0]).convert('L'))
rcParams.update({'figure.figsize': (3, 3)})
#we plot the test image , for which we have to identify a good match from the training images we already have
fig = pyplot.figure()
title('The Test Image')
showfig(test_img)
#We flatten out our test image just the way we have done for the other images
test_img=misc.imresize(test_img, [k1,k2])
test_img=array(test_img, dtype='double')
test_img=test_img.flatten()
#We centralise the test image by subtracting the mean from it.
test_f=test_img-mean
#We have already projected our training images into pca subspace as yn.
train_proj = yn
#Projecting our test image into pca subspace
test_proj = dot(E.T, test_f)
#To get Eucledian Distance as the distance measure use EuclideanDistance.
workfeat = features(mat(train_proj))
testfeat = features(mat(test_proj).T)
RaRb = sg.distance('EuclideanDistance')
RaRb.init(testfeat, workfeat)
#The distance between one test image w.r.t all the training is stacked in matrix d.
d=empty([n,1])
for i in range(n):
d[i]= RaRb.distance(0,i)
#The one having the minimum distance is found out
min_distance_index = d.argmin()
iden=array(Image.open(filenames[min_distance_index]))
title('Identified Image')
showfig(iden)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Some Formal Background (Skip if you just want code examples)
Step2: Step 2
Step3: Step 3
Step4: Step 5
Step5: In the above figure, the blue line is a good fit of the data. It shows the most significant relationship between the data dimensions.
Step6: Step 6
Step7: Steps 5 and 6 can be applied directly with Shogun's PCA preprocessor (see the next example); they have been carried out manually here to show the exhaustive nature of Principal Component Analysis.
Step 7
Step8: The new data is plotted below
Step9: PCA on 3D data.
Step1
Step10: Step 2
Step11: Step 3 & Step 4
Step12: Steps 5
Step13: Step 7
Step15: PCA Performance
Step16: Let's have a look at the data
Step17: Represent every image $I_i$ as a vector $\Gamma_i$
Step18: Step 2
Step19: Step 3 & Step 4
Step20: These 20 eigenfaces alone are not sufficient for a good image reconstruction. Keeping more eigenvectors gives us more flexibility in the number of faces we can reconstruct: although the extra vectors carry low variance, they still represent directions of change, and an external image that is not in our database may need them to get even reasonably close. At the same time, keeping excessive eigenvectors adds little or no variance while slowing down the process.
Step 5
Step21: Step 7
Step22: Recognition part.
Step23: Here we project both the training images and the test image onto the PCA subspace.
Step24: Shogun's way of doing things
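As a cross-check of the pipeline described above (centre the data, eigendecompose the covariance, keep the top eigenvectors, project, reconstruct), here is a condensed plain-NumPy sketch of the same math — illustrative only, not Shogun's implementation:

```python
import numpy as np

def pca_project(X, target_dim):
    # X: (n_samples, n_features) observation matrix, one sample per row.
    mean = X.mean(axis=0)
    Xc = X - mean                          # centre the data
    cov = np.cov(Xc, rowvar=False)         # feature covariance matrix
    eigval, eigvec = np.linalg.eigh(cov)   # eigh: symmetric input, ascending eigenvalues
    order = np.argsort(eigval)[::-1]       # re-sort eigenvalues, descending
    E = eigvec[:, order[:target_dim]]      # top-M eigenvectors (the "feature vector")
    Y = Xc @ E                             # projection into the subspace
    X_rec = Y @ E.T + mean                 # reconstruction back in data space
    return Y, X_rec, E
```

`eigh` is used because the covariance matrix is symmetric; with `target_dim` equal to the number of features the projection is lossless, and dropping low-variance directions gives the dimensionality reduction used throughout this notebook.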
|
9,089
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np, matplotlib.pyplot as plt, matplotlib.gridspec as gridspec
from mpl_toolkits.basemap import Basemap
from ipywidgets import interact, interactive, fixed
import ipywidgets as widgets
catalogues = ['SDSS_DR10_galaxy_local', 'table1', 'table2', 'table3']
[ra, dec, z], [ra_isol, dec_isol, z_isol], [ra_pair, dec_pair, z_pair], [ra_trip, dec_trip, z_trip] = [
    np.loadtxt(name + '.txt', usecols=(0, 1, 2), unpack=True) for name in catalogues]
decstart, decrange, alpha0, raCen, raDelta = 0., 5., .2, 0., 180.
ra_tot, dec_tot, z_tot = [ra, ra_isol, ra_pair, ra_trip], [dec, dec_isol, dec_pair, dec_trip], [z, z_isol, z_pair, z_trip]
rad_tot = [np.radians(raval) for raval in ra_tot]
def plotSkymap(SDSS=True, Isolated=True, Pairs=False, Triplets=False, Opacity=alpha0, DecRange=decrange, Declination=decstart):
plt.figure(figsize=(15, 15))
gs = gridspec.GridSpec(1, 2)
ax1, ax2 = [plt.subplot(gs[i], polar=val, projection=pj) for i, val, pj in zip(range(0, 2), [True, False], [None, 'mollweide'])]
[ax.grid(True) for ax in [ax1, ax2]]
cond_dec = [((decval > Declination) & (decval < Declination + DecRange)) for decval in dec_tot]
xyplt = [ax1.plot(rad_tot[i][cond_dec[i]], z_tot[i][cond_dec[i]], krgb, ms=mval, alpha=alpval, visible=visi)[0]
for i, krgb, mval, alpval, visi in zip(range(0, 4), ['k.', 'ro', 'go', 'bo'], [1, 4, 4, 4],
[Opacity, .7, .7, .7], [SDSS, Isolated, Pairs, Triplets])]
x, y = [np.radians(val) for val in [-1*(ra - 180), dec]]
H, xedges, yedges = np.histogram2d(x.T, y.T, bins=50)
extent, levels = [xedges[0], xedges[-1], yedges[0], yedges[-1]], [100, 10000]
ax2.contourf(H.T, levels, origin='lower', colors='b', lw=1, extent=extent, alpha=.3)
[plt.setp(gtval, fontsize=fontval, alpha=.6) for gtval, fontval in zip([ax2.get_xticklabels(), ax2.get_yticklabels()], [8, 12])]
x_rect, y_rect = [np.radians(val) for val in [raCen + np.array([-1, -1, 1, 1, -1])*raDelta, Declination + np.array([0, 1, 1, 0, 0])*DecRange]]
ax2.fill(x_rect, y_rect, 'r', lw=0, alpha=.5)
plt.show()
interact(plotSkymap, SDSS=True, Isolated=True, Pairs=False, Triplets=False,
Declination=widgets.FloatSlider(min=-20.0, max=90.0, step=2.0, value=decstart),
DecRange=widgets.FloatSlider(min=0.0, max=90.0, step=1.0, value=decrange),
Opacity=widgets.FloatSlider(min=0.0, max=1.0, step=0.1, value=alpha0))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Run the following code to load the catalogues of galaxies that will be represented. In this case we represent the LSS by all the galaxies in the local Universe (with redshift less than 0.1) from the SDSS. We will visualize where are located isolated galaxies, isolated pairs, and isolated triplets with respect to SDSS galaxies. We also set the default values of the visualization at declination 0 degrees, with a declination range 5., for the wedge diagram representation. This ranges will be also shown by a red line in the complemented Mollweide projection. We select a default value of 0.2 for the transparency.
Step2: The following code creates the representation.
Step3: Run the following code to interact with the representation.
|
9,090
|
<ASSISTANT_TASK:>
Python Code:
%%cython
cpdef noop():
pass
%load_ext Cython
%%cython
cimport numpy
cpdef cysum(numpy.ndarray[double] A):
"""Compute the sum of an array"""
cdef double a=0
for i in range(A.shape[0]):
a += A[i]
return a
def pysum(A):
"""Compute the sum of an array"""
a = 0
for i in range(A.shape[0]):
a += A[i]
return a
import numpy as np
for sz in (100, 1000, 10000):
A = np.random.random(sz)
print("Python %i" % sz, end=' ')
%timeit pysum(A)
print("np.sum %i" % sz, end=' ')
%timeit A.sum()
print("Cython %i" % sz, end=' ')
%timeit cysum(A)
import Cython
Cython.load_ipython_extension??
from Cython.Build.IpythonMagic import CythonMagics
CythonMagics??
%pycat soln/mymagics.py
%install_ext soln/mymagics.py
%load_ext mymagics
import time
%tic
time.sleep(0.1)
%toc
%nbrun _Sample
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Customizing IPython - Extensions
Step3: Let's see what Cython's load_ipython_extension function looks like
Step4: Our own extension
Step5: %install_ext is a function that can take any file path or URL, and puts the target into IPYTHONDIR/extensions
|
9,091
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
d = 5e-9 # particle radius in meters
eta = 1.0e-3 # viscosity of water in SI units (Pascal-seconds) at 293 K
kB = 1.38e-23 # Boltzmann constant
T = 293 # Temperature in degrees Kelvin
D = kB*T/(3*np.pi*eta*d) # [m^2 / s]
D
Du = D*(1e6)**2/(1e3) # [um^2 / ms]
Du
Du = D*(1e9)**2/(1e6) # [nm^2 / us]
Du
S_spot = 0.8e-6
N = 3
tau_spot = 1e-3
D = S_spot**2 / (2*N*tau_spot) # [m^2 / s]
D
Du = D*(1e6)**2/(1e3) # [um^2 / ms]
Du
time = 10. # seconds
sigma = np.sqrt(2*D*3*time)
print('Displacement (std_dev): %.2f um' % (sigma*1e6))
space = 1e-6 # m
time = 1.*space**2/(2*D*3)
print('Time for %.1f um displacement: %.1g s' % (space*1e6, time))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Or expressing $D$ in $\textrm{nm}^2 /\mu s$
Step2: We can also estimate $D$ experimentally from the knowledge of the PSF and the diffusion time $\tau_{spot}$
Step3: not very different from what we obtained before from the viscosity model.
Step4: How long do we need to diffuse a distance X?
|
9,092
|
<ASSISTANT_TASK:>
Python Code:
%%capture
%matplotlib inline
import numpy as np
import sympy as sp
import matplotlib.pyplot as plt
# To get equations the look like, well, equations, use the following.
from sympy.interactive import printing
printing.init_printing(use_latex=True)
from IPython.display import display
# Tools for manipulating quaternions.
import Q_tools as qt;
TimeReversal = np.array([[-1,0,0,0],[0,1,0,0],[0,0,1,0],[0,0,0,1]])
display(TimeReversal)
t, x, y, z = sp.symbols("t x y z")
Vector = np.array([t, x, y, z])
display(Vector)
display(Vector * TimeReversal)
display(Vector * TimeReversal * TimeReversal)
display(Vector * TimeReversal * TimeReversal * TimeReversal)
P = qt.QH([t, x, y, z])
Tr = P.flip_signs().invert().product(P.conj())
display(Tr.t)
display(Tr.x)
display(Tr.y)
display(Tr.z)
PFlip = P.product(Tr)
display(sp.simplify(PFlip.t))
display(sp.simplify(PFlip.x))
display(sp.simplify(PFlip.y))
display(sp.simplify(PFlip.z))
PFlipFlip = P.product(Tr).product(Tr)
display(sp.simplify(PFlipFlip.t))
display(sp.simplify(PFlipFlip.x))
display(sp.simplify(PFlipFlip.y))
display(sp.simplify(PFlipFlip.z))
Classical_subs = {t:1, x:0.0000000002, y:0.00000000012, z:-0.0000000003}
display(sp.simplify(PFlip.t.subs(Classical_subs)))
display(sp.simplify(PFlip.x.subs(Classical_subs)))
display(sp.simplify(PFlip.y.subs(Classical_subs)))
display(sp.simplify(PFlip.z.subs(Classical_subs)))
print()
display(sp.simplify(PFlipFlip.t.subs(Classical_subs)))
display(sp.simplify(PFlipFlip.x.subs(Classical_subs)))
display(sp.simplify(PFlipFlip.y.subs(Classical_subs)))
display(sp.simplify(PFlipFlip.z.subs(Classical_subs)))
def reverse_n_times(P1, T1, sub_1, n):
"""Given a symbolic expression P, applies symbolic space-time reversal using a dictionary of values n times."""
P1_t, P1_x, P1_y, P1_z = P1.t.subs(sub_1), P1.x.subs(sub_1), P1.y.subs(sub_1), P1.z.subs(sub_1)
P_result = qt.QH([P1_t, P1_x, P1_y, P1_z])
T1_t, T1_x, T1_y, T1_z = T1.t.subs(sub_1), T1.x.subs(sub_1), T1.y.subs(sub_1), T1.z.subs(sub_1)
T_sub = qt.QH([T1_t, T1_x, T1_y, T1_z])
for i in range(n):
P_result = P_result.product(T_sub)
return P_result
print(reverse_n_times(P, Tr, Classical_subs, 100))
print(reverse_n_times(P, Tr, Classical_subs, 101))
print(reverse_n_times(P, Tr, Classical_subs, 1000))
print(reverse_n_times(P, Tr, Classical_subs, 1001))
print(reverse_n_times(P, Tr, Classical_subs, 10000))
print(reverse_n_times(P, Tr, Classical_subs, 100000))
print(reverse_n_times(P, Tr, Classical_subs, 1000000))
print(reverse_n_times(P, Tr, Classical_subs, 10000000))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The member of the Lorentz group that reverses time is remarkably simple
Step2: Create a 4-vector.
Step3: Do the time reversal a bunch of times.
Step4: It comes as no surprise that if the time-reversal member of the Lorentz group is done an even number of times, then nothing changes, but an odd number of times reverses time exactly.
Step5: Does this considerably more complicated expression do the same job as the Lorentz group? Of course it should, but let's show that this is the case
Step6: Apply Tr twice to see if one gets back to the start point.
Step7: This may not look "right" to the eye, so test it. Use "classical" values, meaning time $t >>> x, y, z$.
Step9: The value for t returned to unity as it should, but the same cannot be said for the spatial terms. This is due to the cross product. See what happens if one does this many, many times. Define a function to do the work.
|
9,093
|
<ASSISTANT_TASK:>
Python Code:
# The Python Spark (pyspark) libraries include functions designed to be run on columns of data
# stored in Spark data frames. They need to be imported in order to use them. Here we
# are going to use
from pyspark.sql.functions import year
# The matplotlib package is used for graphing. The next line tells Jupyter that when a
# graphing function is used, it should draw the graph here inline in the notebook.
import matplotlib.pyplot as plt
%matplotlib inline
df = sqlContext.read.load("/guoda/data/idigbio-20190612T171757.parquet")
df.count()
df.printSchema()
df.head(1)
# The outer "(" and ")" surround the chain of Python method calls to allow them to
# span lines. This is a common convention and makes the data processing pipeline
# easy to read and modify.
#
# The persist() function tells Spark to store the data frame in memory so it can be
# accessed repeatedly without having to be reloaded.
year_summary = (df
.groupBy(year("datecollected").cast("integer").alias("yearcollected"))
.count()
.orderBy("yearcollected")
.persist()
)
year_summary.count()
year_summary.printSchema()
year_summary.head(10)
year_summary.describe().show()
pandas_year_summary = (year_summary
.filter(year_summary.yearcollected >= 1817)
.filter(year_summary.yearcollected <= 2017)
.orderBy("yearcollected")
.toPandas()
)
pandas_year_summary.head()
plt.bar(pandas_year_summary["yearcollected"],
pandas_year_summary["count"],
edgecolor='none', width=1.0
)
plt.title("Specimens in iDigBio by Collection Year and Continent")
plt.ylabel("Number of Specimen Records")
plt.xlabel("Year")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Loading the data set
Step2: Examining the data
Step3: Next we can look at the first row of data. The (1) after head tells Python how many rows to print. Since this is all iDigBio data, the rows are pretty big so we'll only show one.
Step4: Summarizing the data
Step5: Let's take a look at this new data frame using some of the commands from above
Step6: Now that our data is both much smaller and mostly numeric, we can use the describe() method to quickly make summary statistics. This method returns a data frame so we have to use show() to actually print the whole contents of the data frame.
Step7: Spark data frames, Pandas data frames, and filtering
Step8: (Notice that the display of the first rows looks different from when we ran head() on the Spark data frame? That's because we're looking at the display generated by the Pandas library instead of the Spark library.)
|
9,094
|
<ASSISTANT_TASK:>
Python Code:
# Copyright 2018 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
%%capture
!pip3 install seaborn
#@title Load the Universal Sentence Encoder's TF Hub module
from absl import logging
import tensorflow as tf
import tensorflow_hub as hub
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
import re
import seaborn as sns
module_url = "https://tfhub.dev/google/universal-sentence-encoder/4" #@param ["https://tfhub.dev/google/universal-sentence-encoder/4", "https://tfhub.dev/google/universal-sentence-encoder-large/5"]
model = hub.load(module_url)
print ("module %s loaded" % module_url)
def embed(input):
return model(input)
#@title Compute a representation for each message, showing various lengths supported.
word = "Elephant"
sentence = "I am a sentence for which I would like to get its embedding."
paragraph = (
"Universal Sentence Encoder embeddings also support short paragraphs. "
"There is no hard limit on how long the paragraph is. Roughly, the longer "
"the more 'diluted' the embedding will be.")
messages = [word, sentence, paragraph]
# Reduce logging output.
logging.set_verbosity(logging.ERROR)
message_embeddings = embed(messages)
for i, message_embedding in enumerate(np.array(message_embeddings).tolist()):
print("Message: {}".format(messages[i]))
print("Embedding size: {}".format(len(message_embedding)))
message_embedding_snippet = ", ".join(
(str(x) for x in message_embedding[:3]))
print("Embedding: [{}, ...]\n".format(message_embedding_snippet))
def plot_similarity(labels, features, rotation):
corr = np.inner(features, features)
sns.set(font_scale=1.2)
g = sns.heatmap(
corr,
xticklabels=labels,
yticklabels=labels,
vmin=0,
vmax=1,
cmap="YlOrRd")
g.set_xticklabels(labels, rotation=rotation)
g.set_title("Semantic Textual Similarity")
def run_and_plot(messages_):
message_embeddings_ = embed(messages_)
plot_similarity(messages_, message_embeddings_, 90)
messages = [
# Smartphones
"I like my phone",
"My phone is not good.",
"Your cellphone looks great.",
# Weather
"Will it snow tomorrow?",
"Recently a lot of hurricanes have hit the US",
"Global warming is real",
# Food and health
"An apple a day, keeps the doctors away",
"Eating strawberries is healthy",
"Is paleo better than keto?",
# Asking about age
"How old are you?",
"what is your age?",
]
run_and_plot(messages)
import pandas
import scipy
import math
import csv
sts_dataset = tf.keras.utils.get_file(
fname="Stsbenchmark.tar.gz",
origin="http://ixa2.si.ehu.es/stswiki/images/4/48/Stsbenchmark.tar.gz",
extract=True)
sts_dev = pandas.read_table(
os.path.join(os.path.dirname(sts_dataset), "stsbenchmark", "sts-dev.csv"),
error_bad_lines=False,
skip_blank_lines=True,
usecols=[4, 5, 6],
names=["sim", "sent_1", "sent_2"])
sts_test = pandas.read_table(
os.path.join(
os.path.dirname(sts_dataset), "stsbenchmark", "sts-test.csv"),
error_bad_lines=False,
quoting=csv.QUOTE_NONE,
skip_blank_lines=True,
usecols=[4, 5, 6],
names=["sim", "sent_1", "sent_2"])
# cleanup some NaN values in sts_dev
sts_dev = sts_dev[[isinstance(s, str) for s in sts_dev['sent_2']]]
sts_data = sts_dev #@param ["sts_dev", "sts_test"] {type:"raw"}
def run_sts_benchmark(batch):
sts_encode1 = tf.nn.l2_normalize(embed(tf.constant(batch['sent_1'].tolist())), axis=1)
sts_encode2 = tf.nn.l2_normalize(embed(tf.constant(batch['sent_2'].tolist())), axis=1)
cosine_similarities = tf.reduce_sum(tf.multiply(sts_encode1, sts_encode2), axis=1)
clip_cosine_similarities = tf.clip_by_value(cosine_similarities, -1.0, 1.0)
scores = 1.0 - tf.acos(clip_cosine_similarities) / math.pi
# Returns the similarity scores
return scores
dev_scores = sts_data['sim'].tolist()
scores = []
for batch in np.array_split(sts_data, 10):
scores.extend(run_sts_benchmark(batch))
pearson_correlation = scipy.stats.pearsonr(scores, dev_scores)
print('Pearson correlation coefficient = {0}\np-value = {1}'.format(
pearson_correlation[0], pearson_correlation[1]))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Universal Sentence Encoder
Step2: For more details on installing Tensorflow, see https
Step3: Semantic textual similarity task example
Step4: Visualizing similarity
Step5: Evaluation
Step7: Evaluating sentence embeddings
|
9,095
|
<ASSISTANT_TASK:>
Python Code:
import os
import tensorflow.compat.v1 as tf
import pprint
assert 'COLAB_TPU_ADDR' in os.environ, 'Did you forget to switch to TPU?'
tpu_address = 'grpc://' + os.environ['COLAB_TPU_ADDR']
with tf.Session(tpu_address) as sess:
devices = sess.list_devices()
pprint.pprint(devices)
device_is_tpu = [True if 'TPU' in str(x) else False for x in devices]
assert True in device_is_tpu, 'Did you forget to switch to TPU?'
import json
import os
import pprint
import re
import time
import tensorflow.compat.v1 as tf
import tensorflow_gcs_config
# Google Cloud Storage bucket for storing the training dataset.
bucket = '' #@param {type:"string"}
assert bucket, 'Must specify an existing GCS bucket name'
print('Using bucket: {}'.format(bucket))
assert 'COLAB_TPU_ADDR' in os.environ, 'Missing TPU; did you request a TPU in Notebook Settings?'
tpu_address = 'grpc://{}'.format(os.environ['COLAB_TPU_ADDR'])
from google.colab import auth
auth.authenticate_user()
# Upload credentials to TPU.
tf.config.experimental_connect_to_host(tpu_address)
tensorflow_gcs_config.configure_gcs_from_colab_auth()
# Now credentials are set for all future sessions on this TPU.
# Check that imports for the rest of the file work.
import os
import tensorflow as tf
!pip install tensorflow-gan
import tensorflow_gan as tfgan
from tensorflow.keras import layers
import tensorflow_datasets as tfds
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image
# Allow matplotlib images to render immediately.
%matplotlib inline
Params = {
'batch_size' : 32, # Number of image samples used in each training step
'hr_dimension' : 256, # Dimension of a High Resolution (HR) Image
'scale' : 4, # Factor by which Low Resolution (LR) Images will be downscaled.
'data_name': 'div2k/bicubic_x4', # Dataset name - loaded using tfds.
'trunk_size' : 11, # Number of Residual blocks used in Generator,
'init_lr' : 0.00005, # Initial Learning rate for networks.
'ph1_steps' : 10000, # Number of steps required for phase-1 training
'ph2_steps' : 100000, # Number of steps required for phase-2 training
'decay_ph1' : 0.2, # Factor by which learning rates are modified during phase-1 training
'decay_ph2' : 0.5, # Factor by which learning rates are modified during phase-2 training
'model_dir' : 'gs://{}/SavedModels' # Path to save the model after training. (inside the cloud bucket)
.format(bucket),
'ckpt_dir' : '/content/ckpts/', # Path to save the training checkpoints. (outside the cloud bucket)
'lambda' : 0.005, # To balance adversarial loss during phase-2 training.
'eta' : 0.01, # To balance L1 loss during phase-2 training.
'val_steps' : 100 # Number of steps required for validation.
}
dataset_dir = 'gs://{}/{}'.format(bucket, 'datasets')
def input_fn(mode, params):
assert 'batch_size' in params
bs = params['batch_size']
split = 'train' if mode == 'train' else 'validation'
shuffle = True
def scale(image, *args):
hr_size = params['hr_dimension']
scale = params['scale']
hr_image = image
hr_image = tf.image.resize(hr_image, [hr_size, hr_size])
lr_image = tf.image.resize(hr_image, [hr_size//scale, hr_size//scale], method='bicubic')
hr_image = tf.clip_by_value(hr_image, 0, 255)
lr_image = tf.clip_by_value(lr_image, 0, 255)
return lr_image, hr_image
dataset = (tfds.load(params['data_name'], split=split, data_dir=dataset_dir, as_supervised=True)
.map(scale, num_parallel_calls=4)
.cache()
.repeat())
if shuffle:
dataset = dataset.shuffle(
buffer_size=10000, reshuffle_each_iteration=True)
dataset = (dataset.batch(bs, drop_remainder=True)
.prefetch(tf.data.experimental.AUTOTUNE))
return dataset
train_ds = input_fn(mode='train', params=Params)
img_lr, img_hr = next(iter(train_ds))
lr = Image.fromarray(np.array(img_lr)[0].astype(np.uint8))
lr = lr.resize([256, 256])
display(lr)
hr = Image.fromarray(np.array(img_hr)[0].astype(np.uint8))
hr = hr.resize([256, 256])
display(hr)
def _conv_block(input, filters, activation=True):
h = layers.Conv2D(filters, kernel_size=[3,3],
kernel_initializer="he_normal", bias_initializer="zeros",
strides=[1,1], padding='same', use_bias=True)(input)
if activation:
h = layers.LeakyReLU(0.2)(h)
return h
def dense_block(input):
h1 = _conv_block(input, 32)
h1 = layers.Concatenate()([input, h1])
h2 = _conv_block(h1, 32)
h2 = layers.Concatenate()([input, h1, h2])
h3 = _conv_block(h2, 32)
h3 = layers.Concatenate()([input, h1, h2, h3])
h4 = _conv_block(h3, 32)
h4 = layers.Concatenate()([input, h1, h2, h3, h4])
h5 = _conv_block(h4, 32, activation=False)
h5 = layers.Lambda(lambda x: x * 0.2)(h5)
h = layers.Add()([h5, input])
return h
def rrdb(input):
h = dense_block(input)
h = dense_block(h)
h = dense_block(h)
h = layers.Lambda(lambda x:x * 0.2)(h)
out = layers.Add()([h, input])
return out
def upsample(x, filters):
x = layers.Conv2DTranspose(filters, kernel_size=3,
strides=2, padding='same',
use_bias = True)(x)
x = layers.LeakyReLU(alpha=0.2)(x)
return x
def generator_network(filter=32,
trunk_size=Params['trunk_size'],
out_channels=3):
lr_input = layers.Input(shape=(None, None, 3))
x = layers.Conv2D(filter, kernel_size=[3,3], strides=[1,1],
padding='same', use_bias=True)(lr_input)
x = layers.LeakyReLU(0.2)(x)
ref = x
for i in range(trunk_size):
x = rrdb(x)
x = layers.Conv2D(filter, kernel_size=[3,3], strides=[1,1],
padding='same', use_bias = True)(x)
x = layers.Add()([x, ref])
x = upsample(x, filter)
x = upsample(x, filter)
x = layers.Conv2D(filter, kernel_size=3, strides=1,
padding='same', use_bias=True)(x)
x = layers.LeakyReLU(0.2)(x)
hr_output = layers.Conv2D(out_channels, kernel_size=3, strides=1,
padding='same', use_bias=True)(x)
model = tf.keras.models.Model(inputs=lr_input, outputs=hr_output)
return model
def _conv_block_d(x, out_channel):
x = layers.Conv2D(out_channel, 3,1, padding='same', use_bias=False)(x)
x = layers.BatchNormalization(momentum=0.8)(x)
x = layers.LeakyReLU(alpha=0.2)(x)
x = layers.Conv2D(out_channel, 4,2, padding='same', use_bias=False)(x)
x = layers.BatchNormalization(momentum=0.8)(x)
x = layers.LeakyReLU(alpha=0.2)(x)
return x
def discriminator_network(filters = 64, training=True):
img = layers.Input(shape = (Params['hr_dimension'], Params['hr_dimension'], 3))
x = layers.Conv2D(filters, [3,3], 1, padding='same', use_bias=False)(img)
x = layers.BatchNormalization()(x)
x = layers.LeakyReLU(alpha=0.2)(x)
x = layers.Conv2D(filters, [3,3], 2, padding='same', use_bias=False)(x)
x = layers.BatchNormalization()(x)
x = layers.LeakyReLU(alpha=0.2)(x)
x = _conv_block_d(x, filters *2)
x = _conv_block_d(x, filters *4)
x = _conv_block_d(x, filters *8)
x = layers.Flatten()(x)
x = layers.Dense(100)(x)
x = layers.LeakyReLU(alpha=0.2)(x)
x = layers.Dense(1)(x)
model = tf.keras.models.Model(inputs = img, outputs = x)
return model
def pixel_loss(y_true, y_pred):
y_true = tf.cast(y_true, tf.float32)
y_pred = tf.cast(y_pred, tf.float32)
return tf.reduce_mean(tf.reduce_mean(tf.abs(y_true - y_pred), axis = 0))
# Function for calculating perceptual loss
def vgg_loss(weight=None, input_shape=None):
vgg_model = tf.keras.applications.vgg19.VGG19(
input_shape=input_shape, weights=weight, include_top=False
)
for layer in vgg_model.layers:
layer.trainable = False
vgg_model.get_layer("block5_conv4").activation = lambda x: x
vgg = tf.keras.Model(
inputs=[vgg_model.input],
outputs=[vgg_model.get_layer("block5_conv4").output])
def loss(y_true, y_pred):
return tf.compat.v1.losses.absolute_difference(vgg(y_true), vgg(y_pred))
return loss
# To display images in the order : LR Image -> Generated Image -> HR Image
def visualize_results(image_lr, generated, image_hr):
size = 128
resized_lr = tf.image.resize(image_lr, [size, size], method=tf.image.ResizeMethod.BILINEAR)
resized_gen = tf.image.resize(generated, [size, size], method=tf.image.ResizeMethod.BILINEAR)
resized_hr = tf.image.resize(image_hr, [size, size], method=tf.image.ResizeMethod.BILINEAR)
stack = tf.stack([resized_lr[0], resized_gen[0], resized_hr[0]])
image_grid = tfgan.eval.python_image_grid(stack, grid_shape=(1, 3))
result = Image.fromarray(image_grid.astype(np.uint8))
return result
# Define the TPU strategy
tpu = tf.distribute.cluster_resolver.TPUClusterResolver()
tf.config.experimental_connect_to_cluster(tpu)
tf.tpu.experimental.initialize_tpu_system(tpu)
strategy = tf.distribute.experimental.TPUStrategy(tpu)
train_ds = iter(strategy.experimental_distribute_dataset(train_ds))
with strategy.scope():
metric = tf.keras.metrics.Mean()
psnr_metric = tf.keras.metrics.Mean()
generator = generator_network()
g_optimizer = tf.optimizers.Adam(
learning_rate = 0.0002,
beta_1 = 0.9,
beta_2 = 0.99
)
@tf.function
def train_step(image_lr, image_hr):
with tf.GradientTape() as tape:
fake = generator(image_lr)
loss = pixel_loss(image_hr, fake) * (1.0 / Params['batch_size'])
psnr_value = tf.image.psnr(fake, image_hr,max_val = 256.0)
metric(loss)
gradient = tape.gradient(loss, generator.trainable_variables)
g_optimizer.apply_gradients(zip(gradient, generator.trainable_variables))
return psnr_value
def val_steps(image_lr, image_hr):
fake = generator(image_lr)
result = visualize_results(image_lr, fake, image_hr)
display(result)
step_count = 0
while step_count < Params['ph1_steps']:
lr, hr = next(train_ds)
psnr_loss = strategy.run(train_step, args = (lr, hr))
loss = strategy.reduce(tf.distribute.ReduceOp.MEAN, psnr_loss, axis=None)
psnr_metric(loss)
if step_count%1000 == 0:
lr = np.array(lr.values)[0]
hr = np.array(hr.values)[0]
print("step {} PNSR = {}".format(step_count, psnr_metric.result()))
val_steps(lr, hr)
if step_count%5000 == 0:
g_optimizer.learning_rate.assign(
g_optimizer.learning_rate * Params['decay_ph1'])
step_count+=1
# Save the generator network which is then used for phase-2 training
os.makedirs(Params['model_dir'] + '/Phase_1/generator', exist_ok = True)
generator.save(Params['model_dir'] + '/Phase_1/generator')
with strategy.scope():
optimizer = tf.optimizers.Adam(
learning_rate = 0.0002,
beta_1 = 0.9,
beta_2 = 0.99
)
generator = tf.keras.models.load_model(Params['model_dir'] + '/Phase_1/generator/')
discriminator = discriminator_network()
g_optimizer = optimizer
g_optimizer.learning_rate.assign(0.00005)
d_optimizer = optimizer
checkpoint = tf.train.Checkpoint(G=generator,
D = discriminator,
G_optimizer=g_optimizer,
D_optimizer=d_optimizer)
local_device_option = tf.train.CheckpointOptions(experimental_io_device="/job:localhost")
with strategy.scope():
perceptual_loss = vgg_loss(
weight = "imagenet",
input_shape = [Params['hr_dimension'], Params['hr_dimension'], 3])
with strategy.scope():
gen_metric = tf.keras.metrics.Mean()
disc_metric = tf.keras.metrics.Mean()
psnr_metric = tf.keras.metrics.Mean()
@tf.function
def train_step(image_lr, image_hr):
with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
fake = generator(image_lr)
percep_loss = tf.reduce_mean(perceptual_loss(image_hr, fake))
l1_loss = pixel_loss(image_hr, fake)
real_logits = discriminator(image_hr)
fake_logits = discriminator(fake)
loss_RaG = tfgan.losses.losses_impl.relativistic_generator_loss(real_logits,
fake_logits)
disc_loss = tfgan.losses.losses_impl.relativistic_discriminator_loss(real_logits,
fake_logits)
gen_loss = percep_loss + Params['lambda'] * loss_RaG + Params['eta'] * l1_loss
gen_loss = gen_loss / Params['batch_size']
disc_loss = disc_loss / Params['batch_size']
psnr_loss = tf.image.psnr(fake, image_hr, max_val = 256.0)
disc_metric(disc_loss)
gen_metric(gen_loss)
psnr_metric(psnr_loss)
disc_grad = disc_tape.gradient(disc_loss, discriminator.trainable_variables)
d_optimizer.apply_gradients(zip(disc_grad, discriminator.trainable_variables))
gen_grad = gen_tape.gradient(gen_loss, generator.trainable_variables)
g_optimizer.apply_gradients(zip(gen_grad, generator.trainable_variables))
return [disc_loss, gen_loss, psnr_loss]
def val_step(image_lr, image_hr):
fake = generator(image_lr)
result = visualize_results(image_lr, fake, image_hr)
display(result)
step_count = 0
decay_step = [9000, 30000, 50000]
while step_count < Params['ph2_steps']:
lr, hr = next(train_ds)
if tf.train.latest_checkpoint(Params['ckpt_dir']):
checkpoint.restore(tf.train.latest_checkpoint(Params['ckpt_dir']))
disc_loss, gen_loss, psnr_loss = strategy.run(train_step, args = (lr, hr))
if step_count % 1000 == 0:
print("step {}".format(step_count) + " Generator Loss = {} ".format(gen_metric.result()) +
"Disc Loss = {}".format(disc_metric.result()) + " PSNR : {}".format(psnr_metric.result()))
lr = np.array(lr.values)[0]
hr = np.array(hr.values)[0]
val_step(lr, hr)
checkpoint.write(Params['ckpt_dir'], options=local_device_option)
if step_count >= decay_step[0]:
decay_step.pop(0)
g_optimizer.learning_rate.assign(
g_optimizer.learning_rate * Params['decay_ph2'])
d_optimizer.learning_rate.assign(
d_optimizer.learning_rate * Params['decay_ph2'])
step_count+=1
os.makedirs(Params['model_dir'] + '/Phase_2/generator', exist_ok = True)
os.makedirs(Params['model_dir'] + '/Phase_2/discriminator', exist_ok = True)
generator.save(Params['model_dir'] + '/Phase_2/generator')
discriminator.save(Params['model_dir'] + '/Phase_2/discriminator')
def network_interpolation(alpha=0.2,
phase_1_path=None,
phase_2_path=None):
psnr_gen = tf.keras.models.load_model(phase_1_path)
gan_gen = tf.keras.models.load_model(phase_2_path)
for var_1, var_2 in zip(gan_gen.trainable_variables,
psnr_gen.trainable_variables):
var_1.assign((1 - alpha) * var_2 + alpha * var_1)
return gan_gen
generator = network_interpolation(phase_1_path = Params['model_dir'] + '/Phase_1/generator',
phase_2_path = Params['model_dir'] + '/Phase_2/generator')
generator.save(Params['model_dir'] + '/InterpolatedGenerator/')
val_ds = input_fn(mode='validation', params=Params)
def val_steps(image_lr, image_hr):
fake = generator(image_lr)
result = visualize_results(image_lr, fake, image_hr)
display(result)
for i in range(3):
lr, hr = next(iter(val_ds))
val_steps(lr, hr)
@tf.function
def get_fid_score(real_image, gen_image):
size = tfgan.eval.INCEPTION_DEFAULT_IMAGE_SIZE
resized_real_images = tf.image.resize(real_image, [size, size], method=tf.image.ResizeMethod.BILINEAR)
resized_generated_images = tf.image.resize(gen_image, [size, size], method=tf.image.ResizeMethod.BILINEAR)
num_inception_images = 1
num_batches = Params['batch_size'] // num_inception_images
fid = tfgan.eval.frechet_inception_distance(resized_real_images, resized_generated_images, num_batches=num_batches)
return fid
@tf.function
def get_inception_score(images, gen, num_inception_images = 8):
size = tfgan.eval.INCEPTION_DEFAULT_IMAGE_SIZE
resized_images = tf.image.resize(images, [size, size], method=tf.image.ResizeMethod.BILINEAR)
num_batches = Params['batch_size'] // num_inception_images
inc_score = tfgan.eval.inception_score(resized_images, num_batches=num_batches)
return inc_score
with strategy.scope():
generator = tf.keras.models.load_model(Params['model_dir'] + '/InterpolatedGenerator')
fid_metric = tf.keras.metrics.Mean()
inc_metric = tf.keras.metrics.Mean()
psnr_metric = tf.keras.metrics.Mean()
count = 0
i = 0
while i < Params['val_steps']:
lr, hr = next(iter(val_ds))
gen = generator(lr)
fid = strategy.run(get_fid_score, args = (hr, gen))
real_is = strategy.run(get_inception_score, args=(hr, gen))
gen_is = strategy.run(get_inception_score, args=(gen, hr))
val_steps(lr, hr)
fid_metric(fid)
inc_metric(gen_is)
psnr_metric(tf.reduce_mean(tf.image.psnr(gen, hr, max_val = 256.0)))
i += 1
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Authentication
Step2: Check imports
Step3: Training ESRGAN
Step4: Load Training Dataset
Step5: Visualize the dataset
Step6: Network Architecture
Step7: Discriminator
Step8: Loss Functions
Step9: Training
Step10: Phase - 1 Training
Steps Involved
Step11: Phase - 2
Step12: Load VGG weights
Step13: Training step
Step14: Network Interpolation
Step15: Evaluation
Step16: Visualize Generated Images
Step17: FID and Inception Score are two common metrics used to evaluate the performance of a GAN model, while the PSNR value quantifies the similarity between two images and is used for benchmarking super-resolution models.
|
9,096
|
<ASSISTANT_TASK:>
Python Code:
purity_coll = client['run']['purity']
purity_coll.count()
d = purity_coll.find_one()
for d in purity_coll.find(sort=(('calculation_time', -1), )):
print(str(d['calculation_time']), parse_expr(d['function']))
import numpy as np
data = np.array([
4, 0.9948, # Julien
5, 1.1224, # Julien
6, 1.2137, # Julien
7, 1.2833, # Julien
8, 1.3323, # Julien
9, 1.371, # Jelle
12, 1.4401, # Jelle & Julien (same value, well Julien had 1.4402 and I had 1.440...)
13, 1.456,
15, 1.482
])
cathode_kv, vdrift_kmpers = data.reshape(-1, 2).T
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
vs = np.linspace(0, 20, 100)
deg = len(cathode_kv) - 4
coeffs = np.polyfit(cathode_kv, vdrift_kmpers, deg=deg)
plt.plot(vs, np.polyval(coeffs, vs), label='%dth degree polynomial fit' % deg)
plt.scatter(cathode_kv, vdrift_kmpers, marker='x', c='k', label='Datapoints')
plt.ylim(0.5, 1.7)
plt.legend(loc='lower right')
plt.xlabel("Cathode voltage (kV)")
plt.ylabel("Drift velocity (km/sec)")
plt.plot(cathode_kv, 100 * (np.polyval(coeffs, cathode_kv) - vdrift_kmpers)/vdrift_kmpers,
linestyle='', marker='x')
plt.ylabel('Relative fit error (%)')
plt.xlabel("Cathode voltage kV")
v = sympy.Symbol('v')
sp_poly = sympy.Poly.from_list(coeffs.tolist(), gens=v)
# Sanity check:
sp_poly.subs(dict(v=7.5))
# Try converting to and from serialization format, repeat sanity check
serialized_fit = sympy.srepr(sp_poly)
parse_expr(serialized_fit).subs(dict(v=7.5))
drift_coll = client['run']['drift_velocity']
from datetime import datetime
# Uncomment this, adjust versions and comments to do actual insertion
# drift_coll.insert_one(dict(
# version='1.0.0',
# calculation_time=datetime.now(),
# function=serialized_fit,
# comment="5th degree polynomial fit to Julien&Jelle's Drift velocity results. Jelle, 1 April 2017."
# ))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Try to add drift velocity correction function
Step2: Systematic error on drift velocity estimates is around 0.2%, see
Step3: Insert in runs db
|
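The fit-and-serialize pattern in the row above (a NumPy polynomial fit, then a sympy round-trip into MongoDB) can be sketched without the database layer. The voltages and velocities below are made-up stand-ins, not the notebook's measurements:

```python
import numpy as np

# Hypothetical (cathode voltage kV, drift velocity km/s) pairs,
# shaped loosely like the measurements in the notebook above.
volts = np.array([4.0, 6.0, 8.0, 10.0, 12.0])
vdrift = np.array([1.0, 1.2, 1.33, 1.40, 1.44])

coeffs = np.polyfit(volts, vdrift, deg=2)   # quadratic least-squares fit
fitted = np.polyval(coeffs, volts)          # evaluate the fit at the data points

# For a smooth saturating curve like this, the fit should land
# within a few percent of every data point.
rel_err = np.abs(fitted - vdrift) / vdrift
print(rel_err.max() < 0.05)  # → True
```

The notebook then wraps the coefficients in `sympy.Poly.from_list` and stores `sympy.srepr(...)` in the collection, so the function can be rebuilt later with `parse_expr`.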
9,097
|
<ASSISTANT_TASK:>
Python Code:
# Add tools
# NOTE: This should only be needed if you do not store the notebook on the lxmls root
import sys
sys.path.append('../../')
from pdb import set_trace
# Location of Part-of-Speech WSJ Data
WSJ_TRAIN = "../../data/train-02-21.conll"
WSJ_TEST = "../../data/test-23.conll"
WSJ_DEV = "../../data/dev-22.conll"
# Load Part-of-Speech data
import lxmls.readers.pos_corpus as pcc
corpus = pcc.PostagCorpus()
train_seq = corpus.read_sequence_list_conll(WSJ_TRAIN, max_sent_len=15, max_nr_sent=1000)
test_seq = corpus.read_sequence_list_conll(WSJ_TEST, max_sent_len=15, max_nr_sent=1000)
dev_seq = corpus.read_sequence_list_conll(WSJ_DEV, max_sent_len=15, max_nr_sent=1000)
# Redo indices so that they are consecutive. Also cast all data to numpy arrays
# of int32 for compatibility with GPUs and theano and add reverse index
train_seq, test_seq, dev_seq = pcc.compacify(train_seq, test_seq, dev_seq, theano=True)
# Get number of words and tags in the corpus
nr_words = len(train_seq.x_dict)
nr_tags = len(train_seq.y_dict)
import lxmls.deep_learning.rnn as rnns
reload(rnns)
# RNN configuration
SEED = 1234 # Random seed to initialize weigths
emb_size = 50 # Size of word embeddings
hidden_size = 20 # size of hidden layer
np_rnn = rnns.NumpyRNN(nr_words, emb_size, hidden_size, nr_tags, seed=SEED)
x0 = train_seq[0].x
y0 = train_seq[0].y
# Forward pass
p_y, y_rnn, h, z1, x = np_rnn.forward(x0, all_outputs=True)
# Compute gradients
numpy_rnn_gradients = np_rnn.grads(x0, y0)
import numpy as np
import theano
import theano.tensor as T
theano.config.optimizer='None'
def square(x):
return x**2
# Python
def np_square_n_steps(nr_steps):
out = []
for n in np.arange(nr_steps):
out.append(square(n))
return np.array(out)
# Theano
nr_steps = T.lscalar('nr_steps')
h, _ = theano.scan(fn=square, sequences=T.arange(nr_steps))
th_square_n_steps = theano.function([nr_steps], h)
print np_square_n_steps(10)
print th_square_n_steps(10)
# Configuration
nr_states = 3
nr_steps = 5
# Transition matrix
A = np.abs(np.random.randn(nr_states, nr_states))
A = A/A.sum(0, keepdims=True)
# Initial state
s0 = np.zeros(nr_states)
s0[0] = 1
# Numpy version
def np_markov_step(s_tm1):
s_t = np.dot(s_tm1, A.T)
return s_t
def np_markov_chain(nr_steps, A, s0):
# Pre-allocate space
s = np.zeros((nr_steps+1, nr_states))
s[0, :] = s0
for t in np.arange(nr_steps):
s[t+1, :] = np_markov_step(s[t, :])
return s
np_markov_chain(nr_steps, A, s0)
# Theano version
# Store variables as shared variables
th_A = theano.shared(A, name='A', borrow=True)
th_s0 = theano.shared(s0, name='s0', borrow=True)
# Symbolic variable for the number of steps
th_nr_steps = T.lscalar('nr_steps')
def th_markov_step(s_tm1):
s_t = T.dot(s_tm1, th_A.T)
# Remember to name variables
s_t.name = 's_t'
return s_t
s, _ = theano.scan(th_markov_step,
outputs_info=[dict(initial=th_s0)],
n_steps=th_nr_steps)
th_markov_chain = theano.function([th_nr_steps], T.concatenate((th_s0[None, :], s), 0))
th_markov_chain(nr_steps)
rnn = rnns.RNN(nr_words, emb_size, hidden_size, nr_tags, seed=SEED)
# Compile theano function
x = T.ivector('x')
th_forward = theano.function([x], rnn._forward(x).T)
assert np.allclose(th_forward(x0), np_rnn.forward(x0)), \
"Numpy and Theano forward pass differ!"
# Compile function returning the list of gradients
x = T.ivector('x') # Input words
y = T.ivector('y') # gold tags
p_y = rnn._forward(x)
cost = -T.mean(T.log(p_y)[T.arange(y.shape[0]), y])
grads_fun = theano.function([x, y], [T.grad(cost, par) for par in rnn.param])
# Compare numpy and theano gradients
theano_rnn_gradients = grads_fun(x0, y0)
for n in range(len(theano_rnn_gradients)):
assert np.allclose(numpy_rnn_gradients[n], theano_rnn_gradients[n]), \
"Numpy and Theano gradients differ in step n"
rnn_prediction = theano.function([x], T.argmax(p_y, 1))
# Lets test the predictions
def test_model(sample_seq, rnn_prediction):
words = [train_seq.word_dict[wrd] for wrd in sample_seq.x]
tags = [train_seq.tag_dict[pred] for pred in rnn_prediction(sample_seq.x)]
print ["/".join([word, tag]) for word , tag in zip(words, tags)]
test_model(train_seq[0], rnn_prediction)
lrate = 0.5
n_iter = 5
# Get list of SGD batch update rule for each parameter
updates = [(par, par - lrate*T.grad(cost, par)) for par in rnn.param]
# compile
rnn_batch_update = theano.function([x, y], cost, updates=updates)
nr_words = sum([len(seq.x) for seq in train_seq])
for i in range(n_iter):
# Training
cost = 0
errors = 0
for n, seq in enumerate(train_seq):
cost += rnn_batch_update(seq.x, seq.y)
errors += sum(rnn_prediction(seq.x) != seq.y)
acc_train = 100*(1-errors*1./nr_words)
print "Epoch %d: Train cost %2.2f Acc %2.2f %%" % (i+1, cost, acc_train),
# Evaluation
errors = 0
nr_dev_words = sum([len(seq.x) for seq in dev_seq])
for n, seq in enumerate(dev_seq):
errors += sum(rnn_prediction(seq.x) != seq.y)
acc_dev = 100*(1-errors*1./nr_dev_words)
print " Devel Acc %2.2f %%" % acc_dev
sys.stdout.flush()
# Embeddings Path
EMBEDDINGS = "../../data/senna_50"
import lxmls.deep_learning.embeddings as emb
import os
reload(emb)
if not os.path.isfile(EMBEDDINGS):
emb.download_embeddings('senna_50', EMBEDDINGS)
E = emb.extract_embeddings(EMBEDDINGS, train_seq.x_dict)
# Reset model to remove the effect of training
rnn = rnns.reset_model(rnn, seed=SEED)
# Set the embedding layer to the pre-trained values
rnn.param[0].set_value(E.astype(theano.config.floatX))
lstm = rnns.LSTM(nr_words, emb_size, hidden_size, nr_tags)
lstm_prediction = theano.function([x],
T.argmax(lstm._forward(x), 1))
lstm_cost = -T.mean(T.log(lstm._forward(x))[T.arange(y.shape[0]), y])
# Get list of SGD batch update rule for each parameter
lstm_updates = [(par, par - lrate*T.grad(lstm_cost, par)) for par in lstm.param]
# compile
lstm_batch_update = theano.function([x, y], lstm_cost,
updates=lstm_updates)
nr_words = sum([len(seq.x) for seq in train_seq])
for i in range(n_iter):
# Training
cost = 0
errors = 0
for n, seq in enumerate(train_seq):
cost += lstm_batch_update(seq.x, seq.y)
errors += sum(lstm_prediction(seq.x) != seq.y)
acc_train = 100*(1-errors*1./nr_words)
print "Epoch %d: Train cost %2.2f Acc %2.2f %%" % (i+1, cost, acc_train),
# Evaluation:
errors = 0
nr_dev_words = sum([len(seq.x) for seq in dev_seq])
for n, seq in enumerate(dev_seq):
errors += sum(lstm_prediction(seq.x) != seq.y)
acc_dev = 100*(1-errors*1./nr_dev_words)
sys.stdout.flush()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Model configuration
Step2: Exercise 6.2
Step3: The following example should help you understand matrix multiplications and passing values from one iteration to the next. At each step we multiply the output of the previous step by a matrix A, starting from an initial vector s0. The matrix and vector are random but normalized so that the result is a Markov chain.
Step4: Exercise 6.3
Step5: When working with theano, it is more difficult to localize the source of errors. It is therefore extremely important to work step by step and test the code frequently. To debug, we suggest implementing and compiling the forward pass first. You can use this code for testing. If it raises no error you are good to go.
Step6: Once you are confident the forward pass is working you can test the gradients
Step7: Finally, it's time to test our network! For this, let's first compile a function that does predictions
Step8: Now lets define the optimization parameters and compile a batch update function
Step9: Finally it is time to run SGD. You can use the following code for this purpose
Step10: Test the effect of using pre-trained embeddings. Run the following code to download the embeddings, reset the layer parameters and initialize the embedding layer with the pre-trained embeddings. Then run the training code above.
Step11: Exercise 6.4
|
9,098
|
<ASSISTANT_TASK:>
Python Code:
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
!pip install --user google-cloud-bigquery==1.25.0
!pip install --user apache-beam[interactive]==2.24.0
import apache_beam as beam
print(beam.__version__)
import tensorflow as tf
print("TensorFlow version: ",tf.version.VERSION)
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
# Create SQL query using natality data after the year 2000
query = """
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
FROM
publicdata.samples.natality
WHERE year > 2000
"""
# Call BigQuery and examine in dataframe
from google.cloud import bigquery
df = bigquery.Client().query(query + " LIMIT 100").to_dataframe()
df.head()
# TODO 1
import datetime, os
def to_csv(rowdict):
# Pull columns from BQ and create a line
import hashlib
import copy
CSV_COLUMNS = 'weight_pounds,is_male,mother_age,plurality,gestation_weeks'.split(',')
# Create synthetic data where we assume that no ultrasound has been performed
# and so we don't know sex of the baby. Let's assume that we can tell the difference
# between single and multiple, but that the errors rates in determining exact number
# is difficult in the absence of an ultrasound.
no_ultrasound = copy.deepcopy(rowdict)
w_ultrasound = copy.deepcopy(rowdict)
no_ultrasound['is_male'] = 'Unknown'
if rowdict['plurality'] > 1:
no_ultrasound['plurality'] = 'Multiple(2+)'
else:
no_ultrasound['plurality'] = 'Single(1)'
# Change the plurality column to strings
w_ultrasound['plurality'] = ['Single(1)', 'Twins(2)', 'Triplets(3)', 'Quadruplets(4)', 'Quintuplets(5)'][rowdict['plurality'] - 1]
# Write out two rows for each input row, one with ultrasound and one without
for result in [no_ultrasound, w_ultrasound]:
data = ','.join([str(result[k]) if k in result else 'None' for k in CSV_COLUMNS])
key = hashlib.sha224(data.encode('utf-8')).hexdigest() # hash the columns to form a key
yield str('{},{}'.format(data, key))
def preprocess(in_test_mode):
import shutil, os, subprocess
job_name = 'preprocess-babyweight-features' + '-' + datetime.datetime.now().strftime('%y%m%d-%H%M%S')
if in_test_mode:
print('Launching local job ... hang on')
OUTPUT_DIR = './preproc'
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
os.makedirs(OUTPUT_DIR)
else:
print('Launching Dataflow job {} ... hang on'.format(job_name))
OUTPUT_DIR = 'gs://{0}/babyweight/preproc/'.format(BUCKET)
try:
subprocess.check_call('gsutil -m rm -r {}'.format(OUTPUT_DIR).split())
except:
pass
options = {
'staging_location': os.path.join(OUTPUT_DIR, 'tmp', 'staging'),
'temp_location': os.path.join(OUTPUT_DIR, 'tmp'),
'job_name': job_name,
'project': PROJECT,
'region': REGION,
'teardown_policy': 'TEARDOWN_ALWAYS',
'no_save_main_session': True,
'num_workers': 4,
'max_num_workers': 5
}
opts = beam.pipeline.PipelineOptions(flags = [], **options)
if in_test_mode:
RUNNER = 'DirectRunner'
else:
RUNNER = 'DataflowRunner'
p = beam.Pipeline(RUNNER, options = opts)
query = """
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
FROM
publicdata.samples.natality
WHERE year > 2000
AND weight_pounds > 0
AND mother_age > 0
AND plurality > 0
AND gestation_weeks > 0
AND month > 0
"""
if in_test_mode:
query = query + ' LIMIT 100'
for step in ['train', 'eval']:
if step == 'train':
selquery = 'SELECT * FROM ({}) WHERE ABS(MOD(hashmonth, 4)) < 3'.format(query)
else:
selquery = 'SELECT * FROM ({}) WHERE ABS(MOD(hashmonth, 4)) = 3'.format(query)
(p
| '{}_read'.format(step) >> beam.io.Read(beam.io.BigQuerySource(query = selquery, use_standard_sql = True))
| '{}_csv'.format(step) >> beam.FlatMap(to_csv)
| '{}_out'.format(step) >> beam.io.Write(beam.io.WriteToText(os.path.join(OUTPUT_DIR, '{}.csv'.format(step))))
)
job = p.run()
if in_test_mode:
job.wait_until_finish()
print("Done!")
preprocess(in_test_mode = False)
%%bash
gsutil ls gs://${BUCKET}/babyweight/preproc/*-00000*
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Kindly ignore the deprecation warnings and incompatibility errors related to google-cloud-storage.
Step2: NOTE
Step3: You may receive a UserWarning that the Apache Beam SDK for Python 3 is not yet fully supported. Don't worry about this.
Step5: <h2> Save the query from earlier </h2>
Step7: <h2> Create ML dataset using Dataflow </h2>
Step8: The above step will take 20+ minutes. Go to the GCP web console, navigate to the Dataflow section and <b>wait for the job to finish</b> before you run the following step.
|
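The train/eval split in the pipeline above hashes the year-month with `FARM_FINGERPRINT` and keeps buckets 0-2 for training and bucket 3 for evaluation. That fingerprint function is BigQuery-specific, so the sketch below substitutes MD5; the bucket assignments differ from BigQuery's, but the idea, a deterministic and roughly uniform 75/25 split, is the same:

```python
import hashlib

def split_bucket(key, n_buckets=4):
    # Deterministic bucket in [0, n_buckets); MD5 stands in for
    # BigQuery's FARM_FINGERPRINT, which is unavailable locally.
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % n_buckets

# One key per year-month, as in the pipeline's hashmonth column
months = ["%d%02d" % (y, m) for y in range(2001, 2011) for m in range(1, 13)]
train = [k for k in months if split_bucket(k) < 3]        # buckets 0-2
evaluation = [k for k in months if split_bucket(k) == 3]  # bucket 3

print(len(train) + len(evaluation) == len(months))  # → True
```

Hashing on the month rather than on individual rows keeps all births from the same month on the same side of the split, which avoids leakage between train and eval.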
9,099
|
<ASSISTANT_TASK:>
Python Code:
from EQTransformer.utils.plot import plot_data_chart
plot_data_chart('preproc/time_tracks.pkl', time_interval=10)
from EQTransformer.utils.plot import plot_detections, plot_helicorder
plot_helicorder(input_mseed='downloads_mseeds/CA06/GS.CA06.00.HHZ__20190901T000000Z__20190902T000000Z.mseed',
input_csv=None, save_plot=True)
plot_helicorder(input_mseed='downloads_mseeds/CA06/GS.CA06.00.HH2__20190901T000000Z__20190902T000000Z.mseed',
input_csv='detections1/CA06_outputs/X_prediction_results.csv', save_plot=True)
plot_detections(input_dir ="detections1",
input_json="json/station_list.json",
plot_type='station_map',
marker_size=50)
plot_detections(input_dir ="detections1",
input_json="json/station_list.json",
plot_type='hist',
time_window=120)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Check your current directory for 'data_chart.png'
Step2: Now you can mark the events you have detected on your helicorder plot to check whether you have caught most of them or are missing too many (a high false-negative rate). This, together with the event plots (in the Figure subfolders of your station output folders), can give you a sense of whether the threshold levels you are using are too high or too low.
Step3: 3) map plot
Step4: check for 'station_map.png'
|