| Unnamed: 0 (int64, 0–15.9k) | cleaned_code (stringlengths 67–124k, nullable ⌀) | cleaned_prompt (stringlengths 168–30.3k, nullable ⌀) |
|---|---|---|
12,200
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
diabetes = pd.read_csv('Class01_diabetes_data.csv')
diabetes.head()
diabetes.dropna(inplace=True)
diabetes.head()
diabetes.plot(x='Age',y='Target',kind='scatter')
diabetes.plot(x='Sex',y='Target',kind='scatter')
diabetes.plot(x='BMI',y='Target',kind='scatter')
diabetes.plot(x='BP',y='Target',kind='scatter')
diabetes.plot(x='TC',y='Target',kind='scatter')
diabetes.plot(x='LDL',y='Target',kind='scatter')
diabetes.plot(x='HDL',y='Target',kind='scatter')
diabetes.plot(x='TCH',y='Target',kind='scatter')
diabetes.plot(x='LTG',y='Target',kind='scatter')
diabetes.plot(x='GLU',y='Target',kind='scatter')
irisDF = pd.read_csv('Class01_iris_data.csv')
irisDF.head()
irisDF.columns=['sepalLen','sepalWid','petalLen','petalWid','target']
irisDF.head()
irisDF.plot(x='sepalLen',y='sepalWid',kind='scatter')
import seaborn as sns
sns.set_style('white')
sns.lmplot(x='sepalLen', y='sepalWid', data=irisDF, hue='target', fit_reg=False)
sns.pairplot(irisDF, hue="target")
digitDF = pd.read_csv('Class01_digits_data.csv')
digitDF.head()
testnum = 61
#
# First, get the first 64 columns which correspond to the image data
#
testimage = digitDF.loc[testnum][0:64].values
#
# Then reshape this from a 1 by 64 array into a matrix that is 8 by 8.
#
testimage = testimage.reshape((8,8))
#
# We'll print out what the image is supposed to be. Note the format of the print statement.
# The '{}' means 'insert the argument from the format here'.
# The .format means 'pass these values into the string'.
#
print('Expected Digit: {}'.format(digitDF.loc[testnum][64]))
#
# Finally, we need one more library to plot the images.
#
import matplotlib.pyplot as plt
#
# We tell Python to plot a gray scale image, then to show our reshaped data as an image.
#
plt.gray()
plt.matshow(testimage)
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The next step will be to copy the data file that we will be using for this tutorial into the same folder as these notes. We will be looking at a couple of different types of data sets. We'll start with a simple data set that appears to be a functional set of data where one output column depends on the input columns of the data. In this case, we're looking at a set of patient data where there are a handful of input variables that may feed into the likelihood that the patient will develop type 2 diabetes. The output column is a quantitative measure of disease progression one year after baseline measurements. (http
Step2: Now that we've loaded the data in, the first thing to do is to take a look at the raw data. We can look at the first 5 rows (the head of the data set) by doing the following.
Step3: Before we move forward, note that there is a strange value in the first row under 'GLU': a NaN, marking a missing measurement. We'll drop any rows containing missing values before continuing.
Step4: So we see the first row is gone. That's what we wanted. However, this doesn't really tell us much by itself. It is better to start investigating how the output variable ('Target' in this case) depends on the inputs. We'll visualize the data one variable at a time to look at this. We'll make a scatter plot where we look at the Target as a function of the Age column. The first entry provides the 'x' values while the second provides the 'y' values. The final input tells the plotting software to plot the data points as dots, not connected lines. We'll almost always use this feature.
Step5: This doesn't tell us much. It looks like there isn't a large dependence on age - otherwise we would have seen something more specific than a large blob of data. Let's try other inputs. We'll plot a bunch of them in a row.
Step6: It looks like for some of these inputs, like BMI, the Target goes up as the input goes up.
Step7: As you can see, the 'target' column is no longer numerical, but a text entry that is one of the three possible iris varieties. We also see that the default column headings are a bit long and will get tiring to type out when we want to reference them. Let's rename the columns first.
Step8: Now we want to visualize the data. We don't know what to expect, so let's just pick a couple of variables and see what the data look like.
Step9: So we see that there are entries at a number of different points, but it would be really nice to be able to identify which point corresponds to which variety. We will use another python library to do this. We'll also set the default style to 'white' which looks better.
Step10: The seaborn library provides a number of different plotting options. One of them is lmplot. It is designed to provide a linear model fit (which we don't want right now), so we'll set the fit_reg option to False so that it doesn't try to fit them.
Step11: Now we can see that the cluster off to the left all belongs to the Setosa variety. It would be really nice to try plotting the other variables as well. We could do that manually or use a nice shortcut in seaborn called pairplot. This plots the hue column against all possible pairs of the other data columns.
Step12: We see that there are some of these plots that show there might be a way to distinguish the three different varieties. We'll look at how to do that later on, but this gives us a start.
Step13: This data set has 65 columns. The first 64 correspond to the grayscale value for each of the pixels in an 8 by 8 image. The last column (the 'target') indicates what digit the image is supposed to be. We'll pick one row to start with (row 61 in this case). We'll use some in-line commenting to explain each step here.
12,201
<ASSISTANT_TASK:>
Python Code:
quotient = 7 / 3
print(format(quotient, '.2f'))
remainder = 7 % 3
print(remainder)
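# Not in the original text: a small illustration of the divisibility check
# described below -- x is divisible by y exactly when x % y == 0.
print(7 % 3 == 0)  # False: 7 is not divisible by 3
print(9 % 3 == 0)  # True: 9 is divisible by 3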
5 == 5
5 == 6
type(True), type(False)
x = 10
y = 9
if x < y:
    print('x is less than y')
elif x > y:
    print('x is greater than y')
else:
    print('x and y are equal')
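# Not in the original text: a small sketch showing that a chain may contain any
# number of elif branches; exactly one branch is executed.
z = 0
if z < 0:
    print('z is negative')
elif z == 0:
    print('z is zero')
elif z < 10:
    print('z is a small positive number')
else:
    print('z is 10 or more')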
x = 18
y = 20
if x == y:
    print('x and y are equal')
else:
    if x < y:
        print('x is less than y')
    else:
        print('x is greater than y')
x = 8
if 0 < x:
    if x < 10:
        print('x is a positive single-digit number.')
if 0 < x and x < 10:
    print('x is a positive single-digit number.')
def countdown(n):
    if n <= 0:
        print('Blastoff!')
    else:
        print(n)
        countdown(n-1)
countdown(3)
def print_n(s, n):
    if n <= 0:
        return
    print(s)
    print_n(s, n-1)
text = input()
print(text)
name = input('What...is your name?\n')
print(name)
prompt = 'What...is the airspeed velocity of an unladen swallow?\n'
speed = input(prompt)
int(speed)
speed = input(prompt)
int(speed)
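# Not in the original text: a sketch of the failure mode described below --
# int() raises ValueError when given text that is not a string of digits.
try:
    int('What do you mean, an African or a European swallow?')
except ValueError as err:
    print(err)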
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The modulus operator turns out to be surprisingly useful. For example, you can check whether one number is divisible by another
Step2: The True and False are special values that belong to the type bool; they are not strings
Step3: The == operator is one of the relational operators; the others are x != y, x > y, x < y, x >= y, and x <= y.
Step4: The elif is an abbreviation of “else if.” Again, exactly one branch will be executed. There is no limit on the number of elif statements. If there is an else clause, it has to be at the end, but there doesn’t have to be one.
Step5: The outer conditional contains two branches. The first branch contains a simple statement. The second branch contains another if statement, which has two branches of its own. Those two branches are both simple statements, although they could have been conditional statements as well.
Step6: The print statement is executed only if we make it past both conditionals, so we can get the same effect with the and operator
Step7: Recursion
Step8: If n is 0 or negative, it outputs the word, “Blastoff!” Otherwise, it outputs n and then calls a function named countdown itself passing n-1 as an argument.
Step9: A function that calls itself is recursive; the process is called recursion.
Step10: If n <= 0 the return statement exits the function. The flow of execution immediately returns to the caller, and the remaining lines of the function are not executed.
Step11: Before getting input from the user, it is a good idea to print a prompt telling the user what to enter.
Step12: The sequence \n at the end of the prompt represents a newline, which is a special character that causes a line break. That’s why the user’s input appears below the prompt.
Step13: But if the user types something other than a string of digits, you get an error
12,202
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
pd.__version__
np.__version__
# set some options to control output display
pd.set_option('display.notebook_repr_html',False)
pd.set_option('display.max_columns',10)
pd.set_option('display.max_rows',10)
# create one item series
s1 = pd.Series(1)
s1
# get value with label 0
s1[0]
# create from list
s2 = pd.Series([1,2,3,4,5])
s2
# get the values in the series
s2.values
# get the index of the series
s2.index
# explicitly create an index
# index is alpha, not an integer
s3 = pd.Series([1,2,3], index=['a','b','c'])
s3
s3.index
# look up by label value and not object position
s3['b']
# position also works
s3[2]
# create series from an existing index
# scalar value will be copied at each index label
s4 = pd.Series(2,index=s2.index)
s4
np.random.seed(123456)
pd.Series(np.random.randn(5))
# 0 through 9
pd.Series(np.linspace(0,9,10))
# 0 through 8
pd.Series(np.arange(0,9))
s6 = pd.Series({'a':1,'b':2,'c':3,'d':4})
s6
# example series which also contains a NaN
s = pd.Series([0,1,1,2,3,4,5,6,7,np.NaN])
s
# length of the Series
len(s)
s.size
# shape is a tuple with one value
s.shape
# the number of non-NaN values can be found using the count() method
s.count()
# all unique values
s.unique()
# count of occurrences of each unique value, returned in descending order
s.value_counts()
# first five
s.head()
# first three
s.head(3)
# last five
s.tail()
# last 2
s.tail(n=2) # equivalent to s.tail(2)
# only take specific items
s.take([0,3,9])
# single item lookup
s3['a']
# lookup by position since index is not an integer
s3[2]
# multiple items
s3[['a','c']]
# series with an integer index but not starting with 0
s5 = pd.Series([1,2,3], index =[11,12,13])
s5[12] # lookup is by index label, since the value passed and the index are both integers
# force lookup by index label
s5.loc[12]
# force lookup by position or location
s5.iloc[1]
# multiple items by index label
s5.loc[[12,10]]
# multiple items by position or location
s5.iloc[[1,2]]
s5.loc[[12,-1,15]]
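# Not in the original notebook: a small sketch contrasting the two accessors,
# assuming the older pandas this tutorial targets (where .loc[] yields NaN for
# missing labels instead of raising, as shown above).
try:
    s5.iloc[[1, 5]]   # position 5 is out of bounds for a 3-item Series
except IndexError:
    print('iloc raised IndexError for the out-of-bounds position')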
s3
# label based lookup
s3.ix[['a','b']]
# position based lookup
s3.ix[[1,2]]
# this looks by label and not position
# note that 1,2 have NaN as those labels do not exist in the index
s5.ix[[1,2,10,11]]
s6 = pd.Series([1,2,3,4], index=['a','b','c','d'])
s6
s7 = pd.Series([4,3,2,1], index=['d','c','b','a'])
s7
s6 + s7
a1 = np.array([1,2,3,4,5])
a2 = np.array([5,4,3,2,1])
a1 + a2
# multiply all values in s3 by 2
s3 * 2
# scalar series using the s3's index
# not efficient as it will not use vectorisation
t = pd.Series(2,s3.index)
s3 * t
# we will add this to s9
s8 = pd.Series({'a':1,'b':2,'c':3,'d':5})
s8
s9 = pd.Series({'b':6,'c':7,'d':9,'e':10})
s9
# NaN's result for a and e demonstrates alignment
s8 + s9
s10 = pd.Series([1.0,2.0,3.0],index=['a','a','b'])
s10
s11 = pd.Series([4.0,5.0,6.0], index=['a','a','c'])
s11
# will result in four 'a' index labels
s10 + s11
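# Not in the original notebook: the two 'a' labels in s10 pair with the two 'a'
# labels in s11, so the result holds 2 x 2 = 4 'a' entries (1+4, 1+5, 2+4, 2+5),
# while 'b' and 'c' have no partner and come back as NaN.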
nda = np.array([1,2,3,4,5])
nda.mean()
# mean of numpy array values with a NaN
nda = np.array([1,2,3,4,np.NaN])
nda.mean()
# Series object ignores NaN values - does not get factored
s = pd.Series(nda)
s.mean()
# handle NaN values like Numpy
s.mean(skipna=False)
# which rows have values that are > 5
s = pd.Series(np.arange(0,10))
s > 5
# select rows where values are > 5
# overloading the Series object [] operator
logicalResults = s > 5
s[logicalResults]
# a little shorter version
s[s > 5]
# using & operator
s[(s>5)&(s<9)]
# using | operator
s[(s > 3) | (s < 5)]
# are all items >= 0?
(s >=0).all()
# are any items < 2
(s < 2).any()
(s < 2).sum()
# sample series of five items
s = pd.Series(np.random.randn(5))
s
# change the index
s.index = ['a','b','c','d','e']
s
# concat copies index values verbatim
# potentially making duplicates
np.random.seed(123456)
s1 = pd.Series(np.random.randn(3))
s2 = pd.Series(np.random.randn(3))
combined = pd.concat([s1,s2])
combined
# reset the index
combined.index = np.arange(0,len(combined))
combined
np.random.seed(123456)
s1 = pd.Series(np.random.randn(4),['a','b','c','d'])
# reindex with different number of labels
# results in dropped rows and/or NaN's
s2 = s1.reindex(['a','c','g'])
s2
# s2 is a different series than s1
s2['a'] = 0
s2
# this did not modify s1
s1
# different types for the same label values cause a big issue
s1 = pd.Series([0,1,2],index=[0,1,2])
s2 = pd.Series([3,4,5],index=['0','1','2'])
s1 + s2
# reindex by casting the label types and we will get the desired result
s2.index = s2.index.values.astype(int)
s1 + s2
# fill with 0 instead of NaN
s2 = s.copy()
s2.reindex(['a','f'],fill_value=0)
# create example to demonstrate fills
s3 = pd.Series(['red','green','blue'],index=[0,3,5])
s3
# forward fill using ffill method
s3.reindex(np.arange(0,7), method='ffill')
# backward fill using bfill method
s3.reindex(np.arange(0,7),method='bfill')
np.random.seed(123456)
s = pd.Series(np.random.randn(3),index=['a','b','c'])
s
# add a new item to the Series by assigning to a new label
# this is done in-place
# a new Series is not returned that has a modified value
s['d'] = 100
s
# value at a specific index label can be changed by assignment:
s['d'] = -100
s
del(s['a'])
s
# a series to use for slicing
# using index labels not starting at 0 to demonstrate
# position based slicing
s = pd.Series(np.arange(100,110),index=np.arange(10,20))
s
# items at position 0,2,4
s[0:6:2]
# equivalent to
s.iloc[[0,2,4]]
# first five by slicing, same as .head(5)
s[:5]
# fourth position to the end
s[4:]
# every other item in the first five positions
s[:5:2]
# every other item starting at the fourth position
s[4::2]
# reverse the series
s[::-1]
# every other starting at position 4, in reverse
s[4::-2]
# s[:-2] means positions 0 through (10-2)=8, i.e. everything except the last two items
s[:-2]
# last 3 items
# equivalent to tail(3)
s[-3:]
# equivalent to s.tail(4).head(3)
s[-4:-1]
# preserve s
# slice with first 2 rows
copy = s.copy()
slice = copy[:2]
slice
slice[11] = 1000
copy
# used to demonstrate the next two slices
s = pd.Series(np.arange(0,5),index=['a','b','c','d','e'])
s
# slicing with integer values will extract items based on position:
s[1:3]
# with a non-integer index, it is also possible to slice with values of the same type as the index:
s['b':'d']
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Creating Series
Step2: '0' is the index and '1' is the value. The data type (dtype) is also shown. We can also retrieve the value using the associated index.
Step3: Creating Series with named index
Step4: Please note the type of the index items. It is not string but 'object'.
Step5: It is a common practice to initialize the Series objects using NumPy ndarrays, and with various NumPy functions that create arrays. The following code creates a Series from five normally distributed values
Step6: A Series can also be created from a Python dictionary. The keys of the dictionary are used as the index labels for the Series
Step7: Size, Shape, Count and Uniqueness of Values
Step8: Peeking at data with heads, tails and take
Step9: The .take() method will return the rows in a series that correspond to the zero-based positions specified in a list
Step10: Looking up values in Series
Step11: To alleviate the potential confusion in determining label-based lookups versus position-based lookups, label-based lookup can be enforced using the .loc[] accessor
Step12: Lookup by position can be enforced using the iloc[] accessor
Step13: If a location / position passed to .iloc[] in a list is out of bounds, an exception will be thrown. This is different from .loc[], which, if passed a label that does not exist, will return NaN as the value for that label
Step14: A Series also has a property .ix that can be used to look up items either by label or by zero-based array position.
Step15: This can become complicated if the indexes are integers and you pass a list of integers to ix. Since they are of the same type, the lookup will be by index label instead of position
Step16: Alignment via index labels
Step17: This is a very different result than what it would have been if it were two pure NumPy arrays being added. A NumPy ndarray would add the items in identical positions of each array, resulting in different values.
Step18: The process of adding two Series objects differs from the process of addition of arrays as it first aligns data based on index label values instead of simply applying the operation to elements in the same position. This becomes significantly powerful when using pandas Series to combine data based on labels instead of having to first order the data manually.
Step19: To reinforce the point that alignment is being performed when applying arithmetic operations across two Series objects, look at the following two Series as examples
Step20: The reason for the above result is that during alignment, pandas actually performs a Cartesian product of the sets of all the unique index labels in both Series objects, and then applies the specified operation on all items in the product.
Step21: Boolean selection
Step22: The result of these logical expressions is a Boolean selection, a Series of True and False values. The .sum() method of a Series, when given a series of Boolean values, will treat True as 1 and False as 0. The following demonstrates using this to determine the number of items in a Series that satisfy a given expression
Step23: Reindexing a Series
Step24: Greater flexibility in creating a new index is provided using the .reindex() method. An example of the flexibility of .reindex() over assigning the .index property directly is that the list provided to .reindex() can be of a different length than the number of rows in the Series
Step25: There are several things here that are important to point out about the .reindex() method.
Step26: Reindex is also useful when you want to align two Series to perform an operation on matching elements from each series; however, for some reason, the two Series have index labels that will not initially align.
Step27: The reason this happens is that although the labels look identical, one index holds integers while the other holds strings, so no labels align during the operation and every element of the result is NaN
Step28: The default action of inserting NaN as a missing value during reindexing can be changed by using the fill_value parameter of the method.
Step29: When performing a reindex on ordered data such as a time series, it is possible to perform interpolation or filling of values. The following example demonstrates forward filling, often referred to as "last known value".
Step30: Modifying a Series in-place
Step31: Items can be removed from a Series using the del() function and passing the index label(s) to be removed.
Step32: Slicing a Series
Step33: An important thing to keep in mind when using slicing is that the result of the slice is actually a view into the original Series. Modification of values through the result of the slice will modify the original Series.
Step34: Now the assignment of a value to an element of a slice will change the value in the original Series
Step35: Slicing can be performed on Series objects with a non-integer index.
12,203
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ncar', 'sandbox-1', 'landice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Ice Albedo
Step7: 1.4. Atmospheric Coupling Variables
Step8: 1.5. Oceanic Coupling Variables
Step9: 1.6. Prognostic Variables
Step10: 2. Key Properties --> Software Properties
Step11: 2.2. Code Version
Step12: 2.3. Code Languages
Step13: 3. Grid
Step14: 3.2. Adaptive Grid
Step15: 3.3. Base Resolution
Step16: 3.4. Resolution Limit
Step17: 3.5. Projection
Step18: 4. Glaciers
Step19: 4.2. Description
Step20: 4.3. Dynamic Areal Extent
Step21: 5. Ice
Step22: 5.2. Grounding Line Method
Step23: 5.3. Ice Sheet
Step24: 5.4. Ice Shelf
Step25: 6. Ice --> Mass Balance
Step26: 7. Ice --> Mass Balance --> Basal
Step27: 7.2. Ocean
Step28: 8. Ice --> Mass Balance --> Frontal
Step29: 8.2. Melting
Step30: 9. Ice --> Dynamics
Step31: 9.2. Approximation
Step32: 9.3. Adaptive Timestep
Step33: 9.4. Timestep
12,204
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import sys,os
ia898path = os.path.abspath('/etc/jupyterhub/ia898_1s2017/')
if ia898path not in sys.path:
sys.path.append(ia898path)
import ia898.src as ia
from numpy.fft import fft2
from numpy.fft import ifft2
fr = np.linspace(-1,1,6)
f = np.array([fr,2*fr,fr,fr])
print(f)
hh = np.array([-1,0,+1])
h = np.array([hh,2*hh,hh])
print(h)
g = ia.pconv(f,h)
print(g)
# Pad h (3,3) with zeros so it has the same shape as f (4,6)
aux = np.zeros(f.shape)
r,c = h.shape
aux[:r,:c] = h
F = fft2(f)
H = fft2(aux)
G = F * H
gg = ifft2(G)
print("Result gg: \n",np.around(gg))
print('The discrete convolution theorem worked?', np.allclose(gg.real,g))
f = mpimg.imread('/home/lotufo/ia898/data/keyb.tif')
plt.imshow(f,cmap='gray');
plt.title('Original')
plt.colorbar()
plt.show()
hh = np.array([-1,0,+1])
h = np.array([hh,2*hh,hh])
print(h)
aux = np.zeros(f.shape)
r,c = h.shape
aux[:r,:c] = h
F = fft2(f)
H = fft2(aux)
x,y = f.shape
plt.figure(1)
plt.imshow(np.log(np.abs(ia.ptrans(F,(x//2,y//2))+1)),cmap='gray')
plt.title('DFT of f')
plt.colorbar()
plt.figure(2)
plt.imshow(np.log(np.abs(ia.ptrans(H,(x//2,y//2))+1)),cmap='gray')
plt.title('DFT of h')
plt.colorbar()
G = F * H
plt.figure(3)
plt.imshow(np.log(np.abs(ia.ptrans(G,(x//2,y//2))+1)),cmap='gray')
plt.title('F * H')
plt.colorbar()
gg = ifft2(G)
plt.figure(4)
plt.imshow(gg.real.astype(float),cmap='gray');
plt.title('Convolution in frequency domain')
plt.colorbar()
plt.show()
g = ia.pconv(f,h)
plt.imshow(g.real.astype(float),cmap='gray');
plt.title('Convolution in space domain')
plt.colorbar()
plt.show()
print('The discrete convolution theorem worked?', np.allclose(gg.real,g))
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Numeric sample
Step2: Note that f and h are treated as periodic images whose period is (H, W), the shape of f.
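For reference (a standard statement of the result, not from the original notebook): with h zero-padded to the shape of f, the discrete (circular) convolution theorem says
$$ g = f \circledast h \quad\Longleftrightarrow\quad G[u,v] = F[u,v]\,H[u,v], $$
where F, G and H are the 2D DFTs of f, g and h. This is exactly what the code checks by comparing ifft2(F * H) against ia.pconv(f, h).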
Step3: gg and g need to be equal
Step4: Using an image to illustrate the discrete convolution theorem
Step5: Convolution in frequency domain
Step6: Convolution in space domain
Step7: The convolution in the frequency domain and in the space domain must produce equal results
12,205
<ASSISTANT_TASK:>
Python Code:
import sys
import os
sys.path.append(os.environ.get('NOTEBOOK_ROOT'))
from utils.data_cube_utilities.clean_mask import landsat_clean_mask_full
# landsat_qa_clean_mask, landsat_clean_mask_invalid
from utils.data_cube_utilities.dc_mosaic import create_hdmedians_multiple_band_mosaic
from utils.data_cube_utilities.dc_mosaic import create_mosaic
from datacube.utils.aws import configure_s3_access
configure_s3_access(requester_pays=True)
import datacube
dc = datacube.Datacube()
product = 'ls8_usgs_sr_scene'
platform = 'LANDSAT_8'
collection = 'c1'
level = 'l2'
landsat_ds = dc.load(platform=platform, product=product,
time=("2017-01-01", "2017-12-31"),
lat=(-1.395447, -1.172343),
lon=(36.621306, 37.033980),
group_by='solar_day',
dask_chunks={'latitude':500, 'longitude':500,
'time':5})
# clean_mask = (landsat_qa_clean_mask(landsat_ds, platform) &
# (landsat_ds != -9999).to_array().all('variable') &
# landsat_clean_mask_invalid(landsat_ds))
clean_mask = landsat_clean_mask_full(dc, landsat_ds, product=product, platform=platform,
collection=collection, level=level)
landsat_ds = landsat_ds.where(clean_mask)
mean_composite = landsat_ds.mean('time', skipna=True)
median_composite = landsat_ds.median('time', skipna=True)
geomedian_composite = create_hdmedians_multiple_band_mosaic(landsat_ds)
geomedoid_composite = create_hdmedians_multiple_band_mosaic(landsat_ds, operation='medoid')
most_recent_composite = create_mosaic(landsat_ds)
least_recent_composite = create_mosaic(landsat_ds, reverse_time=True)
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <span id="Composites_retrieve_data">Load Data from the Data Cube ▴</span>
Step2: <span id="Composites_most_common">Most Common Composites ▴</span>
Step3: Median composites
Step4: Geometric median (geomedian) composites
Step5: Geometric medoid (geomedoid) composites
Step6: <span id="Composites_other_composites">Other Composites ▴</span>
Step7: Least-recent composites
12,206
<ASSISTANT_TASK:>
Python Code:
import luigi as lg
import json
import pickle
import sys
basedir = '/Users/joewandy/git/lda/code/'
sys.path.append(basedir)
from multifile_feature import SparseFeatureExtractor
from lda import MultiFileVariationalLDA
class ExtractSpectra(lg.Task):
    datadir = lg.Parameter()
    prefix = lg.Parameter()

    def run(self):
        # we could actually extract the spectra from mzxml, mzml files here
        print 'Processing %s and %s' % (self.datadir, self.prefix)

    def output(self):
        out_dict = {
            'ms1': lg.LocalTarget(self.datadir + self.prefix + '_ms1.csv'),
            'ms2': lg.LocalTarget(self.datadir + self.prefix + '_ms2.csv')
        }
        return out_dict
class GroupFeatures(lg.Task):
    scaling_factor = lg.IntParameter(default=1000)
    fragment_grouping_tol = lg.IntParameter(default=7)
    loss_grouping_tol = lg.IntParameter(default=7)
    loss_threshold_min_count = lg.IntParameter(default=5)
    loss_threshold_max_val = lg.IntParameter(default=200)
    loss_threshold_min_val = lg.IntParameter(default=0)
    datadir = lg.Parameter()
    prefixes = lg.ListParameter()

    def requires(self):
        return [ExtractSpectra(datadir=self.datadir, prefix=prefix) for prefix in self.prefixes]

    def run(self):
        # input_set is a list of tuples of (ms1, ms2)
        input_set = []
        for out_dict in self.input():
            ms1 = out_dict['ms1'].path
            ms2 = out_dict['ms2'].path
            items = (ms1, ms2)
            input_set.append(items)

        # performs the grouping here
        extractor = SparseFeatureExtractor(input_set, self.fragment_grouping_tol, self.loss_grouping_tol,
                                           self.loss_threshold_min_count, self.loss_threshold_max_val,
                                           self.loss_threshold_min_val,
                                           input_type='filename')
        fragment_q = extractor.make_fragment_queue()
        fragment_groups = extractor.group_features(fragment_q, extractor.fragment_grouping_tol)
        loss_q = extractor.make_loss_queue()
        loss_groups = extractor.group_features(loss_q, extractor.loss_grouping_tol,
                                               check_threshold=True)
        extractor.create_counts(fragment_groups, loss_groups, self.scaling_factor)

        mat, vocab, ms1, ms2 = extractor.get_entry(0)
        global_word_index = {}
        for i, v in enumerate(vocab):
            global_word_index[v] = i

        corpus_dictionary = {}
        for f in range(extractor.F):
            print "Processing file {}".format(f)
            corpus = {}
            mat, vocab, ms1, ms2 = extractor.get_entry(f)
            n_docs, n_words = mat.shape
            print n_docs, n_words
            d_pos = 0
            for d in ms1.iterrows():
                doc_name = "{}_{}".format(d[1]['mz'], d[1]['rt'])
                corpus[doc_name] = {}
                for word_index, count in zip(mat[d_pos, :].rows[0], mat[d_pos, :].data[0]):
                    if count > 0:
                        corpus[doc_name][vocab[word_index]] = count
                d_pos += 1
            # Added by Simon
            name = input_set[f][0].split('/')[-1].split('ms1')[0][:-1]
            corpus_dictionary[name] = corpus

        output_dict = {}
        output_dict['global_word_index'] = global_word_index
        output_dict['corpus_dictionary'] = corpus_dictionary
        with self.output().open('w') as f:
            pickle.dump(output_dict, f)

    def output(self):
        return lg.LocalTarget('output_dict.p')
class RunLDA(lg.Task):
    n_its = lg.IntParameter(default=10)
    K = lg.IntParameter(default=300)
    alpha = lg.FloatParameter(default=1)
    eta = lg.FloatParameter(default=0.1)
    update_alpha = lg.BoolParameter(default=True)
    datadir = lg.Parameter()
    prefixes = lg.ListParameter()

    def requires(self):
        return GroupFeatures(datadir=self.datadir, prefixes=self.prefixes)

    def run(self):
        with self.input().open('r') as f:
            output_dict = pickle.load(f)
        global_word_index = output_dict['global_word_index']
        corpus_dictionary = output_dict['corpus_dictionary']
        mf_lda = MultiFileVariationalLDA(corpus_dictionary, word_index=global_word_index,
                                         K=self.K, alpha=self.alpha, eta=self.eta,
                                         update_alpha=self.update_alpha)
        mf_lda.run_vb(parallel=False, n_its=self.n_its, initialise=True)
datadir = '/Users/joewandy/Dropbox/Meta_clustering/MS2LDA/large_study/Urine_mzXML_large_study/method_1/POS/'
prefixes = [
'Urine_StrokeDrugs_02_T10_POS',
'Urine_StrokeDrugs_03_T10_POS',
'Urine_StrokeDrugs_08_T10_POS',
'Urine_StrokeDrugs_09_T10_POS',
]
prefixes_json = json.dumps(prefixes)
lg.run(['RunLDA', '--workers', '1', '--local-scheduler', '--datadir', datadir, '--prefixes', prefixes_json])
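# Not in the original notebook: once the pipeline has finished, the intermediate
# corpus written by GroupFeatures can be inspected by unpickling its target:
# with open('output_dict.p') as f:
#     output_dict = pickle.load(f)
# print output_dict['corpus_dictionary'].keys()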
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: These are what we want from the new pipeline.
Step2: Example Step 2
Step3: Example Step 3
Step4: Run the pipeline
Step5: And run the pipeline
12,207
<ASSISTANT_TASK:>
Python Code:
# Authors: Laura Gwilliams <laura.gwilliams@nyu.edu>
# Jean-Remi King <jeanremi.king@gmail.com>
# Alex Barachant <alexandre.barachant@gmail.com>
# Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
from mne import Epochs, find_events, create_info
from mne.io import concatenate_raws, read_raw_edf
from mne.datasets import eegbci
from mne.decoding import CSP
from mne.time_frequency import AverageTFR
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import LabelEncoder
event_id = dict(hands=2, feet=3) # motor imagery: hands vs feet
subject = 1
runs = [6, 10, 14]
raw_fnames = eegbci.load_data(subject, runs)
raw_files = [read_raw_edf(f, stim_channel='auto', preload=True)
for f in raw_fnames]
raw = concatenate_raws(raw_files)
# Extract information from the raw file
sfreq = raw.info['sfreq']
events = find_events(raw, shortest_event=0, stim_channel='STI 014')
raw.pick_types(meg=False, eeg=True, stim=False, eog=False, exclude='bads')
# Assemble the classifier using scikit-learn pipeline
clf = make_pipeline(CSP(n_components=4, reg=None, log=True, norm_trace=False),
LinearDiscriminantAnalysis())
n_splits = 5 # how many folds to use for cross-validation
cv = StratifiedKFold(n_splits=n_splits, shuffle=True)
# Classification & Time-frequency parameters
tmin, tmax = -.200, 2.000
n_cycles = 10. # how many complete cycles: used to define window size
min_freq = 5.
max_freq = 25.
n_freqs = 8 # how many frequency bins to use
# Assemble list of frequency range tuples
freqs = np.linspace(min_freq, max_freq, n_freqs) # assemble frequencies
freq_ranges = list(zip(freqs[:-1], freqs[1:])) # make freqs list of tuples
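# Not in the original script: with the defaults above, this produces 7 contiguous
# bands, e.g. (5.0, ~7.86), (~7.86, ~10.71), ..., (~22.14, 25.0).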
# Infer window spacing from the max freq and number of cycles to avoid gaps
window_spacing = (n_cycles / np.max(freqs) / 2.)
centered_w_times = np.arange(tmin, tmax, window_spacing)[1:]
n_windows = len(centered_w_times)
# Instantiate label encoder
le = LabelEncoder()
# init scores
freq_scores = np.zeros((n_freqs - 1,))
# Loop through each frequency range of interest
for freq, (fmin, fmax) in enumerate(freq_ranges):
    # Infer window size based on the frequency being used
    w_size = n_cycles / ((fmax + fmin) / 2.)  # in seconds

    # Apply band-pass filter to isolate the specified frequencies
    raw_filter = raw.copy().filter(fmin, fmax, n_jobs=1, fir_design='firwin',
                                   skip_by_annotation='edge')

    # Extract epochs from filtered data, padded by window size
    epochs = Epochs(raw_filter, events, event_id, tmin - w_size, tmax + w_size,
                    proj=False, baseline=None, preload=True)
    epochs.drop_bad()
    y = le.fit_transform(epochs.events[:, 2])
    X = epochs.get_data()

    # Save mean scores over folds for each frequency and time window
    freq_scores[freq] = np.mean(cross_val_score(estimator=clf, X=X, y=y,
                                                scoring='roc_auc', cv=cv,
                                                n_jobs=1), axis=0)
plt.bar(freqs[:-1], freq_scores, width=np.diff(freqs)[0],
align='edge', edgecolor='black')
plt.xticks(freqs)
plt.ylim([0, 1])
plt.axhline(len(epochs['feet']) / len(epochs), color='k', linestyle='--',
label='chance level')
plt.legend()
plt.xlabel('Frequency (Hz)')
plt.ylabel('Decoding Scores')
plt.title('Frequency Decoding Scores')
# init scores
tf_scores = np.zeros((n_freqs - 1, n_windows))
# Loop through each frequency range of interest
for freq, (fmin, fmax) in enumerate(freq_ranges):
    # Infer window size based on the frequency being used
    w_size = n_cycles / ((fmax + fmin) / 2.)  # in seconds

    # Apply band-pass filter to isolate the specified frequencies
    raw_filter = raw.copy().filter(fmin, fmax, n_jobs=1, fir_design='firwin',
                                   skip_by_annotation='edge')

    # Extract epochs from filtered data, padded by window size
    epochs = Epochs(raw_filter, events, event_id, tmin - w_size, tmax + w_size,
                    proj=False, baseline=None, preload=True)
    epochs.drop_bad()
    y = le.fit_transform(epochs.events[:, 2])

    # Roll covariance, csp and lda over time
    for t, w_time in enumerate(centered_w_times):
        # Center the min and max of the window
        w_tmin = w_time - w_size / 2.
        w_tmax = w_time + w_size / 2.

        # Crop data into time-window of interest
        X = epochs.copy().crop(w_tmin, w_tmax).get_data()

        # Save mean scores over folds for each frequency and time window
        tf_scores[freq, t] = np.mean(cross_val_score(estimator=clf, X=X, y=y,
                                                     scoring='roc_auc', cv=cv,
                                                     n_jobs=1), axis=0)
# Set up time frequency object
av_tfr = AverageTFR(create_info(['freq'], sfreq), tf_scores[np.newaxis, :],
centered_w_times, freqs[1:], 1)
chance = np.mean(y) # set chance level to white in the plot
av_tfr.plot([0], vmin=chance, title="Time-Frequency Decoding Scores",
cmap=plt.cm.Reds)
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set parameters and read data
Step2: Loop through frequencies, apply classifier and save scores
Step3: Plot frequency results
Step4: Loop through frequencies and time, apply classifier and save scores
Step5: Plot time-frequency results
12,208
<ASSISTANT_TASK:>
Python Code:
import time
import numpy as np
import h5py
import matplotlib.pyplot as plt
import scipy
from PIL import Image
from scipy import ndimage
from dnn_app_utils_v2 import *
%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
%load_ext autoreload
%autoreload 2
np.random.seed(1)
train_x_orig, train_y, test_x_orig, test_y, classes = load_data()
# Example of a picture
index = 10
plt.imshow(train_x_orig[index])
print ("y = " + str(train_y[0,index]) + ". It's a " + classes[train_y[0,index]].decode("utf-8") + " picture.")
# Explore your dataset
m_train = train_x_orig.shape[0]
num_px = train_x_orig.shape[1]
m_test = test_x_orig.shape[0]
print ("Number of training examples: " + str(m_train))
print ("Number of testing examples: " + str(m_test))
print ("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)")
print ("train_x_orig shape: " + str(train_x_orig.shape))
print ("train_y shape: " + str(train_y.shape))
print ("test_x_orig shape: " + str(test_x_orig.shape))
print ("test_y shape: " + str(test_y.shape))
# Reshape the training and test examples
train_x_flatten = train_x_orig.reshape(train_x_orig.shape[0], -1).T # The "-1" makes reshape flatten the remaining dimensions
test_x_flatten = test_x_orig.reshape(test_x_orig.shape[0], -1).T
# Standardize data to have feature values between 0 and 1.
train_x = train_x_flatten/255.
test_x = test_x_flatten/255.
print ("train_x's shape: " + str(train_x.shape))
print ("test_x's shape: " + str(test_x.shape))
### CONSTANTS DEFINING THE MODEL ####
n_x = 12288 # num_px * num_px * 3
n_h = 7
n_y = 1
layers_dims = (n_x, n_h, n_y)
# GRADED FUNCTION: two_layer_model
def two_layer_model(X, Y, layers_dims, learning_rate = 0.0075, num_iterations = 3000, print_cost=False):
    """
    Implements a two-layer neural network: LINEAR->RELU->LINEAR->SIGMOID.

    Arguments:
    X -- input data, of shape (n_x, number of examples)
    Y -- true "label" vector (containing 0 if cat, 1 if non-cat), of shape (1, number of examples)
    layers_dims -- dimensions of the layers (n_x, n_h, n_y)
    num_iterations -- number of iterations of the optimization loop
    learning_rate -- learning rate of the gradient descent update rule
    print_cost -- If set to True, this will print the cost every 100 iterations

    Returns:
    parameters -- a dictionary containing W1, W2, b1, and b2
    """
    np.random.seed(1)
    grads = {}
    costs = []                    # to keep track of the cost
    m = X.shape[1]                # number of examples
    (n_x, n_h, n_y) = layers_dims

    # Initialize parameters dictionary, by calling one of the functions you'd previously implemented
    ### START CODE HERE ### (≈ 1 line of code)
    parameters = initialize_parameters(n_x, n_h, n_y)
    ### END CODE HERE ###

    # Get W1, b1, W2 and b2 from the dictionary parameters.
    W1 = parameters["W1"]
    b1 = parameters["b1"]
    W2 = parameters["W2"]
    b2 = parameters["b2"]

    # Loop (gradient descent)
    for i in range(0, num_iterations):

        # Forward propagation: LINEAR -> RELU -> LINEAR -> SIGMOID. Inputs: "X, W1, b1". Output: "A1, cache1, A2, cache2".
        ### START CODE HERE ### (≈ 2 lines of code)
        A1, cache1 = linear_activation_forward(X, W1, b1, activation='relu')
        A2, cache2 = linear_activation_forward(A1, W2, b2, activation='sigmoid')
        ### END CODE HERE ###

        # Compute cost
        ### START CODE HERE ### (≈ 1 line of code)
        cost = compute_cost(A2, Y)
        ### END CODE HERE ###

        # Initializing backward propagation
        dA2 = - (np.divide(Y, A2) - np.divide(1 - Y, 1 - A2))

        # Backward propagation. Inputs: "dA2, cache2, cache1". Outputs: "dA1, dW2, db2; also dA0 (not used), dW1, db1".
        ### START CODE HERE ### (≈ 2 lines of code)
        dA1, dW2, db2 = linear_activation_backward(dA2, cache2, activation='sigmoid')
        dA0, dW1, db1 = linear_activation_backward(dA1, cache1, activation='relu')
        ### END CODE HERE ###

        # Set grads['dW1'] to dW1, grads['db1'] to db1, grads['dW2'] to dW2, grads['db2'] to db2
        grads['dW1'] = dW1
        grads['db1'] = db1
        grads['dW2'] = dW2
        grads['db2'] = db2

        # Update parameters.
        ### START CODE HERE ### (approx. 1 line of code)
        parameters = update_parameters(parameters, grads, learning_rate)
        ### END CODE HERE ###

        # Retrieve W1, b1, W2, b2 from parameters
        W1 = parameters["W1"]
        b1 = parameters["b1"]
        W2 = parameters["W2"]
        b2 = parameters["b2"]

        # Print the cost every 100 training example
        if print_cost and i % 100 == 0:
            print("Cost after iteration {}: {}".format(i, np.squeeze(cost)))
        if print_cost and i % 100 == 0:
            costs.append(cost)

    # plot the cost
    plt.plot(np.squeeze(costs))
    plt.ylabel('cost')
    plt.xlabel('iterations (per tens)')
    plt.title("Learning rate =" + str(learning_rate))
    plt.show()

    return parameters
parameters = two_layer_model(train_x, train_y, layers_dims = (n_x, n_h, n_y), num_iterations = 2500, print_cost=True)
predictions_train = predict(train_x, train_y, parameters)
predictions_test = predict(test_x, test_y, parameters)
### CONSTANTS ###
layers_dims = [12288, 20, 7, 5, 1] # 5-layer model
# GRADED FUNCTION: L_layer_model
def L_layer_model(X, Y, layers_dims, learning_rate = 0.0075, num_iterations = 3000, print_cost=False):#lr was 0.009
    """
    Implements a L-layer neural network: [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID.

    Arguments:
    X -- data, numpy array of shape (number of examples, num_px * num_px * 3)
    Y -- true "label" vector (containing 0 if cat, 1 if non-cat), of shape (1, number of examples)
    layers_dims -- list containing the input size and each layer size, of length (number of layers + 1).
    learning_rate -- learning rate of the gradient descent update rule
    num_iterations -- number of iterations of the optimization loop
    print_cost -- if True, it prints the cost every 100 steps

    Returns:
    parameters -- parameters learnt by the model. They can then be used to predict.
    """
    np.random.seed(1)
    costs = []                    # keep track of cost

    # Parameters initialization.
    ### START CODE HERE ###
    parameters = initialize_parameters_deep(layers_dims)
    ### END CODE HERE ###

    # Loop (gradient descent)
    for i in range(0, num_iterations):

        # Forward propagation: [LINEAR -> RELU]*(L-1) -> LINEAR -> SIGMOID.
        ### START CODE HERE ### (≈ 1 line of code)
        AL, caches = L_model_forward(X, parameters)
        ### END CODE HERE ###

        # Compute cost.
        ### START CODE HERE ### (≈ 1 line of code)
        cost = compute_cost(AL, Y)
        ### END CODE HERE ###

        # Backward propagation.
        ### START CODE HERE ### (≈ 1 line of code)
        grads = L_model_backward(AL, Y, caches)
        ### END CODE HERE ###

        # Update parameters.
        ### START CODE HERE ### (≈ 1 line of code)
        parameters = update_parameters(parameters, grads, learning_rate)
        ### END CODE HERE ###

        # Print the cost every 100 training example
        if print_cost and i % 100 == 0:
            print ("Cost after iteration %i: %f" %(i, cost))
        if print_cost and i % 100 == 0:
            costs.append(cost)

    # plot the cost
    plt.plot(np.squeeze(costs))
    plt.ylabel('cost')
    plt.xlabel('iterations (per tens)')
    plt.title("Learning rate =" + str(learning_rate))
    plt.show()

    return parameters
parameters = L_layer_model(train_x, train_y, layers_dims, num_iterations = 2500, print_cost = True)
pred_train = predict(train_x, train_y, parameters)
pred_test = predict(test_x, test_y, parameters)
print_mislabeled_images(classes, test_x, test_y, pred_test)
## START CODE HERE ##
my_image = "my_image.jpg" # change this to the name of your image file
my_label_y = [1] # the true class of your image (1 -> cat, 0 -> non-cat)
## END CODE HERE ##
fname = "images/" + my_image
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(num_px,num_px)).reshape((num_px*num_px*3,1))
my_predicted_image = predict(my_image, my_label_y, parameters)
plt.imshow(image)
print ("y = " + str(np.squeeze(my_predicted_image)) + ", your L-layer model predicts a \"" + classes[int(np.squeeze(my_predicted_image)),].decode("utf-8") + "\" picture.")
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2 - Dataset
Step2: The following code will show you an image in the dataset. Feel free to change the index and re-run the cell multiple times to see other images.
Step3: As usual, you reshape and standardize the images before feeding them to the network. The code is given in the cell below.
Step5: $12,288$ equals $64 \times 64 \times 3$ which is the size of one reshaped image vector.
Step6: Run the cell below to train your parameters. See if your model runs. The cost should be decreasing. It may take up to 5 minutes to run 2500 iterations. Check if the "Cost after iteration 0" matches the expected output below, if not click on the square (⬛) on the upper bar of the notebook to stop the cell and try to find your error.
Step7: Expected Output
Step8: Expected Output
Step10: Expected Output
Step11: You will now train the model as a 5-layer neural network.
Step12: Expected Output
Step13: (expected-output table omitted in this dump)
Step14: Expected Output
Step15: A few types of images the model tends to do poorly on include: cat body in an unusual position; cat against a background of a similar color; unusual cat color and species; unusual camera angle; brightness of the picture; and scale variation (the cat is very large or small in the image).
12,209
<ASSISTANT_TASK:>
Python Code:
# Setup taken from notebook 17.
import itertools
import sys
import bson
import h5py
import keras.layers
import keras.models
import matplotlib.pyplot
import numpy
import pandas
import sklearn.cross_validation
import sklearn.dummy
import sklearn.linear_model
import sklearn.metrics
sys.path.insert(1, '..')
import crowdastro.data
import crowdastro.show
with pandas.HDFStore('../crowdastro-data/training.h5') as store:
data = store['data']
n = 5000
# I'm gathering up the radio patches first so I can run them through the CNN at the same time
# as one big matrix operation. In principle this would run on the GPU.
radio_patches = numpy.zeros((n, 80, 80))
labels = numpy.zeros((n,))
linear_distances = numpy.zeros((n,))
radius = 40
padding = 150
for idx, row in data.head(n).iterrows():
    sid = bson.objectid.ObjectId(row['subject_id'][0].decode('ascii'))
    x = row['x'][0]
    y = row['y'][0]
    label = row['is_host'][0]
    labels[idx] = label

    subject = crowdastro.data.db.radio_subjects.find_one({'_id': sid})
    radio = crowdastro.data.get_radio(subject, size='5x5')
    patch = radio[x - radius + padding : x + radius + padding, y - radius + padding : y + radius + padding]
    radio_patches[idx, :] = patch
    linear_distances[idx] = numpy.hypot(x - 100, y - 100)
# Load the CNN.
with open('../crowdastro-data/cnn_model_2.json', 'r') as f:
cnn = keras.models.model_from_json(f.read())
cnn.load_weights('../crowdastro-data/cnn_weights_2.h5')
cnn.layers = cnn.layers[:5] # Pop the layers after the second convolution's activation.
cnn.add(keras.layers.Flatten())
cnn.compile(optimizer='sgd', loss='mse') # I don't actually care about the optimiser or loss.
# Load the PCA.
with h5py.File('../crowdastro-data/pca.h5') as f:
pca = f['conv_2'][:]
# Find the radio features.
radio_features = cnn.predict(radio_patches.reshape(n, 1, 80, 80)) @ pca.T
# Add on the astronomy features.
features = numpy.hstack([radio_features, data.ix[:n-1, 'flux_ap2_24':'flux_ap2_80'].as_matrix(),
linear_distances.reshape(-1, 1)])
features = numpy.nan_to_num(features)
subject_ids = set()
for idx, row in data.ix[n:n * 2].iterrows():
    sid = row['subject_id'][0]
    subject_ids.add(sid)
def test(features, labels):
    xs_train, xs_test, ts_train, ts_test = sklearn.cross_validation.train_test_split(
        features, labels, test_size=0.2, random_state=0)
    lr = sklearn.linear_model.LogisticRegression(class_weight='balanced')
    lr.fit(xs_train, ts_train)
    print('Classification accuracy: {:.02%}'.format(lr.score(xs_test, ts_test)))

    hits = 0
    attempts = 0
    for subject_id in subject_ids:
        indices = (data['subject_id'] == subject_id).as_matrix().reshape(-1)
        potential_hosts = numpy.nan_to_num(data.as_matrix()[indices][:, 1:-1].astype(float))
        labels = numpy.nan_to_num(data.as_matrix()[indices][:, -1].astype(bool))
        subject = crowdastro.data.db.radio_subjects.find_one({'_id': bson.objectid.ObjectId(subject_id.decode('ascii'))})
        radio = crowdastro.data.get_radio(subject, size='5x5')
        radio_patches = numpy.zeros((len(potential_hosts), 1, radius * 2, radius * 2))
        linear_features = numpy.zeros((len(potential_hosts),))
        for index, (x, y, *astro) in enumerate(potential_hosts):
            patch = radio[x - radius + padding : x + radius + padding, y - radius + padding : y + radius + padding]
            radio_patches[index, 0, :] = patch
            linear_features[index] = numpy.hypot(x - 100, y - 100)
        radio_features = cnn.predict(radio_patches) @ pca.T
        astro_features = potential_hosts[:, 2:]
        features = numpy.hstack([radio_features, astro_features, linear_features.reshape(-1, 1)])
        scores = lr.predict_proba(features)[:, 1].reshape(-1)
        predicted_host = scores.argmax()
        if labels[predicted_host]:
            hits += 1
        attempts += 1
    print('Problem accuracy: {:.02%}'.format(hits / attempts))
test(features, labels)
import sklearn.preprocessing
def test(features, labels):
    xs_train, xs_test, ts_train, ts_test = sklearn.cross_validation.train_test_split(
        features, labels, test_size=0.2, random_state=0)
    scaler = sklearn.preprocessing.StandardScaler().fit(xs_train)
    xs_train = scaler.transform(xs_train)
    xs_test = scaler.transform(xs_test)
    lr = sklearn.linear_model.LogisticRegression(class_weight='balanced')
    lr.fit(xs_train, ts_train)
    print('Classification accuracy: {:.02%}'.format(lr.score(xs_test, ts_test)))

    hits = 0
    attempts = 0
    for subject_id in subject_ids:
        indices = (data['subject_id'] == subject_id).as_matrix().reshape(-1)
        potential_hosts = numpy.nan_to_num(data.as_matrix()[indices][:, 1:-1].astype(float))
        labels = numpy.nan_to_num(data.as_matrix()[indices][:, -1].astype(bool))
        subject = crowdastro.data.db.radio_subjects.find_one({'_id': bson.objectid.ObjectId(subject_id.decode('ascii'))})
        radio = crowdastro.data.get_radio(subject, size='5x5')
        radio_patches = numpy.zeros((len(potential_hosts), 1, radius * 2, radius * 2))
        linear_features = numpy.zeros((len(potential_hosts), 1))
        for index, (x, y, *astro) in enumerate(potential_hosts):
            patch = radio[x - radius + padding : x + radius + padding, y - radius + padding : y + radius + padding]
            radio_patches[index, 0, :] = patch
            linear_features[index] = numpy.hypot(x - 100, y - 100)
        radio_features = cnn.predict(radio_patches) @ pca.T
        astro_features = potential_hosts[:, 2:]
        features = numpy.hstack([radio_features, astro_features, linear_features])
        features = scaler.transform(features)
        scores = lr.predict_proba(features)[:, 1].reshape(-1)
        predicted_host = scores.argmax()
        if labels[predicted_host]:
            hits += 1
        attempts += 1
    print('Problem accuracy: {:.02%}'.format(hits / attempts))
test(features, labels)
def test(features):
    scaler = sklearn.preprocessing.StandardScaler().fit(features)
    return scaler.scale_
print(test(features))
def test(features, labels):
    labels = labels * 2 - 1
    xs_train, xs_test, ts_train, ts_test = sklearn.cross_validation.train_test_split(
        features, labels, test_size=0.2, random_state=0)
    lr = sklearn.linear_model.LogisticRegression(class_weight='balanced')
    lr.fit(xs_train, ts_train)
    print('Classification accuracy: {:.02%}'.format(lr.score(xs_test, ts_test)))

    hits = 0
    attempts = 0
    for subject_id in subject_ids:
        indices = (data['subject_id'] == subject_id).as_matrix().reshape(-1)
        potential_hosts = numpy.nan_to_num(data.as_matrix()[indices][:, 1:-1].astype(float))
        labels = numpy.nan_to_num(data.as_matrix()[indices][:, -1].astype(bool)) * 2 - 1
        subject = crowdastro.data.db.radio_subjects.find_one({'_id': bson.objectid.ObjectId(subject_id.decode('ascii'))})
        radio = crowdastro.data.get_radio(subject, size='5x5')
        radio_patches = numpy.zeros((len(potential_hosts), 1, radius * 2, radius * 2))
        linear_features = numpy.zeros((len(potential_hosts), 1))
        for index, (x, y, *astro) in enumerate(potential_hosts):
            patch = radio[x - radius + padding : x + radius + padding, y - radius + padding : y + radius + padding]
            radio_patches[index, 0, :] = patch
            linear_features[index] = numpy.hypot(x - 100, y - 100)
        radio_features = cnn.predict(radio_patches) @ pca.T
        astro_features = potential_hosts[:, 2:]
        features = numpy.hstack([radio_features, astro_features, linear_features])
        scores = lr.predict_proba(features)[:, 1].reshape(-1)
        predicted_host = scores.argmax()
        if labels[predicted_host] == 1:
            hits += 1
        attempts += 1
    print('Problem accuracy: {:.02%}'.format(hits / attempts))
test(features, labels)
def test(features, labels):
xs_train, xs_test, ts_train, ts_test = sklearn.cross_validation.train_test_split(
features, labels, test_size=0.2, random_state=0)
normaliser = sklearn.preprocessing.Normalizer()
xs_train = normaliser.transform(xs_train)
xs_test = normaliser.transform(xs_test)
lr = sklearn.linear_model.LogisticRegression(class_weight='balanced')
lr.fit(xs_train, ts_train)
print('Classification accuracy: {:.02%}'.format(lr.score(xs_test, ts_test)))
hits = 0
attempts = 0
for subject_id in subject_ids:
indices = (data['subject_id'] == subject_id).as_matrix().reshape(-1)
potential_hosts = numpy.nan_to_num(data.as_matrix()[indices][:, 1:-1].astype(float))
labels = numpy.nan_to_num(data.as_matrix()[indices][:, -1].astype(bool))
subject = crowdastro.data.db.radio_subjects.find_one({'_id': bson.objectid.ObjectId(subject_id.decode('ascii'))})
radio = crowdastro.data.get_radio(subject, size='5x5')
radio_patches = numpy.zeros((len(potential_hosts), 1, radius * 2, radius * 2))
linear_features = numpy.zeros((len(potential_hosts), 1))
for index, (x, y, *astro) in enumerate(potential_hosts):
patch = radio[x - radius + padding : x + radius + padding, y - radius + padding : y + radius + padding]
radio_patches[index, 0, :] = patch
linear_features[index] = numpy.hypot(x - 100, y - 100)
radio_features = cnn.predict(radio_patches) @ pca.T
astro_features = potential_hosts[:, 2:]
features = numpy.hstack([radio_features, astro_features, linear_features])
features = normaliser.transform(features)
scores = lr.predict_proba(features)[:, 1].reshape(-1)
predicted_host = scores.argmax()
if labels[predicted_host]:
hits += 1
attempts += 1
print('Problem accuracy: {:.02%}'.format(hits / attempts))
test(features, labels)
def test(features, labels):
xs_train, xs_test, ts_train, ts_test = sklearn.cross_validation.train_test_split(
features, labels, test_size=0.2, random_state=0)
normaliser = sklearn.preprocessing.Normalizer()
scaler = sklearn.preprocessing.StandardScaler().fit(normaliser.transform(xs_train))
xs_train = scaler.transform(normaliser.transform(xs_train))
xs_test = scaler.transform(normaliser.transform(xs_test))
lr = sklearn.linear_model.LogisticRegression(class_weight='balanced')
lr.fit(xs_train, ts_train)
print('Classification accuracy: {:.02%}'.format(lr.score(xs_test, ts_test)))
hits = 0
attempts = 0
for subject_id in subject_ids:
indices = (data['subject_id'] == subject_id).as_matrix().reshape(-1)
potential_hosts = numpy.nan_to_num(data.as_matrix()[indices][:, 1:-1].astype(float))
labels = numpy.nan_to_num(data.as_matrix()[indices][:, -1].astype(bool))
subject = crowdastro.data.db.radio_subjects.find_one({'_id': bson.objectid.ObjectId(subject_id.decode('ascii'))})
radio = crowdastro.data.get_radio(subject, size='5x5')
radio_patches = numpy.zeros((len(potential_hosts), 1, radius * 2, radius * 2))
linear_features = numpy.zeros((len(potential_hosts), 1))
for index, (x, y, *astro) in enumerate(potential_hosts):
patch = radio[x - radius + padding : x + radius + padding, y - radius + padding : y + radius + padding]
radio_patches[index, 0, :] = patch
linear_features[index] = numpy.hypot(x - 100, y - 100)
radio_features = cnn.predict(radio_patches) @ pca.T
astro_features = potential_hosts[:, 2:]
features = numpy.hstack([radio_features, astro_features, linear_features])
features = scaler.transform(normaliser.transform(features))
scores = lr.predict_proba(features)[:, 1].reshape(-1)
predicted_host = scores.argmax()
if labels[predicted_host]:
hits += 1
attempts += 1
print('Problem accuracy: {:.02%}'.format(hits / attempts))
test(features, labels)
def test(features, labels):
xs_train, xs_test, ts_train, ts_test = sklearn.cross_validation.train_test_split(
features, labels, test_size=0.2, random_state=0)
normaliser = sklearn.preprocessing.Normalizer()
scaler = sklearn.preprocessing.StandardScaler().fit(normaliser.transform(xs_train[:, -6:]))
xs_train = numpy.hstack([normaliser.transform(xs_train[:, :-6]),
scaler.transform(normaliser.transform(xs_train[:, -6:]))])
xs_test = numpy.hstack([normaliser.transform(xs_test[:, :-6]),
scaler.transform(normaliser.transform(xs_test[:, -6:]))])
lr = sklearn.linear_model.LogisticRegression(class_weight='balanced')
lr.fit(xs_train, ts_train)
print('Classification accuracy: {:.02%}'.format(lr.score(xs_test, ts_test)))
hits = 0
attempts = 0
for subject_id in subject_ids:
indices = (data['subject_id'] == subject_id).as_matrix().reshape(-1)
potential_hosts = numpy.nan_to_num(data.as_matrix()[indices][:, 1:-1].astype(float))
labels = numpy.nan_to_num(data.as_matrix()[indices][:, -1].astype(bool))
subject = crowdastro.data.db.radio_subjects.find_one({'_id': bson.objectid.ObjectId(subject_id.decode('ascii'))})
radio = crowdastro.data.get_radio(subject, size='5x5')
radio_patches = numpy.zeros((len(potential_hosts), 1, radius * 2, radius * 2))
linear_features = numpy.zeros((len(potential_hosts), 1))
for index, (x, y, *astro) in enumerate(potential_hosts):
patch = radio[x - radius + padding : x + radius + padding, y - radius + padding : y + radius + padding]
radio_patches[index, 0, :] = patch
linear_features[index] = numpy.hypot(x - 100, y - 100)
radio_features = cnn.predict(radio_patches) @ pca.T
astro_features = potential_hosts[:, 2:]
features = numpy.hstack([radio_features, astro_features, linear_features])
features = numpy.hstack([normaliser.transform(features[:, :-6]),
scaler.transform(normaliser.transform(features[:, -6:]))])
scores = lr.predict_proba(features)[:, 1].reshape(-1)
predicted_host = scores.argmax()
if labels[predicted_host]:
hits += 1
attempts += 1
print('Problem accuracy: {:.02%}'.format(hits / attempts))
test(features, labels)
%matplotlib inline
subject = crowdastro.data.db.radio_subjects.find_one({'zooniverse_id': 'ARG0003r8e'})
crowdastro.show.subject(subject)
def test(features, labels, subject):
xs_train, xs_test, ts_train, ts_test = sklearn.cross_validation.train_test_split(
features, labels, test_size=0.2, random_state=0)
normaliser = sklearn.preprocessing.Normalizer()
scaler = sklearn.preprocessing.StandardScaler().fit(normaliser.transform(xs_train[:, -6:]))
xs_train = numpy.hstack([normaliser.transform(xs_train[:, :-6]),
scaler.transform(normaliser.transform(xs_train[:, -6:]))])
xs_test = numpy.hstack([normaliser.transform(xs_test[:, :-6]),
scaler.transform(normaliser.transform(xs_test[:, -6:]))])
lr = sklearn.linear_model.LogisticRegression(class_weight='balanced')
lr.fit(xs_train, ts_train)
print('Classification accuracy: {:.02%}'.format(lr.score(xs_test, ts_test)))
subject_id = str(subject['_id']).encode('ascii')
indices = (data['subject_id'] == subject_id).as_matrix().reshape(-1)
potential_hosts = numpy.nan_to_num(data.as_matrix()[indices][:, 1:-1].astype(float))
labels = numpy.nan_to_num(data.as_matrix()[indices][:, -1].astype(bool))
radio = crowdastro.data.get_radio(subject, size='5x5')
radio_patches = numpy.zeros((len(potential_hosts), 1, radius * 2, radius * 2))
linear_features = numpy.zeros((len(potential_hosts), 1))
for index, (x, y, *astro) in enumerate(potential_hosts):
patch = radio[x - radius + padding : x + radius + padding, y - radius + padding : y + radius + padding]
radio_patches[index, 0, :] = patch
linear_features[index] = numpy.hypot(x - 100, y - 100)
radio_features = cnn.predict(radio_patches) @ pca.T
astro_features = potential_hosts[:, 2:]
features = numpy.hstack([radio_features, astro_features, linear_features])
features = numpy.hstack([normaliser.transform(features[:, :-6]),
scaler.transform(normaliser.transform(features[:, -6:]))])
scores = lr.predict_proba(features)[:, 1].reshape(-1)
crowdastro.show.subject(subject)
matplotlib.pyplot.scatter(potential_hosts[:, 0], potential_hosts[:, 1], c=scores)
matplotlib.pyplot.show()
return normaliser, scaler, lr
normaliser, scaler, classifier = test(features, labels, subject)
def softmax(x):
exp = numpy.exp(x)
return exp / numpy.sum(exp, axis=0)
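# The softmax above can overflow numpy.exp for large inputs. A numerically
# stable sketch (an aside, not part of the original analysis) shifts by the
# maximum before exponentiating; the result is mathematically identical.
def stable_softmax(x):
    shifted = numpy.asarray(x) - numpy.max(x)
    exp = numpy.exp(shifted)
    return exp / numpy.sum(exp, axis=0)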
def test(features, labels, normaliser, scaler, lr):
max_entropy = float('-inf')
max_subject = None
for subject_id in subject_ids:
indices = (data['subject_id'] == subject_id).as_matrix().reshape(-1)
potential_hosts = numpy.nan_to_num(data.as_matrix()[indices][:, 1:-1].astype(float))
labels = numpy.nan_to_num(data.as_matrix()[indices][:, -1].astype(bool))
subject = crowdastro.data.db.radio_subjects.find_one({'_id': bson.objectid.ObjectId(subject_id.decode('ascii'))})
radio = crowdastro.data.get_radio(subject, size='5x5')
radio_patches = numpy.zeros((len(potential_hosts), 1, radius * 2, radius * 2))
linear_features = numpy.zeros((len(potential_hosts), 1))
for index, (x, y, *astro) in enumerate(potential_hosts):
patch = radio[x - radius + padding : x + radius + padding, y - radius + padding : y + radius + padding]
radio_patches[index, 0, :] = patch
linear_features[index] = numpy.hypot(x - 100, y - 100)
radio_features = cnn.predict(radio_patches) @ pca.T
astro_features = potential_hosts[:, 2:]
features = numpy.hstack([radio_features, astro_features, linear_features])
features = numpy.hstack([normaliser.transform(features[:, :-6]),
scaler.transform(normaliser.transform(features[:, -6:]))])
probabilities = softmax(lr.predict_proba(features)[:, 1].reshape(-1))
entropy = -(probabilities * numpy.log(probabilities)).sum()
if entropy > max_entropy:
max_entropy = entropy
max_subject = subject
return max_subject
max_subject = test(features, labels, normaliser, scaler, classifier)
crowdastro.show.subject(max_subject)
def test(features, labels, subject):
xs_train, xs_test, ts_train, ts_test = sklearn.cross_validation.train_test_split(
features, labels, test_size=0.2, random_state=0)
normaliser = sklearn.preprocessing.Normalizer()
scaler = sklearn.preprocessing.StandardScaler().fit(normaliser.transform(xs_train[:, -6:]))
xs_train = numpy.hstack([normaliser.transform(xs_train[:, :-6]),
scaler.transform(normaliser.transform(xs_train[:, -6:]))])
xs_test = numpy.hstack([normaliser.transform(xs_test[:, :-6]),
scaler.transform(normaliser.transform(xs_test[:, -6:]))])
lr = sklearn.linear_model.LogisticRegression(class_weight='balanced')
lr.fit(xs_train, ts_train)
print('Classification accuracy: {:.02%}'.format(lr.score(xs_test, ts_test)))
subject_id = str(subject['_id']).encode('ascii')
indices = (data['subject_id'] == subject_id).as_matrix().reshape(-1)
potential_hosts = numpy.nan_to_num(data.as_matrix()[indices][:, 1:-1].astype(float))
labels = numpy.nan_to_num(data.as_matrix()[indices][:, -1].astype(bool))
radio = crowdastro.data.get_radio(subject, size='5x5')
radio_patches = numpy.zeros((len(potential_hosts), 1, radius * 2, radius * 2))
linear_features = numpy.zeros((len(potential_hosts), 1))
for index, (x, y, *astro) in enumerate(potential_hosts):
patch = radio[x - radius + padding : x + radius + padding, y - radius + padding : y + radius + padding]
radio_patches[index, 0, :] = patch
linear_features[index] = numpy.hypot(x - 100, y - 100)
radio_features = cnn.predict(radio_patches) @ pca.T
astro_features = potential_hosts[:, 2:]
features = numpy.hstack([radio_features, astro_features, linear_features])
features = numpy.hstack([normaliser.transform(features[:, :-6]),
scaler.transform(normaliser.transform(features[:, -6:]))])
scores = lr.predict_proba(features)[:, 1].reshape(-1)
matplotlib.pyplot.plot(sorted(scores), marker='x')
matplotlib.pyplot.show()
return normaliser, scaler, lr
test(features, labels, subject)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Baseline — logistic regression with CNN, astro, and distance features
Step2: Scaling inputs
Step3: Scaling inputs = good. This isn't terribly surprising.
Step4: Negative/positive labels (instead of binary)
Step5: This had no effect (maybe sklearn does this internally?).
Step6: Normalising massively raises the accuracy. This isn't surprising by itself, but the size of the accuracy gain is surprising.
Step7: While better than scaling alone, this is worse than just normalising. I wonder if there's a relationship between the scales of the CNN features that we're missing somehow when we scale? To test this, I'll normalise all the data, but only scale the last 6 features.
Step8: That worked really well. I'll have to find out why. I'll try looking at the output of one of the more difficult-to-classify subjects.
Step9: Interestingly, this classifier does a lot better at finding the middle of the compact source. It also highlights some odd points off to the side.
Step10: Looks like this one is still hard to classify.
|
12,210
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Alex Rockhill <aprockhill206@gmail.com>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
raw = mne.io.read_raw_fif(data_path + '/MEG/sample/sample_audvis_raw.fif',
preload=True)
events = mne.find_events(raw)
raw = raw.pick_types(meg=False, eeg=True, eog=True, ecg=True, stim=False,
exclude=raw.info['bads'])
raw.set_eeg_reference(projection=True).apply_proj()
raw_csd = mne.preprocessing.compute_current_source_density(raw)
raw.plot()
raw_csd.plot()
raw.plot_psd()
raw_csd.plot_psd()
event_id = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,
'visual/right': 4, 'smiley': 5, 'button': 32}
epochs = mne.Epochs(raw, events, event_id=event_id, tmin=-0.2, tmax=.5,
preload=True)
evoked = epochs['auditory'].average()
times = np.array([-0.1, 0., 0.05, 0.1, 0.15])
evoked_csd = mne.preprocessing.compute_current_source_density(evoked)
evoked.plot_joint(title='Average Reference', show=False)
evoked_csd.plot_joint(title='Current Source Density')
fig, ax = plt.subplots(4, 4)
fig.subplots_adjust(hspace=0.5)
fig.set_size_inches(10, 10)
for i, lambda2 in enumerate([0, 1e-7, 1e-5, 1e-3]):
for j, m in enumerate([5, 4, 3, 2]):
this_evoked_csd = mne.preprocessing.compute_current_source_density(
evoked, stiffness=m, lambda2=lambda2)
this_evoked_csd.plot_topomap(
0.1, axes=ax[i, j], outlines='skirt', contours=4, time_unit='s',
colorbar=False, show=False)
ax[i, j].set_title('stiffness=%i\nλ²=%s' % (m, lambda2))
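# Note on the two parameters swept above: in
# mne.preprocessing.compute_current_source_density, `stiffness` controls the
# flexibility of the spherical spline interpolation and `lambda2` is the
# regularization strength; larger values of either yield smoother CSD
# topographies, as the grid of topomaps shows.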
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load sample subject data
Step2: Plot the raw data and CSD-transformed raw data
Step3: Also look at the power spectral densities
Step4: CSD can also be computed on Evoked (averaged) data.
Step5: First let's look at how CSD affects scalp topography
Step6: CSD has parameters stiffness and lambda2 affecting smoothing and spline flexibility, respectively.
|
12,211
|
<ASSISTANT_TASK:>
Python Code:
from collections import defaultdict
dict_of_colors = defaultdict(list)
with open('input.txt', 'r') as fd:
for line in fd:
if 'no other bags' not in line:
sentence = line.split(' ')
main_color = ' '.join(line.split(' ')[:2])
for i, word in enumerate(sentence):
if word.isdigit():
dict_of_colors[main_color] += [' '.join([sentence[i + 1], sentence[i + 2]])]
def pokemon_games_finder(bags, dict_of_colors):
for outside_color, inside_colors in dict_of_colors.items():
for color in inside_colors:
if color in bags:
bags.add(outside_color)
return bags
my_color = 'shiny gold'
good_bags = set([my_color])
n_good_bags = []
for i in range(0, 10):
good_bags = pokemon_games_finder(good_bags, dict_of_colors)
if len(n_good_bags) > 2 and len(good_bags) == n_good_bags[i - 1]:
break
n_good_bags.append(len(good_bags))
len(good_bags) - 1 # subtract 1 because good_bags includes my_color itself
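# The fixed 10-iteration cap above can stop early on deeply nested inputs.
# A sketch of an explicit fixed-point loop (an alternative, assuming the same
# dict_of_colors structure) keeps expanding until no new outer bags appear.
reachable = {my_color}
while True:
    before = len(reachable)
    for outer, inners in dict_of_colors.items():
        if any(c in reachable for c in inners):
            reachable.add(outer)
    if len(reachable) == before:
        break
print(len(reachable) - 1)  # excludes my_color itself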
dict_of_n = defaultdict(dict)
with open('input.txt', 'r') as fd:
for line in fd:
if 'no other bags' not in line:
sentence = line.split(' ')
main_color = ' '.join(line.split(' ')[:2])
for i, word in enumerate(sentence):
if word.isdigit():
dict_of_n[main_color][' '.join([sentence[i + 1], sentence[i + 2]])] = int(word)
else:
main_color = ' '.join(line.split(' ')[:2])
dict_of_n[main_color] = 0
total_bags = 0
for bag1, count1 in dict_of_n[my_color].items():
total_bags += count1
if dict_of_n[bag1] != 0:
for bag2, count2 in dict_of_n[bag1].items():
count2 = count1 * count2
total_bags += count2
if dict_of_n[bag2] != 0:
for bag3, count3 in dict_of_n[bag2].items():
count3 = count2 * count3
total_bags += count3
if dict_of_n[bag3] != 0:
for bag4, count4 in dict_of_n[bag3].items():
count4 = count3 * count4
total_bags += count4
if dict_of_n[bag4] != 0:
for bag5, count5 in dict_of_n[bag4].items():
count5 = count4 * count5
total_bags += count5
if dict_of_n[bag5] != 0:
for bag6, count6 in dict_of_n[bag5].items():
count6 = count5 * count6
total_bags += count6
total_bags
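# The nested loops above only reach six levels of containment. A recursive
# sketch (an alternative, not the original solution) handles arbitrary depth
# using the same dict_of_n structure, where a leaf bag is stored as 0.
def count_inside(bag):
    if dict_of_n[bag] == 0:
        return 0
    # each inner bag counts once, plus everything it contains
    return sum(n + n * count_inside(inner)
               for inner, n in dict_of_n[bag].items())
print(count_inside(my_color))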
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Problem 2
|
12,212
|
<ASSISTANT_TASK:>
Python Code:
!pip3 install bayesian-optimization
def black_box_function(x, y):
return -x ** 2 - (y - 1) ** 2 + 1
from bayes_opt import BayesianOptimization
# 파라미터 경계 정의
pbounds = {'x': (2, 4), 'y': (-3, 3)}
optimizer = BayesianOptimization(
f=black_box_function,
pbounds=pbounds,
verbose=2, # verbose = 1 prints only when a maximum is observed, verbose = 0 is silent
random_state=1,
)
optimizer.maximize(
init_points=2,
n_iter=3,
)
optimizer.maximize(
init_points=10,
n_iter=3,
)
optimizer.maximize(
init_points=2,
n_iter=9,
)
print(optimizer.max)
optimizer.set_bounds(new_bounds={"x": (-2, 3)})
optimizer.maximize(
init_points=0,
n_iter=5,
)
optimizer.probe(
params={"x": 0.5, "y": 0.7},
lazy=True,
)
print(optimizer.space.keys)
optimizer.probe(
params=[-0.3, 0.1],
lazy=True,
)
optimizer.maximize(init_points=0, n_iter=0)
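# With lazy=True the probes above are only queued; this maximize call with
# init_points=0 and n_iter=0 simply evaluates the queued points without
# running any additional optimization steps.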
from bayes_opt.observer import JSONLogger
from bayes_opt.event import Events
logger = JSONLogger(path="./bayesian_logs.json")
optimizer.subscribe(Events.OPTMIZATION_STEP, logger)
optimizer.maximize(
init_points=2,
n_iter=3,
)
from bayes_opt.util import load_logs
new_optimizer = BayesianOptimization(
f=black_box_function,
pbounds={"x": (-2, 2), "y": (-2, 2)},
verbose=2,
random_state=7,
)
print(len(new_optimizer.space))
load_logs(new_optimizer, logs=["./bayesian_logs.json"]);
print("New optimizer is now aware of {} points.".format(len(new_optimizer.space)))
new_optimizer.maximize(
init_points=0,
n_iter=10,
)
optimizer = BayesianOptimization(
f=black_box_function,
pbounds={'x': (-2, 2), 'y': (-3, 3)},
verbose=2,
random_state=1,
)
from bayes_opt import UtilityFunction
utility = UtilityFunction(kind="ucb", kappa=2.5, xi=0.0)
next_point_to_probe = optimizer.suggest(utility)
print("Next point to probe is:", next_point_to_probe)
target = black_box_function(**next_point_to_probe)
print("Found the target value to be:", target)
optimizer.register(
params=next_point_to_probe,
target=target,
)
for _ in range(5):
next_point = optimizer.suggest(utility)
target = black_box_function(**next_point)
optimizer.register(params=next_point, target=target)
print(target, next_point)
print(optimizer.max)
def func_with_discrete_params(x, y, d):
# Simulate necessity of having d being discrete.
assert type(d) == int
return ((x + y + d) // (1 + d)) / (1 + (x + y) ** 2)
def function_to_be_optimized(x, y, w):
d = int(w)
return func_with_discrete_params(x, y, d)
optimizer = BayesianOptimization(
f=function_to_be_optimized,
pbounds={'x': (-10, 10), 'y': (-10, 10), 'w': (0, 5)},
verbose=2,
random_state=1,
)
optimizer.maximize(alpha=1e-3)
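# `alpha` is forwarded to the underlying sklearn GaussianProcessRegressor,
# where it acts as assumed observation noise; a larger alpha makes the GP
# more tolerant of the discontinuities introduced by the int(w) cast above.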
optimizer = BayesianOptimization(
f=black_box_function,
pbounds={'x': (-2, 2), 'y': (-3, 3)},
verbose=2,
random_state=1,
)
optimizer.maximize(
init_points=1,
n_iter=5,
# What follows are GP regressor parameters
alpha=1e-3,
n_restarts_optimizer=5
)
optimizer.set_gp_params(normalize_y=True)
from bayes_opt.event import DEFAULT_EVENTS, Events
optimizer = BayesianOptimization(
f=black_box_function,
pbounds={'x': (-2, 2), 'y': (-3, 3)},
verbose=2,
random_state=1,
)
class BasicObserver:
def update(self, event, instance):
"""Does whatever you want with the event and `BayesianOptimization` instance."""
print("Event `{}` was observed".format(event))
my_observer = BasicObserver()
optimizer.subscribe(
event=Events.OPTMIZATION_STEP,
subscriber=my_observer,
callback=None, # Will use the `update` method as callback
)
def my_callback(event, instance):
print("Go nuts here!")
optimizer.subscribe(
event=Events.OPTMIZATION_START,
subscriber="Any hashable object",
callback=my_callback,
)
optimizer.maximize(init_points=1, n_iter=2)
DEFAULT_EVENTS
from bayes_opt import BayesianOptimization
from bayes_opt import UtilityFunction
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import gridspec
%matplotlib inline
def target(x):
return np.exp(-(x - 2)**2) + np.exp(-(x - 6)**2/10) + 1/ (x**2 + 1)
x = np.linspace(-2, 10, 10000).reshape(-1, 1)
y = target(x)
plt.plot(x, y);
optimizer = BayesianOptimization(target, {'x': (-2, 10)}, random_state=27)
optimizer.maximize(init_points=2, n_iter=0, kappa=5)
def posterior(optimizer, x_obs, y_obs, grid):
optimizer._gp.fit(x_obs, y_obs)
mu, sigma = optimizer._gp.predict(grid, return_std=True)
return mu, sigma
def plot_gp(optimizer, x, y):
fig = plt.figure(figsize=(16, 10))
steps = len(optimizer.space)
fig.suptitle(
'Gaussian Process and Utility Function After {} Steps'.format(steps),
fontdict={'size':30}
)
gs = gridspec.GridSpec(2, 1, height_ratios=[3, 1])
axis = plt.subplot(gs[0])
acq = plt.subplot(gs[1])
x_obs = np.array([[res["params"]["x"]] for res in optimizer.res])
y_obs = np.array([res["target"] for res in optimizer.res])
mu, sigma = posterior(optimizer, x_obs, y_obs, x)
axis.plot(x, y, linewidth=3, label='Target')
axis.plot(x_obs.flatten(), y_obs, 'D', markersize=8, label=u'Observations', color='r')
axis.plot(x, mu, '--', color='k', label='Prediction')
axis.fill(np.concatenate([x, x[::-1]]),
np.concatenate([mu - 1.9600 * sigma, (mu + 1.9600 * sigma)[::-1]]),
alpha=.6, fc='c', ec='None', label='95% confidence interval')
axis.set_xlim((-2, 10))
axis.set_ylim((None, None))
axis.set_ylabel('f(x)', fontdict={'size':20})
axis.set_xlabel('x', fontdict={'size':20})
utility_function = UtilityFunction(kind="ucb", kappa=5, xi=0)
utility = utility_function.utility(x, optimizer._gp, 0)
acq.plot(x, utility, label='Utility Function', color='purple')
acq.plot(x[np.argmax(utility)], np.max(utility), '*', markersize=15,
label=u'Next Best Guess', markerfacecolor='gold', markeredgecolor='k', markeredgewidth=1)
acq.set_xlim((-2, 10))
acq.set_ylim((0, np.max(utility) + 0.5))
acq.set_ylabel('Utility', fontdict={'size':20})
acq.set_xlabel('x', fontdict={'size':20})
axis.legend(loc=2, bbox_to_anchor=(1.01, 1), borderaxespad=0.)
acq.legend(loc=2, bbox_to_anchor=(1.01, 1), borderaxespad=0.)
plot_gp(optimizer, x, y)
optimizer.maximize(init_points=0, n_iter=1, kappa=5)
plot_gp(optimizer, x, y)
optimizer.maximize(init_points=0, n_iter=1, kappa=5)
plot_gp(optimizer, x, y)
optimizer.maximize(init_points=0, n_iter=1, kappa=5)
plot_gp(optimizer, x, y)
optimizer.maximize(init_points=0, n_iter=1, kappa=5)
plot_gp(optimizer, x, y)
optimizer.maximize(init_points=0, n_iter=1, kappa=5)
plot_gp(optimizer, x, y)
optimizer.maximize(init_points=0, n_iter=1, kappa=5)
plot_gp(optimizer, x, y)
optimizer.maximize(init_points=0, n_iter=1, kappa=5)
plot_gp(optimizer, x, y)
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier as RFC
from sklearn.svm import SVC
from bayes_opt import BayesianOptimization
from bayes_opt.util import Colours
def get_data():
"""Synthetic binary classification dataset."""
data, targets = make_classification(
n_samples=1000,
n_features=45,
n_informative=12,
n_redundant=7,
random_state=134985745,
)
return data, targets
def svc_cv(C, gamma, data, targets):
"""SVC cross validation.

This function will instantiate an SVC classifier with parameters C and
gamma. Combined with data and targets, this will in turn be used to perform
cross validation. The result of cross validation is returned.

Our goal is to find combinations of C and gamma that maximize the roc_auc
metric.
"""
estimator = SVC(C=C, gamma=gamma, random_state=2)
cval = cross_val_score(estimator, data, targets, scoring='roc_auc', cv=4)
return cval.mean()
def rfc_cv(n_estimators, min_samples_split, max_features, data, targets):
"""Random Forest cross validation.

This function will instantiate a random forest classifier with parameters
n_estimators, min_samples_split, and max_features. Combined with data and
targets, this will in turn be used to perform cross validation. The result
of cross validation is returned.

Our goal is to find combinations of n_estimators, min_samples_split, and
max_features that minimize the log loss.
"""
estimator = RFC(
n_estimators=n_estimators,
min_samples_split=min_samples_split,
max_features=max_features,
random_state=2
)
cval = cross_val_score(estimator, data, targets,
scoring='neg_log_loss', cv=4)
return cval.mean()
def optimize_svc(data, targets):
"""Apply Bayesian Optimization to SVC parameters."""
def svc_crossval(expC, expGamma):
"""Wrapper of SVC cross validation.

Notice how we transform between regular and log scale. While this
is not technically necessary, it greatly improves the performance
of the optimizer.
"""
C = 10 ** expC
gamma = 10 ** expGamma
return svc_cv(C=C, gamma=gamma, data=data, targets=targets)
optimizer = BayesianOptimization(
f=svc_crossval,
pbounds={"expC": (-3, 2), "expGamma": (-4, -1)},
random_state=1234,
verbose=2
)
optimizer.maximize(n_iter=10)
print("Final result:", optimizer.max)
def optimize_rfc(data, targets):
"""Apply Bayesian Optimization to Random Forest parameters."""
def rfc_crossval(n_estimators, min_samples_split, max_features):
"""Wrapper of RandomForest cross validation.

Notice how we ensure n_estimators and min_samples_split are cast
to integer before we pass them along. Moreover, to avoid max_features
taking values outside the (0, 1) range, we also ensure it is capped
accordingly.
"""
return rfc_cv(
n_estimators=int(n_estimators),
min_samples_split=int(min_samples_split),
max_features=max(min(max_features, 0.999), 1e-3),
data=data,
targets=targets,
)
optimizer = BayesianOptimization(
f=rfc_crossval,
pbounds={
"n_estimators": (10, 250),
"min_samples_split": (2, 25),
"max_features": (0.1, 0.999),
},
random_state=1234,
verbose=2
)
optimizer.maximize(n_iter=10)
print("Final result:", optimizer.max)
if __name__ == "__main__":
data, targets = get_data()
print(Colours.yellow("--- Optimizing SVM ---"))
optimize_svc(data, targets)
print(Colours.green("--- Optimizing Random Forest ---"))
optimize_rfc(data, targets)
import time
import random
from bayes_opt import BayesianOptimization
from bayes_opt.util import UtilityFunction, Colours
import asyncio
import threading
try:
import json
import tornado.ioloop
import tornado.httpserver
from tornado.web import RequestHandler
import requests
except ImportError:
raise ImportError(
"In order to run this example you must have the libraries: " +
"`tornado` and `requests` installed."
)
def black_box_function(x, y):
"""Function with unknown internals we wish to maximize.

This is just serving as an example; for all intents and
purposes, think of the internals of this function, i.e. the process
which generates its output values, as unknown.
"""
time.sleep(random.randint(1, 7))
return -x ** 2 - (y - 1) ** 2 + 1
class BayesianOptimizationHandler(RequestHandler):
"""Basic functionality for NLP handlers."""
_bo = BayesianOptimization(
f=black_box_function,
pbounds={"x": (-4, 4), "y": (-3, 3)}
)
_uf = UtilityFunction(kind="ucb", kappa=3, xi=1)
def post(self):
"""Deal with incoming requests."""
body = tornado.escape.json_decode(self.request.body)
try:
self._bo.register(
params=body["params"],
target=body["target"],
)
print("BO has registered: {} points.".format(len(self._bo.space)), end="\n\n")
except KeyError:
pass
finally:
suggested_params = self._bo.suggest(self._uf)
self.write(json.dumps(suggested_params))
def run_optimization_app():
asyncio.set_event_loop(asyncio.new_event_loop())
handlers = [
(r"/bayesian_optimization", BayesianOptimizationHandler),
]
server = tornado.httpserver.HTTPServer(
tornado.web.Application(handlers)
)
server.listen(9009)
tornado.ioloop.IOLoop.instance().start()
def run_optimizer():
global optimizers_config
config = optimizers_config.pop()
name = config["name"]
colour = config["colour"]
register_data = {}
max_target = None
for _ in range(10):
status = name + " wants to register: {}.\n".format(register_data)
resp = requests.post(
url="http://localhost:9009/bayesian_optimization",
json=register_data,
).json()
target = black_box_function(**resp)
register_data = {
"params": resp,
"target": target,
}
if max_target is None or target > max_target:
max_target = target
status += name + " got {} as target.\n".format(target)
status += name + " will register next: {}.\n".format(register_data)
print(colour(status), end="\n")
global results
results.append((name, max_target))
print(colour(name + " is done!"), end="\n\n")
if __name__ == "__main__":
ioloop = tornado.ioloop.IOLoop.instance()
optimizers_config = [
{"name": "optimizer 1", "colour": Colours.red},
{"name": "optimizer 2", "colour": Colours.green},
{"name": "optimizer 3", "colour": Colours.blue},
]
app_thread = threading.Thread(target=run_optimization_app)
app_thread.daemon = True
app_thread.start()
targets = (
run_optimizer,
run_optimizer,
run_optimizer
)
optimizer_threads = []
for target in targets:
optimizer_threads.append(threading.Thread(target=target))
optimizer_threads[-1].daemon = True
optimizer_threads[-1].start()
results = []
for optimizer_thread in optimizer_threads:
optimizer_thread.join()
for result in results:
print(result[0], "found a maximum value of: {}".format(result[1]))
ioloop.stop()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Define the function to optimize
Step2: 2. Start the optimization
Step3: n_iter
Step4: The best combination found can be inspected via optimizer.max
Step5: 2.1 Changing the bounds
Step6: 3. Guiding the optimization
Step7: 4. Saving, loading, and restarting
Step8: 4.2 Loading progress
Step9: Advanced Part
Step10: A UtilityFunction instance is needed
Step11: suggest can be called at any time
Step12: Evaluate
Step13: Register which value was observed
Step14: 1.1 maximize loop
Step15: 2. Dealing with discrete parameters
Step16: 3.1 Passing parameters to the GP
Step17: Use set_gp_params when calling maximize several times or optimizing outside of maximize
Step19: Observers, continued!
Step20: Visualization
Step21: (figure: embedded image)
Step29: sklearn_example.py
Step33: Distributed
|
12,213
|
<ASSISTANT_TASK:>
Python Code:
set1 = set('Moment of Truth')
set1
set2 = set()
set2.add('A')
set2
# set is similar to dictionary but containing only keys
setA = {'Apple','America','August'}
setA
set3 = set()
set3.add('Blue')
set3.add('Green')
print(set3)
set3.add('Blue')
print(set3)
# please note set is case-sensitive
colors = ['Blue','Green','Red','White','blue','Blue','White','Black']
unique_colors = set(colors)
print(unique_colors)
set4 = {'A',('B','C')}
set4
set5 = {'X',['Y','Z']} # will generate error
set5
# Mutable Set
set6 = set(['A','B','C','D','E'])
set6.add('F')
print(set6)
# Immutable Set
set7 = frozenset(['A','B','C'])
set7.add('D') # will generate error
set7
# Sets are unordered, to get ordered set use sorted function
set8 = sorted(set6)
set8
set6.clear()
set6
print(set4)
set4.remove('A') # will report an error if element is not present
print(set4)
set4.discard('A') # will not report an error if element is not present
print(set4)
set4.remove('A') # this will create an error
print(len(set4))
set4.add('A')
set4.add(('D','E'))
print(set4)
# add will not take any mutable object like list
set4.add(['1','2'])
set4 # this will create an error
# Union Operator
colorsA = {'Blue','Black','White','Red','Green'}
colorsB = {'Blue','White'}
colorsA.union(colorsB)
# Intersection operator
colorsA.intersection(colorsB)
x = {"a","b","c","d","e"}
x.pop()
x.pop()
# Difference between sets
x = {"a","b","c","d","e"}
y = {"b","c"}
z = {"c","d"}
x.difference(y)
x.difference(y).difference(z)
# instead of using the method difference, we can use the operator "-"
print(x - y)
print(x - y - z)
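# difference also accepts several sets at once, which is equivalent to
# chaining the calls above (a convenience worth noting, not used earlier):
x = {"a","b","c","d","e"}
print(x.difference(y, z))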
# difference_update removes all elements of another set from this set.
x = {"a","b","c","d","e"}
y = {"b","c"}
x.difference_update(y)
print(x)
# x.difference_update(y) is the same as "x = x - y"
x = {"a","b","c","d","e"}
y = {"b","c"}
x = x - y
print(x)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Sets can contain only unique entries. If you try to add a duplicate entry to a set, it is simply ignored.
Step2: Sets do not allow mutable objects as elements. As you can see, tuples are accepted as set elements, while lists are not, since lists are mutable.
Step3: Mutable and Immutable Sets
Step4: Set Operations
|
12,214
|
<ASSISTANT_TASK:>
Python Code:
# take the log, which "penalizes higher values more than smaller values"
ts_log = np.log(rub["Adj Close"])
test_stationarity(ts_log)
# далее вычтем скользящее среднее
moving_avg = pd.rolling_mean(ts_log,50)
plt.plot(ts_log)
plt.plot(moving_avg, color='red')
ts_log_moving_avg_diff = ts_log - moving_avg
ts_log_moving_avg_diff.dropna(inplace=True)
test_stationarity(ts_log_moving_avg_diff)
expwighted_avg = pd.ewma(ts_log, halflife=50)
plt.plot(ts_log)
plt.plot(expwighted_avg, color='red')
ts_log_ewma_diff = ts_log - expwighted_avg
test_stationarity(ts_log_ewma_diff)
# Differencing - excellent results
ts_log_diff = ts_log - ts_log.shift(periods=1)
ts_log_diff.dropna(inplace=True)
test_stationarity(ts_log_diff)
index_q= ts_log.index
ts_log = pd.DataFrame(data=ts_log.values, index=pd.to_datetime(ts_log.index), columns=['usd/rub'])
# extract the trend + seasonality, and see what remains of the series once both are subtracted
from statsmodels.tsa.seasonal import seasonal_decompose
decomposition = seasonal_decompose(ts_log, freq = 101)
trend = decomposition.trend
seasonal = decomposition.seasonal
residual = decomposition.resid
plt.subplot(411)
plt.plot(ts_log, label='Original')
plt.legend(loc='best')
plt.subplot(412)
plt.plot(trend, label='Trend')
plt.legend(loc='best')
plt.subplot(413)
plt.plot(seasonal,label='Seasonality')
plt.legend(loc='best')
plt.subplot(414)
plt.plot(residual, label='Residuals')
plt.legend(loc='best')
plt.tight_layout()
ts_log_decompose = residual
ts_log_decompose.dropna(inplace=True)
ts_log_decompose.plot()
test_stationarity(ts_log_decompose)
#ACF and PACF plots:
from statsmodels.tsa.stattools import acf, pacf
lag_acf = acf(ts_log_diff, nlags=20)
lag_pacf = pacf(ts_log_diff, nlags=20, method='ols')
import matplotlib.ticker as ticker
plt.figure(figsize=(15,4))
tick_spacing = 1
fig, ax = plt.subplots(1,1)
#ax.plot(x,y)
ax.xaxis.set_major_locator(ticker.MultipleLocator(tick_spacing))
#Plot ACF:
plt.subplot(121)
plt.plot(lag_acf)
plt.axhline(y=0,linestyle='--',color='gray')
plt.axhline(y=-1.96/np.sqrt(len(ts_log_diff)),linestyle='--',color='gray')
plt.axhline(y=1.96/np.sqrt(len(ts_log_diff)),linestyle='--',color='gray')
plt.title('Autocorrelation Function')
#Plot PACF:
plt.subplot(122)
plt.plot(lag_pacf)
plt.axhline(y=0,linestyle='--',color='gray')
plt.axhline(y=-1.96/np.sqrt(len(ts_log_diff)),linestyle='--',color='gray')
plt.axhline(y=1.96/np.sqrt(len(ts_log_diff)),linestyle='--',color='gray')
plt.title('Partial Autocorrelation Function')
plt.tight_layout()
plt.figure(figsize=(50,4))
plt.plot(lag_acf)
plt.axhline(y=0,linestyle='--',color='gray')
plt.axhline(y=-1.96/np.sqrt(len(ts_log_diff)),linestyle='--',color='gray')
plt.axhline(y=1.96/np.sqrt(len(ts_log_diff)),linestyle='--',color='gray')
plt.show()
from statsmodels.tsa.arima_model import ARIMA
model = ARIMA(ts_log, order=(1, 1, 0))
results_AR = model.fit(disp=-1)
plt.plot(ts_log_diff)
plt.plot(results_AR.fittedvalues, color='red')
plt.title('RSS: %.4f'% sum((results_AR.fittedvalues-ts_log_diff)**2))
model = ARIMA(ts_log, order=(0, 1, 1))
results_MA = model.fit(disp=-1)
plt.plot(ts_log_diff)
plt.plot(results_MA.fittedvalues, color='red')
plt.title('RSS: %.4f'% sum((results_MA.fittedvalues-ts_log_diff)**2))
model = ARIMA(ts_log, order=(1, 1, 2))
results_ARIMA = model.fit(disp=-1)
plt.plot(ts_log_diff)
plt.plot(results_ARIMA.fittedvalues, color='red')
plt.title('RSS: %.4f'% sum((results_ARIMA.fittedvalues-ts_log_diff)**2))
predictions_ARIMA_diff = pd.Series(results_ARIMA.fittedvalues, copy=True)
print(predictions_ARIMA_diff.head())
predictions_ARIMA_diff_cumsum = predictions_ARIMA_diff.cumsum()
print(predictions_ARIMA_diff_cumsum.head())
predictions_ARIMA_log = pd.Series(ts_log.ix[0], index=ts_log.index)
predictions_ARIMA_log = predictions_ARIMA_log.add(predictions_ARIMA_diff_cumsum,fill_value=0)
predictions_ARIMA_log.head()
predictions_ARIMA = np.exp(predictions_ARIMA_log)
plt.plot(rub["Adj Close"])
plt.plot(predictions_ARIMA)
#plt.title('RMSE: %.4f'% np.sqrt(sum((predictions_ARIMA-rub["Adj Close"])**2)/len(rub["Adj Close"])))
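# A possible next step (a sketch, not part of the original analysis): use the
# fitted model to forecast future log-values and map them back with np.exp.
# In this statsmodels version, ARIMAResults.forecast returns the forecast,
# its standard error, and a confidence interval.
forecast, stderr, conf_int = results_ARIMA.forecast(steps=30)
plt.plot(np.exp(forecast))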
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Two ways to remove seasonality
|
12,215
|
<ASSISTANT_TASK:>
Python Code:
import sys
print('{0[0]}.{0[1]}'.format(sys.version_info))
pi = 3.1416
radio = 5
area= pi * radio**2
print(area)
color_list_1 = set(["White", "Black", "Red"])
color_list_2 = set(["Red", "Green"])
color_list_1 - color_list_2
path = 'C:/Users/Margarita/Documents/Mis_documentos/Biologia_EAFIT/Semestre_IX/Programacion/'
size = len (path)
guardar = ""
for i in range(3,size):
if path[i] != '/':
guardar = guardar + path[i]
else:
print(guardar)
guardar = ""
my_list = [5,7,8,9,17]
sum_list = sum (my_list)
print(sum_list)
elemento_a_insertar = 'E'
my_list = [1, 2, 3, 4]
size = len (my_list)
carpeta = []
for i in range(size):
carpeta = carpeta + [elemento_a_insertar,my_list[i]]
my_list = carpeta
print (my_list)
N=3
lista=[]
listaa = []
my_list = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n']
size = len(my_list)
for i in range(N):
lista = lista + [listaa]
for i in range (size):
lista[i%N] = lista[i%N] + [my_list[i]]
print(lista)
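# The same round-robin split can be written with extended slicing
# (an equivalent one-liner, shown for comparison):
print([my_list[i::N] for i in range(N)])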
list_of_lists = [ [1,2,3], [4,5,6], [10,11,12], [7,8,9] ]
size = len(list_of_lists)
carpeta = list_of_lists[0]
for i in range(size):
if sum(list_of_lists[i]) > sum(carpeta):
carpeta = list_of_lists[i]
print(carpeta)
N = 5
diccio = {}
for i in range(1,N+1):
diccio [i]= i**2
print(diccio)
dictionary_list=[{1:10, 2:20} , {3:30, 4:40}, {5:50,6:60}]
final= {}
for i in dictionary_list:
for k in i:
final[k] = i[k]
print(final)
dictionary_list=[{'numero': 10, 'cantidad': 5} , {'numero': 12, 'cantidad': 3}, {'numero': 5, 'cantidad': 45}]
for i in range(0,len(dictionary_list)):
dictionary_list[i]['cuadrado']= dictionary_list[i]['numero']**2
print(dictionary_list)
def diferencia_conjuntos(color_list_1, color_list_2):
print (color_list_1 - color_list_2)
# Implement the function
diferencia_conjuntos(
color_list_1 = set(["White", "Black", "Red"]) ,
color_list_2 = set(["Red", "Green"]))
def max_list_of_lists(list_of_lists):
size = len(list_of_lists)
carpeta = list_of_lists[0]
for i in range(size):
if sum(list_of_lists[i]) > sum(carpeta):
carpeta = list_of_lists[i]
print(carpeta)
# Implement the function
list_of_lists = [ [1,2,3], [4,5,6], [10,11,12], [7,8,9] ]
max_list_of_lists (list_of_lists)
def diccionario_cuadradovalor(N):
    diccio = {}
    for i in range(1, N+1):
        diccio[i] = i**2
    print(diccio)
# Implement the function:
N = 5
diccionario_cuadradovalor(N)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Compute the area of a circle of radius 5
Step2: 3. Write code that prints all the colors that are in color_list_1 and not present in color_list_2
Step3: 4. Print one line for each folder that makes up the path where Python is running
Step4: Working with Lists
Step5: 6. Insert elemento_a_insertar before each element of my_list
Step6: The expected output is a list like this
Step7: 7. Split my_list into a list of lists every N elements
Step8: Expected output
Step9: 8. Find the list within list_of_lists whose sum of elements is the largest
Step10: Expected output
Step11: Working with Dictionaries
Step12: Expected output
Step13: 10. Concatenate the dictionaries in dictionary_list to create a new one
Step14: Expected output
Step15: 11. Add a new key "cuadrado" with the value of "numero" of each dictionary squared
Step16: Expected output
Step17: Working with Functions
Step18: 13. Define and call a function that takes a list of lists as a parameter and solves problem 8
Step19: 14. Define and call a function that takes a parameter N and solves problem 9
|
12,216
|
<ASSISTANT_TASK:>
Python Code:
import seaborn as sns
%matplotlib inline
tips = sns.load_dataset('tips')
tips.head()
sns.distplot(tips['total_bill'])
# Safe to ignore warnings
sns.distplot(tips['total_bill'],kde=False,bins=30)
sns.jointplot(x='total_bill',y='tip',data=tips,kind='scatter')
sns.jointplot(x='total_bill',y='tip',data=tips,kind='hex')
sns.jointplot(x='total_bill',y='tip',data=tips,kind='reg')
sns.pairplot(tips)
sns.pairplot(tips,hue='sex',palette='coolwarm')
sns.rugplot(tips['total_bill'])
# Don't worry about understanding this code!
# It's just for the diagram below
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
#Create dataset
dataset = np.random.randn(25)
# Create another rugplot
sns.rugplot(dataset);
# Set up the x-axis for the plot
x_min = dataset.min() - 2
x_max = dataset.max() + 2
# 100 equally spaced points from x_min to x_max
x_axis = np.linspace(x_min,x_max,100)
# Set up the bandwidth, for info on this:
url = 'http://en.wikipedia.org/wiki/Kernel_density_estimation#Practical_estimation_of_the_bandwidth'
bandwidth = ((4*dataset.std()**5)/(3*len(dataset)))**.2
# Create an empty kernel list
kernel_list = []
# Plot each basis function
for data_point in dataset:
# Create a kernel for each point and append to list
kernel = stats.norm(data_point,bandwidth).pdf(x_axis)
kernel_list.append(kernel)
#Scale for plotting
kernel = kernel / kernel.max()
kernel = kernel * .4
plt.plot(x_axis,kernel,color = 'grey',alpha=0.5)
plt.ylim(0,1)
# To get the kde plot we can sum these basis functions.
# Plot the sum of the basis function
sum_of_kde = np.sum(kernel_list,axis=0)
# Plot figure
fig = plt.plot(x_axis,sum_of_kde,color='indianred')
# Add the initial rugplot
sns.rugplot(dataset,c = 'indianred')
# Get rid of y-tick marks
plt.yticks([])
# Set title
plt.suptitle("Sum of the Basis Functions")
sns.kdeplot(tips['total_bill'])
sns.rugplot(tips['total_bill'])
sns.kdeplot(tips['tip'])
sns.rugplot(tips['tip'])
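# distplot can also combine these layers directly; for example (using the
# same tips data), this draws the KDE with a rug and no histogram:
sns.distplot(tips['tip'], hist=False, rug=True)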
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data
Step2: distplot
Step3: To remove the kde layer and just have the histogram use
Step4: jointplot
Step5: pairplot
Step6: rugplot
Step7: kdeplot
Step8: So with our tips dataset
|
12,217
|
<ASSISTANT_TASK:>
Python Code:
X = ["Some say the world will end in fire,",
"Some say in ice."]
len(X)
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer()
vectorizer.fit(X)
vectorizer.vocabulary_
X_bag_of_words = vectorizer.transform(X)
X_bag_of_words.shape
X_bag_of_words
X_bag_of_words.toarray()
vectorizer.get_feature_names()
vectorizer.inverse_transform(X_bag_of_words)
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf_vectorizer = TfidfVectorizer()
tfidf_vectorizer.fit(X)
import numpy as np
np.set_printoptions(precision=2)
print(tfidf_vectorizer.transform(X).toarray())
# look at sequences of tokens of minimum length 2 and maximum length 2
bigram_vectorizer = CountVectorizer(ngram_range=(2, 2))
bigram_vectorizer.fit(X)
bigram_vectorizer.get_feature_names()
bigram_vectorizer.transform(X).toarray()
gram_vectorizer = CountVectorizer(ngram_range=(1, 2))
gram_vectorizer.fit(X)
gram_vectorizer.get_feature_names()
gram_vectorizer.transform(X).toarray()
X
char_vectorizer = CountVectorizer(ngram_range=(2, 2), analyzer="char")
char_vectorizer.fit(X)
print(char_vectorizer.get_feature_names())
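# A related option worth noting (not used above): analyzer="char_wb" builds
# character n-grams only from text inside word boundaries, padding each word
# with spaces, which often works better for noisy text.
char_wb_vectorizer = CountVectorizer(ngram_range=(2, 2), analyzer="char_wb")
char_wb_vectorizer.fit(X)
print(char_wb_vectorizer.get_feature_names())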
zen = """Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than *right* now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!"""
# %load solutions/11_ngrams.py
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: tf-idf Encoding
Step2: tf-idfs are a way to represent documents as feature vectors. tf-idfs can be understood as a modification of the raw term frequencies (tf); the tf is the count of how often a particular word occurs in a given document. The concept behind the tf-idf is to downweight terms proportionally to the number of documents in which they occur. Here, the idea is that terms that occur in many different documents are likely unimportant or don't contain any useful information for Natural Language Processing tasks such as document classification. If you are interested in the mathematical details and equations, see this external IPython Notebook that walks you through the computation.
Step3: Often we want to include unigrams (single tokens) AND bigrams, which we can do by passing the following tuple as an argument to the ngram_range parameter of the CountVectorizer function
Step4: Character n-grams
Step6: Exercise
|
12,218
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'csiro-bom', 'sandbox-2', 'seaice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
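# For example (illustrative only; the actual values depend on the model):
# DOC.set_value("Sea ice concentration")
# DOC.set_value("Sea ice thickness")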
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
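# Editorial example (hypothetical values, not from any real model): each
# property above is completed by replacing the commented template with a call
# such as
#     DOC.set_value("Delta-Eddington")
# using one of the listed Valid Choices where a choice list is given.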
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Variables
Step7: 3. Key Properties --> Seawater Properties
Step8: 3.2. Ocean Freezing Point Value
Step9: 4. Key Properties --> Resolution
Step10: 4.2. Canonical Horizontal Resolution
Step11: 4.3. Number Of Horizontal Gridpoints
Step12: 5. Key Properties --> Tuning Applied
Step13: 5.2. Target
Step14: 5.3. Simulations
Step15: 5.4. Metrics Used
Step16: 5.5. Variables
Step17: 6. Key Properties --> Key Parameter Values
Step18: 6.2. Additional Parameters
Step19: 7. Key Properties --> Assumptions
Step20: 7.2. On Diagnostic Variables
Step21: 7.3. Missing Processes
Step22: 8. Key Properties --> Conservation
Step23: 8.2. Properties
Step24: 8.3. Budget
Step25: 8.4. Was Flux Correction Used
Step26: 8.5. Corrected Conserved Prognostic Variables
Step27: 9. Grid --> Discretisation --> Horizontal
Step28: 9.2. Grid Type
Step29: 9.3. Scheme
Step30: 9.4. Thermodynamics Time Step
Step31: 9.5. Dynamics Time Step
Step32: 9.6. Additional Details
Step33: 10. Grid --> Discretisation --> Vertical
Step34: 10.2. Number Of Layers
Step35: 10.3. Additional Details
Step36: 11. Grid --> Seaice Categories
Step37: 11.2. Number Of Categories
Step38: 11.3. Category Limits
Step39: 11.4. Ice Thickness Distribution Scheme
Step40: 11.5. Other
Step41: 12. Grid --> Snow On Seaice
Step42: 12.2. Number Of Snow Levels
Step43: 12.3. Snow Fraction
Step44: 12.4. Additional Details
Step45: 13. Dynamics
Step46: 13.2. Transport In Thickness Space
Step47: 13.3. Ice Strength Formulation
Step48: 13.4. Redistribution
Step49: 13.5. Rheology
Step50: 14. Thermodynamics --> Energy
Step51: 14.2. Thermal Conductivity
Step52: 14.3. Heat Diffusion
Step53: 14.4. Basal Heat Flux
Step54: 14.5. Fixed Salinity Value
Step55: 14.6. Heat Content Of Precipitation
Step56: 14.7. Precipitation Effects On Salinity
Step57: 15. Thermodynamics --> Mass
Step58: 15.2. Ice Vertical Growth And Melt
Step59: 15.3. Ice Lateral Melting
Step60: 15.4. Ice Surface Sublimation
Step61: 15.5. Frazil Ice
Step62: 16. Thermodynamics --> Salt
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Step65: 17.2. Constant Salinity Value
Step66: 17.3. Additional Details
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Step68: 18.2. Constant Salinity Value
Step69: 18.3. Additional Details
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Step72: 20.2. Additional Details
Step73: 21. Thermodynamics --> Melt Ponds
Step74: 21.2. Formulation
Step75: 21.3. Impacts
Step76: 22. Thermodynamics --> Snow Processes
Step77: 22.2. Snow Aging Scheme
Step78: 22.3. Has Snow Ice Formation
Step79: 22.4. Snow Ice Formation Scheme
Step80: 22.5. Redistribution
Step81: 22.6. Heat Diffusion
Step82: 23. Radiative Processes
Step83: 23.2. Ice Radiation Transmission
|
12,219
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import math
def radius(x):
"""length of a vector"""
if len(x.shape) == 1:
return math.sqrt(np.inner(x,x))
# elif len(x.shape) == 2:
def potential(pos):
"""potential, defined as a negative number"""
r = radius(pos)
y1 = 1+r*r
return -1.0/math.sqrt(y1)
def angmomz(pos,vel):
"""Angular momentum in Z"""
return pos[0]*vel[1] - pos[1]*vel[0]
def energy(pos,vel):
"""Kinetic and Potential energy"""
return 0.5*np.inner(vel,vel) + potential(pos)
def force(pos):
"""force/acceleration (in our units mass=1 scale-length=1)"""
# note we might be able to use sympy
r = radius(pos)
y2 = 1.0/math.sqrt(1+r*r)
return -pos*y2*y2*y2
def step0(pos,vel, dt):
"""step0: simple first order Euler"""
old = pos
pos = pos + dt*vel
vel = vel + dt*force(old)
return (pos,vel)
def step1(pos,vel, dt):
"""step1: simple first order Euler - updating position first"""
pos = pos + dt*vel
vel = vel + dt*force(pos)
return (pos,vel)
def step2(pos,vel, dt):
"""step2: simple first order Euler - updating velocity first"""
vel = vel + dt*force(pos)
pos = pos + dt*vel
return (pos,vel)
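# Editorial note: step0 is the fully explicit Euler update (both derivatives
# evaluated at the old state), while step1 and step2 are the two semi-implicit
# (symplectic) Euler variants; symplectic schemes typically keep the energy
# error bounded on orbit problems instead of letting it drift.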
def step4(pos,vel,dt):
"""step4: Runge Kutta 4"""
# not implemented yet
return None
def show_stats(data):
"""Show some stats of a numpy array"""
m = data.mean()
s = data.std()
dmin = data.min()
dmax = data.max()
rmin = (dmin-m)/s
rmax = (dmax-m)/s
print("Mean/Std:",m,s)
print("Min/Max:",dmin,dmax)
print("Rmin/Rmax:",rmin,rmax)
print("Goodness: ",s/m)
x0 = 1.0 # initial X coordinate
v0 = 0.1 # initial Y launch velocity (0.5946 would be a circular orbit)
n = 200 # number of steps to take
dt = 0.1 # integration time step
step = step1 # pick an integration method
print(step.__doc__)
# Derived variables for the remainder
t = 0.0 # always start at t=0
pos = np.array([x0, 0.0, 0.0]) # keeps the current pos
vel = np.array([0.0, v0, 0.0]) # and vel
e = energy(pos,vel)
j = angmomz(pos,vel)
time = np.zeros(1) # time array (we'll append to this)
time[0] = t
phase = np.concatenate(([t,e,j],pos,vel)).reshape(1,9) # watch this peculiar reshape: one 9-element phase-space row
print("e0 =",e)
print("phase = ",phase)
# at x0=1.0 this should be the correct speed for a circular orbit
print("v0_circular=",1/math.pow(2.0,0.75))
%%time
for i in range(n):
(pos,vel) = step(pos,vel,dt)
t = t + dt
e = energy(pos,vel)
j = angmomz(pos,vel)
#print(i,pos,vel)
p = np.concatenate(([t,e,j],pos,vel)).reshape(1,9)
phase = np.concatenate((phase, p),axis=0)
time = np.append(time,t)
#print(phase)
plt.scatter(phase[:,3],phase[:,4],c=time)
plt.axis('equal')
plt.title("Orbit")
x = phase[:,3]
y = phase[:,4]
rad = np.sqrt(x*x+y*y)-1
plt.scatter(phase[:,0],rad)
plt.scatter(phase[:,0], phase[:,1])
plt.title("Conserving Energy?")
show_stats(phase[:,1])
plt.scatter(phase[:,0], phase[:,2])
plt.title("Conserving Angular Momentum?")
show_stats(phase[:,2])
try:
import cPickle as pickle
print("using cPickle")
except:
import pickle
print("using pickle")
# write it
pickle.dump(phase,open("orbit1.p","wb"))
# read it again
phase2 = pickle.load(open("orbit1.p","rb"))
print(phase[0])
print(phase2[0])
from scipy.integrate import odeint
def ofunc(y,t):
"""function to integrate
Note we are re-using the force() function from the first part of this notebook"""
pos = y[0:3]
vel = y[3:]
return np.concatenate((vel,force(pos)))
n=200
phase0 = np.array([x0,0,0, 0,v0,0]) # initial conditions
times = np.arange(0.0,(n+1)*dt,dt) # requested times where we want a solution
# times = np.linspace(0.0,n*dt,n+1)
print(ofunc(phase0,0.0))
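# scipy.integrate.odeint wraps the adaptive-step LSODA solver, so the call
# below typically reaches far smaller errors than the fixed-step loops above.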
%%time
orbit = odeint(ofunc, phase0, times)
plt.scatter(orbit[:,0],orbit[:,1],c=times)
plt.axis('equal')
#plt.scatter(phase[:,3],phase[:,4])
plt.title("Orbit")
# plot the old one again
plt.scatter(phase[:,3],phase[:,4],c=time)
plt.axis('equal')
plt.title("Orbit")
# compare the last
p1 = phase[-1,3:]
p2 = orbit[-1,:]
#
print(phase[-1,0],p1)
print(times[-1],p2)
print(0.0,p1-p2)
# e = energy(pos,vel)
# j = angmomz(pos,vel)
et = np.zeros(len(times))
jt = np.zeros(len(times))
for i in range(len(times)):
pos=orbit[i,:3]
vel=orbit[i,3:]
et[i] = energy(pos,vel)
jt[i] = angmomz(pos,vel)
plt.plot(times,et)
plt.show()
plt.plot(times,jt)
plt.show()
show_stats(et)
show_stats(jt)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step6: The Plummer potential for mass $M_p$ and core radius $r_c$ is given by the equation shown after this list
Step11: Integrator
Step13: Helper functions
Step14: Initial conditions
Step15: Integrate
Step16: Saving data
Step18: Questions
Step19: Energy and Angular Momentum conservation?
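The equation referenced in Step6 was lost in extraction; reconstructing it from the `potential()` function in the code above (which works in units $G = M_p = r_c = 1$), the Plummer potential is

$$\Phi(r) = -\frac{G M_p}{\sqrt{r^2 + r_c^2}} = -\frac{1}{\sqrt{1 + r^2}} \quad (G = M_p = r_c = 1).$$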
|
12,220
|
<ASSISTANT_TASK:>
Python Code:
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
pip install --user apache-beam[gcp]==2.16.0
# Ensure the right version of Tensorflow is installed.
!pip freeze | grep tensorflow==2.1
import apache_beam as beam
print(beam.__version__)
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
# Create SQL query using natality data after the year 2000
query = """
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
FROM
publicdata.samples.natality
WHERE year > 2000
"""
# Call BigQuery and examine in dataframe
from google.cloud import bigquery
df = bigquery.Client().query(query + " LIMIT 100").to_dataframe()
df.head()
import datetime, os
def to_csv(rowdict):
# Pull columns from BQ and create a line
import hashlib
import copy
CSV_COLUMNS = 'weight_pounds,is_male,mother_age,plurality,gestation_weeks'.split(',')
# Create synthetic data where we assume that no ultrasound has been performed
# and so we don't know sex of the baby. Let's assume that we can tell the difference
# between single and multiple, but that the errors rates in determining exact number
# is difficult in the absence of an ultrasound.
no_ultrasound = copy.deepcopy(rowdict)
w_ultrasound = copy.deepcopy(rowdict)
no_ultrasound['is_male'] = 'Unknown'
if rowdict['plurality'] > 1:
no_ultrasound['plurality'] = 'Multiple(2+)'
else:
no_ultrasound['plurality'] = 'Single(1)'
# Change the plurality column to strings
w_ultrasound['plurality'] = ['Single(1)', 'Twins(2)', 'Triplets(3)', 'Quadruplets(4)', 'Quintuplets(5)'][rowdict['plurality'] - 1]
# Write out two rows for each input row, one with ultrasound and one without
for result in [no_ultrasound, w_ultrasound]:
data = ','.join([str(result[k]) if k in result else 'None' for k in CSV_COLUMNS])
key = hashlib.sha224(data.encode('utf-8')).hexdigest() # hash the columns to form a key
yield str('{},{}'.format(data, key))
def preprocess(in_test_mode):
import shutil, os, subprocess
job_name = 'preprocess-babyweight-features' + '-' + datetime.datetime.now().strftime('%y%m%d-%H%M%S')
if in_test_mode:
print('Launching local job ... hang on')
OUTPUT_DIR = './preproc'
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
os.makedirs(OUTPUT_DIR)
else:
print('Launching Dataflow job {} ... hang on'.format(job_name))
OUTPUT_DIR = 'gs://{0}/babyweight/preproc/'.format(BUCKET)
try:
subprocess.check_call('gsutil -m rm -r {}'.format(OUTPUT_DIR).split())
except:
pass
options = {
'staging_location': os.path.join(OUTPUT_DIR, 'tmp', 'staging'),
'temp_location': os.path.join(OUTPUT_DIR, 'tmp'),
'job_name': job_name,
'project': PROJECT,
'region': REGION,
'teardown_policy': 'TEARDOWN_ALWAYS',
'no_save_main_session': True,
'num_workers': 4,
'max_num_workers': 5
}
opts = beam.pipeline.PipelineOptions(flags = [], **options)
if in_test_mode:
RUNNER = 'DirectRunner'
else:
RUNNER = 'DataflowRunner'
p = beam.Pipeline(RUNNER, options = opts)
query = """
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
FROM
publicdata.samples.natality
WHERE year > 2000
AND weight_pounds > 0
AND mother_age > 0
AND plurality > 0
AND gestation_weeks > 0
AND month > 0
"""
if in_test_mode:
query = query + ' LIMIT 100'
for step in ['train', 'eval']:
if step == 'train':
selquery = 'SELECT * FROM ({}) WHERE ABS(MOD(hashmonth, 4)) < 3'.format(query)
else:
selquery = 'SELECT * FROM ({}) WHERE ABS(MOD(hashmonth, 4)) = 3'.format(query)
(p
| '{}_read'.format(step) >> beam.io.Read(beam.io.BigQuerySource(query = selquery, use_standard_sql = True))
| '{}_csv'.format(step) >> beam.FlatMap(to_csv)
| '{}_out'.format(step) >> beam.io.Write(beam.io.WriteToText(os.path.join(OUTPUT_DIR, '{}.csv'.format(step))))
)
job = p.run()
if in_test_mode:
job.wait_until_finish()
print("Done!")
preprocess(in_test_mode = False)
%%bash
gsutil ls gs://${BUCKET}/babyweight/preproc/*-00000*
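# Optional editorial sanity check once the Dataflow job has finished; the
# exact shard names are an assumption based on the WriteToText prefixes above.
gsutil cat gs://${BUCKET}/babyweight/preproc/train.csv-00000-of-* | head -n 3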
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Run the command again if you are getting oauth2client error.
Step2: You may receive a UserWarning about the Apache Beam SDK for Python 3 as not being yet fully supported. Don't worry about this.
Step4: <h2> Save the query from earlier </h2>
Step6: <h2> Create ML dataset using Dataflow </h2>
Step7: The above step will take 20+ minutes. Go to the GCP web console, navigate to the Dataflow section and <b>wait for the job to finish</b> before you run the following step.
|
12,221
|
<ASSISTANT_TASK:>
Python Code:
# Licensed under the Apache License, Version 2.0 (the "License")
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#     https://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Install JAX. This custom build raises the TPU timeout threshold, because the
# default limit of 2 minutes is too short for sampling very long sequences.
!gsutil cp gs://trax-ml/reformer/jaxlib-0.1.39-cp36-none-manylinux2010_x86_64.whl .
!gsutil cp gs://trax-ml/reformer/jax-0.1.59-cp36-none-manylinux2010_x86_64.whl .
!pip install --upgrade -q ./jaxlib-0.1.39-cp36-none-manylinux2010_x86_64.whl
!pip install --upgrade -q ./jax-0.1.59-cp36-none-manylinux2010_x86_64.whl
# Make sure the Colab Runtime is set to Accelerator: TPU.
import requests
import os
if 'TPU_DRIVER_MODE' not in globals():
url = 'http://' + os.environ['COLAB_TPU_ADDR'].split(':')[0] + ':8475/requestversion/tpu_driver0.1-dev20191206'
resp = requests.post(url)
TPU_DRIVER_MODE = 1
# The following is required to use TPU Driver as JAX's backend.
from jax.config import config
config.FLAGS.jax_xla_backend = "tpu_driver"
config.FLAGS.jax_backend_target = "grpc://" + os.environ['COLAB_TPU_ADDR']
print(config.FLAGS.jax_backend_target)
!pip install --upgrade -q gin git+https://github.com/google/trax.git@v1.2.3
from tensorflow.compat.v1.io.gfile import GFile
import gin
import os
import jax
import trax
from trax.models.beam_search import Search
from trax.supervised import inputs
import numpy as np
import jax.numpy as jnp
from scipy.special import softmax
%matplotlib inline
from matplotlib import pyplot as plt
# Normally we train on the full imagenet64 training set, which is quite large so
# we won't be loading it from this notebook. Instead, let's just load a few PNG
# images to use in our data pipeline.
DATA = []
for i in range(8):
img = plt.imread(GFile('gs://trax-ml/reformer/img{}.png'.format(i), 'rb'))
# Convert from RGBA floating-point to RGB integer representation.
img = np.asarray(img[:, :, :3] * 255, dtype=np.int32)
DATA.append(img)
# We can examine one of the images to make sure we've loaded it correctly.
plt.figure(figsize=(1.5, 1.5))
plt.axis('off')
plt.imshow(DATA[0])
# We'll be using a pre-trained 12-layer Reformer model.
# First, load the config (which sets all needed hyperparameters).
!gsutil cp gs://trax-ml/reformer/imgnet64/config.gin ./config.gin
gin.parse_config_file('./config.gin')
# Now we construct a ReformerLM instance and load the pre-trained weights.
# The 'predict' mode configures the model to accept single tokens at a time,
# instead of feeding in a complete image all at once.
model_infer = trax.models.ReformerLM(mode='predict')
model_infer.init_from_file(
'gs://trax-ml/reformer/imgnet64/model.pkl', weights_only=True)
sampling_decoder = Search(
trax.models.ReformerLM,
model_infer.weights,
temperature=1.0,
max_decode_len=32*64*3,
)
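# max_decode_len = 32 * 64 * 3: we condition on the top 32 rows of each
# 64-pixel-wide image and sample the remaining 32 rows, 3 subpixels per pixel.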
flat_prompt = []
for i, img in enumerate(DATA[:trax.fastmath.device_count()]):
img = img.reshape((-1, 64, 3))[:32, :, :]
flat_prompt.append(img.reshape((-1,)))
prompt = np.stack(flat_prompt, 0)
print("Prompt:")
plt.figure(figsize=(10, 10*8))
for i in range(prompt.shape[0]):
plt.subplot(1, 8, i+1)
plt.axis('off')
plt.imshow(prompt[i].reshape((-1, 64, 3)), aspect='equal')
plt.show()
seqs, scores = sampling_decoder.decode(targets_prefix=prompt, batch_size=8)
print("Sampled completions:")
plt.figure(figsize=(10, 10*8))
for i in range(prompt.shape[0]):
plt.subplot(1, 8, i+1)
plt.axis('off')
plt.imshow(seqs[i, -1].reshape((-1, 64, 3)), aspect='equal')
plt.figure(figsize=(10, 10*8))
for i in range(prompt.shape[0]):
plt.subplot(1, 8, i+1)
plt.axis('off')
img = jnp.concatenate([prompt[i], seqs[i, -1]], -1)
plt.imshow(img.reshape((-1, 64, 3)), aspect='equal')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Reformer
Step2: Load example data and model
Step3: Sample from the model
Step4: Sampling is an inherently serial process and will take up to 9 minutes to run. A good chunk of that time will be spent on JIT-compiling the code, though, so the code cell below will finish faster when re-run for a second time.
|
12,222
|
<ASSISTANT_TASK:>
Python Code:
#urllib is used to download the utils file from deeplearning.net
from urllib import request
response = request.urlopen('http://deeplearning.net/tutorial/code/utils.py')
content = response.read()
target = open('utils.py', 'wb')
target.write(content)
target.close()
#Import the math function for calculations
import math
#Tensorflow library. Used to implement machine learning models
import tensorflow as tf
#Numpy contains helpful functions for efficient mathematical calculations
import numpy as np
#Image library for image manipulation
from PIL import Image
#import Image
#Utils file
from utils import tile_raster_images
#Class that defines the behavior of the RBM
class RBM(object):
def __init__(self, input_size, output_size, epochs=5, learning_rate=1, batchsize=100):
#Defining the hyperparameters
self._input_size = input_size #Size of input
self._output_size = output_size #Size of output
self.epochs = epochs #Amount of training iterations
self.learning_rate = learning_rate #The step used in gradient descent
self.batchsize = batchsize #The size of how much data will be used for training per sub iteration
#Initializing weights and biases as matrices full of zeroes
self.w = np.zeros([input_size, output_size], np.float32) #Creates and initializes the weights with 0
self.hb = np.zeros([output_size], np.float32) #Creates and initializes the hidden biases with 0
self.vb = np.zeros([input_size], np.float32) #Creates and initializes the visible biases with 0
#Fits the result from the weighted visible layer plus the bias into a sigmoid curve
def prob_h_given_v(self, visible, w, hb):
#Sigmoid
return tf.nn.sigmoid(tf.matmul(visible, w) + hb)
#Fits the result from the weighted hidden layer plus the bias into a sigmoid curve
def prob_v_given_h(self, hidden, w, vb):
return tf.nn.sigmoid(tf.matmul(hidden, tf.transpose(w)) + vb)
#Generate the sample probability
def sample_prob(self, probs):
return tf.nn.relu(tf.sign(probs - tf.random_uniform(tf.shape(probs))))
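# (sign(probs - u) with u ~ Uniform(0,1) is +1 with probability probs and -1
# otherwise; relu then maps that to a 1/0 Bernoulli sample for each unit.)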
#Training method for the model
def train(self, X):
#Create the placeholders for our parameters
_w = tf.placeholder("float", [self._input_size, self._output_size])
_hb = tf.placeholder("float", [self._output_size])
_vb = tf.placeholder("float", [self._input_size])
prv_w = np.zeros([self._input_size, self._output_size], np.float32) #Creates and initializes the weights with 0
prv_hb = np.zeros([self._output_size], np.float32) #Creates and initializes the hidden biases with 0
prv_vb = np.zeros([self._input_size], np.float32) #Creates and initializes the visible biases with 0
cur_w = np.zeros([self._input_size, self._output_size], np.float32)
cur_hb = np.zeros([self._output_size], np.float32)
cur_vb = np.zeros([self._input_size], np.float32)
v0 = tf.placeholder("float", [None, self._input_size])
#Initialize with sample probabilities
h0 = self.sample_prob(self.prob_h_given_v(v0, _w, _hb))
v1 = self.sample_prob(self.prob_v_given_h(h0, _w, _vb))
h1 = self.prob_h_given_v(v1, _w, _hb)
#Create the Gradients
positive_grad = tf.matmul(tf.transpose(v0), h0)
negative_grad = tf.matmul(tf.transpose(v1), h1)
#Update learning rates for the layers
update_w = _w + self.learning_rate *(positive_grad - negative_grad) / tf.to_float(tf.shape(v0)[0])
update_vb = _vb + self.learning_rate * tf.reduce_mean(v0 - v1, 0)
update_hb = _hb + self.learning_rate * tf.reduce_mean(h0 - h1, 0)
#Find the error rate
err = tf.reduce_mean(tf.square(v0 - v1))
#Training loop
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
#For each epoch
for epoch in range(self.epochs):
#For each step/batch
for start, end in zip(range(0, len(X), self.batchsize),range(self.batchsize,len(X), self.batchsize)):
batch = X[start:end]
#Update the rates
cur_w = sess.run(update_w, feed_dict={v0: batch, _w: prv_w, _hb: prv_hb, _vb: prv_vb})
cur_hb = sess.run(update_hb, feed_dict={v0: batch, _w: prv_w, _hb: prv_hb, _vb: prv_vb})
cur_vb = sess.run(update_vb, feed_dict={v0: batch, _w: prv_w, _hb: prv_hb, _vb: prv_vb})
prv_w = cur_w
prv_hb = cur_hb
prv_vb = cur_vb
error=sess.run(err, feed_dict={v0: X, _w: cur_w, _vb: cur_vb, _hb: cur_hb})
print('Epoch: {} --> Reconstruction error={}'.format(epoch, error))
self.w = prv_w
self.hb = prv_hb
self.vb = prv_vb
#Create expected output for our DBN
def rbm_outpt(self, X):
input_X = tf.constant(X)
_w = tf.constant(self.w)
_hb = tf.constant(self.hb)
out = tf.nn.sigmoid(tf.matmul(input_X, _w) + _hb)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
return sess.run(out)
#Getting the MNIST data provided by Tensorflow
from tensorflow.examples.tutorials.mnist import input_data
#Loading in the mnist data
mnist = input_data.read_data_sets("../../data/MNIST/", one_hot=True)
trX, trY, teX, teY = mnist.train.images, mnist.train.labels, mnist.test.images,\
mnist.test.labels
RBM_hidden_sizes = [500, 200 , 50 ] #create 2 layers of RBM with size 400 and 100
#Since we are training, set input as training data
inpX = trX
#Create list to hold our RBMs
rbm_list = []
#Size of inputs is the number of inputs in the training set
input_size = inpX.shape[1]
#For each RBM we want to generate
for i, size in enumerate(RBM_hidden_sizes):
print('RBM: {} {} --> {}'.format(i, input_size, size))
rbm_list.append(RBM(input_size, size))
input_size = size
#For each RBM in our list
for rbm in rbm_list:
print('New RBM:')
#Train a new one
rbm.train(inpX)
#Return the output layer
inpX = rbm.rbm_outpt(inpX)
import numpy as np
import math
import tensorflow as tf
class NN(object):
def __init__(self, sizes, X, Y):
#Initialize hyperparameters
self._sizes = sizes
self._X = X
self._Y = Y
self.w_list = []
self.b_list = []
self._learning_rate = 1.0
self._momentum = 0.0
self._epoches = 10
self._batchsize = 100
input_size = X.shape[1]
#initialization loop
for size in self._sizes + [Y.shape[1]]:
#Define upper limit for the uniform distribution range
max_range = 4 * math.sqrt(6. / (input_size + size))
#Initialize weights through a random uniform distribution
self.w_list.append(
np.random.uniform( -max_range, max_range, [input_size, size]).astype(np.float32))
#Initialize bias as zeroes
self.b_list.append(np.zeros([size], np.float32))
input_size = size
#load data from rbm
def load_from_rbms(self, dbn_sizes,rbm_list):
#Check if expected sizes are correct
assert len(dbn_sizes) == len(self._sizes)
for i in range(len(self._sizes)):
#Check if for each RBN the expected sizes are correct
assert dbn_sizes[i] == self._sizes[i]
#If everything is correct, bring over the weights and biases
for i in range(len(self._sizes)):
self.w_list[i] = rbm_list[i].w
self.b_list[i] = rbm_list[i].hb
#Training method
def train(self):
#Create placeholders for input, weights, biases, output
_a = [None] * (len(self._sizes) + 2)
_w = [None] * (len(self._sizes) + 1)
_b = [None] * (len(self._sizes) + 1)
_a[0] = tf.placeholder("float", [None, self._X.shape[1]])
y = tf.placeholder("float", [None, self._Y.shape[1]])
#Define variables and activation functoin
for i in range(len(self._sizes) + 1):
_w[i] = tf.Variable(self.w_list[i])
_b[i] = tf.Variable(self.b_list[i])
for i in range(1, len(self._sizes) + 2):
_a[i] = tf.nn.sigmoid(tf.matmul(_a[i - 1], _w[i - 1]) + _b[i - 1])
#Define the cost function
cost = tf.reduce_mean(tf.square(_a[-1] - y))
#Define the training operation (Momentum Optimizer minimizing the Cost function)
train_op = tf.train.MomentumOptimizer(
self._learning_rate, self._momentum).minimize(cost)
#Prediction operation
predict_op = tf.argmax(_a[-1], 1)
#Training Loop
with tf.Session() as sess:
#Initialize Variables
sess.run(tf.global_variables_initializer())
#For each epoch
for i in range(self._epoches):
#For each step
for start, end in zip(
range(0, len(self._X), self._batchsize), range(self._batchsize, len(self._X), self._batchsize)):
#Run the training operation on the input data
sess.run(train_op, feed_dict={
_a[0]: self._X[start:end], y: self._Y[start:end]})
for j in range(len(self._sizes) + 1):
#Retrieve weights and biases
self.w_list[j] = sess.run(_w[j])
self.b_list[j] = sess.run(_b[j])
print("Accuracy rating for epoch " + str(i) + ": " + str(np.mean(np.argmax(self._Y, axis=1) ==
sess.run(predict_op, feed_dict={_a[0]: self._X, y: self._Y}))))
nNet = NN(RBM_hidden_sizes, trX, trY)
nNet.load_from_rbms(RBM_hidden_sizes,rbm_list)
nNet.train()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Constructing the Layers of RBMs
Step2: The MNIST Dataset
Step3: Creating the Deep Belief Network
Step4: RBM Train
Step5: Now we can feed the learned representation of the input data into a supervised model, e.g. a linear classifier. Specifically, we use the output of the last hidden layer of the DBN to classify digits using a shallow Neural Network.
Step6: Now let's execute our code
|
12,223
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import tensorflow as tf
# Implement forward path
# TensorFlow figures out backprop
w = tf.Variable(0, dtype=tf.float32)
cost = tf.add(tf.add(w ** 2, tf.multiply(-10.0, w)), 25)
learning_rate = 0.01
train = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
init = tf.global_variables_initializer()
session = tf.Session()
session.run(init)
print(session.run(w))
session.run(train)
print(session.run(w))
for _ in range(1000):
session.run(train)
print(session.run(w))
# This is equivalent to
# tf.add(tf.add(w ** 2, tf.multiply(-10.0, w)), 25)
cost = w ** 2 - 10 * w + 25
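# Note: w**2 - 10*w + 25 = (w - 5)**2, so the minimum is at w = 5 -- the
# value the training loops below converge toward.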
coefficients = np.array([[1.], [-10.], [25.]])
x = tf.placeholder(tf.float32, [3, 1])
cost = x[0][0] * w ** 2 + x[1][0] * w + x[2][0]
train = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
init = tf.global_variables_initializer()
session = tf.Session()
session.run(init)
for _ in range(1000):
session.run(train, feed_dict={x: coefficients})
print(session.run(w))
with tf.Session() as session:
session.run(init)
for _ in range(1000):
session.run(train, feed_dict={x: coefficients})
print(session.run(w))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Taken from Andrew Ng's Coursera Deep Learning series
Step2: An alternative way of identifying the cost is to use overloaded operations
Step3: Training data (with input)
|
12,224
|
<ASSISTANT_TASK:>
Python Code:
#read data
df = pd.read_fwf('linear_regression_demo/brain_body.txt')
x_values = df[['Brain']]
y_values = df[['Body']]
#train model on data
body_reg = linear_model.LinearRegression()
body_reg.fit(x_values, y_values)
#visualize results
plt.scatter(x_values, y_values)
plt.plot(x_values, body_reg.predict(x_values))
plt.show()
#read data
df = pd.read_csv('linear_regression_demo/challenge_dataset.txt', names=['Data','Outcome'])
x_values = df[['Data']]
y_values = df[['Outcome']]
#train model on data
reg = linear_model.LinearRegression()
reg.fit(x_values, y_values)
#visualize results
plt.scatter(x_values, y_values)
plt.plot(x_values, reg.predict(x_values))
plt.show()
df['Predictions'] = reg.predict(x_values)
df["Pred_Error"] = df['Predictions'] - df['Outcome']
df.head()
# calculating the avg error
e = 0
for i in df['Pred_Error']:
e += abs(i)
e / len(df['Pred_Error'])
import pandas as pd
from sklearn.linear_model import LinearRegression
# Assign the dataframe to this variable.
# TODO: Load the data
bmi_life_data = pd.read_csv('bmi_and_life_expectancy.csv')
print(bmi_life_data.shape)
bmi_life_data.head()
# Make and fit the linear regression model
#TODO: Fit the model and Assign it to bmi_life_model
x_vals = bmi_life_data[['BMI']]
y_vals = bmi_life_data[['Life expectancy']]
bmi_life_model = LinearRegression()
bmi_life_model.fit(x_vals, y_vals)
plt.scatter(x_vals, y_vals)
plt.plot(x_vals, bmi_life_model.predict(x_vals))
plt.show()
# Make a prediction using the model
# TODO: Predict life expectancy for a BMI value of 21.07931
laos_life_exp = bmi_life_model.predict([[21.07931]])  # sklearn expects a 2D array
laos_life_exp
from sklearn.linear_model import LinearRegression
from sklearn.datasets import load_boston
# Load the data from the the boston house-prices dataset
boston_data = load_boston()
print(boston_data.data[0], boston_data.target[0])
x = boston_data['data']
y = boston_data['target']
# Make and fit the linear regression model
# TODO: Fit the model and Assign it to the model variable
model = LinearRegression()
model.fit(x,y)
# Make a prediction using the model
sample_house = [[2.29690000e-01, 0.00000000e+00, 1.05900000e+01, 0.00000000e+00, 4.89000000e-01,
6.32600000e+00, 5.25000000e+01, 4.35490000e+00, 4.00000000e+00, 2.77000000e+02,
1.86000000e+01, 3.94870000e+02, 1.09700000e+01]]
# TODO: Predict housing price for the sample_house
prediction = model.predict(sample_house)
print(prediction)
#Step 1 - collect our data
df = pd.read_csv('linear_regression_live/data.csv', header=None)
df.head()
#collect data using numpy
points = np.genfromtxt('linear_regression_live/data.csv', delimiter=',')
points[:5]
# lets see the data
plt.scatter(df[0], df[1])
plt.show()
learning_rate = 0.0001
initial_b = 0
initial_m = 0
num_iterations = 1000
def compute_error_for_line_given_points(b, m , points):
totalError = 0 #initialize error at 0
for i in range(0, len(points)): #for every point
x = points[i, 0] #get x val
y = points[i, 1] #get y val
totalError += (y - (m*x + b)) **2
return totalError / float(len(points))
def gradient_descent_runner(points, starting_b, starting_m, learning_rate, num_iterations):
b = starting_b
m = starting_m
#gradient descent
for i in range(num_iterations):
#update b & m with new more accurate b and m
b, m = step_gradient(b, m, np.array(points), learning_rate)
return [b,m]
def step_gradient(b_current, m_current, points, learningRate):
b_gradient = 0
m_gradient = 0
N = float(len(points))
for i in range(0, len(points)):
x = points[i, 0]
y = points[i, 1]
#direction with respect to b and m
#computing partial deriavitives of our error function
b_gradient += -(2/N) * (y - ((m_current * x) + b_current))
m_gradient += -(2/N) * x * (y - ((m_current * x) + b_current))
#update b and m values using partial derivates
new_b = b_current - (learningRate * b_gradient)
new_m = m_current - (learningRate * m_gradient)
return [new_b, new_m]
print('starting gradient descent at b = {0}, m = {1}, error = {2}'.format(initial_b, initial_m, compute_error_for_line_given_points(initial_b, initial_m, points)))
[b, m] = gradient_descent_runner(points, initial_b, initial_m, learning_rate, num_iterations)
print('ending point after {0} iterations at b = {1}, m = {2}, error = {3}'.format(num_iterations, b, m, compute_error_for_line_given_points(b, m, points)))
b,m
x_vals = df[[0]]
y_vals = df[[1]]
live_model = LinearRegression()
live_model.fit(x_vals, y_vals)
plt.scatter(x_vals, y_vals)
plt.plot(x_vals, live_model.predict(x_vals))
plt.show()
run gradient_descent.py
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Siraj's Week 1 challenge
Step2: So now we have a simple trained model; now to make a prediction.
Step3: Linear Regression Quiz
Step4: Programming Quiz
Step5: Siraj's Linear Regression live course
Step6: Step 2 - define our hyperparameters for the eq y = mx + b (slope formula)
Step7: Step 3
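For reference, the partial derivatives implemented in `step_gradient` above follow from the mean squared error $E(m, b) = \frac{1}{N}\sum_i \big(y_i - (m x_i + b)\big)^2$:

$$\frac{\partial E}{\partial b} = -\frac{2}{N}\sum_i \big(y_i - (m x_i + b)\big), \qquad \frac{\partial E}{\partial m} = -\frac{2}{N}\sum_i x_i \big(y_i - (m x_i + b)\big).$$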
|
12,225
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import HTML
def Complex(a, b): # constructor
return (a,b)
def real(c): # method
return c[0]
def imag(c):
return c[1]
def str_complex(c):
return "{0}+{1}i".format(c[0], c[1])
c1 = Complex(1,2) # constructor
print(real(c1), " ", str_complex(c1))
c1[0]
c1[0]=2
def Complex2(a, b): # constructor
def dispatch(message): # capture a and b at constructor-run time
if message=="real":
return a
elif message=='imag':
return b
elif message=="str":
return "{0}+{1}i".format(a, b)
return dispatch
z=Complex2(1,2)
print(z("real"), " ", z("imag"), " ", z("str"))
def Complex3(a, b):
in_a=a
in_b=b
def dispatch(message, value=None):
nonlocal in_a, in_b
if message=='set_real' and value != None:
in_a = value
elif message=='set_imag' and value != None:
in_b = value
elif message=="real":
return in_a
elif message=='imag':
return in_b
elif message=="str":
return "{0}+{1}i".format(in_a, in_b)
return dispatch
c3=Complex3(1,2)
print(c3("real"), " ", c3("imag"), " ", c3("str"))
c3('set_real', 2)
print(c3("real"), " ", c3("imag"), " ", c3("str"))
class ComplexClass():
def __init__(self, a, b):
self.real = a
self.imaginary = b
HTML('<iframe width="800" height="500" frameborder="0" src="http://pythontutor.com/iframe-embed.html#code=class%20ComplexClass%28%29%3A%0A%20%20%20%20%0A%20%20%20%20def%20__init__%28self,%20a,%20b%29%3A%0A%20%20%20%20%20%20%20%20self.real%20%3D%20a%0A%20%20%20%20%20%20%20%20self.imaginary%20%3D%20b%0A%0Ac1%20%3D%20ComplexClass%281,2%29&codeDivHeight=400&codeDivWidth=350&cumulative=false&curInstr=0&heapPrimitives=false&origin=opt-frontend.js&py=3&rawInputLstJSON=%5B%5D&textReferences=false"> </iframe>')
c1 = ComplexClass(1,2)
print(c1, c1.real)
print(vars(c1), " ",type(c1))
c1.real=5.0
print(c1, " ", c1.real, " ", c1.imaginary)
class Animal():
def __init__(self, name):
self.name = name
def make_sound(self):
raise NotImplementedError
class Dog(Animal):
def make_sound(self):
return "Bark"
class Cat(Animal):
def __init__(self, name):
self.name = "A very interesting cat: {}".format(name)
def make_sound(self):
return "Meow"
a0 = Animal("David")
print(a0.name)
a0.make_sound()
a1 = Dog("Snoopy")
a2 = Cat("Hello Kitty")
animals = [a1, a2]
for a in animals:
print(a.name)
print(isinstance(a, Animal))
print(a.make_sound())
print('--------')
print(a1.make_sound, " ", Dog.make_sound)
print(a1.make_sound())
print('----')
print(Dog.make_sound(a1))
Dog.make_sound()  # raises TypeError: calling via the class supplies no instance for self
HTML('<iframe width="800" height="500" frameborder="0" src="http://pythontutor.com/iframe-embed.html#code=class%20Animal%28%29%3A%0A%20%20%20%20%0A%20%20%20%20def%20__init__%28self,%20name%29%3A%0A%20%20%20%20%20%20%20%20self.name%20%3D%20name%0A%20%20%20%20%20%20%20%20%0A%20%20%20%20def%20make_sound%28self%29%3A%0A%20%20%20%20%20%20%20%20raise%20NotImplementedError%0A%20%20%20%20%0Aclass%20Dog%28Animal%29%3A%0A%20%20%20%20%0A%20%20%20%20def%20make_sound%28self%29%3A%0A%20%20%20%20%20%20%20%20return%20%22Bark%22%0A%20%20%20%20%0Aclass%20Cat%28Animal%29%3A%0A%20%20%20%20%0A%20%20%20%20def%20__init__%28self,%20name%29%3A%0A%20%20%20%20%20%20%20%20self.name%20%3D%20%22A%20very%20interesting%20cat%3A%20%7B%7D%22.format%28name%29%0A%20%20%20%20%20%20%20%20%0A%20%20%20%20def%20make_sound%28self%29%3A%0A%20%20%20%20%20%20%20%20return%20%22Meow%22%0A%0Aa1%20%3D%20Dog%28%22Snoopy%22%29%0Aa2%20%3D%20Cat%28%22Hello%20Kitty%22%29%0Aanimals%20%3D%20%5Ba1,%20a2%5D%0Afor%20a%20in%20animals%3A%0A%20%20%20%20print%28a.name%29%0A%20%20%20%20print%28isinstance%28a,%20Animal%29%29%0A%20%20%20%20print%28a.make_sound%28%29%29%0A%20%20%20%20print%28\'--------\'%29&codeDivHeight=400&codeDivWidth=350&cumulative=false&curInstr=0&heapPrimitives=false&origin=opt-frontend.js&py=3&rawInputLstJSON=%5B%5D&textReferences=false"> </iframe>')
class Animal():
def __init__(self, name):
self.name=name
print("Name is", self.name)
class Mouse(Animal):
def __init__(self, name):
self.animaltype="prey"
super().__init__(name)
print("Created %s as %s" % (self.name, self.animaltype))
class Cat(Animal):
pass
a1 = Mouse("Tom")
print(vars(a1))
a2 = Cat("Jerry")
print(vars(a2))
HTML('<iframe width="800" height="500" frameborder="0" src="http://pythontutor.com/iframe-embed.html#code=class%20Animal%28%29%3A%0A%20%20%20%20%0A%20%20%20%20def%20__init__%28self,%20name%29%3A%0A%20%20%20%20%20%20%20%20self.name%3Dname%0A%20%20%20%20%20%20%20%20print%28%22Name%20is%22,%20self.name%29%0A%20%20%20%20%20%20%20%20%0Aclass%20Mouse%28Animal%29%3A%0A%20%20%20%20def%20__init__%28self,%20name%29%3A%0A%20%20%20%20%20%20%20%20self.animaltype%3D%22prey%22%0A%20%20%20%20%20%20%20%20super%28%29.__init__%28name%29%0A%20%20%20%20%20%20%20%20print%28%22Created%20%25s%20as%20%25s%22%20%25%20%28self.name,%20self.animaltype%29%29%0A%20%20%20%20%0Aclass%20Cat%28Animal%29%3A%0A%20%20%20%20pass%0A%0Aa1%20%3D%20Mouse%28%22Tom%22%29%0Aa2%20%3D%20Cat%28%22Jerry%22%29&codeDivHeight=400&codeDivWidth=350&cumulative=false&curInstr=0&heapPrimitives=false&origin=opt-frontend.js&py=3&rawInputLstJSON=%5B%5D&textReferences=false"> </iframe>')
# Both implement the "Animal" Protocol, which consists of the one make_sound function
class Dog():
def make_sound(self):
return "Bark"
class Cat():
def make_sound(self):
return "Meow"
a1 = Dog()
a2 = Cat()
animals = [a1, a2]
for a in animals:
print(isinstance(a, Animal), " ", a.make_sound())
class Animal():
def __init__(self, name):
self.name=name
def __repr__(self):
class_name = type(self).__name__
return "{0!s}({1.name!r})".format(class_name, self)
r = Animal("David")
r
print(r)
repr(r)
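# Editorial extension (not in the original notebook): __repr__ targets an
# unambiguous, debug-friendly form, while __str__ targets human readability;
# print() prefers __str__ and falls back to __repr__.
class NamedAnimal(Animal):
    def __str__(self):
        return self.name

s = NamedAnimal("Rex")
print(s)        # Rex
print(repr(s))  # NamedAnimal('Rex')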
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Motivation
Step2: But things aren't hidden so I can get through the interface
Step3: Because I used a tuple, and a tuple is immutable, I can't change this complex number once it's created.
Step4: Objects thru closures
Step5: This looks pretty good so far.
Step6: Python Classes and instance variables
Step7: __init__ is a special method run automatically by Python.
Step8: Inheritance and Polymorphism
Step9: Animal is the superclass (a.k.a the base class).
Step10: How does this all work?
Step11: Calling a superclasses initializer
Step12: Interfaces
Step13: The Python Data Model
|
12,226
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import itertools
from scipy import stats
from statsmodels.stats.descriptivestats import sign_test
from statsmodels.stats.weightstats import zconfint
%pylab inline
weight_data = pd.read_csv('weight.txt', sep = '\t', header = 0)
weight_data.head()
pylab.figure(figsize=(12,4))
pylab.subplot(1,2,1)
pylab.grid()
pylab.hist(weight_data.Before, color = 'r')
pylab.xlabel('Before')
pylab.subplot(1,2,2)
pylab.grid()
pylab.hist(weight_data.After, color = 'b')
pylab.xlabel('After')
pylab.show()
weight_data.describe()
print '95%% confidence interval for median weight before therapy: [%f, %f]' % zconfint(weight_data.Before)
print '95%% confidence interval for median weight after therapy: [%f, %f]' % zconfint(weight_data.After)
pylab.hist(weight_data.After - weight_data.Before)
pylab.show()
print "M: %d, p-value: %f" % sign_test(weight_data.After - weight_data.Before)
stats.wilcoxon(weight_data.After, weight_data.Before)
stats.wilcoxon(weight_data.After - weight_data.Before)
def permutation_t_stat_1sample(sample, mean):
t_stat = sum(map(lambda x: x - mean, sample))
return t_stat
def permutation_zero_distr_1sample(sample, mean, max_permutations = None):
centered_sample = map(lambda x: x - mean, sample)
if max_permutations:
signs_array = set([tuple(x) for x in 2 * np.random.randint(2, size = (max_permutations,
len(sample))) - 1 ])
else:
signs_array = itertools.product([-1, 1], repeat = len(sample))
distr = [sum(centered_sample * np.array(signs)) for signs in signs_array]
return distr
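# Under H0 the centered sample is symmetric about zero, so every assignment of
# signs is equally likely -- enumerating (or sampling) sign flips therefore
# yields the null distribution of the test statistic.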
pylab.hist(permutation_zero_distr_1sample(weight_data.After - weight_data.Before, 0.,
max_permutations = 10000))
pylab.show()
def permutation_test(sample, mean, max_permutations = None, alternative = 'two-sided'):
if alternative not in ('two-sided', 'less', 'greater'):
raise ValueError("alternative not recognized\n"
"should be 'two-sided', 'less' or 'greater'")
t_stat = permutation_t_stat_1sample(sample, mean)
zero_distr = permutation_zero_distr_1sample(sample, mean, max_permutations)
if alternative == 'two-sided':
return sum([1. if abs(x) >= abs(t_stat) else 0. for x in zero_distr]) / len(zero_distr)
if alternative == 'less':
return sum([1. if x <= t_stat else 0. for x in zero_distr]) / len(zero_distr)
if alternative == 'greater':
return sum([1. if x >= t_stat else 0. for x in zero_distr]) / len(zero_distr)
print "p-value: %f" % permutation_test(weight_data.After - weight_data.Before, 0.,
max_permutations = 1000)
print "p-value: %f" % permutation_test(weight_data.After - weight_data.Before, 0.,
max_permutations = 50000)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Loading the data
Step2: Two-sample tests for paired samples
Step3: Sign test
Step4: Wilcoxon signed-rank test
Step5: Permutation test
|
12,227
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import division
import numpy
from matplotlib import pyplot
%matplotlib notebook
dt = 1e-5
dx = 1e-2
x = numpy.arange(0,1+dx,dx)
y = numpy.zeros_like(x)
y = x * (1 - x)
def update_heat(y, dt, dx):
dydt = numpy.zeros_like(y)
dydt[1:-1] = dt/dx**2 * (y[2:] + y[:-2] - 2*y[1:-1])
return dydt
Nsteps = 10000
for n in range(Nsteps):
update = update_heat(y, dt, dx)
y += update
pyplot.figure(figsize=(10,6))
pyplot.plot(x, y)
pyplot.xlabel(r"$x$")
pyplot.ylabel(r"$y$")
pyplot.xlim(0, 1)
pyplot.show()
from matplotlib import animation
import matplotlib
matplotlib.rcParams['animation.html'] = 'html5'
y = numpy.zeros_like(x)
y = x * (1 - x)
fig = pyplot.figure(figsize=(10,6))
ax = pyplot.axes(xlim=(0,1),ylim=(0,0.3))
line, = ax.plot([], [])
pyplot.close()
def init():
ax.set_xlabel(r"$x$")
def update(i, y):
for n in range(100):
y += update_heat(y, dt, dx)
line.set_data(x, y)
return line
animation.FuncAnimation(fig, update, init_func=init, fargs=(y,), frames=100, interval=100, blit=True)
y = numpy.sin(4.0*numpy.pi*x)**4
pyplot.figure(figsize=(10,6))
pyplot.plot(x, y)
pyplot.xlabel(r"$x$")
pyplot.ylabel(r"$y$")
pyplot.xlim(0, 1)
pyplot.show()
Nsteps = 10000
for n in range(Nsteps):
update = update_heat(y, dt, dx)
y += update
pyplot.figure(figsize=(10,6))
pyplot.plot(x, y)
pyplot.xlabel(r"$x$")
pyplot.ylabel(r"$y$")
pyplot.xlim(0, 1)
pyplot.show()
y = numpy.zeros_like(x)
y = numpy.sin(4.0*numpy.pi*x)**4
fig = pyplot.figure(figsize=(10,6))
ax = pyplot.axes(xlim=(0,1),ylim=(0,1))
line, = ax.plot([], [])
pyplot.close()
def init():
ax.set_xlabel(r"$x$")
def update(i, y):
for n in range(20):
y += update_heat(y, dt, dx)
line.set_data(x, y)
return line
animation.FuncAnimation(fig, update, init_func=init, fargs=(y,), frames=50, interval=100, blit=True)
dt = 1e-4
Nsteps = 100
y = numpy.sin(4.0*numpy.pi*x)**4
for n in range(Nsteps):
update = update_heat(y, dt, dx)
y += update
pyplot.figure(figsize=(10,6))
pyplot.plot(x, y)
pyplot.xlabel(r"$x$")
pyplot.ylabel(r"$y$")
pyplot.xlim(0, 1)
pyplot.show()
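# Why this blew up: the explicit (FTCS) scheme for the heat equation is only
# stable when dt <= dx**2 / 2. With dx = 1e-2 the limit is 5e-5, so the
# dt = 1e-4 used above violates it, while the earlier dt = 1e-5 did not.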
y = numpy.zeros_like(x)
y = numpy.sin(4.0*numpy.pi*x)**4
fig = pyplot.figure(figsize=(10,6))
ax = pyplot.axes(xlim=(0,1),ylim=(-1,3))
line, = ax.plot([], [])
pyplot.close()
def init():
ax.set_xlabel(r"$x$")
def update(i, y):
for n in range(1):
y += update_heat(y, dt, dx)
line.set_data(x, y)
return line
animation.FuncAnimation(fig, update, init_func=init, fargs=(y,), frames=50, interval=100, blit=True)
ni = 0.1
G = 0.1
def f(y):
p = y[0,:]
n = y[1,:]
f_vector = numpy.zeros_like(y)
f_vector[:,:] = ni**2 - n*p + G
return f_vector
def g(y):
p = y[0,:]
n = y[1,:]
g_vector = numpy.zeros_like(y)
g_vector[0,:] = p
g_vector[1,:] = n
return g_vector
def h(y):
p = y[0,:]
n = y[1,:]
h_vector = numpy.zeros_like(y)
h_vector[0,:] = 1.0/p
h_vector[1,:] = 1.0/n
return h_vector
def update_term(y, dt, dx):
dydt = numpy.zeros_like(y)
f_vector = f(y)
g_vector = g(y)
h_vector = h(y)
dydt[:,2:-2] += dt * f_vector[:,2:-2]
dydt[:,2:-2] -= dt/(4.0*dx**2)*(g_vector[:,3:-1]*h_vector[:,4:] -\
(g_vector[:,3:-1] + g_vector[:,1:-3])*h_vector[:,2:-2] + \
g_vector[:,1:-3]*h_vector[:,:-4])
return dydt
dx = 0.05
dt = 1e-7
Nsteps = 10000
x = numpy.arange(-dx,1+2*dx,dx)
y = numpy.zeros((2,len(x)))
y[0,:] = ni*(1.0+0.1*numpy.sin(4.0*numpy.pi*x))
y[1,:] = ni*(1.0+0.1*numpy.sin(6.0*numpy.pi*x))
pyplot.figure(figsize=(10,6))
pyplot.plot(x, y[0,:], label=r"$p$")
pyplot.plot(x, y[1,:], label=r"$n$")
pyplot.legend()
pyplot.xlabel(r"$x$")
pyplot.xlim(0, 1)
pyplot.show()
for n in range(Nsteps):
update = update_term(y, dt, dx)
y += update
y[:,1] = y[:,2]
y[:,0] = y[:,1]
y[:,-2] = y[:,-3]
y[:,-1] = y[:,-2]
pyplot.figure(figsize=(10,6))
pyplot.plot(x, y[0,:], label=r"$p$")
pyplot.plot(x, y[1,:], label=r"$n$")
pyplot.legend()
pyplot.xlabel(r"$x$")
pyplot.xlim(0, 1)
pyplot.show()
dx = 0.01
dt = 1e-8
Nsteps = 100000
x = numpy.arange(-dx,1+2*dx,dx)
y = numpy.zeros((2,len(x)))
y[0,:] = ni*(1.0+0.1*numpy.sin(4.0*numpy.pi*x))
y[1,:] = ni*(1.0+0.1*numpy.sin(6.0*numpy.pi*x))
fig = pyplot.figure(figsize=(10,6))
ax = pyplot.axes(xlim=(0,1),ylim=(0.09,0.11))
line1, = ax.plot([], [], label="$p$")
line2, = ax.plot([], [], label="$n$")
pyplot.legend()
pyplot.close()
def init():
ax.set_xlabel(r"$x$")
def update(i, y):
for n in range(1000):
update = update_term(y, dt, dx)
y += update
y[:,1] = y[:,2]
y[:,0] = y[:,1]
y[:,-2] = y[:,-3]
y[:,-1] = y[:,-2]
y += update_heat(y, dt, dx)  # effectively a no-op for this (2, N) array (empty interior slice); looks like a leftover from the heat-only demo
line1.set_data(x, y[0,:])
line2.set_data(x, y[1,:])
return line1, line2
animation.FuncAnimation(fig, update, init_func=init, fargs=(y,), frames=100, interval=100, blit=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Note
Step2: The solution looks good - smooth, the initial profile is diffusing nicely. Try with something a bit more complex, such as $y(0, x) = \sin^4(4 \pi x)$ to see the diffusive effects
Step3: All the features are smoothing out, as expected.
Step4: However, we used a really small timestep to get these results. It would be less numerical work if we increased the timestep. Let's try only $100$ steps with $\Delta t = 10^{-4}$
Step5: This doesn't look good - it's horribly unstable, with the results blowing up very fast.
Step6: Now set the initial data to be
|
12,228
|
<ASSISTANT_TASK:>
Python Code:
def pretty_print_review_and_label(i):
print(labels[i] + "\t:\t" + reviews[i][:80] + "...")
g = open('reviews.txt','r') # What we know!
reviews = list(map(lambda x:x[:-1],g.readlines()))
g.close()
g = open('labels.txt','r') # What we WANT to know!
labels = list(map(lambda x:x[:-1].upper(),g.readlines()))
g.close()
len(reviews)
reviews[0]
labels[0]
print("labels.txt \t : \t reviews.txt\n")
pretty_print_review_and_label(2137)
pretty_print_review_and_label(12816)
pretty_print_review_and_label(6267)
pretty_print_review_and_label(21934)
pretty_print_review_and_label(5297)
pretty_print_review_and_label(4998)
from collections import Counter
import numpy as np
# Create three Counter objects to store positive, negative and total counts
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
# TODO: Loop over all the words in all the reviews and increment the counts in the appropriate counter objects
for i in range(len(reviews)):
splitText = reviews[i].split(' ')
ct = Counter(splitText)
if labels[i] == 'POSITIVE':
positive_counts.update(ct)
total_counts.update(ct)
else:
negative_counts.update(ct)
total_counts.update(ct)
# Examine the counts of the most common words in positive reviews
positive_counts.most_common()
# Examine the counts of the most common words in negative reviews
negative_counts.most_common()
total_counts.most_common()
# Create Counter object to store positive/negative ratios
pos_neg_ratios = Counter()
for (k, v) in total_counts.items():
if v < 100:
continue
pos_neg_ratios[k] = positive_counts[k] / float(negative_counts[k] + 1)
# TODO: Calculate the ratios of positive and negative uses of the most common words
# Consider words to be "common" if they've been used at least 100 times
pos_neg_ratios.most_common()
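# Note: the "+ 1" in the denominator above is simple additive smoothing; it
# prevents division by zero for words that never appear in negative reviews.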
print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
# TODO: Convert ratios to logs
for (k, v) in pos_neg_ratios.items():
pos_neg_ratios[k] = np.log(v)
print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
# Note: Above is the code Andrew uses in his solution video,
# so we've included it here to avoid confusion.
# If you explore the documentation for the Counter class,
# you will see you could also find the 30 least common
# words like this: pos_neg_ratios.most_common()[:-31:-1]
from IPython.display import Image
review = "This was a horrible, terrible movie."
Image(filename='sentiment_network.png')
review = "The movie was excellent"
Image(filename='sentiment_network_pos.png')
# TODO: Create set named "vocab" containing all of the words from all of the reviews
vocab = set(total_counts.keys())
vocab_size = len(vocab)
print(vocab_size)
from IPython.display import Image
Image(filename='sentiment_network_2.png')
# TODO: Create layer_0 matrix with dimensions 1 by vocab_size, initially filled with zeros
layer_0 = np.zeros((1, vocab_size))
layer_0.shape
from IPython.display import Image
Image(filename='sentiment_network.png')
# Create a dictionary of words in the vocabulary mapped to index positions
# (to be used in layer_0)
word2index = {}
for i,word in enumerate(vocab):
word2index[word] = i
# display the map of words to indices
word2index
def update_input_layer(review):
"""Modify the global layer_0 to represent the vector form of review.
The element at a given index of layer_0 should represent
how many times the given word occurs in the review.
Args:
review(string) - the string of the review
Returns:
None
"""
global layer_0
# clear out previous state by resetting the layer to be all 0s
layer_0 *= 0
# TODO: count how many times each word is used in the given review and store the results in layer_0
splitText = review.split(' ')
for i in splitText:
layer_0[0][word2index[i]] += 1
print(reviews[0])
update_input_layer(reviews[0])
layer_0
def get_target_for_label(label):
"""Convert a label to `0` or `1`.
Args:
label(string) - Either "POSITIVE" or "NEGATIVE".
Returns:
`0` or `1`.
"""
# TODO: Your code here
if label == 'POSITIVE':
return 1
else:
return 0
labels[0]
get_target_for_label(labels[0])
labels[1]
get_target_for_label(labels[1])
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews, labels, hidden_nodes = 10, learning_rate = 0.1):
"""Create a SentimentNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
"""
# Assign a seed to our random number generator to ensure we get
# reproducable results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
review_vocab = set()
# TODO: populate review_vocab with all of the words in the given reviews
# Remember to split reviews into individual words
# using "split(' ')" instead of "split()".
####################################################
total_counts = Counter()
for i in range(len(reviews)):
splitText = reviews[i].split(' ')
ct = Counter(splitText)
total_counts.update(ct)
review_vocab = set(total_counts.keys())
for review in reviews:
for word in review.split(" "):
review_vocab.add(word)
####################################################
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
label_vocab = set()
# TODO: populate label_vocab with all of the words in the given labels.
# There is no need to split the labels because each one is a single word.
####################################################
for i in range(len(labels)):
if labels[i] not in label_vocab:
label_vocab.add(labels[i])
for label in labels:
label_vocab.add(label)
####################################################
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
# TODO: populate self.word2index with indices for all the words in self.review_vocab
# like you saw earlier in the notebook
for i in range(len(self.review_vocab)):
self.word2index[self.review_vocab[i]] = i
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
# TODO: do the same thing you did for self.word2index and self.review_vocab,
# but for self.label2index and self.label_vocab instead
for i in range(len(self.label_vocab)):
self.label2index[self.label_vocab[i]] = i
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Store the number of nodes in input, hidden, and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# TODO: initialize self.weights_0_1 as a matrix of zeros. These are the weights between
# the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes, self.hidden_nodes))
# TODO: initialize self.weights_1_2 as a matrix of random values.
# These are the weights between the hidden layer and the output layer.
###############################self.weights_1_2 = np.zeros((self.hidden_nodes, self.output_nodes))
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5, (self.hidden_nodes, self.output_nodes))
# TODO: Create the input layer, a two-dimensional matrix with shape
# 1 x input_nodes, with all values initialized to zero
self.layer_0 = np.zeros((1,input_nodes))
def update_input_layer(self,review):
# TODO: You can copy most of the code you wrote for update_input_layer
# earlier in this notebook.
#
# However, MAKE SURE YOU CHANGE ALL VARIABLES TO REFERENCE
# THE VERSIONS STORED IN THIS OBJECT, NOT THE GLOBAL OBJECTS.
# For example, replace "layer_0 *= 0" with "self.layer_0 *= 0"
self.layer_0 *= 0
splitText = review.split(' ')
for word in splitText:
self.layer_0[0][self.word2index[word]] += 1
def get_target_for_label(self,label):
# TODO: Copy the code you wrote for get_target_for_label
# earlier in this notebook.
if label == 'POSITIVE':
return 1
else:
return 0
def sigmoid(self,x):
# TODO: Return the result of calculating the sigmoid activation function
# shown in the lectures
return 1.0 / (1.0 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
# TODO: Return the derivative of the sigmoid activation function,
# where "output" is the original output from the sigmoid fucntion
######################################return self.sigmoid(output) * (1.0 - self.sigmoid(output))
return output * (1.0 - output)
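# Why output * (1 - output): sigmoid'(x) = sigmoid(x) * (1 - sigmoid(x)), and
# `output` here is already sigmoid(x), so nothing needs recomputing (the
# commented-out line above applied the sigmoid a second time by mistake).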
def train(self, training_reviews, training_labels):
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
#########################delta_weight_0_1 = np.zeros((self.input_nodes, self.hidden_nodes))
#########################delta_weight_1_2 = np.zeros((self.hidden_nodes, self.output_nodes))
for i in range(len(training_reviews)):
# TODO: Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
# TODO: Implement the forward pass through the network.
# That means use the given review to update the input layer,
# then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Do not use an activation function for the hidden layer,
# but use the sigmoid activation function for the output layer.
######################################################update_input_layer(review)
self.update_input_layer(review)
t = self.get_target_for_label(label)
##hidden_input = np.dot(self.layer_0, self.weights_0_1)
hidden_input = self.layer_0.dot(self.weights_0_1)
hidden_output = hidden_input
##layer3_input = np.dot(hidden_output, self.weights_1_2)
layer3_input = hidden_output.dot(self.weights_1_2)
output = self.sigmoid(layer3_input)
# TODO: Implement the back propagation pass here.
# That means calculate the error for the forward pass's prediction
# and update the weights in the network according to their
# contributions toward the error, as calculated via the
# gradient descent and back propagation algorithms you
# learned in class.
error = output - t
##########################error_term_o = error * self.sigmoid_output_2_derivative(layer3_input)
error_term_o = error * self.sigmoid_output_2_derivative(output)
##########################error_term_h = self.weights_1_2 * error_term_o * 1.0
error_term_h = np.dot(self.weights_1_2, error_term_o) * 1.0
#print("test:")
#print(np.dot(error_term_o, hidden_output).shape)
#print(delta_weight_1_2.shape)
######################delta_weight_1_2 += np.dot(error_term_o, hidden_output).transpose()
######################delta_weight_0_1 += np.dot(error_term_h, self.layer_0).transpose()
self.weights_1_2 -= self.learning_rate * np.dot(error_term_o, hidden_output).transpose()
self.weights_0_1 -= self.learning_rate * np.dot(error_term_h, self.layer_0).transpose()
if output >= 0.5 and label == 'POSITIVE':
correct_so_far += 1
elif output < 0.5 and label == 'NEGATIVE':
correct_so_far += 1
# TODO: Keep track of correct predictions. To determine if the prediction was
# correct, check that the absolute value of the output error
# is less than 0.5. If so, add one to the correct_so_far count.
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
"""Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
"""
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
"""Returns a POSITIVE or NEGATIVE prediction for the given review."""
# TODO: Run a forward pass through the network, like you did in the
# "train" function. That means use the given review to
# update the input layer, then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Note: The review passed into this function for prediction
# might come from anywhere, so you should convert it
# to lower case prior to using it.
# TODO: The output layer should now contain a prediction.
# Return `POSITIVE` for predictions greater-than-or-equal-to `0.5`,
# and `NEGATIVE` otherwise.
self.update_input_layer(review.lower())  # use the instance method and lower-case the input, per the note above
input_layer = self.layer_0
hidden_input = np.dot(input_layer, self.weights_0_1)
hidden_output = hidden_input
final_input = np.dot(hidden_output, self.weights_1_2)
final_output = self.sigmoid(final_input)
if final_output >= 0.5:
return 'POSITIVE'
else:
return 'NEGATIVE'
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.test(reviews[-1000:],labels[-1000:])
mlp.train(reviews[:-1000],labels[:-1000])
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.001)
mlp.train(reviews[:-1000],labels[:-1000])
from IPython.display import Image
Image(filename='sentiment_network.png')
def update_input_layer(review):
global layer_0
# clear out previous state, reset the layer to be all 0s
layer_0 *= 0
for word in review.split(" "):
layer_0[0][word2index[word]] += 1
update_input_layer(reviews[0])
layer_0
review_counter = Counter()
for word in reviews[0].split(" "):
review_counter[word] += 1
review_counter.most_common()
# TODO: -Copy the SentimentNetwork class from Project 3 lesson
# -Modify it to reduce noise, like in the video
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews, labels, hidden_nodes = 10, learning_rate = 0.1):
"""Create a SentimentNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
"""
# Assign a seed to our random number generator to ensure we get
# reproducable results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
review_vocab = set()
# TODO: populate review_vocab with all of the words in the given reviews
# Remember to split reviews into individual words
# using "split(' ')" instead of "split()".
####################################################
total_counts = Counter()
for i in range(len(reviews)):
splitText = reviews[i].split(' ')
ct = Counter(splitText)
total_counts.update(ct)
review_vocab = set(total_counts.keys())
for review in reviews:
for word in review.split(" "):
review_vocab.add(word)
####################################################
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
label_vocab = set()
# TODO: populate label_vocab with all of the words in the given labels.
# There is no need to split the labels because each one is a single word.
####################################################
for i in range(len(labels)):
if labels[i] not in label_vocab:
label_vocab.add(labels[i])
for label in labels:
label_vocab.add(label)
####################################################
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
# TODO: populate self.word2index with indices for all the words in self.review_vocab
# like you saw earlier in the notebook
for i in range(len(self.review_vocab)):
self.word2index[self.review_vocab[i]] = i
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
# TODO: do the same thing you did for self.word2index and self.review_vocab,
# but for self.label2index and self.label_vocab instead
for i in range(len(self.label_vocab)):
self.label2index[self.label_vocab[i]] = i
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Store the number of nodes in input, hidden, and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# TODO: initialize self.weights_0_1 as a matrix of zeros. These are the weights between
# the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes, self.hidden_nodes))
# TODO: initialize self.weights_1_2 as a matrix of random values.
# These are the weights between the hidden layer and the output layer.
###############################self.weights_1_2 = np.zeros((self.hidden_nodes, self.output_nodes))
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5, (self.hidden_nodes, self.output_nodes))
# TODO: Create the input layer, a two-dimensional matrix with shape
# 1 x input_nodes, with all values initialized to zero
self.layer_0 = np.zeros((1,input_nodes))
def update_input_layer(self,review):
# TODO: You can copy most of the code you wrote for update_input_layer
# earlier in this notebook.
#
# However, MAKE SURE YOU CHANGE ALL VARIABLES TO REFERENCE
# THE VERSIONS STORED IN THIS OBJECT, NOT THE GLOBAL OBJECTS.
# For example, replace "layer_0 *= 0" with "self.layer_0 *= 0"
self.layer_0 *= 0
splitText = review.split(' ')
for word in splitText:
if (word in self.word2index.keys()):#######################################################
self.layer_0[0][self.word2index[word]] = 1
def get_target_for_label(self,label):
# TODO: Copy the code you wrote for get_target_for_label
# earlier in this notebook.
if label == 'POSITIVE':
return 1
else:
return 0
def sigmoid(self,x):
# TODO: Return the result of calculating the sigmoid activation function
# shown in the lectures
return 1.0 / (1.0 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
# TODO: Return the derivative of the sigmoid activation function,
# where "output" is the original output from the sigmoid fucntion
######################################return self.sigmoid(output) * (1.0 - self.sigmoid(output))
return output * (1.0 - output)
def train(self, training_reviews, training_labels):
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
#########################delta_weight_0_1 = np.zeros((self.input_nodes, self.hidden_nodes))
#########################delta_weight_1_2 = np.zeros((self.hidden_nodes, self.output_nodes))
for i in range(len(training_reviews)):
# TODO: Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
# TODO: Implement the forward pass through the network.
# That means use the given review to update the input layer,
# then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Do not use an activation function for the hidden layer,
# but use the sigmoid activation function for the output layer.
######################################################update_input_layer(review)
self.update_input_layer(review)
t = self.get_target_for_label(label)
##hidden_input = np.dot(self.layer_0, self.weights_0_1)
hidden_input = self.layer_0.dot(self.weights_0_1)
hidden_output = hidden_input
##layer3_input = np.dot(hidden_output, self.weights_1_2)
layer3_input = hidden_output.dot(self.weights_1_2)
output = self.sigmoid(layer3_input)
# TODO: Implement the back propagation pass here.
# That means calculate the error for the forward pass's prediction
# and update the weights in the network according to their
# contributions toward the error, as calculated via the
# gradient descent and back propagation algorithms you
# learned in class.
error = output - t
##########################error_term_o = error * self.sigmoid_output_2_derivative(layer3_input)
error_term_o = error * self.sigmoid_output_2_derivative(output)
##########################error_term_h = self.weights_1_2 * error_term_o * 1.0
error_term_h = np.dot(self.weights_1_2, error_term_o) * 1.0
#print("test:")
#print(np.dot(error_term_o, hidden_output).shape)
#print(delta_weight_1_2.shape)
######################delta_weight_1_2 += np.dot(error_term_o, hidden_output).transpose()
######################delta_weight_0_1 += np.dot(error_term_h, self.layer_0).transpose()
self.weights_1_2 -= self.learning_rate * np.dot(error_term_o, hidden_output).transpose()
self.weights_0_1 -= self.learning_rate * np.dot(error_term_h, self.layer_0).transpose()
if output >= 0.5 and label == 'POSITIVE':
correct_so_far += 1
elif output < 0.5 and label == 'NEGATIVE':
correct_so_far += 1
# TODO: Keep track of correct predictions. To determine if the prediction was
# correct, check that the absolute value of the output error
# is less than 0.5. If so, add one to the correct_so_far count.
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
"""Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
"""
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
#print(testing_reviews[i])
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
"""Returns a POSITIVE or NEGATIVE prediction for the given review."""
# TODO: Run a forward pass through the network, like you did in the
# "train" function. That means use the given review to
# update the input layer, then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Note: The review passed into this function for prediction
# might come from anywhere, so you should convert it
# to lower case prior to using it.
# TODO: The output layer should now contain a prediction.
# Return `POSITIVE` for predictions greater-than-or-equal-to `0.5`,
# and `NEGATIVE` otherwise.
#############################################
self.update_input_layer(review.lower())
input_layer = self.layer_0
hidden_input = np.dot(input_layer, self.weights_0_1)
hidden_output = hidden_input
final_input = np.dot(hidden_output, self.weights_1_2)
final_output = self.sigmoid(final_input)
#print(final_output)
if final_output[0][0] >= 0.5:
return 'POSITIVE'
else:
return 'NEGATIVE'
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
mlp.test(reviews[-1000:],labels[-1000:])
Image(filename='sentiment_network_sparse.png')
layer_0 = np.zeros(10)
layer_0
layer_0[4] = 1
layer_0[9] = 1
layer_0
weights_0_1 = np.random.randn(10,5)
layer_0.dot(weights_0_1)
indices = [4,9]
layer_1 = np.zeros(5)
for index in indices:
layer_1 += (1 * weights_0_1[index])
layer_1
Image(filename='sentiment_network_sparse_2.png')
layer_1 = np.zeros(5)
for index in indices:
layer_1 += (weights_0_1[index])
layer_1
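# Sanity check: the sparse index-sum gives exactly the same result as the full
# matrix product against the mostly-zero layer_0 defined above
assert np.allclose(layer_1, layer_0.dot(weights_0_1))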
# TODO: -Copy the SentimentNetwork class from Project 4 lesson
# -Modify it according to the above instructions
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):
"""Create a SentimentNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
"""
# Assign a seed to our random number generator to ensure we get
# reproducable results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
# populate review_vocab with all of the words in the given reviews
review_vocab = set()
for review in reviews:
for word in review.split(" "):
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
# populate label_vocab with all of the words in the given labels.
label_vocab = set()
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# These are the weights between the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
# The input layer, a two-dimensional matrix with shape 1 x input_nodes
self.layer_0 = np.zeros((1,input_nodes))
self.layer_1 = np.zeros((1, hidden_nodes))
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
def train(self, training_reviews_raw, training_labels):
# make sure we have a matching number of reviews and labels
assert(len(training_reviews_raw) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
#############################################
training_reviews = []
for review in training_reviews_raw:
t_review = set()
for word in review.split(' '):
if word in self.word2index.keys():
t_review.add(self.word2index[word])
training_reviews.append(list(t_review))
#############################################
for i in range(len(training_reviews_raw)):
# Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
# Hidden layer (sparse version: the input layer is skipped entirely; we sum
# only the weight rows for the word indices present in this review)
#################################################
self.layer_1 *= 0
for index in review:
self.layer_1 += self.weights_0_1[index]
# Output layer
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# Update the weights
self.weights_1_2 -= self.layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
#################################################
# sparse version: only update the input-to-hidden weight rows for words
# that actually appear in this review
for index in review:
self.weights_0_1[index] -= layer_1_delta[0] * self.learning_rate # update input-to-hidden weights with gradient descent step
# Keep track of correct predictions.
if(layer_2 >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
elif(layer_2 < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
"""Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
"""
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
"""Returns a POSITIVE or NEGATIVE prediction for the given review."""
# Run a forward pass through the network, like in the "train" function.
# Hidden layer (sparse version: no input layer to fill in; sum the weight
# rows for the word indices found in the review)
self.layer_1 *= 0
ind = set()
for word in review.lower().split(' '):
if word in self.word2index.keys():
ind.add(self.word2index[word])
index = list(ind)
for i in index:
self.layer_1 += self.weights_0_1[i]
# Output layer
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
# Return POSITIVE for values greater than or equal to 0.5 in the output layer;
# return NEGATIVE for other values
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
mlp.test(reviews[-1000:],labels[-1000:])
from IPython.display import Image
Image(filename='sentiment_network_sparse_2.png')
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
from bokeh.models import ColumnDataSource, LabelSet
from bokeh.plotting import figure, show, output_file
from bokeh.io import output_notebook
output_notebook()
hist, edges = np.histogram(list(map(lambda x:x[1],pos_neg_ratios.most_common())), density=True, bins=100)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="Word Positive/Negative Affinity Distribution")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
frequency_frequency = Counter()
for word, cnt in total_counts.most_common():
frequency_frequency[cnt] += 1
hist, edges = np.histogram(list(map(lambda x:x[1],frequency_frequency.most_common())), density=True, bins=100)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="The frequency distribution of the words in our corpus")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
# TODO: -Copy the SentimentNetwork class from Project 5 lesson
# -Modify it according to the above instructions
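# A rough sketch of the vocabulary filtering the TODO above calls for; the
# helper name `filter_vocab` is hypothetical and the details are assumed, but
# the idea is to keep only words that are frequent enough (min_count) and
# polarized enough (polarity_cutoff, applied to the log pos/neg ratio):
def filter_vocab(total_counts, pos_neg_ratios, min_count, polarity_cutoff):
    vocab = set()
    for word, cnt in total_counts.items():
        if cnt >= min_count and abs(pos_neg_ratios[word]) >= polarity_cutoff:
            vocab.add(word)
    return vocab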
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.05,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
mlp.test(reviews[-1000:],labels[-1000:])
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.8,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
mlp.test(reviews[-1000:],labels[-1000:])
mlp_full = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=0,polarity_cutoff=0,learning_rate=0.01)
mlp_full.train(reviews[:-1000],labels[:-1000])
Image(filename='sentiment_network_sparse.png')
def get_most_similar_words(focus = "horrible"):
most_similar = Counter()
for word in mlp_full.word2index.keys():
most_similar[word] = np.dot(mlp_full.weights_0_1[mlp_full.word2index[word]],mlp_full.weights_0_1[mlp_full.word2index[focus]])
return most_similar.most_common()
get_most_similar_words("excellent")
get_most_similar_words("terrible")
import matplotlib.colors as colors
words_to_visualize = list()
for word, ratio in pos_neg_ratios.most_common(500):
if(word in mlp_full.word2index.keys()):
words_to_visualize.append(word)
for word, ratio in list(reversed(pos_neg_ratios.most_common()))[0:500]:
if(word in mlp_full.word2index.keys()):
words_to_visualize.append(word)
pos = 0
neg = 0
colors_list = list()
vectors_list = list()
for word in words_to_visualize:
if word in pos_neg_ratios.keys():
vectors_list.append(mlp_full.weights_0_1[mlp_full.word2index[word]])
if(pos_neg_ratios[word] > 0):
pos+=1
colors_list.append("#00ff00")
else:
neg+=1
colors_list.append("#000000")
from sklearn.manifold import TSNE
tsne = TSNE(n_components=2, random_state=0)
words_top_ted_tsne = tsne.fit_transform(vectors_list)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="vector T-SNE for most polarized words")
source = ColumnDataSource(data=dict(x1=words_top_ted_tsne[:,0],
x2=words_top_ted_tsne[:,1],
names=words_to_visualize,
color=colors_list))
p.scatter(x="x1", y="x2", size=8, source=source, fill_color="color")
word_labels = LabelSet(x="x1", y="x2", text="names", y_offset=6,
text_font_size="8pt", text_color="#555555",
source=source, text_align='center')
p.add_layout(word_labels)
show(p)
# green indicates positive words, black indicates negative words
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Note
Step2: Lesson
Step3: Project 1
Step4: We'll create three Counter objects, one for words from postive reviews, one for words from negative reviews, and one for all the words.
Step5: TODO
Step6: Run the following two cells to list the words used in positive reviews and negative reviews, respectively, ordered from most to least commonly used.
Step7: As you can see, common words like "the" appear very often in both positive and negative reviews. Instead of finding the most common words in positive or negative reviews, what you really want are the words found in positive reviews more often than in negative reviews, and vice versa. To accomplish this, you'll need to calculate the ratios of word usage between positive and negative reviews.
Step8: Examine the ratios you've calculated for a few words
Step9: Looking closely at the values you just calculated, we see the following
Step10: Examine the new ratios you've calculated for the same words from before
Step11: If everything worked, now you should see neutral words with values close to zero. In this case, "the" is near zero but slightly positive, so it was probably used in more positive reviews than negative reviews. But look at "amazing"'s ratio - it's above 1, showing it is clearly a word with positive sentiment. And "terrible" has a similar score, but in the opposite direction, so it's below -1. It's now clear that both of these words are associated with specific, opposing sentiments.
Step12: End of Project 1.
Step13: Project 2
Step14: Run the following cell to check your vocabulary size. If everything worked correctly, it should print 74074
Step15: Take a look at the following image. It represents the layers of the neural network you'll be building throughout this notebook. layer_0 is the input layer, layer_1 is a hidden layer, and layer_2 is the output layer.
Step16: TODO
Step17: Run the following cell. It should display (1, 74074)
Step18: layer_0 contains one entry for every word in the vocabulary, as shown in the above image. We need to make sure we know the index of each word, so run the following cell to create a lookup table that stores the index of every word.
Step20: TODO
Step21: Run the following cell to test updating the input layer with the first review. The indices assigned may not be the same as in the solution, but hopefully you'll see some non-zero values in layer_0.
Step23: TODO
Step24: Run the following two cells. They should print out 'POSITIVE' and 1, respectively.
Step25: Run the following two cells. They should print out 'NEGATIVE' and 0, respectively.
Step33: End of Project 2.
Step34: Run the following cell to create a SentimentNetwork that will train on all but the last 1000 reviews (we're saving those for testing). Here we use a learning rate of 0.1.
Step35: Run the following cell to test the network's performance against the last 1000 reviews (the ones we held out from our training set).
Step36: Run the following cell to actually train the network. During training, it will display the model's accuracy repeatedly as it trains so you can see how well it's doing.
Step37: That most likely didn't train very well. Part of the reason may be that the learning rate is too high. Run the following cell to recreate the network with a smaller learning rate, 0.01, and then train the new network.
Step38: That probably wasn't much different. Run the following cell to recreate the network one more time with an even smaller learning rate, 0.001, and then train the new network.
Step39: With a learning rate of 0.001, the network should finally have started to improve during training. It's still not very good, but it shows that this solution has potential. We will improve it in the next lesson.
Step47: Project 4
Step48: Run the following cell to recreate the network and train it. Notice we've gone back to the higher learning rate of 0.1.
Step49: That should have trained much better than the earlier attempts. It's still not wonderful, but it should have improved dramatically. Run the following cell to test your model with 1000 predictions.
Step50: End of Project 4.
Step60: Project 5
Step61: Run the following cell to recreate the network and train it once again.
Step62: That should have trained much better than the earlier attempts. Run the following cell to test your model with 1000 predictions.
Step63: End of Project 5.
Step64: Project 6
Step65: Run the following cell to train your network with a small polarity cutoff.
Step66: And run the following cell to test its performance. It should be
Step67: Run the following cell to train your network with a much larger polarity cutoff.
Step68: And run the following cell to test its performance.
Step69: End of Project 6.
|
12,229
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'inpe', 'sandbox-3', 'atmoschem')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
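# Example of how these would be filled in (placeholder values, not real people):
# DOC.set_author("Jane Doe", "jane.doe@example.org")
# DOC.set_contributor("John Roe", "john.roe@example.org")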
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
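# For reference, a completed property would look like the commented line
# below (a sketch only -- the value is illustrative, not a statement about
# any particular model):
# DOC.set_value("example value")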
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Chemistry Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 1.8. Coupling With Chemical Reactivity
Step12: 2. Key Properties --> Software Properties
Step13: 2.2. Code Version
Step14: 2.3. Code Languages
Step15: 3. Key Properties --> Timestep Framework
Step16: 3.2. Split Operator Advection Timestep
Step17: 3.3. Split Operator Physical Timestep
Step18: 3.4. Split Operator Chemistry Timestep
Step19: 3.5. Split Operator Alternate Order
Step20: 3.6. Integrated Timestep
Step21: 3.7. Integrated Scheme Type
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
Step23: 4.2. Convection
Step24: 4.3. Precipitation
Step25: 4.4. Emissions
Step26: 4.5. Deposition
Step27: 4.6. Gas Phase Chemistry
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Step30: 4.9. Photo Chemistry
Step31: 4.10. Aerosols
Step32: 5. Key Properties --> Tuning Applied
Step33: 5.2. Global Mean Metrics Used
Step34: 5.3. Regional Metrics Used
Step35: 5.4. Trend Metrics Used
Step36: 6. Grid
Step37: 6.2. Matches Atmosphere Grid
Step38: 7. Grid --> Resolution
Step39: 7.2. Canonical Horizontal Resolution
Step40: 7.3. Number Of Horizontal Gridpoints
Step41: 7.4. Number Of Vertical Levels
Step42: 7.5. Is Adaptive Grid
Step43: 8. Transport
Step44: 8.2. Use Atmospheric Transport
Step45: 8.3. Transport Details
Step46: 9. Emissions Concentrations
Step47: 10. Emissions Concentrations --> Surface Emissions
Step48: 10.2. Method
Step49: 10.3. Prescribed Climatology Emitted Species
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Step51: 10.5. Interactive Emitted Species
Step52: 10.6. Other Emitted Species
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
Step54: 11.2. Method
Step55: 11.3. Prescribed Climatology Emitted Species
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Step57: 11.5. Interactive Emitted Species
Step58: 11.6. Other Emitted Species
Step59: 12. Emissions Concentrations --> Concentrations
Step60: 12.2. Prescribed Upper Boundary
Step61: 13. Gas Phase Chemistry
Step62: 13.2. Species
Step63: 13.3. Number Of Bimolecular Reactions
Step64: 13.4. Number Of Termolecular Reactions
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Step67: 13.7. Number Of Advected Species
Step68: 13.8. Number Of Steady State Species
Step69: 13.9. Interactive Dry Deposition
Step70: 13.10. Wet Deposition
Step71: 13.11. Wet Oxidation
Step72: 14. Stratospheric Heterogeneous Chemistry
Step73: 14.2. Gas Phase Species
Step74: 14.3. Aerosol Species
Step75: 14.4. Number Of Steady State Species
Step76: 14.5. Sedimentation
Step77: 14.6. Coagulation
Step78: 15. Tropospheric Heterogeneous Chemistry
Step79: 15.2. Gas Phase Species
Step80: 15.3. Aerosol Species
Step81: 15.4. Number Of Steady State Species
Step82: 15.5. Interactive Dry Deposition
Step83: 15.6. Coagulation
Step84: 16. Photo Chemistry
Step85: 16.2. Number Of Reactions
Step86: 17. Photo Chemistry --> Photolysis
Step87: 17.2. Environmental Conditions
|
12,230
|
<ASSISTANT_TASK:>
Python Code:
%tensorflow_version 1.x
import tensorflow as tf
print(tf.__version__)
# Silence deprecation warnings for now.
tf.logging.set_verbosity(tf.logging.ERROR)
device_name = tf.test.gpu_device_name()
if device_name != '/device:GPU:0':
print('GPU device not found')
gpu = False
else:
print('Found GPU at: {}'.format(device_name))
gpu = True
!git clone https://github.com/google/ffn.git
%cd ffn
from google.protobuf import text_format
from ffn.inference import inference
from ffn.inference import inference_pb2
# Download the example datasets.
!mkdir -p third_party/neuroproof_examples
!gsutil rsync -r -x ".*.gz" gs://ffn-flyem-fib25/ third_party/neuroproof_examples/
config = '''image {
hdf5: "third_party/neuroproof_examples/training_sample2/grayscale_maps.h5:raw"
}
image_mean: 128
image_stddev: 33
checkpoint_interval: 1800
seed_policy: "PolicyPeaks"
model_checkpoint_path: "models/fib25/model.ckpt-27465036"
model_name: "convstack_3d.ConvStack3DFFNModel"
model_args: "{\\"depth\\": 12, \\"fov_size\\": [33, 33, 33], \\"deltas\\": [8, 8, 8]}"
segmentation_output_dir: "results/fib25/training2"
inference_options {
init_activation: 0.95
pad_value: 0.05
move_threshold: 0.9
min_boundary_dist { x: 1 y: 1 z: 1}
segment_threshold: 0.6
min_segment_size: 1000
}'''
req = inference_pb2.InferenceRequest()
_ = text_format.Parse(config, req)
runner = inference.Runner()
runner.start(req)
canvas, alignment = runner.make_canvas((0, 0, 0), (250, 250, 250))
# Create a single segment, starting from the specified origin point.
if gpu:
vis_update = 20
else:
vis_update = 1
canvas.segment_at((125, 125, 125), # zyx
dynamic_image=inference.DynamicImage(),
vis_update_every=vis_update)
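# Inspect the result (a sketch; this assumes the Canvas object exposes the
# label volume as `canvas.segmentation`, which may differ between FFN versions):
import numpy as np
seg = canvas.segmentation
print('segment IDs found:', np.unique(seg))  # 0 is background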
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set up FFN code and sample data
Step2: Run inference
|
12,231
|
<ASSISTANT_TASK:>
Python Code:
%%writefile requirements.txt
joblib~=1.0
numpy~=1.20
scikit-learn~=0.24
google-cloud-storage>=1.26.0,<2.0.0dev
# Required in Docker serving container
%pip install -U -r requirements.txt
# For local FastAPI development and running
%pip install -U "uvicorn[standard]>=0.12.0,<0.14.0" fastapi~=0.63
# AI Platform (Classic) client library
%pip install -U google-api-python-client
# Automatically restart kernel after installs
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
# Get your Google Cloud project ID from gcloud
shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null
try:
PROJECT_ID = shell_output[0]
except IndexError:
PROJECT_ID = None
print("Project ID:", PROJECT_ID)
if PROJECT_ID == "" or PROJECT_ID is None:
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
import os
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on AI Platform, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING") and not os.getenv(
"GOOGLE_APPLICATION_CREDENTIALS"
):
%env GOOGLE_APPLICATION_CREDENTIALS ''
REGION = "us-central1" # @param {type:"string"}
MODEL_ARTIFACT_DIR = "custom-container-prediction-model" # @param {type:"string"}
REPOSITORY = "custom-container-prediction-sklearn" # @param {type:"string"}
IMAGE = "sklearn-fastapi-server" # @param {type:"string"}
MODEL_NAME = "sklearn_custom_container" # @param {type:"string"}
VERSION_NAME = "v1" # @param {type:"string"}
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
! gsutil mb -l $REGION $BUCKET_NAME
! gsutil ls -al $BUCKET_NAME
%mkdir app
%%writefile app/preprocess.py
import numpy as np
class MySimpleScaler(object):
def __init__(self):
self._means = None
self._stds = None
def preprocess(self, data):
if self._means is None: # during training only
self._means = np.mean(data, axis=0)
if self._stds is None: # during training only
self._stds = np.std(data, axis=0)
if not self._stds.all():
raise ValueError("At least one column has standard deviation of 0.")
return (data - self._means) / self._stds
%cd app/
import pickle
import joblib
from preprocess import MySimpleScaler
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
iris = load_iris()
scaler = MySimpleScaler()
X = scaler.preprocess(iris.data)
y = iris.target
model = RandomForestClassifier()
model.fit(X, y)
joblib.dump(model, "model.joblib")
with open("preprocessor.pkl", "wb") as f:
pickle.dump(scaler, f)
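# Optional sanity check (a sketch) before uploading the artifacts: score the
# model on its own training data; a random forest should fit iris almost perfectly.
print(model.score(X, y))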
!gsutil cp model.joblib preprocessor.pkl {BUCKET_NAME}/{MODEL_ARTIFACT_DIR}/
%cd ..
%%writefile app/main.py
from fastapi import FastAPI, Request
import joblib
import json
import numpy as np
import pickle
import os
from google.cloud import storage
from preprocess import MySimpleScaler
from sklearn.datasets import load_iris
app = FastAPI()
gcs_client = storage.Client()
with open("preprocessor.pkl", 'wb') as preprocessor_f, open("model.joblib", 'wb') as model_f:
gcs_client.download_blob_to_file(
f"{os.environ['AIP_STORAGE_URI']}/preprocessor.pkl", preprocessor_f
)
gcs_client.download_blob_to_file(
f"{os.environ['AIP_STORAGE_URI']}/model.joblib", model_f
)
with open("preprocessor.pkl", "rb") as f:
preprocessor = pickle.load(f)
_class_names = load_iris().target_names
_model = joblib.load("model.joblib")
_preprocessor = preprocessor
@app.get(os.environ['AIP_HEALTH_ROUTE'], status_code=200)
def health():
return {}
@app.post(os.environ['AIP_PREDICT_ROUTE'])
async def predict(request: Request):
body = await request.json()
instances = body["instances"]
inputs = np.asarray(instances)
preprocessed_inputs = _preprocessor.preprocess(inputs)
outputs = _model.predict(preprocessed_inputs)
return {"predictions": [_class_names[class_num] for class_num in outputs]}
%%writefile app/prestart.sh
#!/bin/bash
export PORT=$AIP_HTTP_PORT
%%writefile instances.json
{
"instances": [
[6.7, 3.1, 4.7, 1.5],
[4.6, 3.1, 1.5, 0.2]
]
}
# NOTE: Copy in credentials to run locally, this step can be skipped for deployment
%cp $GOOGLE_APPLICATION_CREDENTIALS app/credentials.json
%%writefile Dockerfile
FROM tiangolo/uvicorn-gunicorn-fastapi:python3.7
COPY ./app /app
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
!docker build \
--tag={REGION}-docker.pkg.dev/{PROJECT_ID}/{REPOSITORY}/{IMAGE} \
.
!docker rm local-iris
!docker run -d -p 80:8080 \
--name=local-iris \
-e AIP_HTTP_PORT=8080 \
-e AIP_HEALTH_ROUTE=/health \
-e AIP_PREDICT_ROUTE=/predict \
-e AIP_STORAGE_URI={BUCKET_NAME}/{MODEL_ARTIFACT_DIR} \
-e GOOGLE_APPLICATION_CREDENTIALS=credentials.json \
{REGION}-docker.pkg.dev/{PROJECT_ID}/{REPOSITORY}/{IMAGE}
!curl localhost/health
!curl -X POST \
-d @instances.json \
-H "Content-Type: application/json; charset=utf-8" \
localhost/predict
!docker stop local-iris
!gcloud beta artifacts repositories create {REPOSITORY} \
--repository-format=docker \
--location=$REGION
!gcloud auth configure-docker {REGION}-docker.pkg.dev
!docker push {REGION}-docker.pkg.dev/{PROJECT_ID}/{REPOSITORY}/{IMAGE}
!gcloud beta ai-platform models create $MODEL_NAME \
--region=$REGION \
--enable-logging \
--enable-console-logging
!echo "deploymentUri: {BUCKET_NAME}/{MODEL_ARTIFACT_DIR}" > config.yaml
!gcloud beta ai-platform versions create $VERSION_NAME \
--region=$REGION \
--model=$MODEL_NAME \
--machine-type=n1-standard-4 \
--config=config.yaml \
--image={REGION}-docker.pkg.dev/{PROJECT_ID}/{REPOSITORY}/{IMAGE}
!curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @instances.json \
https://{REGION}-ml.googleapis.com/v1/projects/{PROJECT_ID}/models/{MODEL_NAME}/versions/{VERSION_NAME}:predict
from google.api_core.client_options import ClientOptions
from googleapiclient import discovery
client_options = ClientOptions(api_endpoint=f"https://{REGION}-ml.googleapis.com")
service = discovery.build("ml", "v1", client_options=client_options)
response = (
service.projects()
.predict(
name=f"projects/{PROJECT_ID}/models/{MODEL_NAME}/versions/{VERSION_NAME}",
body={"instances": [[6.7, 3.1, 4.7, 1.5], [4.6, 3.1, 1.5, 0.2]]},
)
.execute()
)
if "error" in response:
raise RuntimeError(response["error"])
else:
print(response)
!gcloud beta ai-platform predict \
--region=$REGION \
--model=$MODEL_NAME \
--json-request=instances.json
# Delete the model version
!gcloud ai-platform versions delete $VERSION_NAME \
--region=$REGION \
--model=$MODEL_NAME \
--quiet
# Delete the model
!gcloud ai-platform models delete $MODEL_NAME \
--region=$REGION \
--quiet
# Delete the container image from Artifact Registry
!gcloud artifacts docker images delete \
--quiet \
--delete-tags \
{REGION}-docker.pkg.dev/{PROJECT_ID}/{REPOSITORY}/{IMAGE}
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Restart the kernel
Step2: Before you begin
Step3: Otherwise, set your project ID here.
Step4: Authenticate your Google Cloud account
Step5: Configure project and resource names
Step6: REGION - Used for operations
Step7: Only if your bucket doesn't already exist
Step8: Finally, validate access to your Cloud Storage bucket by examining its contents
Step9: Write your pre-processor
Step10: Train and store model with pre-processor
Step11: Upload model artifacts and custom code to Cloud Storage
Step12: Build a FastAPI server
Step13: Add pre-start script
Step14: Store test instances to use later
Step15: Build and push container to Artifact Registry
Step16: Write the Dockerfile, using tiangolo/uvicorn-gunicorn-fastapi as a base image. This will automatically run FastAPI for you using Gunicorn and Uvicorn. Visit the FastAPI docs to read more about deploying FastAPI with Docker.
Step17: Build the image and tag the Artifact Registry path that you will push to.
Step18: Run and test the container locally (optional)
Step19: Push the container to artifact registry
Step20: Deploy to AI Platform (Classic)
Step21: Create the model version
Step22: Send predictions
Step23: Using Python SDK
Step24: Using gcloud CLI
Step25: Cleaning up
|
12,232
|
<ASSISTANT_TASK:>
Python Code:
%pip install 'firebase_admin>=4.1.0'
%pip install 'tensorflow>=2.1.0'
import ipywidgets
uploader = ipywidgets.FileUpload(
accept='.json',
multiple=False
)
service_acct_file = {}
def handle_upload(change):
service_acct_file['name'] = next(iter(change['owner'].value))
service_acct_file['data'] = change['owner'].value[service_acct_file['name']]['content']
with open(service_acct_file['name'], 'wb') as f:
f.write(service_acct_file['data'])
print('Uploaded {}'.format(service_acct_file['name']))
uploader.observe(handle_upload, names='data')
display(uploader)
import os
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = os.path.realpath(service_acct_file['name'])
storage_bucket = input('Storage bucket (no "gs://"): ')
import firebase_admin
from firebase_admin import ml
firebase_admin.initialize_app(options={'storageBucket': storage_bucket})
import tensorflow as tf
# Create a simple Keras model.
x = [-1, 0, 1, 2, 3, 4]
y = [-3, -1, 1, 3, 5, 7]
model_binary = tf.keras.models.Sequential(
[tf.keras.layers.Dense(units=1, input_shape=[1])])
model_binary.compile(optimizer='sgd', loss='mean_squared_error')
model_binary.fit(x, y, epochs=3)
# This takes the Keras model, converts it to a TFLite model, and uploads it to
# your bucket as my_model.tflite
source = ml.TFLiteGCSModelSource.from_keras_model(model_binary, 'my_model.tflite')
print(source.gcs_tflite_uri)
import os
import sys
py_bin_dir = os.path.dirname(sys.executable)
os.environ['PATH'] = '{}:{}'.format(os.environ['PATH'], py_bin_dir)
model_format = ml.TFLiteFormat(model_source=source)
sdk_model_1 = ml.Model(display_name="my_model_1", model_format=model_format)
firebase_model_1 = ml.create_model(sdk_model_1)
if firebase_model_1.validation_error:
raise Exception(firebase_model_1.validation_error)
print(firebase_model_1.as_dict())
model_id = firebase_model_1.model_id
firebase_model_1 = ml.publish_model(model_id)
tf.saved_model.save(tf.keras.applications.MobileNet(), '/tmp/saved_model/1')
# This takes the saved model directory, converts it to TFLite and writes it to your bucket as my_model_2.tflite
source2 = ml.TFLiteGCSModelSource.from_saved_model('/tmp/saved_model/1', 'my_model_2.tflite')
model_format2 = ml.TFLiteFormat(model_source=source2)
firebase_model_1.model_format = model_format2
firebase_model_1.tags = ['tag1', 'tag2'] # replaces any existing tags with these tags.
firebase_model_2 = ml.update_model(firebase_model_1)
if firebase_model_2.validation_error:
raise Exception(firebase_model_2.validation_error)
print(firebase_model_2.as_dict())
firebase_model_2 = ml.publish_model(model_id)
print(firebase_model_2.as_dict())
firebase_model_get = ml.get_model(model_id)
print(firebase_model_get.as_dict())
firebase_models_list = ml.list_models()
iterator = firebase_models_list.iterate_all()
for m in iterator:
print(m.as_dict())
list_model_1 = ml.create_model(ml.Model(display_name='my_model_2', tags=['tag2', 'tag3'], model_format=model_format))
list_model_2 = ml.create_model(ml.Model(display_name='my_model_3', tags=['tag3'], model_format=model_format))
list_model_3 = ml.create_model(ml.Model(display_name='cat_model_1', tags=['cat'], model_format=model_format))
list_model_4 = ml.create_model(ml.Model(display_name='cat_model_2', tags=['cat'], model_format=model_format))
list_model_5 = ml.create_model(ml.Model(display_name='new_cat_model_007', tags=['cat'], model_format=model_format))
list_model_2 = ml.publish_model(list_model_2.model_id)
list_model_4 = ml.publish_model(list_model_4.model_id)
firebase_models_list_2 = ml.list_models(page_size=3)
for m in firebase_models_list_2.models:
print (m.as_dict())
firebase_models_list_3 = firebase_models_list_2.get_next_page()
for m in firebase_models_list_3.models:
print (m.as_dict())
firebase_models_list = ml.list_models(list_filter='display_name=cat_model_1')
for m in firebase_models_list.models:
print (m.as_dict())
firebase_models_list = ml.list_models(list_filter='display_name:cat_*')
for m in firebase_models_list.models:
print (m.as_dict())
firebase_models_list = ml.list_models(list_filter='tags: cat')
for m in firebase_models_list.models:
print (m.as_dict())
firebase_models_list = ml.list_models(list_filter='state.published = true')
for m in firebase_models_list.models:
print (m.as_dict())
firebase_models_list = ml.list_models(list_filter='(display_name: cat_* OR tags: tag3) AND NOT state.published = true')
for m in firebase_models_list.models:
print (m.as_dict())
ml.delete_model(model_id)
ml.delete_model(list_model_1.model_id)
ml.delete_model(list_model_2.model_id)
ml.delete_model(list_model_3.model_id)
ml.delete_model(list_model_4.model_id)
ml.delete_model(list_model_5.model_id)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Set up a Firebase project
Step2: 4. Set your Google Application Credentials location
Step3: 5. Initialize Firebase Admin
Step4: 6. Train your model
Step5: 7. Convert & upload your model
Step6: If you get a toco_from_protos
Step7: 8. Create a Model object
Step8: 9. Add the model to your Firebase project
Step9: 10. Publish the model
Step10: Now that you've published the model, you can use it in your apps.
Step11: 12. Create a second model source and model format from the new model
Step12: 13. Modify the local model and then call the API Update
Step13: 14. Publish the model_format2 model
Step14: 15. Get the model
Step15: 16. List the models
Step16: 17. Make more models so we can show lists better
Step17: 18. Publish some of them
Step18: 19. Listing with page size
Step19: 20. Listing the next page
Step20: When you retrieve the final page, get_next_page() returns None.
Step21: Filter by display name prefix (note that only prefix matching is supported; you can't do general wildcard matching)
Step22: Filter by tag
Step23: Filter by publish state
Step24: Combine filters
Step25: 22. Clean up
|
12,233
|
<ASSISTANT_TASK:>
Python Code:
print('Hello, world!')
2 + 2
import numpy
a_integer = 5
a_float = 1.41421356237
a_integer + a_float
a_number = a_integer + a_float
print(a_number)
a_string = 'How you doing, world?'
print(a_string)
a_integer + a_string
print(a_integer, a_string)
str(a_integer) + a_string
a_float = 3.1417
print(a_float)
who
# This is a comment
"""This is a multi-
line comment"""
9/3
9/4.0
9%4
2**6
64**(0.5)
a = True
b = False
2 == 2.0
2 != 2.000001
6 > 10
if 6 > 4:
print('Six is greater than four!')
if 6 > 10:
print('Six is greater than ten!')
else:
print('Six is not greater than ten!')
a = 25
if a%2 == 0:
print('This number is even!')
elif a%3 == 0:
print('This number is divisible by three!')
elif a%5 == 0:
print('This number is divisible by five!')
else:
print(str(a) + ' is not divisible by 2, 3 or 5!')
b = 0
while b < 5:
print(b)
b = b + 1
b
while True:
pass
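# Note: the `while True: pass` cell above never terminates on its own;
# interrupt the kernel to stop it before continuing.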
a = 25
b = 1
c = 2
while b != 0:
b = a%c
if b == 0:
print(str(a) + ' is divisible by ' + str(c))
else:
c += 1
a = 29
b = 1
c = 2
while b != 0:
b = a%c
if a == c:
print(str(a) + ' is a prime number.')
elif b == 0:
print(str(a) + ' is divisible by ' + str(c))
else:
c += 1
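# The same divisibility check written with a for loop instead of while
# (a sketch, shown for comparison):
a = 29
for c in range(2, a + 1):
    if a % c == 0:
        if a == c:
            print(str(a) + ' is a prime number.')
        else:
            print(str(a) + ' is divisible by ' + str(c))
        break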
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Define Variables
Step3: Comments are handy to temporarily turn some lines on or off and to document Python files. In Jupyter notebooks using markdown cells gives more control.
Step4: Logical Operations
Step5: Exercise
|
12,234
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
import numpy as np
import seaborn as sns
import pandas as pd
from Lec08 import *
plot_svc();
x = np.linspace(-1, 1, 100);
plt.plot(x, x**2)
plt.xlabel("$y-\hat{f}$", size=18);
x = np.linspace(-1, 1, 100);
plt.plot(x, np.abs(x));
plt.xlabel("$y-\hat{f}$", size=18);
plt.ylabel("$|y-\hat{f}|$", size=18);
x = np.linspace(-6, 6, 100);
plt.plot(x, 1/(1 + np.exp(-x)));
plt.xlabel("$-y\hat{f}$", size=18);
plt.ylabel("$\sigma(-y\hat{f})$", size=18);
x = np.linspace(-6, 6, 100);
plt.plot(x, np.log2(1 + np.exp(-x)));
plt.xlabel("$y\hat{f}$", size=18);
plt.ylabel("$\log(1 + \exp(-y\hat{f}))$", size=18);
x = np.linspace(-6, 6, 100);
plt.plot(x, np.where(x < 1, 1 - x, 0));
plt.xlabel("$y\hat{f}$", size=18); plt.ylabel("$\max(0,1-y\hat{f})$", size=18);
x = np.linspace(-3, 3, 100);
plt.plot(x, np.exp(-x));
plt.xlabel("$y\hat{f}$", size=18);
plt.ylabel("$\exp(-y\hat{f})$", size=18);
# adapted from http://scikit-learn.org/stable/auto_examples/linear_model/plot_sgd_loss_functions.html
def plot_loss_functions():
xmin, xmax = -4, 4
xx = np.linspace(xmin, xmax, 100)
plt.plot(xx, xx ** 2, 'm-',
label="Quadratic loss")
plt.plot([xmin, 0, 0, xmax], [1, 1, 0, 0], 'k-',
label="Zero-one loss")
plt.plot(xx, 1/(1 + np.exp(xx)), 'b-',
label="Sigmoid loss")
plt.plot(xx, np.where(xx < 1, 1 - xx, 0), 'g-',
label="Hinge loss")
plt.plot(xx, np.log2(1 + np.exp(-xx)), 'r-',
label="Log loss")
plt.plot(xx, np.exp(-xx), 'c-',
label="Exponential loss")
plt.ylim((0, 8))
plt.legend(loc="best")
plt.xlabel(r"Decision function $f(x)$")
plt.ylabel("$L(y, f)$")
# Demonstrate some loss functions
plot_loss_functions()
import pylab as pl
RANGEXS = np.linspace(0., 2., 300)
TRUEYS = np.sin(np.pi * RANGEXS)
def plot_fit(x, y, p, show,color='k'):
xfit = RANGEXS
yfit = np.polyval(p, xfit)
if show:
axes = pl.gca()
axes.set_xlim([min(RANGEXS),max(RANGEXS)])
axes.set_ylim([-2.5,2.5])
pl.scatter(x, y, facecolors='none', edgecolors=color)
pl.plot(xfit, yfit,color=color)
# pl.hold('on')  # not needed (and removed) in modern matplotlib: hold is always on
pl.xlabel('x')
pl.ylabel('y')
def calc_errors(p):
x = RANGEXS
errs = []
for i in x:
errs.append(abs(np.polyval(p, i) - np.sin(np.pi * i)) ** 2)
return errs
def calculate_bias_variance(poly_coeffs, input_values_x, true_values_y):
# poly_coeffs: a list of polynomial coefficient vectors
# input_values_x: the range of xvals we will see
# true_values_y: the true labels/targes for y
# First we calculate the mean polynomial, and compute the predictions for this mean poly
mean_coeffs = np.mean(poly_coeffs, axis=0)
mean_predicted_poly = np.poly1d(mean_coeffs)
mean_predictions_y = np.polyval(mean_predicted_poly, input_values_x)
# Then we calculate the error of this mean poly
bias_errors_across_x = (mean_predictions_y - true_values_y) ** 2
# To consider the variance errors, we need to look at every output of the coefficients
variance_errors = []
for coeff in poly_coeffs:
predicted_poly = np.poly1d(coeff)
predictions_y = np.polyval(predicted_poly, input_values_x)
# Variance error is the average squared error between the predicted values of y
# and the *average* predicted value of y
variance_error = (mean_predictions_y - predictions_y)**2
variance_errors.append(variance_error)
variance_errors_across_x = np.mean(np.array(variance_errors),axis=0)
return bias_errors_across_x, variance_errors_across_x
from matplotlib.pylab import cm
def polyfit_sin(degree=0, iterations=100, num_points=5, show=True):
total = 0
l = []
coeffs = []
errs = [0] * len(RANGEXS)
colors=cm.rainbow(np.linspace(0,1,iterations))
for i in range(iterations):
np.random.seed()
x = np.random.choice(RANGEXS,size=num_points) # Pick random points from the sinusoid
y = np.sin(np.pi * x)
p = np.polyfit(x, y, degree)
y_poly = [np.polyval(p, x_i) for x_i in x]
plot_fit(x, y, p, show,color=colors[i])
total += sum(abs(y_poly - y) ** 2) # calculate Squared Error (Squared Error)
coeffs.append(p)
errs = np.add(calc_errors(p), errs)
return total / iterations, errs / iterations, np.mean(coeffs, axis = 0), coeffs
def plot_bias_and_variance(biases,variances,range_xs,true_ys,mean_predicted_ys):
pl.plot(range_xs, mean_predicted_ys, c='k')
axes = pl.gca()
axes.set_xlim([min(range_xs),max(range_xs)])
axes.set_ylim([-3,3])
# pl.hold('on')  # not needed (and removed) in modern matplotlib: hold is always on
pl.plot(range_xs, true_ys,c='b')
pl.errorbar(range_xs, mean_predicted_ys, yerr = biases, c='y', ls="None", zorder=0,alpha=1)
pl.errorbar(range_xs, mean_predicted_ys, yerr = variances, c='r', ls="None", zorder=0,alpha=0.1)
pl.xlabel('x')
pl.ylabel('y')
# polyfit_sin() generates 5 samples of the form (x,y) where y=sin(2*pi*x)
# then it tries to fit a degree=0 polynomial (i.e. a constant func.) to the data
# Ignore return values for now, we will return to these later
_, _, _, _ = polyfit_sin(degree=0, iterations=1, num_points=5, show=True)
# Estimate two points of sin(pi * x) with a constant 5 times
_, _, _, _ = polyfit_sin(0, 5)
# Estimate two points of sin(pi * x) with a constant 100 times
_, _, _, _ = polyfit_sin(0, 25)
MSE, errs, mean_coeffs, coeffs_list = polyfit_sin(0, 100,num_points = 3,show=False)
biases, variances = calculate_bias_variance(coeffs_list,RANGEXS,TRUEYS)
plot_bias_and_variance(biases,variances,RANGEXS,TRUEYS,np.polyval(np.poly1d(mean_coeffs), RANGEXS))
poly_degree = 0
results_list = []
MSE, errs, mean_coeffs, coeffs_list = polyfit_sin(
poly_degree, 500,num_points = 5,show=False)
biases, variances = calculate_bias_variance(coeffs_list,RANGEXS,TRUEYS)
sns.barplot(x='type', y='error',hue='poly_degree', data=pd.DataFrame([
{'error':np.mean(biases), 'type':'bias','poly_degree':0},
{'error':np.mean(variances), 'type':'variance','poly_degree':0}]))
MSE, _, _, _ = polyfit_sin(degree=3, iterations=1)
_, _, _, _ = polyfit_sin(degree=3,iterations=5,num_points=5,show=True)
# Estimate two points of sin(pi * x) with a line 50 times
_, _, _, _ = polyfit_sin(degree=3, iterations=50)
MSE, errs, mean_coeffs, coeffs_list = polyfit_sin(3,500,show=False)
biases, variances = calculate_bias_variance(coeffs_list,RANGEXS,TRUEYS)
plot_bias_and_variance(biases,variances,RANGEXS,TRUEYS,np.polyval(np.poly1d(mean_coeffs), RANGEXS))
results_list = []
for poly_degree in [0,1,3]:
MSE, errs, mean_coeffs, coeffs_list = polyfit_sin(poly_degree,500,num_points=5,show=False)
biases, variances = calculate_bias_variance(coeffs_list,RANGEXS,TRUEYS)
results_list.append({'error':np.mean(biases),
'type':'bias', 'poly_degree':poly_degree})
results_list.append({'error':np.mean(variances),
'type':'variance', 'poly_degree':poly_degree})
sns.barplot(x='type', y='error',hue='poly_degree',data=pd.DataFrame(results_list))
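# Decomposition sanity check (a sketch): `errs` holds the average squared error
# against the true sinusoid for the last fitted degree, and since the targets
# here are noiseless it should approximately equal bias^2 + variance:
print(np.mean(errs), np.mean(biases + variances))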
# Image from Andrew Ng's Stanford CS229 lecture titled "Advice for applying machine learning"
from IPython.display import Image
Image(filename='images/HighVariance.png', width=800, height=600)
# Testing error still decreasing as the training set size increases. Suggests increasing the training set size.
# Large gap Between Training and Test Error.
# Image from Andrew Ng's Stanford CS229 lecture titled "Advice for applying machine learning"
from IPython.display import Image
Image(filename='images/HighBias.png', width=800, height=600)
# Training error is unacceptably high.
# Small gap between training error and testing error.
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: EECS 445
Step2: Separating Hyperplanes
Step3: Absolute Loss
Step4: 0-1 Loss
Step5: Logistic Loss
Step6: Hinge Loss
Step7: Exponential Loss
Step8: Loss Functions
Step9: Break time!
Step10: Let's return to fitting polynomials
Step11: We can do this over many datasets
Step12: What about over lots more datasets?
Step13: Decomposition
Step14: Let's now fit degree=3 polynomials
Step15: Let's now fit degree=3 polynomials
Step16: Let's now fit degree=3 polynomials
Step17: $$\mathbb{E}[(y - \hat{f})^2] = \underbrace{\sigma^2}_{\text{irreducible error}} + \underbrace{\text{Var}[\hat{f}]}_{\text{Variance}} + \underbrace{\mathbb{E}\big[f - \mathbb{E}_S[\hat{f}]\big]^2}_{\text{Bias}^2}$$
Step18: Bias Variance Tradeoff
|
12,235
|
<ASSISTANT_TASK:>
Python Code:
from math import fabs
def bisection(x1, x2, f1, f2, fh, sizevec):
"""This function finds the root of a function using bisection.
Parameters
----------
x1 : float
lower bound
x2 : float
upper bound
f1 : float
function value at lower bound
f2 : float
function value at upper bound
f1 * f2 must be < 0 in order to contain a root.
Currently this is left up to the user to check.
fh : function handle
should be of form f = fh(x)
where f is the function value
sizevec : list
input an empty array and the interval size
will be appended at each iteration
Returns
-------
xroot : float
root of function fh
"""
# divide interval in half
x = 0.5*(x1 + x2)
# save in iteration history
sizevec.append(x2-x1)
# if interval is small, then we have converged
if (fabs(x2 - x1) < 1e-6):
return x
# evaluate function at the new point (midpoint of interval)
f = fh(x)
# determine which side of the interval are root is in
if (f*f1 < 0): # left brack applies
x2 = x
f2 = f
else: # right bracket applies
x1 = x
f1 = f
# recursively call bisection with our new interval
return bisection(x1, x2, f1, f2, fh, sizevec)
%matplotlib inline
import numpy as np
from math import pi
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
def func(d):
return 45*pi*d**2 + 17200.0/d
def deriv(d):
return 90*pi*d - 17200.0/d**2
# choose starting interval
d1 = 1.0
d2 = 10.0
# evalaute function
g1 = deriv(d1)
g2 = deriv(d2)
# check that our bracket is ok
assert(g1*g2 < 0)
# find optimal point
size = []
dopt = bisection(d1, d2, g1, g2, deriv, size)
# plot function
dvec = np.linspace(d1, d2, 200)
plt.figure()
plt.plot(dvec, func(dvec)/1e3)
plt.plot(dopt, func(dopt)/1e3, 'r*', markersize=12)
plt.xlabel('diameter (m)')
plt.ylabel('cost (thousands of dollars)')
# plot convergence history (interval size)
plt.figure()
plt.semilogy(size)
plt.xlabel('iteration')
plt.ylabel('interval size')
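# Analytic cross-check (a sketch): setting the derivative to zero gives
# 90*pi*d = 17200/d**2, i.e. d_opt = (17200/(90*pi))**(1/3) ~ 3.93 m.
d_analytic = (17200.0 / (90 * pi)) ** (1.0 / 3.0)
print(dopt, d_analytic)  # should agree to within the 1e-6 bisection tolerance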
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Bracketing Methods (Bisection example)
Step2: We are interested in optimization, so we don't want to find the root of our cost function itself, but of its derivative, where the slope is zero.
|
12,236
|
<ASSISTANT_TASK:>
Python Code:
import sys, os
import re
import random  # needed for random.shuffle below
from os import listdir
from os.path import isfile, join
def fromFileToCSV(folderpath, csvfilename):
files = [f for f in listdir(folderpath) if isfile(join(folderpath, f))]
random.shuffle(files)
for filepath in files:
if filepath.endswith(".png"):
label = re.findall("^(\d+)_",filepath)
csvLine = filepath+","+str(label[0])
print(csvLine)
with open(folderpath+csvfilename, "a") as myfile:
myfile.write(csvLine)
myfile.write("\n")
if __name__ == '__main__':
if len(sys.argv) == 3:
fromFileToCSV(sys.argv[1],sys.argv[2])
import sys, os
import shutil
import re
import random
from os import listdir
from os.path import isfile, join
def file_to_files (original_file_path):
if not os.path.exists("lines_folder"):
os.makedirs("lines_folder")
file_counter = 0
my_file = open(original_file_path)
for line in my_file.readlines():
file_counter += 1
my_new_file = open("lines_folder/"+str(file_counter)+'_processed_tweet.txt', 'a')
my_new_file.write(line)
my_new_file.close()
my_file.close()
return "lines_folder/"
def from_folder_to_folders (original_folder_path, percentageFolder1, percentageFolder2, percentageFolder3):
"""Shuffle the files in original_folder_path to have different files in each of the three folders."""
files = [f for f in listdir(original_folder_path) if isfile(join(original_folder_path,f))]
random.shuffle(files)
nbFilesFolder1 = int((float(percentageFolder1)/100)*len(files))
nbFilesFolder2 = int((float(percentageFolder2)/100)*len(files))
nbFilesFolder3 = int((float(percentageFolder3)/100)*len(files))
if not os.path.exists(original_folder_path+"Folder1"):
os.makedirs(original_folder_path+"Folder1")
if not os.path.exists(original_folder_path+"Folder2"):
os.makedirs(original_folder_path+"Folder2")
if not os.path.exists(original_folder_path+"Folder3"):
os.makedirs(original_folder_path+"Folder3")
for j,filepath in enumerate(files):
sourceFolder = os.path.join(original_folder_path,filepath)
if (j > nbFilesFolder1 and j < nbFilesFolder1+nbFilesFolder2):
print "copying the files to folder 2"
if filepath.endswith(".txt"):
shutil.copy2(sourceFolder,original_folder_path+"Folder2/")
elif (j > nbFilesFolder1+nbFilesFolder2 and j < len(files)):
print "copying the files to folder 3"
if filepath.endswith(".txt"):
shutil.copy2(sourceFolder,original_folder_path+"Folder3/")
else:
print "copytin the files to folder 1"
if filepath.endswith(".txt"):
shutil.copy2(sourceFolder, original_folder_path+"Folder1/")
def main():
from_folder_to_folders(file_to_files("data/preprocessedP.txt"), 50, 30, 20)
if __name__ == '__main__':
main()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We then create our function, which we will call fromFileToCSV. This function takes two arguments
Step2: The variable files is a list containing the names of all the files under the path stored in the folderpath variable.
Step3: Exercise 2
Step5: \begin{equation}
|
12,237
|
<ASSISTANT_TASK:>
Python Code:
# Load library
import numpy as np
# Create two vectors
vector_a = np.array([1,2,3])
vector_b = np.array([4,5,6])
# Calculate dot product
np.dot(vector_a, vector_b)
# Calculate dot product
vector_a @ vector_b
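# Equivalent elementwise computation, for illustration:
# 1*4 + 2*5 + 3*6 = 32
sum(a * b for a, b in zip(vector_a, vector_b))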
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create Two Vectors
Step2: Calculate Dot Product (Method 1)
Step3: Calculate Dot Product (Method 2)
|
12,238
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
size = 50
x = np.array((np.random.randint(1,10,size), np.random.randint(1,10,size))).T
print x
mu = np.average(x, 0)  # average over axis 0 (the samples), giving one mean per column
mu
x0_bar = 0
x1_bar = 0
for xi in x:
x0_bar += xi[0]
x1_bar += xi[1]
x0_bar /= float(size)
x1_bar /= float(size)
np.array((x0_bar, x1_bar))
sigma = np.std(x, 0)  # standard deviation over axis 0 (the samples), per column
sigma
sigma2 = np.array([0.0,0.0])
for xi in x:
sigma2 += np.power(xi - mu, 2)
np.sqrt(sigma2/size)
np.var(x, 0)
np.power(sigma, 2)
np.cov(x.T)
np.dot((x-mu).T, x-mu)/size
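# Note: np.cov normalizes by N-1 by default, while the manual computation
# above divides by N; pass bias=True to make the two results match exactly.
np.cov(x.T, bias=True)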
plt.scatter(x.T[0], x.T[1])
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Unweighted Mean
Step2: To verify...
Step3: Standard Deviation
Step4: This time we will verify using vectorized code...
Step5: Variance
Step6: To verify we will simply square the standard deviation found in the last section
Step7: Covariance
|
12,239
|
<ASSISTANT_TASK:>
Python Code:
%%bash
# Install packages to test model locally.
apt-get update
apt-get install -y python-numpy python-dev cmake zlib1g-dev libjpeg-dev xvfb libav-tools xorg-dev python-opengl libboost-all-dev libsdl2-dev swig libffi-dev
pip install gym
pip install gym[atari]
pip install opencv-python
apt update && apt install -y libsm6 libxext6
apt-get install -y libxrender-dev
pip install keras
%%bash
rm -r rl_on_gcp/my_model
gcloud ml-engine local train \
--module-name=trainer.trainer \
--package-path=${PWD}/rl_on_gcp/trainer \
--\
--steps=100000\
--start_train=100\
--buffer_size=100\
--save_model=True\
--model_dir='my_model'
%%bash
JOBNAME=rl_breakout_$(date -u +%y%m%d_%H%M%S)
REGION='us-central1'
BUCKET='dqn-breakout'
gcloud ml-engine jobs submit training $JOBNAME \
--package-path=$PWD/rl_on_gcp/trainer \
--module-name=trainer.trainer \
--region=$REGION \
--staging-bucket=gs://$BUCKET \
--scale-tier=BASIC_TPU\
--runtime-version=1.9 \
--\
--steps=5000000\
--start_train=10000\
--buffer_size=1000000\
--save_model=True\
--model_dir='gs://dqn-breakout/models/'
%%writefile hyperparam.yaml
trainingInput:
scaleTier: BASIC_GPU
hyperparameters:
maxTrials: 40
maxParallelTrials: 5
enableTrialEarlyStopping: False
goal: MAXIMIZE
hyperparameterMetricTag: reward
params:
- parameterName: update_target
type: INTEGER
minValue: 500
maxValue: 5000
scaleType: UNIT_LOG_SCALE
- parameterName: init_eta
type: DOUBLE
minValue: 0.8
maxValue: 0.95
scaleType: UNIT_LOG_SCALE
- parameterName: learning_rate
type: DOUBLE
minValue: 0.00001
maxValue: 0.001
scaleType: UNIT_LOG_SCALE
- parameterName: batch_size
type: DISCRETE
discreteValues:
- 4
- 16
- 32
- 64
- 128
- 256
- 512
%%bash
JOBNAME=rl_breakout_hp_$(date -u +%y%m%d_%H%M%S)
REGION='us-central1'
BUCKET='dqn-breakout'
gcloud ml-engine jobs submit training $JOBNAME \
--package-path=$PWD/rl_on_gcp/trainer \
--module-name=trainer.trainer \
--region=$REGION \
--staging-bucket=gs://$BUCKET \
--config=hyperparam.yaml \
--runtime-version=1.10 \
--\
--steps=100000\
--start_train=10000\
--buffer_size=10000\
--model_dir='gs://dqn-breakout/models/hp/'
from google.datalab.ml import TensorBoard as tb
tb.start('gs://crawles-sandbox/rl_on_gcp/hp/4/')
!gsutil ls gs://crawles-sandbox/rl_on_gcp/hp/4/
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First run locally to make sure everything is working.
Step2: Run on ML-Engine
Step3: TODO
Step4: Launch tensorboard
|
12,240
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm, datasets
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
import matplotlib
from matplotlib import pyplot as plt
%matplotlib inline
#load the dataset, introduce the structure
data=pd.read_csv("data/kepler_q1_sgn21.csv")
data.head()
plt.hist(data['Transit'])
#show how to plot a light curve
#lc = np.loadtxt("data/kplr007349773-2009166043257_llc.tab")
#lc = np.loadtxt("data/kplr007431833-2009166043257_llc.tab")
#lc = np.loadtxt("data/kplr007672940-2009166043257_llc.tab")
lc = np.loadtxt("data/kplr007810483-2009166043257_llc.tab")
#plt.plot(lc[:,0],lc[:,1],'.')
plt.plot(lc[:,0],lc[:,3],'.')
plt.ylabel("Magnitude")
plt.xlabel("Time-2454833 (Day)")
plt.gca().invert_yaxis()
plt.show()
# we create 40 separable points
rng = np.random.RandomState(0)
n_samples_1 = 1000
n_samples_2 = 100
X = np.r_[1.5 * rng.randn(n_samples_1, 2),
0.5 * rng.randn(n_samples_2, 2) + [2, 2]]
y = [0] * (n_samples_1) + [1] * (n_samples_2)
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Paired)
plt.show()
# Split the data into a training set and a test set
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
# Run classifier, using a model that is too regularized (C too low) to see
# the impact on the results
#classifier = svm.SVC(kernel='linear', C=1)
classifier = svm.SVC(kernel='linear', C=1,class_weight={1: 10})
y_pred = classifier.fit(X_train, y_train).predict(X_test)
def plot_confusion_matrix(cm, title='Confusion matrix', cmap=plt.cm.Blues):
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(2)
plt.xticks(tick_marks, ["0","1"], rotation=45)
plt.yticks(tick_marks, ["0","1"])
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
# Compute confusion matrix
cm = confusion_matrix(y_test, y_pred)
np.set_printoptions(precision=2)
print('Confusion matrix, without normalization')
print(cm)
plt.figure()
plot_confusion_matrix(cm)
# Normalize the confusion matrix by row (i.e by the number of samples
# in each class)
cm_normalized = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print('Normalized confusion matrix')
print(cm_normalized)
plt.figure()
plot_confusion_matrix(cm_normalized, title='Normalized confusion matrix')
plt.show()
from sklearn.metrics import precision_recall_curve, precision_score,recall_score,f1_score
classifier = svm.SVC(kernel='linear', C=1)
y_score = classifier.fit(X_train, y_train).decision_function(X_test)
y_pred = classifier.fit(X_train, y_train).predict(X_test)
print(precision_score(y_test, y_pred), recall_score(y_test, y_pred), f1_score(y_test, y_pred))
precision, recall, _ = precision_recall_curve(y_test,y_score)
plt.plot(recall,precision)
classifier = svm.SVC(kernel='linear', C=1,class_weight={1: 10})
y_score2 = classifier.fit(X_train, y_train).decision_function(X_test)
y_pred2 = classifier.fit(X_train, y_train).predict(X_test)
print(precision_score(y_test, y_pred2), recall_score(y_test, y_pred2), f1_score(y_test, y_pred2))
precision2, recall2, _ = precision_recall_curve(y_test,y_score2)
plt.plot(recall2,precision2)
plt.ylabel("Precision")
plt.xlabel("Recall")
plt.show()
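# Alternative weighting (a sketch): instead of the hand-tuned {1: 10} above,
# scikit-learn can derive weights inversely proportional to class frequencies.
classifier = svm.SVC(kernel='linear', C=1, class_weight='balanced')
y_pred3 = classifier.fit(X_train, y_train).predict(X_test)
print(precision_score(y_test, y_pred3), recall_score(y_test, y_pred3), f1_score(y_test, y_pred3))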
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Classification problem with Unbalanced classes
Step2: Precision, Recall and F measures
|
12,241
|
<ASSISTANT_TASK:>
Python Code:
from ipyparallel import Client
cluster = Client()
dview = cluster[:]
dview.use_dill()
lview = cluster.load_balanced_view()
len(dview)
# import os
# from scripts.hpc05 import HPC05Client
# os.environ['SSH_AUTH_SOCK'] = os.path.join(os.path.expanduser('~'), 'ssh-agent.socket')
# cluster = HPC05Client()
%%px --local
import sys
import os
# CHANGE THE LINE BELOW INTO THE CORRECT FOLDER!
sys.path.append(os.path.join(os.path.expanduser('~'), 'orbitalfield'))
import kwant
import numpy as np
from fun import *
def gap_and_decay(lead, p, val, tol=1e-4):
gap = find_gap(lead, p, val, tol)
decay_length = find_decay_length(lead, p, val)
return gap, decay_length
import holoviews as hv
import holoviews_rc
hv.notebook_extension()
%%px --local
# angle = 0 # WIRE WITH SC ON TOP
angle = 45 # WIRE WITH SC ON SIDE
p = make_params(t_interface=7/8*constants.t, Delta=68.4, r1=50, r2=70,
orbital=True, angle=angle, A_correction=True, alpha=100) #r2=70
p.V = lambda x, y, z: 2 / 50 * z
lead = make_3d_wire_external_sc(a=constants.a, r1=p.r1, r2=p.r2, angle=p.angle)
# WIRE WITH CONSTANT GAP
# lead = make_3d_wire()
# p = make_params(V=lambda x, y, z: 0, orbital=True)
# give an array of angles that you want to use
# thetas = np.array([0, np.tan(1/10), 0.5 * np.pi - np.tan(1/10), 0.5 * np.pi])
# phis = np.array([0, np.tan(1/10), 0.5 * np.pi - np.tan(1/10), 0.5 * np.pi])
thetas = np.array([0.5 * np.pi])
phis = np.array([0])
# the range of magnetic field and chemical potential
Bs = np.linspace(0, 2, 400)
mu_mesh = np.linspace(0, 35, 400)
# creates a 3D array with all values of magnetic field for all specified angles
pos = spherical_coords(Bs.reshape(-1, 1, 1), thetas.reshape(1, -1, 1), phis.reshape(1, 1, -1))
pos_vec = pos.reshape(-1, 3)
mus_output = lview.map_sync(lambda B: find_phase_bounds(lead, p, B, num_bands=40), pos_vec)
mus, vals, mask = create_mask(Bs, thetas, phis, mu_mesh, mus_output)
N = len(vals)
step = N // (len(phis) * len(thetas))
print(N, step)
import holoviews_rc
from itertools import product
from math import pi
kwargs = {'kdims': [dimensions.B, dimensions.mu],
'extents': bnds(Bs, mu_mesh),
'label': 'Topological boundaries',
'group': 'Lines'}
angles = list(product(enumerate(phis), enumerate(thetas)))
boundaries = {(theta / pi, phi / pi): hv.Path((Bs, mus[i, j, :, ::2]), **kwargs)
for (i, phi), (j, theta) in angles}
BlochSpherePlot.bgcolor = 'white'
sphere = {(theta / pi, phi / pi): BlochSphere([[1, 0, 0], spherical_coords(1, theta, phi)], group='Sphere')
for (i, phi), (j, theta) in angles}
hv.HoloMap(boundaries, **dimensions.angles) + hv.HoloMap(sphere, **dimensions.angles)
full_phase_diagram = False
import os.path
import sys
fname_list = []
for i, n in enumerate(range(0, N, step)):
fname = "tempdata/" + str(n)+"-"+str((i+1)*step)+".dat"
fname_list.append(fname)
if not os.path.isfile(fname): # check if file already exists
lview.results.clear()
cluster.results.clear()
cluster.metadata.clear()
print(fname)
sys.stdout.flush()
if full_phase_diagram:
gaps_and_decays_output = lview.map_async(lambda val: gap_and_decay(lead, p, val[:-1] + (True,)), vals[n:(i+1) * step])
else:
gaps_and_decays_output = lview.map_async(lambda val: gap_and_decay(lead, p, val), vals[n:(i+1) * step])
gaps_and_decays_output.wait_interactive()
np.savetxt(fname, gaps_and_decays_output.result())
print(n, (i+1) * step)
cluster.shutdown(hub=True)
gaps_and_decay_output = np.vstack([np.loadtxt(fname) for fname in fname_list])
gaps_output, decay_length_output = np.array(gaps_and_decay_output).T
gaps = np.array(gaps_output).reshape(mask.shape)
gaps[1:, 0] = gaps[0, 0]
decay_lengths = np.array(decay_length_output).reshape(mask.shape)
decay_lengths[1:, 0] = decay_lengths[0, 0]
if full_phase_diagram:
gaps = gaps*(mask*2 - 1)
decay_lengths = decay_lengths*(mask*2 - 1)
gaps_output = gaps.reshape(-1)
decay_length_output = decay_lengths.reshape(-1)
fname = 'data/test.h5'
save_data(fname, Bs, thetas, phis, mu_mesh, mus_output, gaps_output, decay_length_output, p, constants)
%%output size=200
%%opts Image [colorbar=False] {+axiswise} (clims=(0, 0.1))
phase_diagram = create_holoviews(fname)
(phase_diagram.Phase_diagram.Band_gap.hist()
+ phase_diagram.Phase_diagram.Inverse_decay_length
+ phase_diagram.Sphere.I).cols(2)
%%opts Image [colorbar=True]
phase_diagram.Phase_diagram.Band_gap
phase_diagram.cdims
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This next cell is for internal use with our cluster at the department; a local ipcluster will work as well
Step2: Make sure to add the correct path like
Step3: Uncomment the lines for the wire that you want to use.
Step4: You can specify the angles that you want to calculate in thetas and phis.
Step5: Check whether the correct angles were used and see the phase boundaries
Step6: Calculate full phase diagram
Step7: The next cell calculates the gaps and decay lengths.
Step8: Save
Step9: Check how the phase diagram looks
|
12,242
|
<ASSISTANT_TASK:>
Python Code:
print('The default path: ' + fp.fhd_base())
# Note: str.strip removes characters, not a suffix, so use replace here.
fp.set_fhd_base(os.getcwd().replace('scripts', '') + 'katalogss/data')
print('Our path: ' + fp.fhd_base())
fhd_run = 'mock_run'
s = '%sfhd_%s'%(fp.fhd_base(),fhd_run)
!ls -R $s
obsids = fp.get_obslist(fhd_run)
obsids
comps = fp.fetch_comps(fhd_run, obsids=obsids)
meta = fp.fetch_meta(fhd_run,obsids=obsids)
kgs_out = '%sfhd_%s/katalogss/'%(fp.fhd_base(),fhd_run)
if not os.path.exists(kgs_out): os.mkdir(kgs_out)
print('saving %scomponents.p' % kgs_out)
pickle.dump([comps, meta], open(kgs_out + 'components.p', 'wb'))
comps
# comps, meta = pickle.load(open(kgs_out + 'components.p', 'rb'))
beamxx, beamyy, residual = [fp.get_maps(fhd_run, obsids=obsids, imtype=imtype)
for imtype in ('Beam_XX','Beam_YY','uniform_Residual_I')]
pix2beam = fp.pixarea_maps(fhd_run, obsids=obsids, map_dir=kgs_out+'area_maps/')
for o in obsids: residual[o].data*=pix2beam[o]
#clustering parameters
eps_factor = 0.25
min_samples = 1
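# eps below sets the clustering linkage scale: components separated by less
# than eps_factor times the PSF width are merged into a single source
# (a DBSCAN-style density clustering is assumed here).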
catalog={}
for obsid in obsids:
cmps = pd.DataFrame(comps[obsid])
cmps = kg.clip_comps(cmps)
beam = beamxx[obsid].copy()
beam.data = np.sqrt(np.mean([beamxx[obsid].data**2, beamyy[obsid].data**2],axis=0))
eps = eps_factor * meta[obsid]['beam_width']
cmps = kg.cluster_sources(cmps, eps, min_samples)
catalog[obsid] = kg.catalog_sources(cmps, meta[obsid], residual[obsid], beam)
cat = catalog[obsid]
cat.head(10)
wcs = WCS(residual[obsid].header)  # residual map of the last processed obsid
plt.figure(figsize=(10,8))
ax = plt.subplot(111,projection=wcs)
x,y = wcs.wcs_world2pix(cat.ra, cat.dec,1)
ax.scatter(x,y,s=cat.flux,edgecolor='none')
lon,lat = ax.coords
lon.set_axislabel('RA',fontsize=16)
lat.set_axislabel('Dec',fontsize=16)
ax.set_title(obsid,fontsize=20)
pickle.dump(catalog, open(kgs_out + 'catalogs.p', 'wb'))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We also need to define the version specifying the name of the run. This is equivalent to the case string in eor_firstpass_versions.pro, or the string following "fhd_" in the output directory name.
Step2: Next we want to define the list of obsids we want to process. You can do this manually, or you can easily grab all obsids with deconvolution output for your run using
Step3: Next we'll grab the source components and metadata for each obsid in the list. Note that if you don't supply the list of obsids, it will automatically run for all obsids.
Step4: This returns a dictionary of dictionaries. Since it takes some work to run, let's cache it in a new katalogss output directory.
Step5: If you need to come back to this later, restore it with
Step6: Now we want to get the beam and residual maps. These are stored in fits files, and we read them into HDU objects with data and header attributes.
Step7: To convert the residual maps from Jy/pixel to Jy/beam, we need the map of pixel areas in units of beam.
Step8: Now we're ready to start source finding using the katalogss module.
|
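Steps 4 and 5 above describe computing once and caching with pickle; a small compute-or-load helper (a generic sketch, not part of katalogss) captures that pattern:
import os
import pickle

def cached(path, compute):
    # Load a pickled result if present; otherwise compute it and cache it.
    if os.path.exists(path):
        with open(path, 'rb') as f:
            return pickle.load(f)
    result = compute()
    with open(path, 'wb') as f:
        pickle.dump(result, f)
    return result

# usage sketch: comps_meta = cached(kgs_out + 'components.p', lambda: (comps, meta))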
12,243
|
<ASSISTANT_TASK:>
Python Code:
from larray import *
age_category = Axis(["0-9", "10-17", "18-66", "67+"], "age_category")
age_category
age_category = Axis("age_category=0-9,10-17,18-66,67+")
age_category
a = Axis('a=a0,a1,a2,a3')
a
a = Axis('a=a0..a3')
a
arr = zeros("a=a0..a2; b=b0,b1; c=c0..c5")
arr
immigration = load_example_data('demography_eurostat').immigration
immigration.info
# since the labels 'Belgium' and 'Netherlands' also exists in the 'citizenship' axis,
# we need to explicitly specify that we want to make a selection over the 'country' axis
immigration_subset = immigration[X.country['Belgium', 'Netherlands'], 'Female', 2015:]
immigration_subset
immigration_subset = immigration['country[Belgium,Netherlands]', 'Female', 2015:]
immigration_subset
immigration.mean((X.time[2014::2] >> 'even_years', X.time[::2] >> 'odd_years'), 'citizenship')
immigration.mean('time[2014::2] >> even_years; time[::2] >> odd_years', 'citizenship')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The LArray library offers two syntaxes to build axes and make selections and aggregations.
Step2: The second one consists of using strings that are parsed.
Step3: A word of warning: some characters (for example , ; = : .. [ ] >>) have a special meaning in the string syntax and cannot be used within labels.
Step4: The special syntax start..stop generates a sequence of labels
Step5: When creating an array, it is possible to define several axes in the same string using ;
Step6: Selection
Step7: an example of a selection using the Pythonic syntax is
Step8: Using the String syntax, the same selection becomes
Step9: Aggregation
Step10: Using the String syntax, the same aggregation becomes
|
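To see that the two syntaxes in the record above really are interchangeable, a quick equivalence check (a sketch assuming the same larray package):
from larray import zeros, X

arr = zeros("a=a0..a2; b=b0,b1")
# The Pythonic and string selections return the same subset:
sub_pythonic = arr[X.a['a0', 'a2']]
sub_string = arr['a[a0,a2]']
assert (sub_pythonic == sub_string).all()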
12,244
|
<ASSISTANT_TASK:>
Python Code:
from importlib import reload
import xml_parser
reload(xml_parser)
from xml_parser import Xml_parser
# imports needed below (utils is assumed to be a project-local helper module)
import utils
import matplotlib.pyplot as plt
from tqdm import tqdm
#Xml_parser = Xml_parser().collect_data("../pmi_data")
authorID_to_titles = utils.load_pickle("../pmi_data/authorID_to_publications.p")
authorID_to_count = {k:len(v['titles']) for k,v in tqdm(authorID_to_titles.items())}
fig = plt.figure()
ax = fig.add_subplot(111)
data = list(authorID_to_count.values())
binwidth = int((max(data)-min(data))/20)
ax.hist(data,
bins=range(min(data), max(data) + binwidth, binwidth))
plt.show()
print("Max {0}, min {1}".format(max(data), min(data)))
def get_author_with_more_than(data, max_):
more_than = [k for k, v in data.items() if v >max_]
print("Authors with more than {0}: {1} ({2}%)".format(max_, len(more_than), round(len(more_than)/len(data)*100,4)))
get_author_with_more_than(authorID_to_count, 1010)
get_author_with_more_than(authorID_to_count, 500)
get_author_with_more_than(authorID_to_count, 300)
get_author_with_more_than(authorID_to_count, 200)
get_author_with_more_than(authorID_to_count, 190)
get_author_with_more_than(authorID_to_count, 50)
authors_to_titles_clean= {author:v['titles'] for author, v in tqdm(authorID_to_titles.items()) if len(v['titles'])<=190}
authorID_to_count = {k:len(titles) for k,titles in tqdm(authors_to_titles_clean.items())}
fig = plt.figure()
ax = fig.add_subplot(111)
data = list(authorID_to_count.values())
binwidth = int((max(data)-min(data))/20)
ax.hist(data,
bins=range(min(data), max(data) + binwidth, binwidth))
plt.show()
print("Max {0}, min {1}".format(max(data), min(data)))
utils.pickle_data(authors_to_titles_clean, "../pmi_data/authorID_to_publications_clean.p")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Observation of the data
Step2: We can observe that most authors have between 1 and 50 publications. A few authors have between 50 and 200 publications, which seems reasonable. However, the maximum is more than 1000, which is far too many. Let's investigate the outliers.
Step3: We will use 190 as the threshold. We won't remove too many authors (0.1%), and more than 190 publications per author seems implausible.
|
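As an alternative to eyeballing the histogram for the 190 cutoff, a percentile-based threshold can be computed directly (a sketch reusing authorID_to_count from the record above):
import numpy as np

counts = np.array(list(authorID_to_count.values()))
# Cut at the 99.9th percentile instead of a hand-picked value.
threshold = np.percentile(counts, 99.9)
print("99.9th percentile threshold: {0:.0f}".format(threshold))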
12,245
|
<ASSISTANT_TASK:>
Python Code:
import json
import os
import numpy as np
import pandas as pd
import pickle
import uuid
import time
import tempfile
from googleapiclient import discovery
from googleapiclient import errors
from google.cloud import bigquery
from jinja2 import Template
from kfp.components import func_to_container_op
from typing import NamedTuple
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.compose import ColumnTransformer
%pip install gcsfs==0.8
PROJECT_ID=!(gcloud config get-value core/project)
PROJECT_ID=PROJECT_ID[0]
DATASET_ID='covertype_dataset'
DATASET_LOCATION='US'
TABLE_ID='covertype'
DATA_SOURCE='gs://workshop-datasets/covertype/small/dataset.csv'
SCHEMA='Elevation:INTEGER,Aspect:INTEGER,Slope:INTEGER,Horizontal_Distance_To_Hydrology:INTEGER,Vertical_Distance_To_Hydrology:INTEGER,Horizontal_Distance_To_Roadways:INTEGER,Hillshade_9am:INTEGER,Hillshade_Noon:INTEGER,Hillshade_3pm:INTEGER,Horizontal_Distance_To_Fire_Points:INTEGER,Wilderness_Area:STRING,Soil_Type:STRING,Cover_Type:INTEGER'
!bq --location=$DATASET_LOCATION --project_id=$PROJECT_ID mk --dataset $DATASET_ID
!bq --project_id=$PROJECT_ID --dataset_id=$DATASET_ID load \
--source_format=CSV \
--skip_leading_rows=1 \
--replace \
$TABLE_ID \
$DATA_SOURCE \
$SCHEMA
!gsutil ls
REGION = 'us-central1'
ARTIFACT_STORE = 'gs://qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default' # TO DO: REPLACE WITH YOUR ARTIFACT_STORE NAME
PROJECT_ID = !(gcloud config get-value core/project)
PROJECT_ID = PROJECT_ID[0]
DATA_ROOT='{}/data'.format(ARTIFACT_STORE)
JOB_DIR_ROOT='{}/jobs'.format(ARTIFACT_STORE)
TRAINING_FILE_PATH='{}/{}/{}'.format(DATA_ROOT, 'training', 'dataset.csv')
VALIDATION_FILE_PATH='{}/{}/{}'.format(DATA_ROOT, 'validation', 'dataset.csv')
%%bigquery
SELECT *
FROM `covertype_dataset.covertype`
!bq query \
-n 0 \
--destination_table covertype_dataset.training \
--replace \
--use_legacy_sql=false \
'SELECT * \
FROM `covertype_dataset.covertype` AS cover \
WHERE \
MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10) IN (1, 2, 3, 4)'
!bq extract \
--destination_format CSV \
covertype_dataset.training \
$TRAINING_FILE_PATH
# TO DO: Your code goes here to create the BQ table validation split.
# TO DO: Your code goes here to export the validation table to the Cloud Storage bucket.
df_train = pd.read_csv(TRAINING_FILE_PATH)
df_validation = pd.read_csv(VALIDATION_FILE_PATH)
print(df_train.shape)
print(df_validation.shape)
numeric_feature_indexes = slice(0, 10)
categorical_feature_indexes = slice(10, 12)
preprocessor = ColumnTransformer(
transformers=[
('num', StandardScaler(), numeric_feature_indexes),
('cat', OneHotEncoder(), categorical_feature_indexes)
])
pipeline = Pipeline([
('preprocessor', preprocessor),
('classifier', SGDClassifier(loss='log', tol=1e-3))
])
num_features_type_map = {feature: 'float64' for feature in df_train.columns[numeric_feature_indexes]}
df_train = df_train.astype(num_features_type_map)
df_validation = df_validation.astype(num_features_type_map)
X_train = df_train.drop('Cover_Type', axis=1)
y_train = df_train['Cover_Type']
X_validation = df_validation.drop('Cover_Type', axis=1)
y_validation = df_validation['Cover_Type']
pipeline.set_params(classifier__alpha=0.001, classifier__max_iter=200)
pipeline.fit(X_train, y_train)
accuracy = pipeline.score(X_validation, y_validation)
print(accuracy)
TRAINING_APP_FOLDER = 'training_app'
os.makedirs(TRAINING_APP_FOLDER, exist_ok=True)
%%writefile {TRAINING_APP_FOLDER}/train.py
# Copyright 2019 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import subprocess
import sys
import fire
import pickle
import numpy as np
import pandas as pd
import hypertune
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder
def train_evaluate(job_dir, training_dataset_path,
validation_dataset_path, alpha, max_iter, hptune):
df_train = pd.read_csv(training_dataset_path)
df_validation = pd.read_csv(validation_dataset_path)
if not hptune:
df_train = pd.concat([df_train, df_validation])
numeric_feature_indexes = slice(0, 10)
categorical_feature_indexes = slice(10, 12)
preprocessor = ColumnTransformer(
transformers=[
('num', StandardScaler(), numeric_feature_indexes),
('cat', OneHotEncoder(), categorical_feature_indexes)
])
pipeline = Pipeline([
('preprocessor', preprocessor),
('classifier', SGDClassifier(loss='log',tol=1e-3))
])
num_features_type_map = {feature: 'float64' for feature
in df_train.columns[numeric_feature_indexes]}
df_train = df_train.astype(num_features_type_map)
df_validation = df_validation.astype(num_features_type_map)
print('Starting training: alpha={}, max_iter={}'.format(alpha, max_iter))
X_train = df_train.drop('Cover_Type', axis=1)
y_train = df_train['Cover_Type']
pipeline.set_params(classifier__alpha=alpha, classifier__max_iter=max_iter)
pipeline.fit(X_train, y_train)
if hptune:
# TO DO: Your code goes here to score the model with the validation data and capture the result
# with the hypertune library
# Save the model
if not hptune:
model_filename = 'model.pkl'
with open(model_filename, 'wb') as model_file:
pickle.dump(pipeline, model_file)
gcs_model_path = "{}/{}".format(job_dir, model_filename)
subprocess.check_call(['gsutil', 'cp', model_filename, gcs_model_path],
stderr=sys.stdout)
print("Saved model in: {}".format(gcs_model_path))
if __name__ == "__main__":
fire.Fire(train_evaluate)
%%writefile {TRAINING_APP_FOLDER}/Dockerfile
FROM gcr.io/deeplearning-platform-release/base-cpu
RUN pip install -U fire cloudml-hypertune scikit-learn==0.20.4 pandas==0.24.2
# TO DO: Your code goes here
IMAGE_NAME='trainer_image'
IMAGE_TAG='latest'
IMAGE_URI='gcr.io/{}/{}:{}'.format(PROJECT_ID, IMAGE_NAME, IMAGE_TAG)
!gcloud builds submit --tag $IMAGE_URI $TRAINING_APP_FOLDER
%%writefile {TRAINING_APP_FOLDER}/hptuning_config.yaml
# Copyright 2019 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
trainingInput:
hyperparameters:
goal: MAXIMIZE
maxTrials: 4
maxParallelTrials: 4
hyperparameterMetricTag: accuracy
enableTrialEarlyStopping: TRUE
params:
# TO DO: Your code goes here
JOB_NAME = "JOB_{}".format(time.strftime("%Y%m%d_%H%M%S"))
JOB_DIR = "{}/{}".format(JOB_DIR_ROOT, JOB_NAME)
SCALE_TIER = "BASIC"
!gcloud ai-platform jobs submit training $JOB_NAME \
--region=# TO DO: ADD YOUR REGION \
--job-dir=# TO DO: ADD YOUR JOB-DIR \
--master-image-uri=# TO DO: ADD YOUR IMAGE-URI \
--scale-tier=# TO DO: ADD YOUR SCALE-TIER \
--config # TO DO: ADD YOUR CONFIG PATH \
-- \
# TO DO: Complete the command
!gcloud ai-platform jobs describe $JOB_NAME
!gcloud ai-platform jobs stream-logs $JOB_NAME
ml = discovery.build('ml', 'v1')
job_id = 'projects/{}/jobs/{}'.format(PROJECT_ID, JOB_NAME)
request = ml.projects().jobs().get(name=job_id)
try:
response = request.execute()
except errors.HttpError as err:
print(err)
except:
print("Unexpected error")
response
response['trainingOutput']['trials'][0]
alpha = response['trainingOutput']['trials'][0]['hyperparameters']['alpha']
max_iter = response['trainingOutput']['trials'][0]['hyperparameters']['max_iter']
JOB_NAME = "JOB_{}".format(time.strftime("%Y%m%d_%H%M%S"))
JOB_DIR = "{}/{}".format(JOB_DIR_ROOT, JOB_NAME)
SCALE_TIER = "BASIC"
!gcloud ai-platform jobs submit training $JOB_NAME \
--region=$REGION \
--job-dir=$JOB_DIR \
--master-image-uri=$IMAGE_URI \
--scale-tier=$SCALE_TIER \
-- \
--training_dataset_path=$TRAINING_FILE_PATH \
--validation_dataset_path=$VALIDATION_FILE_PATH \
--alpha=$alpha \
--max_iter=$max_iter \
--nohptune
!gcloud ai-platform jobs stream-logs $JOB_NAME
!gsutil ls $JOB_DIR
model_name = 'forest_cover_classifier'
labels = "task=classifier,domain=forestry"
!gcloud # TO DO: You code goes here
model_version = 'v01'
!gcloud # TO DO: Complete the command \
--model=# TO DO: ADD YOUR MODEL NAME \
--origin=# TO DO: ADD YOUR PATH \
--runtime-version=# TO DO: ADD YOUR RUNTIME \
--framework=# TO DO: ADD YOUR FRAMEWORK \
--python-version=# TO DO: ADD YOUR PYTHON VERSION \
--region # TO DO: ADD YOUR REGION
input_file = 'serving_instances.json'
with open(input_file, 'w') as f:
for index, row in X_validation.head().iterrows():
f.write(json.dumps(list(row.values)))
f.write('\n')
!cat $input_file
!gcloud # TO DO: Complete the command
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Run the command in the cell below to install the gcsfs package.
Step2: Prepare lab dataset
Step3: Next, create the BigQuery dataset and upload the Covertype csv data into a table.
Step4: Configure environment settings
Step5: HINT
Step6: Explore the Covertype dataset
Step7: Create training and validation splits
Step8: Use the bq extract command to export the BigQuery training table to GCS at $TRAINING_FILE_PATH.
Step9: Create a validation split
Step10: Develop a training application
Step11: Convert all numeric features to float64
Step12: Run the pipeline locally.
Step13: Calculate the trained model's accuracy.
Step14: Prepare the hyperparameter tuning application.
Step15: Write the tuning script.
Step16: Package the script into a docker image.
Step17: Build the docker image.
Step18: Submit an AI Platform hyperparameter tuning job
Step19: Start the hyperparameter tuning job.
Step20: Monitor the job.
Step21: NOTE
Step22: The returned run results are sorted by a value of the optimization metric. The best run is the first item on the returned list.
Step23: Retrain the model with the best hyperparameters
Step24: NOTE
Step25: Deploy the model to AI Platform Prediction
Step26: Create a model version
Step27: Serve predictions
Step28: Invoke the model
|
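For the hptune branch left as a TO DO in train.py above, one common completion looks like the sketch below (it uses the cloudml-hypertune package installed in the Dockerfile; variable names follow the script, and the exact answer expected by the lab may differ):
if hptune:
    X_validation = df_validation.drop('Cover_Type', axis=1)
    y_validation = df_validation['Cover_Type']
    accuracy = pipeline.score(X_validation, y_validation)
    print('Model accuracy: {}'.format(accuracy))
    # Report the metric so AI Platform can compare trials.
    hpt = hypertune.HyperTune()
    hpt.report_hyperparameter_tuning_metric(
        hyperparameter_metric_tag='accuracy',
        metric_value=accuracy)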
12,246
|
<ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
# Create a tensorflow constant
hello = tf.constant("Hello World!")
# Print this variable as is
print(hello)
# Create a new session
sess = tf.Session()
# Print the constant
print("Printing using Session.run()")
print(sess.run(hello))
# Also
print("Printing using eval() function")
print(hello.eval(session=sess))
# run addition and multiplication operations
a = tf.constant(25, tf.float32)
b = tf.constant(5, tf.float32)
with tf.Session() as sess:
print("A = %f"%sess.run(a))
print("B = %f"%sess.run(b))
print("A + B = %f"%sess.run(a+b))
print("A * B = %f"%sess.run(a*b))
print("A / B = %f"%sess.run(a/b))
print("A + B using tf add = %f"%sess.run(tf.add(a, b)))
print("A * B using tf multiply = %f"%sess.run(tf.multiply(a, b)))
# Run addition and multiplication with placeholders
c = tf.placeholder(tf.float32, shape=())
d = tf.placeholder(tf.float32, shape=())
sum = tf.add(c, d)
prod = tf.multiply(c, d)
with tf.Session() as sess:
print("Operations by feeding values")
print("C = %f"%sess.run(c, feed_dict={c: 4}))
print("D = %f"%sess.run(d, feed_dict={d: 6}))
print("Sum = %f"%sess.run(sum, feed_dict={c: 4, d: 6}))
print("Prod = %f"%sess.run(prod, feed_dict={c: 4, d: 6}))
# Matrix operations with placeholders
import numpy as np
mat1 = tf.placeholder(tf.float32, shape=(2,2))
mat2 = tf.placeholder(tf.float32, shape=(2,1))
matmul = tf.matmul(mat1, mat2)
with tf.Session() as sess:
print("Matrix multiplication using python lists as feed dict values")
print(sess.run(matmul, feed_dict={ mat1: [[1,2],[2,1]], mat2: [[1],[2]]}))
print("Matrix multiplication using numpyarrays as feed dict values")
print(sess.run(matmul, feed_dict={ mat1: np.array([[1,2],[2,1]]), mat2: np.array([[1],[2]])}))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Oops! That is not what we wanted! This is because the variable hello hasn't been evaluated yet. TensorFlow needs a session to run the graph in!
Step2: TensorFlow math operations!
Step3: A TensorFlow placeholder is a promise to provide a value later (supplied/fed at execution time). For placeholders, an optional shape argument can be used to make sure the input dimensions match the required tensor dimensions.
|
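Alongside the constants and placeholders shown above, TensorFlow 1.x has a third node type worth a quick sketch: variables, which hold mutable state and must be initialized inside the session (the values here are illustrative):
import tensorflow as tf

v = tf.Variable(10.0)
update = tf.assign(v, v + 5.0)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print("v after update = %f" % sess.run(update))  # 15.0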
12,247
|
<ASSISTANT_TASK:>
Python Code:
from oemof.solph import EnergySystem
import pandas as pd
# initialize energy system
energysystem = EnergySystem(timeindex=pd.date_range('1/1/2016',
periods=168,
freq='H'))
# import example data with scaled demands and feedin timeseries of renewables
# as dataframe
data = pd.read_csv("data/example_data.csv", sep=",", index_col='timeindex', parse_dates=['timeindex'])
from oemof.solph import Bus, Flow, Sink, Source, Transformer
### BUS
# create electricity bus
b_el = Bus(label="b_el")
energysystem.add(b_el)
# add excess sink to help avoid infeasible problems
energysystem.add(Sink(label="excess_el",
inputs={b_el: Flow()}))
energysystem.add(Source(label="shortage_el",
outputs={b_el: Flow(variable_costs=1000)}))
### DEMAND
# add electricity demand
energysystem.add(Sink(label="demand_el",
inputs={b_el: Flow(nominal_value=85,
actual_value=data['demand_el'],
fixed=True)}))
### SUPPLY
# add wind and pv feedin
energysystem.add(Source(label="wind",
outputs={b_el: Flow(actual_value=data['wind'],
nominal_value=60,
fixed=True)}))
energysystem.add(Source(label="pv",
outputs={b_el: Flow(actual_value=data['pv'],
nominal_value=200,
fixed=True)}))
from oemof.solph import Model
from oemof.outputlib import processing, views
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
def optimize(energysystem):
### optimize
# create operational model
om = Model(energysystem=energysystem)
# solve using the cbc solver
om.solve(solver='cbc',
solve_kwargs={'tee': False})
# save LP-file
om.write('sector_coupling.lp', io_options={'symbolic_solver_labels': True})
# generic result object
#results = processing.results(om=om)
return om
def plot(om, bus_label):
"""Plot flows to and from a bus.

Parameters
----------
om : oemof.solph.models.Model
bus_label : str
    Label of the bus to be plotted.
"""
#ToDo: remove this once #387 is solved
def get_flows_to_and_from_bus(node_results_flows):
"Function to devide flows into ingoing and outgoing flows."
flows = list(node_results_flows.columns)
to_flows = []
from_flows = []
for flow in flows:
if flow[0][0] == bus_label:
from_flows.append(flow)
elif flow[0][1] == bus_label:
to_flows.append(flow)
else:
print("{} is neither from nor to bus.".format(flow))
return (to_flows, from_flows)
# node_results is a dictionary keyed by 'scalars' and 'sequences'
# holding respective data in a pandas Series and DataFrame.
node_results = views.node(om.results(), bus_label)
node_results_flows = node_results['sequences']
to_flows, from_flows = get_flows_to_and_from_bus(node_results_flows)
# set up plot
f, (ax1, ax2) = plt.subplots(2, sharex=True, sharey=True)
node_results_flows[to_flows].plot(kind='bar', stacked=True, ax=ax1)
node_results_flows[from_flows].plot(kind='bar', stacked=True, ax=ax2)
ax1.set_title('Flows to and from {}'.format(bus_label))
ax1.legend(loc='upper right', bbox_to_anchor=(1.7, 1.0))
ax2.legend(loc='upper right', bbox_to_anchor=(1.7, 1.0))
ax1.set_ylabel('Power')
ax2.set_xlabel('Time')
ax2.set_ylabel('Power')
# set x-tick-labels
dates = node_results_flows.index
tick_distance = int(len(dates) / 7) - 1
ax2.set_xticks(range(0, len(dates), tick_distance), minor=False)
ax2.set_xticklabels([item.strftime('%d-%m-%Y')
for item in dates.tolist()[0::tick_distance]],
rotation=90, minor=False)
plt.show()
om = optimize(energysystem)
plot(om, 'b_el')
# add gas bus
b_gas = Bus(label="b_gas",
balanced=False)
energysystem.add(b_gas)
# add gas power plant
energysystem.add(Transformer(label="pp_gas",
inputs={b_gas: Flow(summed_max_flow=200)},
outputs={b_el: Flow(nominal_value=40,
variable_costs=40)},
conversion_factors={b_el: 0.50}));
om = optimize(energysystem)
plot(om, 'b_el')
# add heat bus
b_heat = Bus(label="b_heat",
balanced=True)
energysystem.add(b_heat)
# add heat demand
energysystem.add(Sink(label="demand_th",
inputs={b_heat: Flow(nominal_value=60,
actual_value=data['demand_th'],
fixed=True)}))
# add heater rod
energysystem.add(Transformer(label="heater_rod",
inputs={b_el: Flow()},
outputs={b_heat: Flow(variable_costs=10)},
conversion_factors={b_heat: 0.98}));
om = optimize(energysystem)
plot(om, 'b_el')
plot(om, 'b_heat')
# The COP can be calculated beforehand, assuming an infinitely large heat
# reservoir (constant temperature); here we simply use a random timeseries for the COP.
import numpy as np
COP = np.random.uniform(low=3.0, high=5.0, size=(168,))
# add heater rod
#Transformer(label="heater_rod",
# inputs={b_el: Flow()},
# outputs={b_heat: Flow(variable_costs=10)},
# conversion_factors={b_heat: 0.98});
# add heat pump
energysystem.add(Transformer(label="heat_pump",
inputs={b_el: Flow()},
outputs={b_heat: Flow(nominal_value=20,
variable_costs=10)},
conversion_factors={b_heat: COP}));
om = optimize(energysystem)
plot(om, 'b_heat')
# add CHP with fixed ratio of heat and power (back-pressure turbine)
energysystem.add(Transformer(label='pp_chp',
inputs={b_gas: Flow()},
outputs={b_el: Flow(nominal_value=30,
variable_costs=42),
b_heat: Flow(nominal_value=40)},
conversion_factors={b_el: 0.3,
b_heat: 0.4}));
from oemof.solph.components import ExtractionTurbineCHP
# add CHP with variable ratio of heat and power (extraction turbine)
energysystem.add(ExtractionTurbineCHP(label='pp_chp_extraction',
inputs={b_gas: Flow()},
outputs={b_el: Flow(nominal_value=30,
variable_costs=42),
b_heat: Flow(nominal_value=40)},
conversion_factors={b_el: 0.3,
b_heat: 0.4},
conversion_factor_full_condensation={b_el: 0.5}));
om = optimize(energysystem)
plot(om, 'b_el')
om = optimize(energysystem)
plot(om, 'b_heat')
from oemof.solph.components import GenericStorage as Storage
charging_power = 20
bev_battery_cap = 50
# add mobility bus
b_bev = Bus(label="b_bev",
balanced=True)
energysystem.add(b_bev)
# add transformer to transport electricity from grid to mobility sector
energysystem.add(Transformer(label="transport_el_bev",
inputs={b_el: Flow()},
outputs={b_bev: Flow(variable_costs=10,
nominal_value=charging_power,
max=data['bev_charging_power'])},
conversion_factors={b_bev: 1.0}))
# add BEV storage
energysystem.add(Storage(label='bev_storage',
inputs={b_bev: Flow()},
outputs={b_bev: Flow()},
nominal_capacity=bev_battery_cap,
capacity_min=data['bev_cap_min'],
capacity_max=data['bev_cap_max'],
capacity_loss=0.00,
initial_capacity=None,
inflow_conversion_factor=1.0,
outflow_conversion_factor=1.0,
nominal_input_capacity_ratio=1.0,
nominal_output_capacity_ratio=1.0))
# add sink for leaving vehicles
energysystem.add(Sink(label="leaving_bev",
inputs={b_bev: Flow(nominal_value=bev_battery_cap,
actual_value=data['bev_sink'],
fixed=True)}))
# add source for returning vehicles
energysystem.add(Source(label="returning_bev",
outputs={b_bev: Flow(nominal_value=bev_battery_cap,
actual_value=data['bev_source'],
fixed=True)}));
om = optimize(energysystem)
plot(om, 'b_bev')
plot(om, 'b_el')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import input data
Step2: Add entities to energy system
Step4: Optimize energy system and plot results
Step5: Adding the gas sector
Step6: Adding the heat sector
Step7: Adding a heat pump
Step8: Adding a combined heat and power plant
Step9: Adding the mobility sector
|
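The pattern in the record above (bus, source, sink, solve) can be boiled down to a minimal self-contained sketch, assuming the same oemof version as above and an installed cbc solver:
import pandas as pd
from oemof.solph import EnergySystem, Bus, Flow, Sink, Source, Model

es = EnergySystem(timeindex=pd.date_range('1/1/2016', periods=3, freq='H'))
bus = Bus(label='b_demo')
es.add(bus)
es.add(Source(label='supply', outputs={bus: Flow(variable_costs=5)}))
es.add(Sink(label='load', inputs={bus: Flow(nominal_value=10,
                                            actual_value=[0.5, 1.0, 0.8],
                                            fixed=True)}))
om = Model(energysystem=es)
om.solve(solver='cbc', solve_kwargs={'tee': False})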
12,248
|
<ASSISTANT_TASK:>
Python Code:
from fretbursts import *
sns = init_notebook(apionly=True)
print('seaborn version: ', sns.__version__)
# Tweak here matplotlib style
import matplotlib as mpl
mpl.rcParams['font.sans-serif'].insert(0, 'Arial')
mpl.rcParams['font.size'] = 12
%config InlineBackend.figure_format = 'retina'
url = 'http://files.figshare.com/2182601/0023uLRpitc_NTP_20dT_0.5GndCl.hdf5'
download_file(url, save_dir='./data')
full_fname = "./data/0023uLRpitc_NTP_20dT_0.5GndCl.hdf5"
d = loader.photon_hdf5(full_fname)
loader.alex_apply_period(d)
d.calc_bg(bg.exp_fit, time_s=1000, tail_min_us=(800, 4000, 1500, 1000, 3000))
d.burst_search(min_rate_cps=8e3)
ds = d.select_bursts(select_bursts.size, add_naa=True, th1=40)
ds_no_Aonly = ds.select_bursts(select_bursts.size, th1=30)
ds_fret = ds_no_Aonly.select_bursts(select_bursts.naa, th1=30)
alex_jointplot(ds)
alex_jointplot(ds_fret)
d_fret_2 = bext.burst_search_and_gate(d)
d_fret_22 = bext.burst_search_and_gate(d, m=(8, 10), F=(5, 12))
ds_fret_2 = d_fret_2.select_bursts(select_bursts.size, th1=30)
alex_jointplot(ds_fret_2)
ds_fret_22 = d_fret_22.select_bursts(select_bursts.size, th1=30)
alex_jointplot(ds_fret_22)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Get and process data
Step2: Filtering method
Step3: DCBS Method
Step4: The function bext.burst_search_and_gate()
Step5: Before plotting we still need to select bursts by size
|
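A hedged follow-up to the DCBS selection above: the FRET population can be narrowed further with E and S range selectors (reusing ds_fret_2 from the record; the exact thresholds are illustrative):
ds_mid = ds_fret_2.select_bursts(select_bursts.E, E1=0.2, E2=0.8)
ds_mid = ds_mid.select_bursts(select_bursts.S, S1=0.3, S2=0.7)
alex_jointplot(ds_mid)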
12,249
|
<ASSISTANT_TASK:>
Python Code:
<image>
<section data-background="img/cover.jpg" data-state="img-transparent no-title-footer">
<div class="intro-body">
<div class="intro_h1"><h1>Title</h1></div>
<h3>Subtitle of the Presentation</h3>
<p><strong><span class="a">Speaker 1</span></strong> <span class="b"></span> <span>Job Title</span></p>
<p><strong><span class="a">Speaker 2</span></strong> <span class="b"></span> <span>Job Title</span></p>
<p> </p>
<p> </p>
</div>
</section>
</image>
from IPython.display import Image
Image(filename='img/colorcontext.png', width=400)
from IPython.display import YouTubeVideo
YouTubeVideo('2G8V00SkTvY')
def f(x):
"""a docstring"""
return x**2
import matplotlib.pyplot as plt
plt.plot([1,2,3,4])
plt.ylabel('some numbers')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Cover Slide 2
Step3: Headline Subslide
|
12,250
|
<ASSISTANT_TASK:>
Python Code:
import essentia.streaming as ess
import essentia
audio_file = '../../../test/audio/recorded/mozart_c_major_30sec.wav'
# Initialize algorithms we will use.
loader = ess.MonoLoader(filename=audio_file)
framecutter = ess.FrameCutter(frameSize=4096, hopSize=2048, silentFrames='noise')
windowing = ess.Windowing(type='blackmanharris62')
spectrum = ess.Spectrum()
spectralpeaks = ess.SpectralPeaks(orderBy='magnitude',
magnitudeThreshold=0.00001,
minFrequency=20,
maxFrequency=3500,
maxPeaks=60)
# Use default HPCP parameters for plots.
# However we will need higher resolution and custom parameters for better Key estimation.
hpcp = ess.HPCP()
# Use pool to store data.
pool = essentia.Pool()
# Connect streaming algorithms.
loader.audio >> framecutter.signal
framecutter.frame >> windowing.frame >> spectrum.frame
spectrum.spectrum >> spectralpeaks.spectrum
spectralpeaks.magnitudes >> hpcp.magnitudes
spectralpeaks.frequencies >> hpcp.frequencies
hpcp.hpcp >> (pool, 'tonal.hpcp')
# Run streaming network.
essentia.run(loader)
# Plots configuration.
import matplotlib.pyplot as plt
from pylab import plot, show, figure, imshow
plt.rcParams['figure.figsize'] = (15, 6)
# Plot HPCP.
imshow(pool['tonal.hpcp'].T, aspect='auto', origin='lower', interpolation='none')
plt.title("HPCPs in frames (the 0-th HPCP coefficient corresponds to A)")
show()
from essentia.standard import ChordsDetection
# Using a 2 seconds window over HPCP matrix to estimate chords
chords, strength = ChordsDetection(hopSize=2048, windowSize=2)(pool['tonal.hpcp'])
print(chords)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's plot the resulting HPCP
Step2: Now we can run a naive estimation of chords with 2-second sliding window over the computed HPCPgram
|
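Beyond per-frame chords, a global key estimate can be sketched from the mean HPCP (reusing pool from the network above; the Temperley profile choice is an assumption):
import numpy as np
from essentia.standard import Key

mean_hpcp = np.mean(pool['tonal.hpcp'], axis=0).astype(np.float32)
key, scale, strength, _ = Key(profileType='temperley')(mean_hpcp)
print("Estimated key: {} {} (strength {:.2f})".format(key, scale, strength))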
12,251
|
<ASSISTANT_TASK:>
Python Code:
# Import relevant libraries:
import time
import numpy as np
import pandas as pd
from sklearn.neighbors import KNeighborsClassifier
from sklearn import preprocessing
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import StandardScaler
from sklearn.naive_bayes import BernoulliNB
from sklearn.naive_bayes import MultinomialNB
from sklearn.naive_bayes import GaussianNB
from sklearn.grid_search import GridSearchCV
from sklearn.metrics import classification_report
from sklearn.linear_model import LogisticRegression
from sklearn import svm
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier
# Set random seed and format print output:
np.random.seed(0)
np.set_printoptions(precision=3)
data_path = "./data/train_transformed.csv"
df = pd.read_csv(data_path, header=0)
x_data = df.drop('category', 1)
y = df.category.as_matrix()
## read in zip code data
data_path_zip = "./data/2016_zips.csv"
zips = pd.read_csv(data_path_zip, header=0, sep ='\t', usecols = [0,5,6], names = ["GEOID", "INTPTLAT", "INTPTLONG"], dtype ={'GEOID': int, 'INTPTLAT': float, 'INTPTLONG': float})
sf_zips = zips[(zips['GEOID'] > 94000) & (zips['GEOID'] < 94189)]
len(sf_zips)
###mapping longitude/latitude to zipcodes
def dist(lat1, long1, lat2, long2):
#return np.sqrt((lat1-lat2)**2+(long1-long2)**2)
return abs(lat1-lat2)+abs(long1-long2)
def find_zipcode(lat, long):
distances = sf_zips.apply(lambda row: dist(lat, long, row["INTPTLAT"], row["INTPTLONG"]), axis=1)
return sf_zips.loc[distances.idxmin(), "GEOID"]
#x_data['zipcode'] = 0
#for i in range(0, 1):
# x_data['zipcode'][i] = x_data.apply(lambda row: find_zipcode(row['x'], row['y']), axis=1)
x_data['zipcode']= x_data.apply(lambda row: find_zipcode(row['x'], row['y']), axis=1)
x_data[:10]
### read in school data
data_path_schools = "./data/pubschls.csv"
schools = pd.read_csv(data_path_schools,header=0, sep ='\t', usecols = ["CDSCode","StatusType", "School", "EILCode", "EILName", "Zip", "Latitude", "Longitude"], dtype ={'CDSCode': str, 'StatusType': str, 'School': str, 'EILCode': str,'EILName': str,'Zip': str, 'Latitude': float, 'Longitude': float})
schools = schools[(schools["StatusType"] == 'Active')]
x_data_sub= x_data[0:5]
### find closest school
def dist(lat1, long1, lat2, long2):
return np.sqrt((lat1-lat2)**2+(long1-long2)**2)
def find_closest_school(lat, long):
distances = schools.apply(lambda row: dist(lat, long, row["Latitude"], row["Longitude"]), axis=1)
return min(distances)
x_data['closest_school'] = x_data_sub.apply(lambda row: find_closest_school(row['y'], row['x']), axis=1)
# Impute missing values with mean values:
x_complete = x_data.fillna(x_data.mean())
X_raw = x_complete.as_matrix()
X = X_raw
# Scale the data between 0 and 1:
#X = MinMaxScaler().fit_transform(X_raw)
# Shuffle data to remove any underlying pattern that may exist:
shuffle = np.random.permutation(np.arange(X.shape[0]))
X, y = X[shuffle], y[shuffle]
# Separate training, dev, and test data:
test_data, test_labels = X[800000:], y[800000:]
dev_data, dev_labels = X[700000:800000], y[700000:800000]
train_data, train_labels = X[:700000], y[:700000]
mini_train_data, mini_train_labels = X[:75000], y[:75000]
mini_dev_data, mini_dev_labels = X[75000:100000], y[75000:100000]
#the submission format requires that we list the ID of each example?
#this is to remember the order of the IDs after shuffling
#(not used for anything right now)
allIDs = np.array(list(df.axes[0]))
allIDs = allIDs[shuffle]
testIDs = allIDs[800000:]
devIDs = allIDs[700000:800000]
trainIDs = allIDs[:700000]
#this is for extracting the column names for the required submission format
sampleSubmission_path = "./data/sampleSubmission.csv"
sampleDF = pd.read_csv(sampleSubmission_path)
allColumns = list(sampleDF.columns)
featureColumns = allColumns[1:]
#this is for extracting the test data for our baseline submission
real_test_path = "./data/test_transformed.csv"
testDF = pd.read_csv(real_test_path, header=0)
real_test_data = testDF
test_complete = real_test_data.fillna(real_test_data.mean())
Test_raw = test_complete.as_matrix()
TestData = MinMaxScaler().fit_transform(Test_raw)  # note: the training data above was left unscaled; train and test should get the same preprocessing
#here we remember the ID of each test data point
#(in case we ever decide to shuffle the test data for some reason)
testIDs = list(testDF.axes[0])
train_data[:5]
## Data sanity checks
print(train_data[:1])
print(train_labels[:1])
##Neural Network
import theano
from theano import tensor as T
from theano.sandbox.rng_mrg import MRG_RandomStreams as RandomStreams
print (theano.config.device) # We're using CPUs (for now)
print (theano.config.floatX )# Should be 64 bit for CPUs
np.random.seed(0)
from IPython.display import display, clear_output
numFeatures = train_data[1].size
numTrainExamples = train_data.shape[0]
numTestExamples = test_data.shape[0]
print ('Features = %d' %(numFeatures))
print ('Train set = %d' %(numTrainExamples))
print ('Test set = %d' %(numTestExamples))
class_labels = list(set(train_labels))
print(class_labels)
numClasses = len(class_labels)
print(train_labels[:5])
##binarize the class labels
def binarizeY(data):
binarized_data = np.zeros((data.size,39))
for j in range(0,data.size):
feature = data[j]
i = class_labels.index(feature)
binarized_data[j,i]=1
return binarized_data
train_labels_b = binarizeY(train_labels)
test_labels_b = binarizeY(test_labels)
numClasses = train_labels_b[1].size
print ('Classes = %d' %(numClasses))
print ('\n', train_labels_b[:5, :], '\n')
print (train_labels[:10], '\n')
#1) Parameters
numFeatures = train_data.shape[1]
numHiddenNodeslayer1 = 50
numHiddenNodeslayer2 = 30
w_1 = theano.shared(np.asarray((np.random.randn(*(numFeatures, numHiddenNodeslayer1))*0.01)))
w_2 = theano.shared(np.asarray((np.random.randn(*(numHiddenNodeslayer1, numHiddenNodeslayer2))*0.01)))
w_3 = theano.shared(np.asarray((np.random.randn(*(numHiddenNodeslayer2, numClasses))*0.01)))
params = [w_1, w_2, w_3]
#2) Model
X = T.matrix()
Y = T.matrix()
srng = RandomStreams()
def dropout(X, p=0.):
if p > 0:
X *= srng.binomial(X.shape, p=1 - p)
X /= 1 - p
return X
def model(X, w_1, w_2, w_3, p_1, p_2, p_3):
return T.nnet.softmax(T.dot(dropout(T.nnet.sigmoid(T.dot(dropout(T.nnet.sigmoid(T.dot(dropout(X, p_1), w_1)),p_2), w_2)),p_3),w_3))
y_hat_train = model(X, w_1, w_2, w_3, 0.2, 0.5,0.5)
y_hat_predict = model(X, w_1, w_2, w_3, 0., 0., 0.)
## (3) Cost function
#cost = T.mean(T.sqr(y_hat - Y))
cost = T.mean(T.nnet.categorical_crossentropy(y_hat_train, Y))
## (4) Objective (and solver)
alpha = 0.01
def backprop(cost, w):
grads = T.grad(cost=cost, wrt=w)
updates = []
for wi, grad in zip(w, grads):
updates.append([wi, wi - grad * alpha])
return updates
update = backprop(cost, params)
train = theano.function(inputs=[X, Y], outputs=cost, updates=update, allow_input_downcast=True)
y_pred = T.argmax(y_hat_predict, axis=1)
predict = theano.function(inputs=[X], outputs=y_pred, allow_input_downcast=True)
miniBatchSize = 10
def gradientDescent(epochs):
for i in range(epochs):
for start, end in zip(range(0, len(train_data), miniBatchSize), range(miniBatchSize, len(train_data), miniBatchSize)):
cc = train(train_data[start:end], train_labels_b[start:end])
clear_output(wait=True)
print ('%d) accuracy = %.4f' %(i+1, np.mean(np.argmax(test_labels_b, axis=1) == predict(test_data))) )
gradientDescent(50)
### How do we decide how many epochs to use? (An epoch here is one full pass over the training data.)
### Plot the cost for each of the 50 epochs and see how quickly it declines: if it is still
### decreasing steeply, run more epochs; if it has flattened out, you can stop.
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: DDL to construct table for SQL transformations
Step2: Note
Step3: Model Prototyping
|
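The trailing comments in the record above suggest plotting the cost to decide when to stop; a sketch of that check (reusing train, train_data, train_labels_b and miniBatchSize from the record):
import numpy as np
import matplotlib.pyplot as plt

epoch_costs = []
for i in range(50):
    batch_costs = []
    for start, end in zip(range(0, len(train_data), miniBatchSize),
                          range(miniBatchSize, len(train_data), miniBatchSize)):
        batch_costs.append(train(train_data[start:end], train_labels_b[start:end]))
    epoch_costs.append(np.mean(batch_costs))

plt.plot(epoch_costs)
plt.xlabel('epoch')
plt.ylabel('mean minibatch cost')
plt.show()  # stop adding epochs once this curve flattens out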
12,252
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn import datasets

xy = np.random.multivariate_normal([0,0], [[10,7],[7,10]],1000)
plt.plot(xy[:,0],xy[:,1],"o")
plt.show()
pca = PCA(n_components=2)
xy_pca = pca.fit(xy)
plt.plot(xy[:,0],xy[:,1],"o")
scalar = xy_pca.explained_variance_[0]
plt.plot([0,xy_pca.components_[0,0]*scalar/2],[0,xy_pca.components_[0,1]*scalar/2],color="red")
plt.plot([0,-xy_pca.components_[0,0]*scalar/2],[0,-xy_pca.components_[0,1]*scalar/2],color="red")
scalar = xy_pca.explained_variance_[1]
plt.plot([0,xy_pca.components_[1,0]*scalar/2],[0,xy_pca.components_[1,1]*scalar/2],color="yellow")
plt.plot([0,-xy_pca.components_[1,0]*scalar/2],[0,-xy_pca.components_[1,1]*scalar/2],color="yellow")
xy_trans = xy_pca.transform(xy)
fig, ax = plt.subplots(1,2,figsize=(10,5))
ax[0].plot(xy[:,0],xy[:,1],"o")
ax[0].set_xlabel("x")
ax[0].set_ylabel("y")
ax[0].set_xlim((-15,15)); ax[0].set_ylim((-15,15))
ax[1].plot(xy_trans[:,0],xy_trans[:,1],"o")
ax[1].set_xlabel("PCA1")
ax[1].set_ylabel("PCA2")
ax[1].set_xlim((-15,15)); ax[1].set_ylim((-15,15))
plt.show()
print("Variation explained:")
print("First component: {:.3f}".format(xy_pca.explained_variance_ratio_[0]))
print("Second component: {:.3f}".format(xy_pca.explained_variance_ratio_[1]))
def pca_wrapper(xy_pairs):
"""Take an array of x/y data and perform a principal component analysis."""
fig, ax = plt.subplots(1,2,figsize=(10,5))
ax[0].plot(xy_pairs[:,0],xy_pairs[:,1],"o")
ax[0].set_xlim((-18,18))
ax[0].set_ylim((-18,18))
ax[0].set_title("raw x,y data")
ax[0].set_xlabel("x")
ax[0].set_ylabel("y")
# Perform the PCA fit
pca = PCA(n_components=2)
z = pca.fit(xy_pairs)
# Transforom the data onto the new PCA axes
new_xy_pairs = z.transform(xy_pairs)
# Plot the PCA data
ax[1].plot(new_xy_pairs[:,0],new_xy_pairs[:,1],"o")
ax[1].set_title("PCA transformed data")
ax[1].set_xlim((-18,18))
ax[1].set_ylim((-18,18))
ax[1].set_xlabel("PCA1")
ax[1].set_ylabel("PCA2")
print("Variation explained:")
print("First component: {:.3f}".format(pca.explained_variance_ratio_[0]))
print("Second component: {:.3f}".format(pca.explained_variance_ratio_[1]))
d1 = np.random.multivariate_normal([0,0], [[10,1],[1,10]],1000)
pca_wrapper(d1)
d2 = np.random.multivariate_normal([0,0], [[10,5],[5,10]],1000)
pca_wrapper(d2)
d3 = np.random.multivariate_normal([0,0], [[10,9],[9,10]],1000)
pca_wrapper(d3)
iris = datasets.load_iris()
obs = iris.data
species = iris.target
mean = obs.mean(axis=0)
std = obs.std(axis=0)
obs = (obs - mean)/std
def plot_slice(obs_r,axis_i,axis_j):
"""Helper to plot one 2-D slice of the observations, colored by species."""
plt.plot(obs_r[species == 0,axis_i],obs_r[species == 0,axis_j],"o",color='navy')
plt.plot(obs_r[species == 1,axis_i],obs_r[species == 1,axis_j],"o",color='turquoise')
plt.plot(obs_r[species == 2,axis_i],obs_r[species == 2,axis_j],"o",color='darkorange')
plt.xlabel(axis_i)
plt.ylabel(axis_j)
plt.show()
plot_slice(obs,axis_i=0,axis_j=1)
pca = PCA(n_components=4)
obs_pca = pca.fit(obs)
obs_trans = obs_pca.transform(obs)
plot_slice(obs_trans,axis_i=0,axis_j=1)
for r in obs_pca.explained_variance_ratio_:
print("{:.3f}".format(r))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create a Principal Component Analysis (PCA) object
Step2: n_components is the number of axes onto which you spread the data. You can have at most as many components as you have input dimensions (2 in this case).
Step3: Does the PCA, finding the primary axes of variation.
Step4: What does the following do?
Step5: Transforms x and y onto the PCA axes.
Step6: What does the following do?
Step8: Describes how much variation each PCA axis captures.
Step9: How does fraction variation relate to skew in the data?
Step10: The stronger the covariation between parameters, the more readily the PCA can reduce dimensionality.
Step12: Subtracting the mean and dividing by the standard deviation normalizes the data so all features are on the same scale.
Step13: Species separate on some axes, but not all axes
Step14: Do PCA
Step15: What is different about PCA axes?
Step16: All of that separating power is jammed into the first axis.
|
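A common way to operationalize the dimensionality-reduction point above is to keep the fewest components that explain a target fraction of variance (a self-contained sketch on the standardized iris data):
import numpy as np
from sklearn import datasets
from sklearn.decomposition import PCA

iris = datasets.load_iris()
obs = iris.data
obs = (obs - obs.mean(axis=0)) / obs.std(axis=0)

pca = PCA(n_components=4).fit(obs)
cumulative = np.cumsum(pca.explained_variance_ratio_)
n_keep = int(np.searchsorted(cumulative, 0.95)) + 1
print("Components needed for 95% variance: {}".format(n_keep))
print(cumulative)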
12,253
|
<ASSISTANT_TASK:>
Python Code:
import os
import json
import autograd.numpy as np   # autograd-wrapped numpy so grad(cost) below works
import pandas as pd
from collections import defaultdict
from autograd import grad
from pymongo import MongoClient
from sklearn import cross_validation
import newsbreaker
# `load_entries` is assumed to be a project-local helper that loads parsed feed entries.

folder = os.path.join('..', 'data')
newsbreaker.init(os.path.join(folder, 'topic_model'), 'topic_model.pkl', 'vocab.txt')
entries = load_entries(folder)
entries_dict = defaultdict(list)
for entry in entries:
entries_dict[entry.feed].append(entry)
client = MongoClient()
db = client.newstagger
def get_entry(s):
feedname, index = s.split('|')
try:
index = int(index)
except ValueError:
raise KeyError('Malformed entry %s' % s)
for feed, l in entries_dict.items():
if feed.name == feedname:
for entry in l:
if entry.index == index:
return entry
else:
break
raise KeyError('Entry %s not found' % s)
coefs = []
Y = []
tests = list(db.pairs.find())
for test in tests:
base = get_entry(test['base'])
e1 = get_entry(test['e1'])
e2 = get_entry(test['e2'])
coefs.append(
[
[
base.what_distance(e1),
base.who_distance(e1),
base.where_distance(e1)
],
[
base.what_distance(e2),
base.who_distance(e2),
base.where_distance(e2)
]
]
)
Y.append(float(test['res']))
with open('X.txt', 'w') as f:
f.write('\n'.join(str(x) for x in coefs))
with open('Y.txt', 'w') as f:
f.write('\n'.join(str(x) for x in Y))
import json
with open('tests.json', 'w') as f:
f.write(
json.dumps(
[
{ k: v for k, v in d.items() if k != '_id' }
for d in tests
],
indent=2
)
)
from collections import Counter
def what_without_ne(entry):
entry.doc(tag=True, parse=False, entity=True)
avoid_ent_cats = set(entry.who_ne_cats)
avoid_ent_cats.update(entry.where_ne_cats)
avoid_ents = [
(ent.start, ent.end)
for ent in entry.doc.ents
if ent.label_ in avoid_ent_cats
]
words = []
doc_words = list(entry.doc)
while doc_words and avoid_ents:
i = doc_words[0].i
low, high = avoid_ents[0]
if i < low:
words.append(doc_words.pop(0))
elif low <= i and i < high:
doc_words.pop(0) # but don't save it, since is part of NE
else: # low < high <= i
avoid_ents.pop(0) # delete ent, since we overpassed it
words += doc_words # no more ents to filter with
counter = Counter(
word.lower_
for word in words
)
entry._what = entry.topic_model.model.transform(
np.array([ counter[word] for word in entry.topic_model.vocab ])
)
not_ne_what_coefs = []
for test in tests:
base = get_entry(test['base'])
what_without_ne(base)
e1 = get_entry(test['e1'])
what_without_ne(e1)
e2 = get_entry(test['e2'])
what_without_ne(e2)
not_ne_what_coefs.append(
[
base.what_distance(e1),
base.what_distance(e2)
]
)
with open('X.txt') as f:
coefs = [eval(x) for x in f.read().split('\n')]
with open('Y.txt') as f:
Y = [float(x) for x in f.read().split('\n')]
with open('tests.json') as f:
tests = json.loads(f.read())
X_copy = list(coefs); Y_copy = list(Y)
X = np.array(
[
[
v1[i] - v2[i]
for i in range(3)
]
for v1, v2 in coefs
]
)
Y = np.array(Y)
X_not_ne_what = np.array(
[
[
not_ne_what_coefs[n][0] - not_ne_what_coefs[n][1],
row[1], row[2]
]
for n, row in enumerate(X)
]
)
def sigmoid(x, gamma=1.):
return 1.0 / (1.0 + np.exp(-gamma * x))
def cost(theta, X=None, Y=None): # theta is np.array
return np.sum(
(sigmoid(np.dot(X, np.abs(theta))) - Y) ** 2
) / len(X)
grad_cost = grad(cost)
class SGD:
def __init__(self, learning=0.5, max_iters=10**5, prec=10**-3):
self.learning = learning
self.max_iters = max_iters
self.prec = prec
self.theta = None
self._iters = None
self._costs = None
def get_params(self, deep=True):
return {
'learning': self.learning,
'max_iters': self.max_iters,
'prec': self.prec
}
@property
def iters(self):
if self._iters is None:
raise Exception('SGD must be fitted to access iters')
return self._iters
@iters.setter
def iters(self, value): self._iters = value
@property
def costs(self):
if self._costs is None:
raise Exception('SGD must be fitted to access costs')
return self._costs
@costs.setter
def costs(self, value): self._costs = value
def fit(self, X, Y):
self.iters = 0
self.costs = []
theta = np.random.random(3)
while self.iters < self.max_iters:
self.iters += 1
self.costs.append(cost(theta, X=X, Y=Y))
prev_theta = theta.copy()
theta -= self.learning * grad_cost(theta, X=X, Y=Y)
if np.linalg.norm(theta - prev_theta) < self.prec:
break
self.costs.append(cost(theta, X=X, Y=Y))
self.theta = theta
return self
def score(self, X, Y):
return sum(
(not ((pred > 0.) ^ (cls > 0.))) if pred != 0. else 0.
for pred, cls in zip(np.dot(X, self.theta), Y)
) / len(Y)
class WhatSGD(SGD):
def fit(self, X, Y):
self.theta = np.array([1., 0., 0.])
return self
threshold = int(len(X) * 0.9)
X_train, X_test = X[:threshold], X[threshold:]
Y_train, Y_test = Y[:threshold], Y[threshold:]
trained_sgd = SGD()
trained_sgd.fit(X_train, Y_train)
pd.Series(trained_sgd.costs).plot() # error on each iteration
X_not_what = X.copy()
for row in X_not_what:
row[0] = 0
sgd = SGD()
what_sgd = WhatSGD()
sgd_not_what = SGD()
rows = []
for i in range(2, 20 + 1):
rows.append(
[
cross_validation.cross_val_score(
sgd, X, Y, cv=i
),
cross_validation.cross_val_score(
sgd, X_not_ne_what, Y, cv=i
),
cross_validation.cross_val_score(
what_sgd, X, Y, cv=i
),
cross_validation.cross_val_score(
what_sgd, X_not_ne_what, Y, cv=i
)
]
)
for n, i in enumerate(range(2, 20 + 1)):
rows[n].append(
cross_validation.cross_val_score(
sgd_not_what, X_not_what, Y, cv=i
)
)
df = pd.DataFrame(
[[s.mean() for s in row] for row in rows],
columns=['sgd, what with NE', 'sgd, what without NE', 'what with NE', 'what without NE', 'sgd without what'],
index=[100 - 100 // i for i in range(2, 20 + 1)]
)
df.plot(ylim=(0, 1))
df.plot()
df.mean()
df[df.index > 75].mean()
df[df.index > 90].mean()
scores = cross_validation.cross_val_score(
sgd, X, Y, cv=10
)
scores, scores.mean()
scores = cross_validation.cross_val_score(
sgd, X_not_ne_what, Y, cv=10
)
scores, scores.mean()
scores = cross_validation.cross_val_score(
what_sgd, X, Y, cv=10
)
scores, scores.mean()
scores = cross_validation.cross_val_score(
what_sgd, X_not_ne_what, Y, cv=10
)
scores, scores.mean()
sgd_not_what = SGD()
scores = cross_validation.cross_val_score(
sgd_not_what, X_not_what, Y, cv=10
)
scores, scores.mean()
sgd = SGD()
sgd.fit(X, Y)
cost(sgd.theta, X, Y), cost(sgd.theta / sgd.theta.sum(), X, Y)
sgd.theta, sgd.theta / sgd.theta.sum()
sgd = SGD()
sgd.fit(X_not_ne_what, Y)
sgd.theta, sgd.theta / sgd.theta.sum()
sgd_not_what = SGD()
sgd_not_what.fit(X_not_what, Y)
(sgd_not_what.theta - np.array([sgd_not_what.theta[0], 0., 0.])) / sgd_not_what.theta[1:].sum()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Algorithm
Step2: Save coefs (X) and Y, along with tests (to know what each row refers to), to work with them later
Step3: What without NEs
Step4: SGD
Step5: Simple trial
Step6: Run cross validation for each model
Step7: Scores of each model
Step8: Final results
Step9: The unnormalised vector gets lower error because larger absolute values push the sigmoid closer to 1 or 0, which reduces the squared error. Regardless, the desired value is the normalised one (the weights must sum to 1)
Step10: Notice how, with not-ne-what, What becomes slightly more important (perhaps because it no longer loses accuracy to mistreated named entities). Even though the resulting accuracy is the same but more stable, the original system computes What more cheaply, so it would still be the selected approach, because the algorithm is slow enough already. In future steps, once the algorithm is optimised, working with not-ne-what would be a good idea.
|
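To sanity-check the difference-of-distances model trained above, here is a toy sketch with a known weight vector (all data here is synthetic; the weights are illustrative):
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.RandomState(0)
theta_true = np.array([0.6, 0.3, 0.1])  # hypothetical what/who/where weights
X_toy = rng.randn(200, 3)               # feature differences, as in the model
y_toy = (X_toy.dot(theta_true) > 0).astype(float)

preds = sigmoid(X_toy.dot(theta_true)) > 0.5
print("toy accuracy:", (preds == y_toy.astype(bool)).mean())  # 1.0 by construction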
12,254
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np, matplotlib.pyplot as plt, pandas as pd, pymc as mc
import dismod_mr
model = dismod_mr.data.load('pd_sim_data')
model.keep(areas=['europe_western'], sexes=['female', 'total'])
summary = model.input_data.groupby('data_type')['value'].describe()
np.round(summary,3).sort_values('count', ascending=False)
model.get_data('smr').value.mean()
groups = model.get_data('p').groupby('area')
print(np.round_(groups['value'].describe(),3).sort_values('50%', ascending=False))
countries = ['ESP', 'GBR']
c = {}
for i, c_i in enumerate(countries):
c[i] = groups.get_group(c_i)
ax = None
plt.figure(figsize=(10,4))
for i, c_i in enumerate(countries):
ax = plt.subplot(1,2,1+i, sharey=ax, sharex=ax)
dismod_mr.plot.data_bars(c[i])
plt.xlabel('Age (years)')
plt.ylabel('Prevalence (per 1)')
plt.title(c_i)
plt.axis(ymin=-.001, xmin=-5, xmax=105)
plt.subplots_adjust(wspace=.3)
# remove fixed effects for this example, I will return to them below
model.input_data = model.input_data.filter(regex='(?!x_)')
model.vars += dismod_mr.model.asr(model, 'p')
%time dismod_mr.fit.asr(model, 'p')
# plot age-specific prevalence estimates over data bars
plt.figure(figsize=(10,4))
dismod_mr.plot.data_bars(model.get_data('p'), color='grey', label='Simulated PD Data')
pred = dismod_mr.model.predict_for(model, model.parameters['p'], 'all', 'female', 2005,
'GBR', 'female', 2005, 1.,
model.vars['p'], 0., 1.) # TODO: simplify this method!
hpd = mc.utils.hpd(pred, .05)
plt.plot(np.arange(101), pred.mean(axis=0), 'k-', linewidth=2, label='Posterior Mean')
plt.plot(np.arange(101), hpd[0,:], 'k--', linewidth=1, label='95% HPD interval')
plt.plot(np.arange(101), hpd[1,:], 'k--', linewidth=1)
plt.xlabel('Age (years)')
plt.ylabel('Prevalence (per 1)')
plt.grid()
plt.legend(loc='upper left')
plt.axis(ymin=-.001, xmin=-5, xmax=105);
p_only = model # store results for future comparison
model.parameters['p']['parameter_age_mesh']
df = pd.DataFrame(index=[alpha_i.__name__ for alpha_i in model.vars['p']['alpha']],
columns=['mean', 'lb', 'ub'])
for alpha_i in model.vars['p']['alpha']:
trace = alpha_i.trace()
hpd = mc.utils.hpd(trace, .05)
df.loc[alpha_i.__name__] = (np.mean(trace), hpd[0], hpd[1])
print(np.round(df.astype(float),3).sort_values('mean', ascending=False))
model.get_data('p').sort_values('age_start').filter(['age_start', 'age_end', 'area', 'value']).head(15)
pred = model.vars['p']['p_pred'].trace()
obs = np.array(model.vars['p']['p_obs'].value)
ess = np.array(model.vars['p']['p_obs'].parents['n'])
plt.figure(figsize=(10,4))
sorted_indices = obs.argsort().argsort()
jitter = mc.rnormal(0, .1**-2, len(pred))
for i,s_i in enumerate(sorted_indices):
plt.plot(s_i+jitter, pred[:, i], 'ko', mew=0, alpha=.25, zorder=-99)
plt.errorbar(sorted_indices, obs, yerr=1.96*np.sqrt(obs*(1-obs)/ess), fmt='ks', mew=1, mec='white', ms=5)
plt.xticks([])
plt.xlabel('Measurement')
plt.ylabel('Prevalence (%)\n', ha='center')
plt.yticks([0, .02, .04, .06, .08], [0, 2, 4, 6, 8])
plt.axis([25.5,55.5,-.01,.1])
plt.grid()
plt.title('Posterior Predictive distribution')
model = dismod_mr.data.load('pd_sim_data')
crosswalks = list(model.input_data.filter(like='x_cv').columns)
groups = model.get_data('p').groupby(crosswalks)
crosswalks
np.round(groups['value'].describe(),3).unstack()['mean'].fillna('-')
plt.figure(figsize=(10,6))
plt.subplot(2,2,1); dismod_mr.plot.data_bars(model.get_data('p')); plt.xlabel('Age (years)'); plt.ylabel('Prevalence')
plt.subplot(2,2,2); dismod_mr.plot.data_bars(model.get_data('i')); plt.xlabel('Age (years)'); plt.ylabel('Incidence')
plt.subplot(2,2,3); dismod_mr.plot.data_bars(model.get_data('csmr')); plt.xlabel('Age (years)'); plt.ylabel('Cause-specific mortality')
plt.subplot(2,2,4); dismod_mr.plot.data_bars(model.get_data('smr')); plt.xlabel('Age (years)'); plt.ylabel('Standardized \nmortality ratio');
model.input_data.columns
model.vars += dismod_mr.model.consistent(model)
%time dismod_mr.fit.consistent(model)
plt.figure(figsize=(10,6))
plt.subplot(2,2,1); dismod_mr.plot.data_bars(model.get_data('p')); plt.xlabel('Age (years)'); plt.ylabel('Prevalence')
plt.subplot(2,2,2); dismod_mr.plot.data_bars(model.get_data('i')); plt.xlabel('Age (years)'); plt.ylabel('Incidence')
plt.subplot(2,2,3); dismod_mr.plot.data_bars(model.get_data('csmr')); plt.xlabel('Age (years)'); plt.ylabel('Cause-specific mortality')
plt.subplot(2,2,4); dismod_mr.plot.data_bars(model.get_data('smr')); plt.xlabel('Age (years)'); plt.ylabel('Standardized \nmortality ratio')
param_list = [dict(type='p', title='(a)', ylabel='Prevalence (%)', yticks=([0, .01, .02], [0, 1, 2]), axis=[30,101,-0.001,.025]),
dict(type='i', title='(b)', ylabel='Incidence \n(per 1000 PY)', yticks=([0, .001,.002, .003, .004], [0, 1, 2, 3, 4]), axis=[30,104,-.0003,.0055]),
dict(type='pf', title='(c)', ylabel='Cause-specific mortality \n(per 1000 PY)', yticks=([0, .001,.002], [0, 1, 2]), axis=[30,104,-.0002,.003]),
dict(type='smr', title='(d)', ylabel='Standardized \nmortality ratio', yticks=([1, 2, 3,4, ], [1, 2,3, 4]), axis=[35,104,.3,4.5]),
]
for i, params in enumerate(param_list):
ax = plt.subplot(2,2,i+1)
if params['type'] == 'pf': dismod_mr.plot.data_bars(model.get_data('csmr'), color='grey')
else: dismod_mr.plot.data_bars(model.get_data(params['type']), color='grey')
if params['type'] == 'smr': model.pred = dismod_mr.model.predict_for(model, model.parameters.get('smr', {}), 'all', 'female', 2005,
'GBR', 'female', 2005, 1., model.vars['smr'], 0., 100.).T
else : model.pred = dismod_mr.model.predict_for(model, model.parameters[params['type']],
'all', 'female', 2005,
'GBR', 'female', 2005, 1., model.vars[params['type']], 0., 1.).T
plt.plot(np.arange(101), model.pred.mean(axis=1), 'k-', linewidth=2, label='Posterior Mean')
hpd = mc.utils.hpd(model.pred.T, .05)
plt.plot(np.arange(101), hpd[0], 'k-', linewidth=1, label='95% HPD interval')
plt.plot(np.arange(101), hpd[1], 'k-', linewidth=1)
plt.xlabel('Age (years)')
plt.ylabel(params['ylabel']+'\n\n', ha='center')
plt.axis(params.get('axis', [-5,105,-.005,.06]))
plt.yticks(*params.get('yticks', ([0, .025, .05], [0, 2.5, 5])))
plt.title(params['title'])
plt.grid()
plt.subplots_adjust(hspace=.35, wspace=.35, top=.97)
p_with = model
p1 = dismod_mr.model.predict_for(p_only, model.parameters['p'],
'all', 'total', 'all',
'GBR', 'female', 2005, 1.,
p_only.vars['p'], 0., 1.)
p2 = dismod_mr.model.predict_for(p_with, model.parameters['p'],
'all', 'total', 'all',
'GBR', 'female', 2005, 1.,
p_with.vars['p'], 0., 1.)
plt.plot(p1.mean(axis=0), 'k--', linewidth=2, label='Only prevalence')
plt.plot(p2.mean(axis=0), 'k-', linewidth=2, label='All available')
plt.xlabel('Age (years)')
plt.ylabel('Prevalence (%)\n\n', ha='center')
plt.yticks([0, .01, .02], [0, 1, 2])
plt.axis([30,101,-0.001,.025])
plt.legend(loc='upper left')
plt.grid()
plt.subplots_adjust(top=.97, bottom=.16)
plt.hist(100*p1[:,80], density=True, histtype='step', label='Only prevalence', linewidth=3, color=np.array([239., 138., 98., 256.])/256)
plt.hist(100*p2[:,80], density=True, histtype='step', label='All available', linewidth=3, color=np.array([103, 169, 207, 256.])/256)
plt.title('PD prevalence at age 80')
plt.xlabel('Prevalence (%)\n\n', ha='center')
plt.ylabel('Probability Density')
plt.legend(loc='upper right')
plt.grid()
plt.subplots_adjust(bottom=.16)
!date
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: DisMod-MR uses the integrative systems modeling (ISM) approach to produce simultaneous estimates of disease incidence, prevalence, remission, and mortality.
Step2: Of the 348 rows of data, here is how the values breakdown by data type
Step3: More than half of the available data for this region is prevalence data. I'll take a closer look
Step4: In the original dataset, there was a wide range in median values, which reflects a combination of country-to-country variation and compositional bias. Simulating data has reduced this substantially, but there is still six-fold variation between ESP and GBR.
Step5: A model for age-specific parameters when measurements have heterogeneous age groups
Step6: This estimate shows the nonlinear increase in prevalence as a function of age, where the slope of the
Step7: A sparse grid allows faster computation, but a dense grid allows more expressive age patterns. Choosing
Step8: The fourth feature of the model which I want to draw attention to here is the negative binomial model of data,
Step9: The negative binomial model has an appropriately skewed distribution, where prevalence measurements
Step10: Additional features of DisMod-MR
Step11: Incorporating data on parameters other than prevalence
Step12: The most notable difference between the estimates from this model and from the model
Step13: Because the data is so noisy, the differences between the mean estimates of these different models are not significant; the posterior distributions
Step14: Conclusion
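Step8 and Step9 above refer to the negative binomial model of the data. As a hedged sketch of the likelihood they describe (the exact parameterization used by DisMod-MR may differ), a rate measurement $\pi_i$ observed with effective sample size $n_i$ contributes a count $d_i$ modeled as
$$ d_i \sim \text{NegativeBinomial}\big(\mu_i = \pi_i n_i,\ \delta\big), \qquad \text{Var}[d_i] = \mu_i\Big(1 + \frac{\mu_i}{\delta}\Big), $$
where the overdispersion parameter $\delta$ lets the variance exceed the Poisson value $\mu_i$; this is what produces the appropriately skewed distribution mentioned in Step9.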
|
12,255
|
<ASSISTANT_TASK:>
Python Code:
# As usual, a bit of setup
import time
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.fc_net import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
    """returns relative error"""
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.iteritems():
print '%s: ' % k, v.shape
# Check the training-time forward pass by checking means and variances
# of features both before and after batch normalization
# Simulate the forward pass for a two-layer network
N, D1, D2, D3 = 200, 50, 60, 3
X = np.random.randn(N, D1)
W1 = np.random.randn(D1, D2)
W2 = np.random.randn(D2, D3)
a = np.maximum(0, X.dot(W1)).dot(W2)
print 'Before batch normalization:'
print ' means: ', a.mean(axis=0)
print ' stds: ', a.std(axis=0)
# Means should be close to zero and stds close to one
print 'After batch normalization (gamma=1, beta=0)'
a_norm, _ = batchnorm_forward(a, np.ones(D3), np.zeros(D3), {'mode': 'train'})
print ' mean: ', a_norm.mean(axis=0)
print ' std: ', a_norm.std(axis=0)
# Now means should be close to beta and stds close to gamma
gamma = np.asarray([1.0, 2.0, 3.0])
beta = np.asarray([11.0, 12.0, 13.0])
a_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'})
print 'After batch normalization (nontrivial gamma, beta)'
print ' means: ', a_norm.mean(axis=0)
print ' stds: ', a_norm.std(axis=0)
# Check the test-time forward pass by running the training-time
# forward pass many times to warm up the running averages, and then
# checking the means and variances of activations after a test-time
# forward pass.
N, D1, D2, D3 = 200, 50, 60, 3
W1 = np.random.randn(D1, D2)
W2 = np.random.randn(D2, D3)
bn_param = {'mode': 'train'}
gamma = np.ones(D3)
beta = np.zeros(D3)
for t in xrange(50):
X = np.random.randn(N, D1)
a = np.maximum(0, X.dot(W1)).dot(W2)
batchnorm_forward(a, gamma, beta, bn_param)
bn_param['mode'] = 'test'
X = np.random.randn(N, D1)
a = np.maximum(0, X.dot(W1)).dot(W2)
a_norm, _ = batchnorm_forward(a, gamma, beta, bn_param)
# Means should be close to zero and stds close to one, but will be
# noisier than training-time forward passes.
print 'After batch normalization (test-time):'
print ' means: ', a_norm.mean(axis=0)
print ' stds: ', a_norm.std(axis=0)
# Gradient check batchnorm backward pass
N, D = 4, 5
x = 5 * np.random.randn(N, D) + 12
gamma = np.random.randn(D)
beta = np.random.randn(D)
dout = np.random.randn(N, D)
bn_param = {'mode': 'train'}
fx = lambda x: batchnorm_forward(x, gamma, beta, bn_param)[0]
fg = lambda a: batchnorm_forward(x, a, beta, bn_param)[0]  # vary gamma, not the fixed value
fb = lambda b: batchnorm_forward(x, gamma, b, bn_param)[0]  # vary beta
dx_num = eval_numerical_gradient_array(fx, x, dout)
da_num = eval_numerical_gradient_array(fg, gamma, dout)
db_num = eval_numerical_gradient_array(fb, beta, dout)
_, cache = batchnorm_forward(x, gamma, beta, bn_param)
dx, dgamma, dbeta = batchnorm_backward(dout, cache)
print 'dx error: ', rel_error(dx_num, dx)
print dx_num
print dx
print 'dgamma error: ', rel_error(da_num, dgamma)
print 'dbeta error: ', rel_error(db_num, dbeta)
N, D = 100, 500
x = 5 * np.random.randn(N, D) + 12
gamma = np.random.randn(D)
beta = np.random.randn(D)
dout = np.random.randn(N, D)
bn_param = {'mode': 'train'}
out, cache = batchnorm_forward(x, gamma, beta, bn_param)
t1 = time.time()
dx1, dgamma1, dbeta1 = batchnorm_backward(dout, cache)
t2 = time.time()
dx2, dgamma2, dbeta2 = batchnorm_backward_alt(dout, cache)
t3 = time.time()
print 'dx difference: ', rel_error(dx1, dx2)
print 'dgamma difference: ', rel_error(dgamma1, dgamma2)
print 'dbeta difference: ', rel_error(dbeta1, dbeta2)
print 'speedup: %.2fx' % ((t2 - t1) / (t3 - t2))
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))
for reg in [0, 3.14]:
print 'Running check with reg = ', reg
model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
reg=reg, weight_scale=5e-2, dtype=np.float64,
use_batchnorm=True)
loss, grads = model.loss(X, y)
print 'Initial loss: ', loss
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)
print '%s relative error: %.2e' % (name, rel_error(grad_num, grads[name]))
if reg == 0: print
# Try training a very deep net with batchnorm
hidden_dims = [100, 100, 100, 100, 100]
num_train = 1000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
weight_scale = 1e-1
bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True)
model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False)
bn_solver = Solver(bn_model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=200)
bn_solver.train()
solver = Solver(model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=200)
solver.train()
plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')
plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 1)
plt.plot(solver.loss_history, 'o', label='baseline')
plt.plot(bn_solver.loss_history, 'o', label='batchnorm')
plt.subplot(3, 1, 2)
plt.plot(solver.train_acc_history, '-o', label='baseline')
plt.plot(bn_solver.train_acc_history, '-o', label='batchnorm')
plt.subplot(3, 1, 3)
plt.plot(solver.val_acc_history, '-o', label='baseline')
plt.plot(bn_solver.val_acc_history, '-o', label='batchnorm')
for i in [1, 2, 3]:
plt.subplot(3, 1, i)
plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(15, 15)
plt.show()
# Try training a very deep net with batchnorm
hidden_dims = [50, 50, 50, 50, 50, 50, 50]
num_train = 1000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
bn_solvers = {}
solvers = {}
weight_scales = np.logspace(-4, 0, num=20)
for i, weight_scale in enumerate(weight_scales):
print 'Running weight scale %d / %d' % (i + 1, len(weight_scales))
bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True)
model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False)
bn_solver = Solver(bn_model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=200)
bn_solver.train()
bn_solvers[weight_scale] = bn_solver
solver = Solver(model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=200)
solver.train()
solvers[weight_scale] = solver
# Plot results of weight scale experiment
best_train_accs, bn_best_train_accs = [], []
best_val_accs, bn_best_val_accs = [], []
final_train_loss, bn_final_train_loss = [], []
for ws in weight_scales:
best_train_accs.append(max(solvers[ws].train_acc_history))
bn_best_train_accs.append(max(bn_solvers[ws].train_acc_history))
best_val_accs.append(max(solvers[ws].val_acc_history))
bn_best_val_accs.append(max(bn_solvers[ws].val_acc_history))
final_train_loss.append(np.mean(solvers[ws].loss_history[-100:]))
bn_final_train_loss.append(np.mean(bn_solvers[ws].loss_history[-100:]))
plt.subplot(3, 1, 1)
plt.title('Best val accuracy vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Best val accuracy')
plt.semilogx(weight_scales, best_val_accs, '-o', label='baseline')
plt.semilogx(weight_scales, bn_best_val_accs, '-o', label='batchnorm')
plt.legend(ncol=2, loc='lower right')
plt.subplot(3, 1, 2)
plt.title('Best train accuracy vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Best training accuracy')
plt.semilogx(weight_scales, best_train_accs, '-o', label='baseline')
plt.semilogx(weight_scales, bn_best_train_accs, '-o', label='batchnorm')
plt.legend()
plt.subplot(3, 1, 3)
plt.title('Final training loss vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Final training loss')
plt.semilogx(weight_scales, final_train_loss, '-o', label='baseline')
plt.semilogx(weight_scales, bn_final_train_loss, '-o', label='batchnorm')
plt.legend()
plt.gcf().set_size_inches(10, 15)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Batch Normalization
Step2: Batch normalization
Step3: Batch Normalization
Step4: Batch Normalization
Step5: Fully Connected Nets with Batch Normalization
Step6: Batchnorm for deep networks
Step7: Run the following to visualize the results from two networks trained above. You should find that using batch normalization helps the network to converge much faster.
Step8: Batch normalization and initialization
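For reference, here is a minimal numpy sketch of the train-time forward pass that batchnorm_forward is expected to implement in the steps above. This is an illustrative sketch only: the assignment's actual function also caches intermediates for the backward pass and maintains running mean/variance for test time.
import numpy as np
def batchnorm_forward_sketch(x, gamma, beta, eps=1e-5):
    # per-feature statistics over the minibatch (x has shape (N, D))
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    # normalize to zero mean / unit variance, then learnable scale and shift
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta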
|
12,256
|
<ASSISTANT_TASK:>
Python Code:
# Ensure the right version of Tensorflow is installed.
!pip freeze | grep tensorflow==2.1
%pip install apache-beam[gcp]==2.13.0
import apache_beam as beam
print(beam.__version__)
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
# Create SQL query using natality data after the year 2000
query = """
SELECT
  weight_pounds,
  is_male,
  mother_age,
  plurality,
  gestation_weeks,
  FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
FROM
  publicdata.samples.natality
WHERE year > 2000
"""
# Call BigQuery and examine in dataframe
from google.cloud import bigquery
df = bigquery.Client().query(query + " LIMIT 100").to_dataframe()
df.head()
import datetime, os
def to_csv(rowdict):
# Pull columns from BQ and create a line
import hashlib
import copy
CSV_COLUMNS = 'weight_pounds,is_male,mother_age,plurality,gestation_weeks'.split(',')
# Create synthetic data where we assume that no ultrasound has been performed
# and so we don't know sex of the baby. Let's assume that we can tell the difference
# between single and multiple, but that the errors rates in determining exact number
# is difficult in the absence of an ultrasound.
no_ultrasound = copy.deepcopy(rowdict)
w_ultrasound = copy.deepcopy(rowdict)
# TODO create logic for no_ultrasound where we only know whether its a single baby or multiple (but not how many multiple)
no_ultrasound['is_male'] = 'Unknown'
    if rowdict['plurality'] > 1:  # multiple birth: without an ultrasound we know "2+" but not the exact count
        no_ultrasound['plurality'] = 'Multiple(2+)'
    else:  # single baby
        no_ultrasound['plurality'] = 'Single(1)'
# Change the plurality column to strings
w_ultrasound['plurality'] = ['Single(1)', 'Twins(2)', 'Triplets(3)', 'Quadruplets(4)', 'Quintuplets(5)'][rowdict['plurality'] - 1]
# Write out two rows for each input row, one with ultrasound and one without
for result in [no_ultrasound, w_ultrasound]:
data = ','.join([str(result[k]) if k in result else 'None' for k in CSV_COLUMNS])
key = hashlib.sha224(data.encode('utf-8')).hexdigest() # hash the columns to form a key
yield str('{},{}'.format(data, key))
def preprocess(in_test_mode):
import shutil, os, subprocess
job_name = 'preprocess-babyweight-features' + '-' + datetime.datetime.now().strftime('%y%m%d-%H%M%S')
if in_test_mode:
print('Launching local job ... hang on')
OUTPUT_DIR = './preproc'
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
os.makedirs(OUTPUT_DIR)
else:
print('Launching Dataflow job {} ... hang on'.format(job_name))
OUTPUT_DIR = 'gs://{0}/babyweight/preproc/'.format(BUCKET)
try:
subprocess.check_call('gsutil -m rm -r {}'.format(OUTPUT_DIR).split())
except:
pass
options = {
'staging_location': os.path.join(OUTPUT_DIR, 'tmp', 'staging'),
'temp_location': os.path.join(OUTPUT_DIR, 'tmp'),
'job_name': job_name,
'project': PROJECT,
'region': REGION,
'teardown_policy': 'TEARDOWN_ALWAYS',
'no_save_main_session': True,
'max_num_workers': 6
}
opts = beam.pipeline.PipelineOptions(flags = [], **options)
if in_test_mode:
RUNNER = 'DirectRunner'
else:
RUNNER = 'DataflowRunner'
p = beam.Pipeline(RUNNER, options = opts)
    query = """
    SELECT
      weight_pounds,
      is_male,
      mother_age,
      plurality,
      gestation_weeks,
      FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
    FROM
      publicdata.samples.natality
    WHERE year > 2000
    AND weight_pounds > 0
    AND mother_age > 0
    AND plurality > 0
    AND gestation_weeks > 0
    AND month > 0
    """
if in_test_mode:
query = query + ' LIMIT 100'
for step in ['train', 'eval']:
if step == 'train':
selquery = 'SELECT * FROM ({}) WHERE ABS(MOD(hashmonth, 4)) < 3'.format(query)
else:
selquery = 'SELECT * FROM ({}) WHERE ABS(MOD(hashmonth, 4)) = 3'.format(query)
(p
| '{}_read'.format(step) >> beam.io.Read(beam.io.BigQuerySource(query = selquery, use_standard_sql = True))
| '{}_csv'.format(step) >> beam.FlatMap(to_csv)
| '{}_out'.format(step) >> beam.io.Write(beam.io.WriteToText(os.path.join(OUTPUT_DIR, '{}.csv'.format(step))))
)
job = p.run()
if in_test_mode:
job.wait_until_finish()
print("Done!")
preprocess(in_test_mode = False)
%%bash
gsutil ls gs://${BUCKET}/babyweight/preproc/*-00000*
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: After installing Apache Beam, restart your kernel by selecting "Kernel" from the menu and clicking "Restart kernel..."
Step2: You may receive a UserWarning that the Apache Beam SDK for Python 3 is not yet fully supported. Don't worry about this.
Step4: <h2> Save the query from earlier </h2>
Step6: <h2> Create ML dataset using Dataflow </h2>
Step7: The above step will take 20+ minutes. Go to the GCP web console, navigate to the Dataflow section and <b>wait for the job to finish</b> before you run the following step.
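Before launching the full Dataflow job, it can help to sanity-check the to_csv generator locally. A minimal sketch, assuming a hand-built row in the same shape BigQuery returns (the hashmonth value here is an arbitrary illustrative number):
sample_row = {
    'weight_pounds': 7.5,
    'is_male': True,
    'mother_age': 29.0,
    'plurality': 2,            # twins
    'gestation_weeks': 38.0,
    'hashmonth': 1234567890    # arbitrary, for illustration only
}
# one input row yields two CSV lines: the "no ultrasound" and "with ultrasound" variants
for line in to_csv(sample_row):
    print(line)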
|
12,257
|
<ASSISTANT_TASK:>
Python Code:
%load_ext autoreload
%autoreload 2
import skrf as rf
import numpy as np
from numpy import real, log10, sum, absolute, pi, sqrt
import matplotlib.pyplot as plt
from scipy.optimize import minimize, differential_evolution
rf.stylely()
# Load raw measurements
MSL100_raw = rf.Network('MSL100.s2p')
MSL200_raw = rf.Network('MSL200.s2p')
# Keep only the data from 1MHz to 5GHz
MSL100 = MSL100_raw['1-5000mhz']
MSL200 = MSL200_raw['1-5000mhz']
plt.figure()
plt.title('Measured data')
MSL100.plot_s_db()
MSL200.plot_s_db()
plt.show()
c0 = 3e8
f = MSL100.f
deltaL = 0.1
deltaPhi = np.unwrap(np.angle(MSL100.s[:,1,0])) - np.unwrap(np.angle(MSL200.s[:,1,0]))
Er_eff = np.power(deltaPhi * c0 / (2 * np.pi * f * deltaL), 2)
Loss_mea = 20 * log10(absolute(MSL200.s[:,1,0] / MSL100.s[:,1,0]))
plt.figure()
plt.suptitle('Effective relative permittivity and loss')
plt.subplot(2,1,1)
plt.plot(f * 1e-9, Er_eff)
plt.ylabel('$\epsilon_{r,eff}$')
plt.subplot(2,1,2)
plt.plot(f * 1e-9, Loss_mea)
plt.xlabel('Frequency (GHz)')
plt.ylabel('Insertion Loss (dB)')
plt.show()
from skrf.media import MLine
W = 3.00e-3
H = 1.51e-3
T = 50e-6
L = 0.1
Er0 = 4.5
tand0 = 0.02
f_epr_tand = 1e9
x0 = [Er0, tand0]
def model(x, freq, Er_eff, L, W, H, T, f_epr_tand, Loss_mea):
ep_r = x[0]
tand = x[1]
m = MLine(frequency=freq, z0=50, w=W, h=H, t=T,
ep_r=ep_r, mu_r=1, rho=1.712e-8, tand=tand, rough=0.15e-6,
f_low=1e3, f_high=1e12, f_epr_tand=f_epr_tand,
diel='djordjevicsvensson', disp='kirschningjansen')
DUT = m.line(L, 'm', embed=True, z0=m.Z0_f)
Loss_mod = 20 * log10(absolute(DUT.s[:,1,0]))
return sum((real(m.ep_reff_f) - Er_eff)**2) + 0.01*sum((Loss_mod - Loss_mea)**2)
res = minimize(model, x0, args=(MSL100.frequency, Er_eff, L, W, H, T, f_epr_tand, Loss_mea),
bounds=[(4.2, 4.7), (0.001, 0.1)])
Er = res.x[0]
tand = res.x[1]
print('Er={:.3f}, tand={:.4f} at {:.1f} GHz.'.format(Er, tand, f_epr_tand * 1e-9))
m = MLine(frequency=MSL100.frequency, z0=50, w=W, h=H, t=T,
ep_r=Er, mu_r=1, rho=1.712e-8, tand=tand, rough=0.15e-6,
f_low=1e3, f_high=1e12, f_epr_tand=f_epr_tand,
diel='djordjevicsvensson', disp='kirschningjansen')
DUT = m.line(L, 'm', embed=True, z0=m.Z0_f)
DUT.name = 'DUT'
Loss_mod = 20 * log10(absolute(DUT.s[:,1,0]))
plt.figure()
plt.suptitle('Measurement vs Model')
plt.subplot(2,1,1)
plt.plot(f * 1e-9, Er_eff, label='Measured')
plt.plot(f * 1e-9, real(m.ep_reff_f), label='Model')
plt.ylabel('$\epsilon_{r,eff}$')
plt.legend()
plt.subplot(2,1,2)
plt.plot(f * 1e-9, Loss_mea, label='Measured')
plt.plot(f * 1e-9, Loss_mod, label='Model')
plt.xlabel('Frequency (GHz)')
plt.ylabel('Insertion Loss (dB)')
plt.legend()
plt.show()
plt.figure()
plt.title('Measured vs modelled data')
MSL100.plot_s_db()
DUT.plot_s_db(0, 0, color='k')
DUT.plot_s_db(1, 0, color='k')
plt.show()
phi_conn = np.unwrap(np.angle(MSL100.s[:,1,0])) + deltaPhi
z = np.polyfit(f, phi_conn, 1)
p = np.poly1d(z)
delay = -z[0]/(2*np.pi)/2
print('Connector delay: {:.0f} ps'.format(delay * 1e12))
loss_conn_db = 20 * log10(absolute(MSL100.s[:,1,0])) - Loss_mea
alpha = 1.6*np.log(10)/20 * np.sqrt(f/1e9)
beta = 2*np.pi*f/c0
gamma = alpha + 1j*beta
mf = rf.media.DefinedGammaZ0(m.frequency, z0=50, gamma=gamma)
left = mf.line(delay*1e9, 'ns', embed=True, z0=53.2)
right = left.flipped()
check = left ** right
plt.figure()
plt.suptitle('Connector effects')
plt.subplot(2,1,1)
plt.plot(f * 1e-9, phi_conn, label='measured')
plt.plot(f * 1e-9, np.unwrap(np.angle(check.s[:,1,0])), label='model')
plt.ylabel('phase (rad)')
plt.legend()
plt.subplot(2,1,2)
plt.plot(f * 1e-9, loss_conn_db, label='Measured')
plt.plot(f * 1e-9, 20*np.log10(np.absolute(check.s[:,1,0])), label='model')
plt.xlabel('Frequency (GHz)')
plt.ylabel('Insertion Loss (dB)')
plt.legend()
plt.show()
mod = left ** DUT ** right
MSL100_dc = MSL100.extrapolate_to_dc(kind='linear')
DUT_dc = mod.extrapolate_to_dc(kind='linear')
plt.figure()
plt.suptitle('Left-right and right-left TDR')
plt.subplot(2,1,1)
MSL100_dc.s11.plot_s_time_step(pad=2000, window='hamming', label='Measured L-R')
DUT_dc.s11.plot_s_time_step(pad=2000, window='hamming', label='Model L-R')
plt.xlim(-2, 4)
plt.subplot(2,1,2)
MSL100_dc.s22.plot_s_time_step(pad=2000, window='hamming', label='Measured R-L')
DUT_dc.s22.plot_s_time_step(pad=2000, window='hamming', label='Model R-L')
plt.xlim(-2, 4)
plt.tight_layout()
plt.show()
plt.figure()
plt.title('Measured vs modelled data')
MSL100.plot_s_db()
mod.name = 'Model'
mod.plot_s_db(0, 0, color='k')
mod.plot_s_db(1, 0, color='k')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Measurement of two microstriplines with different lengths
Step2: The measured data shows that the electrical length of MSL200 is approximately twice the one of MSL100. The frequency spacing between Return Loss dips is approximately the half for MSL200 compared to MSL100. This is coherent with the physical dimensions if the small connector length is neglected.
Step3: The effective relative permittivity of the geometry shows a dispersion effect at low frequency, which can be modelled by a wideband Debye model such as the Djordjevic/Svensson implementation of the skrf microstripline media. The value then increases slowly with frequency, which corresponds roughly to the Kirschning and Jansen dispersion model. (The extraction relations used here are summarized after these steps.)
Step4: As a sanity check, the model data are compared with the computed parameters
Step5: The model results show a reasonable agreement with the measured $\epsilon_{r,eff}$ and insertion loss values.
Step6: Connector delay and loss estimation
Step7: The phase of the model shows good agreement, while the insertion loss shows reasonable agreement and is small in any case.
Step8: Final comparison
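The two quantities extracted from the measurements follow directly from the code above: the effective permittivity comes from the phase difference between the two lines, and the per-connector delay from a linear fit of the connector phase. In the code's notation ($c_0$ the speed of light, $\Delta L = 0.1$ m, and $z[0]$ the fitted slope of phi_conn versus frequency):
$$ \epsilon_{r,\mathrm{eff}}(f) = \left( \frac{\Delta\varphi(f)\, c_0}{2\pi f\, \Delta L} \right)^2, \qquad \tau_{\mathrm{conn}} = -\frac{1}{2}\cdot\frac{z[0]}{2\pi}, $$
where the factor $1/2$ splits the total fitted delay equally between the two identical connectors.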
|
12,258
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from scipy.optimize import linprog
import quantecon.game_theory as gt
U = np.array(
[[0, -1, 1],
[1, 0, -1],
[-1, 1, 0]]
)
p0 = gt.Player(U)
p1 = gt.Player(-U.T)
g = gt.NormalFormGame((p0, p1))
print(g)
gt.lemke_howson(g)
gt.support_enumeration(g)
m, n = U.shape
c = np.zeros(n+1)
c[-1] = 1
c
A_ub = np.hstack((U, np.full((m, 1), -1)))
A_ub
b_ub = np.zeros(m)
b_ub
A_eq = np.ones((1, n+1))
A_eq[0, -1] = 0
A_eq
b_eq = np.ones(1)
b_eq
bounds = [(0, None)] * n + [(None, None)]
bounds
res_p = linprog(c, A_ub, b_ub, A_eq, b_eq, bounds)
res_p
res_p.x[:-1]
res_p.x[-1]
c = np.zeros(m+1)
c[-1] = -1
A_ub = np.hstack((-U.T, np.full((n, 1), 1)))
b_ub = np.zeros(n)
A_eq = np.ones((1, m+1))
A_eq[0, -1] = 0
b_eq = np.ones(1)
bounds = [(0, None)] * m + [(None, None)]
res_d = linprog(c, A_ub, b_ub, A_eq, b_eq, bounds)
res_d
res_d.x[:-1]
res_d.x[-1]
res_p.x[-1] - res_d.x[-1]
def solve_zerosum_lemke_howson(U):
g = gt.NormalFormGame((gt.Player(U), gt.Player(-U.T)))
NE = gt.lemke_howson(g)
return NE
def solve_zerosum_linprog(U, method='revised simplex'):
U = np.asarray(U)
m, n = U.shape
# Primal problem
c = np.zeros(n+1)
c[-1] = 1
A_ub = np.hstack((U, np.full((m, 1), -1)))
b_ub = np.zeros(m)
A_eq = np.ones((1, n+1))
A_eq[0, -1] = 0
b_eq = np.ones(1)
bounds = [(0, None)] * n + [(None, None)]
res_p = linprog(c, A_ub, b_ub, A_eq, b_eq, bounds, method=method)
# Dual problem
c = np.zeros(m+1)
c[-1] = -1
A_ub = np.hstack((-U.T, np.full((n, 1), 1)))
b_ub = np.zeros(n)
A_eq = np.ones((1, m+1))
A_eq[0, -1] = 0
b_eq = np.ones(1)
bounds = [(0, None)] * m + [(None, None)]
res_d = linprog(c, A_ub, b_ub, A_eq, b_eq, bounds, method=method)
NE = (res_d.x[:-1], res_p.x[:-1])
return NE
solve_zerosum_lemke_howson(U)
solve_zerosum_linprog(U)
m, n = 4, 3
U = np.random.randn(m, n)
U
solve_zerosum_lemke_howson(U)
solve_zerosum_linprog(U)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's compute an example using the rock-paper-scissors game.
Step2: Find the Nash equilibrium with quantecon.game_theory
Step3: Note that player 1's matrix is the transpose (.T) of -U.
Step4: Find the Nash equilibrium by solving a linear program with scipy.optimize.linprog
Step5: Define each input
Step6: Pass the inputs to scipy.optimize.linprog and solve
Step7: The result
Step8: Player 1's equilibrium strategy
Step9: The value of the game
Step10: Since scipy.optimize.linprog does not seem to return the dual solution, formulate and solve the dual problem as well
Step11: Player 0's equilibrium strategy
Step12: The value of the game
Step13: There is a slight numerical error
Step14: Let's wrap the procedure up as a function
Step15: Generate random matrices and compute the equilibrium
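The linear programs solved above can be written compactly. For the $m \times n$ payoff matrix $U$, the code's primal problem (over the column player's mixed strategy $y$ and the scalar $v$) and dual problem (over the row player's $x$ and scalar $u$) are
$$ \min_{y,\,v}\ v \quad \text{s.t.} \quad Uy \le v\mathbf{1}, \;\; \mathbf{1}^\top y = 1, \;\; y \ge 0, $$
$$ \max_{x,\,u}\ u \quad \text{s.t.} \quad U^\top x \ge u\mathbf{1}, \;\; \mathbf{1}^\top x = 1, \;\; x \ge 0. $$
LP duality gives $v^* = u^*$, the value of the game; the small discrepancy printed above (res_p.x[-1] - res_d.x[-1]) is solver round-off.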
|
12,259
|
<ASSISTANT_TASK:>
Python Code:
class Directions:
NORTH = 'North'
SOUTH = 'South'
EAST = 'East'
WEST = 'West'
STOP = 'Stop'
def P_1(eps, E_N, E_S):
'''
Calculates: P(X=x|E_{N}=e_{N},E_{S}=e_{S})
Arguments: E_N, E_S \in {True,False}
0 <= eps <= 1 (epsilon)
Returns: dictionary of type int x int --> float
'''
pd = {(x,y):0 for x in range(1,7) for y in range(1,6)}
return pd
P_1(0, True, False)
def P_2(eps, E_N, E_S):
'''
    Calculates: P(E_{E}=e_{E}|E_{N}=e_{N},E_{S}=e_{S})
Arguments: E_N, E_S \in {True,False}
0 <= eps <= 1
Returns: dictionary of type (False, True) --> float
'''
pd = {True:0, False:0}
return pd
P_2(0.2, True, False)
def P_3(eps, S):
'''
Calculates: P(S), where S\subseteq\{e_{N},e_{S},e_{E},e_{W}\}
Arguments: S a dictionary with keywords in Directions and values in
{True,False}
0 <= eps <= 1
Returns: float value representing P(S)
'''
return 0
P_3(0.3, {Directions.EAST: True, Directions.SOUTH: False})
def P_4(eps, E_1, E_3):
'''
Calculates: P(X_{4}=x_{4}|E_{1}=e_{1},E_{3}=e_{3})
Arguments: E_1, E_3 dictionaries of type Directions --> {True,False}
0 <= eps <= 1
Returns: dictionary of type int x int --> float
'''
pd = {(x,y):0 for x in range(1,7) for y in range(1,6)}
return pd
E_1 = {Directions.NORTH: True, Directions.SOUTH: False, Directions.EAST: True, Directions.WEST: False}
E_3 = {Directions.NORTH: True, Directions.SOUTH: True, Directions.EAST: False, Directions.WEST: False}
P_4(0.1, E_1, E_3)
def P_5(eps, E_2, E_3, E_4):
'''
Calculates: P(X_{2}=x_{2}|E_{2}=e_{2},E_{3}=e_{3},E_{4}=e_{4})
Arguments: E_2, E_3, E_4 dictionaries of type Directions --> {True,False}
0 <= eps <= 1
Returns: dictionary of type int x int --> float
'''
pd = {(x,y):0 for x in range(1,7) for y in range(1,6)}
return pd
E_2 = {Directions.NORTH: True, Directions.SOUTH: False, Directions.EAST: True, Directions.WEST: False}
E_3 = {Directions.NORTH: True, Directions.SOUTH: True, Directions.EAST: False, Directions.WEST: False}
E_4 = {Directions.NORTH: True, Directions.SOUTH: False, Directions.EAST: True, Directions.WEST: False}
P_5(0.1, E_2, E_3, E_4)
def P_6(eps, E_1, E_2, E_3):
'''
Calculates: P(E_{4}=e_{4}|E_{1}=e_{1},E_{2}=e_{2},E_{3}=e_{3})
Arguments: E_1, E_2, E_3 dictionaries of type Directions --> {True,False}
0 <= eps <= 1
Returns: dictionary of type {False, True}^4 --> float
'''
pd = {(n, s, e, w): 0 for n in [False, True] for s in [False, True]
for e in [False, True] for w in [False, True]}
return pd
E_1 = {Directions.NORTH: True, Directions.SOUTH: False, Directions.EAST: True, Directions.WEST: False}
E_2 = {Directions.NORTH: True, Directions.SOUTH: False, Directions.EAST: True, Directions.WEST: False}
E_3 = {Directions.NORTH: True, Directions.SOUTH: True, Directions.EAST: False, Directions.WEST: False}
P_6(0.1, E_1, E_2, E_3)
def P_7(eps, E_N, E_S):
'''
    Calculates: P(E_{E_{2}}=e_{E_{2}}|E_{N_{2}}=e_{N_{2}},E_{S_{2}}=e_{S_{2}})
Arguments: E_N_2, E_S_2 \in {True,False}
0 <= eps <= 1
Returns: dictionary of type (False, True) --> float
'''
pd = {True:0, False:0}
return pd
P_7(0.1, True, False)
def approx_equal(val1, val2):
return abs(val1-val2) <= 0.00001
def test_P_1():
pd = P_1(0.0, True, True)
assert approx_equal(pd[(2, 1)], 0.1111111111111111)
assert approx_equal(pd[(3, 1)], 0)
pd = P_1(0.3, True, False)
assert approx_equal(pd[(2, 1)], 0.03804347826086956)
assert approx_equal(pd[(3, 1)], 0.016304347826086956)
def test_P_2():
pd = P_2(0.0, True, True)
assert approx_equal(pd[False], 1.0)
pd = P_2(0.3, True, False)
assert approx_equal(pd[False], 0.5514492753623188)
def test_P_3():
pd = P_3(0.1, {Directions.EAST: True, Directions.WEST: True})
assert approx_equal(pd, 0.2299999999999999)
pd = P_3(0.1, {Directions.EAST: True})
assert approx_equal(pd, 0.3999999999999999)
pd = P_3(0.2, {Directions.EAST: False, Directions.WEST: True, Directions.SOUTH: True})
assert approx_equal(pd, 0.0980000000000000)
def test_P_4():
E_1 = {Directions.NORTH: False, Directions.SOUTH: False, Directions.EAST: True, Directions.WEST: True}
E_3 = {Directions.NORTH: False, Directions.SOUTH: False, Directions.EAST: True, Directions.WEST: True}
pd = P_4(0.0, E_1, E_3)
assert approx_equal(pd[(6, 3)], 0.1842105263157895)
assert approx_equal(pd[(4, 3)], 0.0)
pd = P_4(0.2, E_1, E_3)
assert approx_equal(pd[(6, 3)], 0.17777843398830864)
assert approx_equal(pd[(4, 3)], 0.000578430282649176)
E_1 = {Directions.NORTH: True, Directions.SOUTH: False, Directions.EAST: True, Directions.WEST: False}
E_3 = {Directions.NORTH: False, Directions.SOUTH: False, Directions.EAST: True, Directions.WEST: False}
pd = P_4(0.0, E_1, E_3)
assert approx_equal(pd[(6, 2)], 0.3333333333333333)
assert approx_equal(pd[(4, 3)], 0.0)
def test_P_5():
E_2 = {Directions.NORTH: True, Directions.SOUTH: True, Directions.EAST: False, Directions.WEST: False}
E_3 = {Directions.NORTH: True, Directions.SOUTH: False, Directions.EAST: False, Directions.WEST: False}
E_4 = {Directions.NORTH: True, Directions.SOUTH: True, Directions.EAST: False, Directions.WEST: False}
pd = P_5(0, E_2, E_3, E_4)
assert approx_equal(pd[(2, 5)], 0.5)
assert approx_equal(pd[(4, 3)], 0.0)
pd = P_5(0.3, E_2, E_3, E_4)
assert approx_equal(pd[(2, 5)], 0.1739661245168835)
assert approx_equal(pd[(4, 3)], 0.0787991740545979)
def test_P_6():
E_1 = {Directions.NORTH: True, Directions.SOUTH: True, Directions.EAST: False, Directions.WEST: False}
E_2 = {Directions.NORTH: True, Directions.SOUTH: True, Directions.EAST: False, Directions.WEST: False}
E_3 = {Directions.NORTH: True, Directions.SOUTH: False, Directions.EAST: True, Directions.WEST: False}
pd = P_6(0.2, E_1, E_2, E_3)
assert approx_equal(pd[(False, False, True, True)], 0.15696739914079486)
assert approx_equal(pd[(True, True, False, False)], 0.20610191744824477)
pd = P_6(0., E_1, E_2, E_3)
assert approx_equal(pd[(False, False, True, True)], 0.5)
assert approx_equal(pd[(False, True, False, False)], 0.0)
def test_P_7():
pd = P_7(0.0, True, False)
assert approx_equal(pd[False], 0.7142857142857143)
pd = P_7(0.3, False, False)
assert approx_equal(pd[False], 0.5023529411764706)
test_P_1()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: a. Bayes' net for instant perception and position.
Step2: ii. $P(E_{E}=e_{E}|E_{N}=e_{N},E_{S}=e_{S})$
Step3: iii. $P(S)$, where $S\subseteq{e_{N},e_{S},e_{E},e_{W}}$
Step4: c. Bayes' net for dynamic perception and position.
Step5: ii. $P(X_{2}=x_{2}|E_{2}=e_{2},E_{3}=e_{3},E_{4}=e_{4})$
Step6: iii. $P(E_{4}=e_{4}|E_{1}=e_{1},E_{2}=e_{2},E_{3}=e_{3})$
Step7: iv. $P(E_{E_{2}}=e_{E_{2}}|E_{N_{2}}=e_{N_{2}},E_{S_{2}}=e_{S_{2}})$
Step8: Test functions
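All of P_1 through P_7 above reduce to Bayes' rule plus the conditional-independence structure of the net. As a hedged sketch (the exact network and sensor model are specified earlier in the assignment), the posterior that P_1 computes factorizes as
$$ P(X=x \mid e_N, e_S) = \frac{P(x)\,P(e_N \mid x)\,P(e_S \mid x)}{\sum_{x'} P(x')\,P(e_N \mid x')\,P(e_S \mid x')}, $$
assuming the sensor readings are conditionally independent given the position $X$.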
|
12,260
|
<ASSISTANT_TASK:>
Python Code:
from explauto.environment import environments
environments.keys()
from explauto.environment import available_configurations
available_configurations('simple_arm').keys()
available_configurations('simple_arm')['mid_dimensional']
available_configurations('pendulum').keys()
from explauto import Environment
environment = Environment.from_configuration('simple_arm', 'mid_dimensional')
from numpy import pi
m = [-pi/6., pi/3., pi/4., pi/5., 0., pi/3., pi/6.]
environment.compute_sensori_effect(m)
# Create the axes for plotting::
%pylab inline
ax = axes()
# plot the arm:
environment.plot_arm(ax, m)
motor_configurations = environment.random_motors(n=10)
# Create the axes for plotting::
%pylab inline
ax = axes()
# Plotting 10 random motor configurations:
for m in motor_configurations:
environment.plot_arm(ax, m)
environment = Environment.from_configuration('pendulum', 'default')
%pylab
ax = axes()
# Sequence of torques at each time step:
U = [0.25] * 15 + [-0.25] * 15 + [0.25] * 19
# reset to lower position:
environment.reset()
# apply torque and plot:
for u in U:
ax.cla()
environment.apply_torque(u)
environment.plot_current_state(ax)
draw()
environment.compute_sensori_effect(environment.random_motors())
environment = Environment.from_configuration('simple_arm', 'mid_dimensional')
from explauto.sensorimotor_model import sensorimotor_models
sensorimotor_models.keys()
from explauto.sensorimotor_model import available_configurations
available_configurations('nearest_neighbor')
from explauto import SensorimotorModel
sm_model = SensorimotorModel.from_configuration(environment.conf, 'nearest_neighbor', 'exact')
for m in environment.random_motors(n=1000):
# compute the sensori effect s of the motor command m through the environment:
s = environment.compute_sensori_effect(m)
# update the model according to this experience:
sm_model.update(m, s)
# random motor command:
m = environment.random_motors(n=1)[0]
# predicted sensory effect:
s_pred = sm_model.forward_prediction(m)
print 'random motor command: ', m
print 'predicted effect: ', s_pred
%pylab inline
ax = axes()
environment.plot_arm(ax, m)
ax.plot(*s_pred, marker='o', color='red')
s_g = [0.7, 0.5]
m = sm_model.inverse_prediction(s_g)
print 'Inferred motor command to reach the position ', s_g, ': ', m
ax = axes()
environment.plot_arm(ax, m)
ax.plot(*s_g, marker='o', color='red')
ax = axes()
# Define the line and plot it:
x = 0.8
y_a = 0.5
y_b = -0.5
ax.plot([x, x], [y_a, y_b], color='red')
# for 10 points equidistantly spaced on the line, perform inverse prediction and plot:
for y in linspace(-0.5, 0.5, 10):
m = sm_model.inverse_prediction([x, y])
environment.plot_arm(ax, m)
sm_model = SensorimotorModel.from_configuration(environment.conf, 'LWLR-BFGS', 'default')
from explauto.interest_model import interest_models, available_configurations
interest_models.keys()
available_configurations('discretized_progress')
from explauto import InterestModel
im_model = InterestModel.from_configuration(environment.conf, environment.conf.m_dims, 'random')
# re-instantiate the sensorimotor model (to forget what was learnt in the previous section)
sm_model = SensorimotorModel.from_configuration(environment.conf, 'nearest_neighbor', 'default')
# run the simulation loop
for _ in range(100):
# sample a random motor command using the interest model:
m = im_model.sample()
# execute this command and observe the corresponding sensory effect:
s = environment.compute_sensori_effect(m)
# update the sensorimotor model:
sm_model.update(m, s)
    im_model.update(hstack((m, s)), hstack((m, s)))  # motor babbling has no goal s_g; pass the observed (m, s)
# plot the observed sensory effect:
plot(s[0], s[1], 'ok')
# Instantiate a random goal interest model:
im_model = InterestModel.from_configuration(environment.conf, environment.conf.s_dims, 'random')
for _ in range(100):
# sample a random sensory goal using the interest model:
s_g = im_model.sample()
# infer a motor command to reach that goal using the sensorimotor model:
m = sm_model.inverse_prediction(s_g)
# execute this command and observe the corresponding sensory effect:
s = environment.compute_sensori_effect(m)
# update the sensorimotor model:
sm_model.update(m, s)
# plot the observed sensory effect:
plot(s[0], s[1], 'ok')
# Instantiate an active goal interest model:
im_model = InterestModel.from_configuration(environment.conf, environment.conf.s_dims, 'discretized_progress')
for _ in range(100):
# sample a sensory goal maximizing learning progress using the interest model:
s_g = im_model.sample()
# infer a motor command to reach that goal using the sensorimotor model:
m = sm_model.inverse_prediction(s_g)
# execute this command and observe the corresponding sensory effect:
s = environment.compute_sensori_effect(m)
# update the sensorimotor model:
sm_model.update(m, s)
# update the interest model:
im_model.update(hstack((m, s)), hstack((m, s_g)))
# plot the observed sensory effect:
plot(s[0], s[1], 'ok')
from explauto import Agent
sm_model = SensorimotorModel.from_configuration(environment.conf, 'nearest_neighbor', 'default')
im_model = InterestModel.from_configuration(environment.conf, environment.conf.m_dims, 'random')
agent = Agent(environment.conf, sm_model, im_model)
m = agent.produce()
print m
s = environment.update(m)
agent.perceive(s)
for _ in range(100):
m = agent.produce()
s = environment.update(m)
agent.perceive(s)
sm_model = SensorimotorModel.from_configuration(environment.conf, 'nearest_neighbor', 'default')
im_model = InterestModel.from_configuration(environment.conf, environment.conf.s_dims, 'discretized_progress')
agent = Agent(environment.conf, sm_model, im_model)
for _ in range(100):
m = agent.produce()
s = environment.update(m)
agent.perceive(s)
from explauto import Experiment
expe = Experiment(environment, agent)
from explauto.experiment import make_settings
random_goal_babbling = make_settings(environment='simple_arm', environment_config = 'mid_dimensional',
babbling_mode='goal',
interest_model='random',
sensorimotor_model='nearest_neighbor')
expe = Experiment.from_settings(random_goal_babbling)
expe.evaluate_at([100, 200, 400, 1000], random_goal_babbling.default_testcases)
expe.run()
%pylab inline
ax = axes()
title(('Random goal babbling'))
expe.log.scatter_plot(ax, (('sensori', [0, 1]),))
expe.log.scatter_plot(ax, (('choice', [0, 1]),), marker='.', color='red')
#expe.log.scatter_plot(ax, (('testcases', [0, 1]),), marker='o', color='green')
legend(['reached hand positions', 'chosen goals'])
ax = axes()
expe.log.plot_learning_curve(ax)
from explauto import ExperimentPool
xps = ExperimentPool.from_settings_product(environments=[('simple_arm', 'high_dim_high_s_range')],
babblings=['goal'],
interest_models=[('random', 'default'), ('discretized_progress', 'default')],
sensorimotor_models=[('nearest_neighbor', 'default')],
evaluate_at=[200, 500, 900, 1400],
same_testcases=True)
xps.run()
ax = axes()
for log in xps.logs:
log.plot_learning_curve(ax)
legend([s.interest_model for s in xps.settings])
%pylab
clf()
last_t = 0
for t in linspace(100, xps.logs[0].eval_at[-1], 40):
t = int(t)
for i, (config, log) in enumerate(zip(xps.settings, xps.logs)):
ax = subplot(1, 2, i+1)
log.scatter_plot(ax, (('sensori', [0, 1]),), range(0, t), marker='.', markersize=0.3, color = 'white')
log.density_plot(ax, (('choice', [0, 1]),), range(last_t, t))
title(config.interest_model + ' ' + config.babbling_mode)
draw()
last_t = t
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: According to your installation, you will see at least two available environments
Step2: For example, the 'mid_dimensional' configuration corresponds to
Step3: One can use this method with every registered environment. For example the available configurations for the pendulum are
Step4: Let's instantiate a mid-dimensional simple arm
Step5: Each particular environment has to implement its own compute_sensori_effect method, which takes as argument a motor command vector $m$ (here the position of the joints, 7-dimensional). It returns the corresponding sensory effect vector $s$ (here the coordinate of the hand, $2$-dimensional).
Step6: Environments can implement specific methods for, e.g., drawing
Step7: The base of the arm is fixed at (0, 0) (circle). The first angle position m[0] corresponds to the angle between a horizontal line and the segment attached to the base, anticlockwise. Each following angle position is measured with respect to its previous segment.
Step8: Let's for example plot 10 random arm configurations
Step9: Dynamical environments are also available, though their integration with the rest of the library is not yet completely clear (to discuss later). E.g., a circular pendulum
Step10: The compute_sensori_effect method is also defined (using a motor primitive)
Step11: But let's continue this tutorial using a mid-dimensional simple arm
Step12: Learning sensorimotor models
Step13: Here we will use the 'nearest neighbor' model. This sensorimotor model simply stores sensorimotor experience, i.e., $(m, s)$ pairs where $m$ is a motor command (here arm joint positions) and $s$ the corresponding sensory effect (here end-effector positions). When asked for a forward prediction for a given motor command $m$, it returns the associated sensory effect $s$ of the nearest neighbor of $m$ in the stored sensorimotor experience. When asked for an inverse prediction to reach a sensory goal $s$, it returns the associated motor command $m$ of the nearest neighbor of $s$ in the stored sensorimotor experience, possibly perturbed with a bit of Gaussian noise. (A minimal illustrative sketch of this scheme is given at the end of these steps.)
Step14: We will use the 'exact' configuration, which performs forward and inverse prediction as explained above, without any noise added (i.e., it just looks for the nearest neighbor).
Step15: Note that in addition to the names of the model and its configuration, one also has to pass environment.conf. This is a Configuration object which is instantiated during the environment creation and provides information about the motor and sensorimotor ranges used by the environment. It is useful for the sensorimotor model to be properly configured. When using the 'default' configuration for example, the added noise when performing inverse prediction depends on the motor ranges. Passing environment.conf thus allows defining sensorimotor model configurations independently of particular environment settings.
Step16: Note that sensorimotor model training in Explauto is an iterative process. Models incorporate new sensorimotor experience on the fly instead of using batch training. This is a requirement for autonomous exploration, where the internal model has to be refined online.
Step17: and compare the predicted effect with the real effect observed from executing $m$ through the environment
Step18: We observe that the predicted end-effector position is quite close to the observed position when executing the motor command. Using the 'NN' model, it simply corresponds to the sensory effect of the nearest neighbor of $m$ in the stored sensorimotor experience.
Step19: We can check if the inferred motor command is actually appropriate to reach the goal $s_g$
Step20: We observe that the inferred motor command results in an end-effector position which is quite close to the goal. Using the 'exact' configuration of the 'nearest_neighbor' model, it is simply the motor command which resulted in the sensory effect which is the closest to $s_g$ in the stored experience.
Step21: Using another sensorimotor model in Explauto simply consists of changing the model name and configuration above. For example, you can try to execute the exact same code, just replacing the model instantiation by
Step22: Motor and goal babbling using interest models
Step23: and the available configurations of a given model by
Step24: Using an environment, a sensorimotor and an interest model, one can run a motor babbling strategy by
Step25: Then running the following simulation loop and (optionally) plotting the reached sensory effects
Step26: (The plots are quite ugly here; we will present Explauto visualization tools in the following.)
Step27: We observe that goal babbling allows a more uniform covering of the sensory space.
Step28: The reached points obtained above do not cover the sensory space well. This is due to the fact that we did not re-initialize the sensorimotor model (which was therefore already trained), in order to avoid some bootstrapping issues. The next section shows how to encapsulate a sensorimotor and an interest model into an agent to, among other things, take care of those bootstrapping issues.
Step29: An agent is provided with two methods. One for producing a motor command
Step30: The produce() method calls the sample() method of the interest model, which returns either a motor command or a sensory goal according to the interest space (i.e. the type of babbling). Then it uses the sensorimotor model to complete the obtained value into a full sensorimotor vector (using forward prediction in case of motor babbling and inverse prediction in case of goal babbling). Finally it returns the motor part of this full sensorimotor vector. Agents also take care of model bootstrapping issues.
Step31: Hence the entire simulation loop can now be rewritten as
Step32: This loop is valid whatever the exploration strategy involved. The corresponding formal framework is defined in
Step33: and run it using the exact same loop
Step34: Of course we still lack a way to visualize the results of our simulations here; this is why we introduce Explauto's Experiment in the next section.
Step35: An experiment offers the management of the simulation loop with evaluation, logging and plotting capabilities. Instead of separately constructing the environment and the agent (containing the sensorimotor and the interest models), one can simply use
Step36: This is the compact way to construct the environment (here a mid-dimensional 'simple_arm'), the sensorimotor model (here, 'NN') and the interest model (here curiosity-driven goal babbling) and encapsulate them into an experiment.
Step37: Now let's run the experiment
Step38: This executes the same simulation loop as above, inserting an evaluation phase at each specified time step and logging the flow of interest model choices, sensorimotor model inferences and sensorimotor observations. This allows to, e.g., visualize the chosen goals and reached hand positions during the experiment using the scatter_plot method
Step39: or to visualize the learning curve
Step40: Parallel comparison of exploration strategies
Step41: running it
Step42: comparing learning curves
Step43: or visualize the iterative choice of goals and the reached effects
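As an illustration of what the 'nearest neighbor' sensorimotor model described in Step13 does, here is a minimal numpy sketch. This is not Explauto's actual implementation (which uses an efficient tree structure and optional noise); it only shows the lookup logic.
import numpy as np
class NNSketch:
    def __init__(self):
        self.M, self.S = [], []  # stored motor commands and sensory effects
    def update(self, m, s):
        self.M.append(np.asarray(m)); self.S.append(np.asarray(s))
    def forward_prediction(self, m):
        i = np.argmin([np.linalg.norm(np.asarray(m) - mi) for mi in self.M])
        return self.S[i]   # effect of the nearest stored command
    def inverse_prediction(self, s_g):
        i = np.argmin([np.linalg.norm(np.asarray(s_g) - si) for si in self.S])
        return self.M[i]   # command whose stored effect is nearest to the goal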
|
12,261
|
<ASSISTANT_TASK:>
Python Code:
from pynq.overlays.base import BaseOverlay
base = BaseOverlay("base.bit")
from time import sleep
from pynq.lib import Pmod_Timer
pt = Pmod_Timer(base.PMODA,0)
pt.stop()
# Generate a 10 ns pulse every period*10 ns
period=100
pt.generate_pulse(period)
# Sleep for 4 seconds and stop the timer
sleep(4)
pt.stop()
# Generate 3 pulses at every 1 us
count=3
period=100
pt.generate_pulse(period, count)
# Generate pulses per 1 us forever
count=0
period=100
pt.generate_pulse(period, count)
pt.stop()
# Detect any event within 10 us
period=1000
pt.event_detected(period)
# Detect any event within 20 ms
period=200000
pt.event_detected(period)
# Count number of events within 10 us
period=1000
pt.event_count(period)
period = pt.get_period_ns()
print("The measured waveform frequency: {} Hz".format(1e9/period))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Instantiate Pmod_Timer class. The method stop() will stop both timer sub-modules.
Step2: 2. Generate pulses for a certain period of time
Step3: 3. Generate a certain number of pulses
Step4: Now generate the pulses at every 1 $\mu$s interval.
Step5: Stop the generation.
Step6: 4. Determine if an event has occurred at the input
Step7: Now run the waveform generation and then run the next cell. Set the waveform generator settings as shown below
Step8: 5. Count the number of events that occurred during a desired period
Step9: 6. Measure period between two rising edges
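The period arguments above are in ticks of the Pmod clock; the comments in the code ("a 10 ns pulse every period*10 ns") imply a 100 MHz clock. Under that assumption, a small helper to pick the period value for a target pulse frequency could look like this (a sketch, not part of the Pmod_Timer API):
PMOD_CLK_HZ = 100_000_000  # 100 MHz, i.e. a 10 ns tick, as implied by the comments above
def period_ticks(target_hz):
    # number of 10 ns ticks per output period
    return int(PMOD_CLK_HZ / target_hz)
# e.g. period_ticks(1_000_000) == 100, the value used above for pulses every 1 us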
|
12,262
|
<ASSISTANT_TASK:>
Python Code:
%pylab notebook
%precision %.4g
V = 240 # [V]
Z1 = 10.0 * exp(1j* 30/180*pi)
Z2 = 10.0 * exp(1j* 45/180*pi)
Z3 = 10.0 * exp(1j*-90/180*pi)
I1 = V/Z1
I2 = V/Z2
I1_angle = arctan(I1.imag/I1.real)
I2_angle = arctan(I2.imag/I2.real)
print('''I1 = {:.1f} A ∠{:.1f}°
I2 = {:.1f} A ∠{:.1f}°'''.format(
abs(I1), I1_angle/pi*180,
abs(I2), I2_angle/pi*180))
I = I1 + I2
I_angle = arctan(I.imag/I.real)
print('I = {:.1f} A ∠{:.1f}°'.format(
abs(I), I_angle/pi*180))
print('==================')
PF = cos(-I_angle)
PF
So = V*conj(I) # I use index "o" for open switch
So
print('''
So = {:>7.1f} VA
Po = {:>7.1f} W
Qo = {:>7.1f} var
================'''.format(abs(So), So.real, So.imag))
S1o = V*conj(I1)
S1o
S2o = V*conj(I2)
S2o
print('''
S1o = {:>6.1f} VA
P1o = {:>6.1f} W
Q1o = {:>6.1f} var
----------------
S2o = {:>6.1f} VA
P2o = {:>6.1f} W
Q2o = {:>6.1f} var
================'''.format(abs(S1o), S1o.real, S1o.imag,
abs(S2o), S2o.real, S2o.imag))
I3 = V/Z3
I3_angle = arctan(I3.imag/I3.real)
print('I3 = {:.1f} A ∠{:.1f}°'.format(abs(I3), I3_angle/pi*180))
I = I1 + I2 + I3
I_angle = arctan(I.imag/I.real)
print('I = {:.1f} A ∠{:.1f}°'.format(abs(I), I_angle/pi*180))
print('=================')
PF = cos(-I_angle)
PF
Sc = V*conj(I) # I use index "c" for closed switch
Sc
print('''
Sc = {:.1f} VA
Pc = {:.1f} W
Qc = {:.1f} var
==============='''.format(abs(Sc), Sc.real, Sc.imag))
S1c = V*conj(I1)
S1c
S2c = V*conj(I2)
S2c
S3c = V*conj(I3)
S3c
print('''
S1c = {:>7.1f} VA
P1c = {:>7.1f} W
Q1c = {:>7.1f} var
-----------------
S2c = {:>7.1f} VA
P2c = {:>7.1f} W
Q2c = {:>7.1f} var
-----------------
S3c = {:>7.1f} VA
P3c = {:>7.1f} W
Q3c = {:>7.1f} var
================='''.format(abs(S1c), S1c.real, S1c.imag,
abs(S2c), S2c.real, S2c.imag,
abs(S3c), S3c.real, S3c.imag))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Description
Step2: Answer the following questions about this power system.
Step3: Therefore the total current from the source is $\vec{I} = \vec{I}_1 + \vec{I}_2$
Step4: The power factor supplied by the source is
Step5: lagging (because the current lags behind the voltage).
Step6: Let's pretty-print that
Step7: (b)
Step8: Let's pretty-print that
Step9: As expected, the real and reactive power supplied by the source are equal to the sum of the
Step10: Therefore the total current from the source is $\vec{I} = \vec{I}_1 + \vec{I}_2 + \vec{I}_3$
Step11: The power factor supplied by the source is
Step12: lagging (because the current lags behind the voltage).
Step13: Let's pretty-print that
Step14: (d)
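The calculations in all parts above rest on the complex-power relations used in the code (V*conj(I) and the cosine of the current angle):
$$ S = V\,I^{*} = P + jQ, \qquad P = \operatorname{Re} S, \quad Q = \operatorname{Im} S, \qquad \mathrm{PF} = \cos(\theta_V - \theta_I), $$
with $\theta_V = 0$ here since $V$ is taken as the reference phasor, so PF reduces to $\cos(-\theta_I)$ as computed above.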
|
12,263
|
<ASSISTANT_TASK:>
Python Code:
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-east1' #'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
%%bash
ls *.csv
import shutil
import numpy as np
import tensorflow as tf
print(tf.__version__)
# Determine CSV, label, and key columns
CSV_COLUMNS = 'weight_pounds,is_male,mother_age,plurality,gestation_weeks,key'.split(',')
LABEL_COLUMN = 'weight_pounds'
KEY_COLUMN = 'key'
# Set default values for each CSV column. Treat is_male and plurality as strings.
DEFAULTS = [[0.0], ['null'], [0.0], ['null'], [0.0], ['nokey']]
def features_and_labels(row_data):
for unwanted_col in ['key']:
row_data.pop(unwanted_col)
label = row_data.pop(LABEL_COLUMN)
return row_data, label # features, label
# load the training data
def load_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):
dataset = (tf.data.experimental.make_csv_dataset(pattern, batch_size, CSV_COLUMNS, DEFAULTS)
.map(features_and_labels) # features, label
)
if mode == tf.estimator.ModeKeys.TRAIN:
dataset = dataset.shuffle(1000).repeat()
dataset = dataset.prefetch(1) # take advantage of multi-threading; 1=AUTOTUNE
return dataset
## Build a simple Keras DNN using its Functional API
def rmse(y_true, y_pred):
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
# Helper function to handle categorical columns
def categorical_fc(name, values):
return tf.feature_column.indicator_column(
tf.feature_column.categorical_column_with_vocabulary_list(name, values))
def build_dnn_model():
# input layer
inputs = {
colname : tf.keras.layers.Input(name=colname, shape=(), dtype='float32')
for colname in ['mother_age', 'gestation_weeks']
}
inputs.update({
colname : tf.keras.layers.Input(name=colname, shape=(), dtype='string')
for colname in ['is_male', 'plurality']
})
# feature columns from inputs
feature_columns = {
colname : tf.feature_column.numeric_column(colname)
for colname in ['mother_age', 'gestation_weeks']
}
if False:
# Until TF-serving supports 2.0, so as to get servable model
feature_columns['is_male'] = categorical_fc('is_male', ['True', 'False', 'Unknown'])
feature_columns['plurality'] = categorical_fc('plurality',
['Single(1)', 'Twins(2)', 'Triplets(3)',
'Quadruplets(4)', 'Quintuplets(5)','Multiple(2+)'])
# the constructor for DenseFeatures takes a list of numeric columns
# The Functional API in Keras requires that you specify: LayerConstructor()(inputs)
dnn_inputs = tf.keras.layers.DenseFeatures(feature_columns.values())(inputs)
# two hidden layers of [64, 32] just in like the BQML DNN
h1 = tf.keras.layers.Dense(64, activation='relu', name='h1')(dnn_inputs)
h2 = tf.keras.layers.Dense(32, activation='relu', name='h2')(h1)
# final output is a linear activation because this is regression
output = tf.keras.layers.Dense(1, activation='linear', name='babyweight')(h2)
model = tf.keras.models.Model(inputs, output)
model.compile(optimizer='adam', loss='mse', metrics=[rmse, 'mse'])
return model
print("Here is our DNN architecture so far:\n")
# note how to use strategy to do distributed training
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
model = build_dnn_model()
print(model.summary())
tf.keras.utils.plot_model(model, 'dnn_model.png', show_shapes=False, rankdir='LR')
TRAIN_BATCH_SIZE = 32
NUM_TRAIN_EXAMPLES = 10000 * 5 # training dataset repeats, so it will wrap around
NUM_EVALS = 5 # how many times to evaluate
NUM_EVAL_EXAMPLES = 10000 # enough to get a reasonable sample, but not so much that it slows down
trainds = load_dataset('train*', TRAIN_BATCH_SIZE, tf.estimator.ModeKeys.TRAIN)
evalds = load_dataset('eval*', 1000, tf.estimator.ModeKeys.EVAL).take(NUM_EVAL_EXAMPLES//1000)
steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)
history = model.fit(trainds,
validation_data=evalds,
epochs=NUM_EVALS,
steps_per_epoch=steps_per_epoch)
# plot
import matplotlib.pyplot as plt
nrows = 1
ncols = 2
fig = plt.figure(figsize=(10, 5))
for idx, key in enumerate(['loss', 'rmse']):
ax = fig.add_subplot(nrows, ncols, idx+1)
plt.plot(history.history[key])
plt.plot(history.history['val_{}'.format(key)])
plt.title('model {}'.format(key))
plt.ylabel(key)
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left');
# Serving function that passes through keys
@tf.function(input_signature=[{
'is_male': tf.TensorSpec([None,], dtype=tf.string, name='is_male'),
'mother_age': tf.TensorSpec([None,], dtype=tf.float32, name='mother_age'),
'plurality': tf.TensorSpec([None,], dtype=tf.string, name='plurality'),
'gestation_weeks': tf.TensorSpec([None,], dtype=tf.float32, name='gestation_weeks'),
'key': tf.TensorSpec([None,], dtype=tf.string, name='key')
}])
def my_serve(inputs):
feats = inputs.copy()
key = feats.pop('key')
output = model(feats)
return {'key': key, 'babyweight': output}
import shutil, os, datetime
OUTPUT_DIR = './export/babyweight'
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
EXPORT_PATH = os.path.join(OUTPUT_DIR, datetime.datetime.now().strftime('%Y%m%d%H%M%S'))
tf.saved_model.save(model, EXPORT_PATH, signatures={'serving_default': my_serve})
print("Exported trained model to {}".format(EXPORT_PATH))
os.environ['EXPORT_PATH'] = EXPORT_PATH
!find $EXPORT_PATH
!saved_model_cli show --tag_set serve --signature_def serving_default --dir {EXPORT_PATH}
%%bash
MODEL_NAME="babyweight"
VERSION_NAME="dnn"
MODEL_LOCATION=$EXPORT_PATH
echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes"
if [[ $(gcloud ai-platform models list --format='value(name)' | grep $MODEL_NAME) ]]; then
echo "The model named $MODEL_NAME already exists."
else
# create model
echo "Creating $MODEL_NAME model now."
gcloud ai-platform models create --regions=$REGION $MODEL_NAME
fi
if [[ $(gcloud ai-platform versions list --model $MODEL_NAME --format='value(name)' | grep $VERSION_NAME) ]]; then
echo "Deleting already the existing model $MODEL_NAME:$VERSION_NAME ... "
gcloud ai-platform versions delete --model=$MODEL_NAME $VERSION_NAME
echo "Please run this cell again if you don't see a Creating message ... "
sleep 2
fi
# create model
echo "Creating $MODEL_NAME:$VERSION_NAME"
gcloud ai-platform versions create --model=$MODEL_NAME $VERSION_NAME --async \
--framework=tensorflow --python-version=3.5 --runtime-version=1.14 \
--origin=$MODEL_LOCATION --staging-bucket=gs://$BUCKET
%%writefile input.json
{"key": "b1", "is_male": "True", "mother_age": 26.0, "plurality": "Single(1)", "gestation_weeks": 39}
{"key": "b2", "is_male": "True", "mother_age": 33.0, "plurality": "Single(1)", "gestation_weeks": 41}
{"key": "g1", "is_male": "False", "mother_age": 26.0, "plurality": "Single(1)", "gestation_weeks": 39}
{"key": "g2", "is_male": "False", "mother_age": 33.0, "plurality": "Single(1)", "gestation_weeks": 41}
!gcloud ai-platform predict --model babyweight --json-instances input.json --version dnn
from oauth2client.client import GoogleCredentials
from googleapiclient import discovery
credentials = GoogleCredentials.get_application_default()
api = discovery.build('ml', 'v1', credentials=credentials)
project = PROJECT
model_name = 'babyweight'
version_name = 'dnn'
input_data = {
'instances': [
{
'key': 'b1',
'is_male': 'True',
'mother_age': 26.0,
'plurality': 'Single(1)',
'gestation_weeks': 39
},
{
'key': 'g1',
'is_male': 'False',
'mother_age': 29.0,
'plurality': 'Single(1)',
'gestation_weeks': 38
},
{
'key': 'b2',
'is_male': 'True',
'mother_age': 26.0,
'plurality': 'Triplets(3)',
'gestation_weeks': 39
},
{
'key': 'u1',
'is_male': 'Unknown',
'mother_age': 29.0,
'plurality': 'Multiple(2+)',
'gestation_weeks': 38
},
]
}
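# Each instance must supply every feature named in the serving signature, including the pass-through key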
parent = 'projects/%s/models/%s/versions/%s' % (project, model_name, version_name)
prediction = api.projects().predict(body=input_data, name=parent).execute()
print(prediction)
print(prediction['predictions'][0]['babyweight'][0])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create Keras model
Step2: Next, define the feature columns. mother_age and gestation_weeks should be numeric.
Step3: We can visualize the DNN using the Keras plot_model utility.
Step4: Train and evaluate
Step5: Visualize loss curve
Step6: Save the model
Step7: Monitor and experiment with training
Step8: Monitor the model creation at GCP Console > AI Platform, and once the model version dnn is created, proceed to the next cell.
Step9: main.py
|
12,264
|
<ASSISTANT_TASK:>
Python Code:
# Imports
import sys, math
sys.path.insert(0, '..') # path to ../common.py
import numpy as np
import matplotlib.pyplot as plt
from common import *
# READ PRESSURES AND FLOWS FROM FILE
qVals = np.loadtxt('Qgeneral')
pVals = np.loadtxt('Pgeneral')
print('Total Number of interfaces: %d' % (qVals.shape[1]))
print('Total Number of timesteps: %d' % (qVals.shape[0]))
# PLOT CONTENT
plt.figure(figsize=(20,5))
plt.subplot(1,2,1)
plt.plot(qVals)
plt.xlabel('Simulation Time Step',fontsize=14)
plt.ylabel('Interface flow',fontsize=14)
plt.tick_params(axis='both', which='major', labelsize=12)
plt.tick_params(axis='both', which='minor', labelsize=12)
plt.ylim([-15.0,5.0])
plt.subplot(1,2,2)
plt.plot(pVals/133.3)
plt.xlabel('Simulation Time Step',fontsize=14)
plt.ylabel('Interface Pressure',fontsize=14)
plt.tick_params(axis='both', which='major', labelsize=12)
plt.tick_params(axis='both', which='minor', labelsize=12)
plt.tight_layout()
plt.show()
# SELECT OUTLET INDEX
outletIdx1 = 0
outletIdx2 = 6
# PLOT CONTENT
plt.figure(figsize=(20,5))
plt.subplot(1,2,1)
plt.plot(pVals[:,outletIdx1],qVals[:,outletIdx1],'k-o',markersize=5.0)
plt.xlabel('Interface Pressure',fontsize=14)
plt.ylabel('Interface Flow',fontsize=14)
plt.tick_params(axis='both', which='major', labelsize=12)
plt.tick_params(axis='both', which='minor', labelsize=12)
plt.subplot(1,2,2)
plt.plot(pVals[:,outletIdx2],qVals[:,outletIdx2],'b-o',markersize=5.0)
plt.xlabel('Interface Pressure',fontsize=14)
plt.ylabel('Interface Flow',fontsize=14)
plt.tick_params(axis='both', which='major', labelsize=12)
plt.tick_params(axis='both', which='minor', labelsize=12)
plt.ylim([-0.1,0.25])
plt.tight_layout()
plt.show()
# REMOVE SAMPLES FROM THE INITIAL TRANSIENT
qVals = qVals[400:,:]
pVals = pVals[400:,:]
# PLOT CONTENT
plt.figure(figsize=(20,5))
plt.subplot(1,2,1)
plt.plot(qVals)
plt.xlabel('Simulation Time Step',fontsize=14)
plt.ylabel('Interface flow',fontsize=14)
plt.tick_params(axis='both', which='major', labelsize=12)
plt.tick_params(axis='both', which='minor', labelsize=12)
plt.ylim([-15.0,5.0])
plt.subplot(1,2,2)
plt.plot(pVals/133.3)
plt.xlabel('Simulation Time Step',fontsize=14)
plt.ylabel('Interface Pressure',fontsize=14)
plt.tick_params(axis='both', which='major', labelsize=12)
plt.tick_params(axis='both', which='minor', labelsize=12)
plt.tight_layout()
plt.show()
# NORMALIZE DATA
maxP = np.max(pVals,axis=0)
minP = np.min(pVals,axis=0)
minP,maxP = makeSymmetric(minP,maxP)
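# makeSymmetric widens [minP, maxP] into an interval symmetric about its midpoint, and normalizeColumns
# then maps each pressure column linearly onto [-1,1] (helpers from ../common.py; exact formulas assumed)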
pVals = normalizeColumns(pVals,minP,maxP)
# EXTRACT NUMBER OF SAMPLES AND OUTLETS
totSamples = qVals.shape[0]
totOutlets = qVals.shape[1]
totInputs = pVals.shape[1]
# DECIDE NUMBER OF TRAINING AND TESTING SETS
trainSizeRatio = 0.67
trainSize = int(trainSizeRatio*totSamples)
# DETERMINE RANDOM INDEXES TO EXTRACT AND DO TRAINING/TESTING SEPARATION
indices = np.random.permutation(totSamples)
training_idx, test_idx = indices[:trainSize], indices[trainSize:]
pTrainVals, pTestVals = pVals[training_idx,:], pVals[test_idx,:]
qTrainVals, qTestVals = qVals[training_idx,:], qVals[test_idx,:]
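# 1-D illustration: evaluate the Legendre basis up to order 10 on a uniform grid spanning [-1,1]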
onedInput = 2*np.arange(100)/99.0-1.0
onedInput = np.resize(onedInput,(len(onedInput),1))
onedMat = np.array(buildRegressionMatrix(onedInput,10).getMatrix())
plt.figure(figsize=(20,5))
plt.plot(onedInput,onedMat)
plt.xlabel('x',fontsize=14)
plt.ylabel('y',fontsize=14)
plt.tick_params(axis='both', which='major', labelsize=12)
plt.tick_params(axis='both', which='minor', labelsize=12)
plt.tight_layout()
plt.show()
# ASSEMBLE POLYNOMIAL MATRIX FROM TRAINING INPUT PRESSURES
maxPolyDegree = 1
pMat = buildRegressionMatrix(pTrainVals,maxPolyDegree)
print('Number of Rows in measurement matrix: ', pMat.getRowCount())
print('Number of Columns in measurement matrix: ', pMat.getColCount())
# OLS REGRESSION
ols = tulipOLS()
olsCoeffs,olsCoeffsCov,olsNoise = ols.train(pMat,qTrainVals[:,0])
# RVM REGRESSION
rvm = tulipRVM()
rvmCoeffs,rvmCoeffsCov,rvmNoise = rvm.train(pMat,qTrainVals[:,0])
# PERFORM TESTING
# Compute a Polynomial Matrix at the Testing Locations
pTestMat = np.array(buildRegressionMatrix(pTestVals,maxPolyDegree).getMatrix())
# Compute the average reconstructed Qtys
olsQ = np.dot(pTestMat,olsCoeffs)
rvmQ = np.dot(pTestMat,rvmCoeffs)
# Compute the uncertainty region
olsAux = np.dot(np.dot(pTestMat,olsCoeffsCov),pTestMat.transpose())
rvmAux = np.dot(np.dot(pTestMat,rvmCoeffsCov),pTestMat.transpose())
# Estimate the uncertainty in the estimated q (standard deviation)
olsSTDQ = olsNoise + np.sqrt(np.diag(olsAux))
rvmSTDQ = rvmNoise + np.sqrt(np.diag(rvmAux))
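# Predictive std combines the learned noise level with the parametric uncertainty propagated through the basis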
# Re-permute array in the correct temporal order
qTest = computeInversePermutation(test_idx,qTestVals[:,0])
pTest = computeInversePermutation(test_idx,pTestVals[:,0])
olsQ = computeInversePermutation(test_idx,olsQ)
rvmQ = computeInversePermutation(test_idx,rvmQ)
olsQ2 = computeInversePermutation(test_idx,olsSTDQ)
rvmQ2 = computeInversePermutation(test_idx,rvmSTDQ)
# Plot exact vs. reconstructed flow rates with the estimated uncertainty bands
plt.figure(figsize=(20,5))
plt.subplot(1,2,1)
plt.plot(np.arange(len(qTest)),qTest,'k--',label='Exact',alpha=0.6,lw=1.5)
plt.plot(np.arange(len(olsQ)),olsQ,'r-',label='OLS',alpha=0.6,lw=1.0)
plt.fill_between(np.arange(len(olsQ)),olsQ+olsQ2, olsQ-olsQ2,facecolor='gray',interpolate=True,alpha=0.6)
plt.xlabel('Simulation Step',fontsize=14)
plt.ylabel('Flow Rate',fontsize=14)
plt.tick_params(axis='both', which='major', labelsize=12)
plt.tick_params(axis='both', which='minor', labelsize=12)
plt.subplot(1,2,2)
plt.plot(np.arange(len(qTest)),qTest,'k--',label='Exact',alpha=0.6,lw=1.5)
plt.plot(np.arange(len(rvmQ)),rvmQ,'b-',label='RVM',alpha=0.6,lw=1.0)
plt.fill_between(np.arange(len(rvmQ)),rvmQ+rvmQ2, rvmQ-rvmQ2,facecolor='gray',interpolate=True,alpha=0.6)
plt.xlabel('Simulation Step',fontsize=14)
plt.ylabel('Flow Rate',fontsize=14)
plt.tick_params(axis='both', which='major', labelsize=12)
plt.tick_params(axis='both', which='minor', labelsize=12)
plt.tight_layout()
plt.show()
# Compare the sorted OLS and RVM coefficient magnitudes
plt.figure(figsize=(20,5))
plt.subplot(1,1,1)
olsCoeff2 = np.sort(np.fabs(olsCoeffs))[::-1]
rvmCoeff2 = np.sort(np.fabs(rvmCoeffs))[::-1]
plt.semilogy(olsCoeff2,'ko',label='OLS',alpha=0.6,lw=1.5,markersize=20)
plt.semilogy(rvmCoeff2,'ro',label='RVM',alpha=0.6,lw=1.0,markersize=20)
plt.ylabel('Coefficient Magnitude',fontsize=14)
plt.xlabel('Coefficient n.',fontsize=14)
plt.tick_params(axis='both', which='major', labelsize=12)
plt.tick_params(axis='both', which='minor', labelsize=12)
plt.tight_layout()
plt.show()
# Plot reconstructed Pressure vs. flow
plt.figure(figsize=(20,5))
plt.subplot(1,2,1)
plt.plot(pTest,qTest,'k--',markersize=5.0,lw=2)
plt.plot(pTest,olsQ,'ro-',markersize=5.0,lw=2)
plt.xlabel('Interface 0 Pressure',fontsize=14)
plt.ylabel('Interface 0 Flow',fontsize=14)
plt.tick_params(axis='both', which='major', labelsize=12)
plt.tick_params(axis='both', which='minor', labelsize=12)
plt.subplot(1,2,2)
plt.plot(pTest,qTest,'k--',markersize=5.0,lw=2)
plt.plot(pTest,rvmQ,'bo-',markersize=5.0,lw=2)
plt.xlabel('Interface 0 Pressure',fontsize=14)
plt.ylabel('Interface 0 Flow',fontsize=14)
plt.tick_params(axis='both', which='major', labelsize=12)
plt.tick_params(axis='both', which='minor', labelsize=12)
plt.tight_layout()
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As you can see, the model selected for this tutorial has 20 outlets.
Step2: PART II
Step3: We would like to use Legendre polynomials for regression. This basis has support in $[-1,1]$ and therefore we need to normalize the input pressures to fit in this interval.
Step4: The matrix we create contains Legendre polynomials up to a certain order. Let's plot these polynomials for a one-dimensional scenario. Remember that the proposed pressure/flow rate regression aims at computing the flow at one interface from the pressures at all the interfaces. It is therefore a 20-dimensional regression problem.
Step5: PART III
Step6: Note how the measurement matrix has significantly more rows than columns, which is desirable when performing least squares regression.
Step7: Relevance Vector Machine Regression
Step8: Once we have computed both OLS and RVM coefficient representations, we plot the reconstructed and exact flow rates.
Step9: Let's also compare the coefficient representations.
Step10: Finally, let's have a look at the reconstructed relationships between pressure and flow.
|
12,265
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mohc', 'sandbox-3', 'atmos')
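# Pattern used throughout this notebook: DOC.set_id(...) selects a CMIP6 property, then DOC.set_value(...) records the answer for it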
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 2. Key Properties --> Resolution
Step9: 2.2. Canonical Horizontal Resolution
Step10: 2.3. Range Horizontal Resolution
Step11: 2.4. Number Of Vertical Levels
Step12: 2.5. High Top
Step13: 3. Key Properties --> Timestepping
Step14: 3.2. Timestep Shortwave Radiative Transfer
Step15: 3.3. Timestep Longwave Radiative Transfer
Step16: 4. Key Properties --> Orography
Step17: 4.2. Changes
Step18: 5. Grid --> Discretisation
Step19: 6. Grid --> Discretisation --> Horizontal
Step20: 6.2. Scheme Method
Step21: 6.3. Scheme Order
Step22: 6.4. Horizontal Pole
Step23: 6.5. Grid Type
Step24: 7. Grid --> Discretisation --> Vertical
Step25: 8. Dynamical Core
Step26: 8.2. Name
Step27: 8.3. Timestepping Type
Step28: 8.4. Prognostic Variables
Step29: 9. Dynamical Core --> Top Boundary
Step30: 9.2. Top Heat
Step31: 9.3. Top Wind
Step32: 10. Dynamical Core --> Lateral Boundary
Step33: 11. Dynamical Core --> Diffusion Horizontal
Step34: 11.2. Scheme Method
Step35: 12. Dynamical Core --> Advection Tracers
Step36: 12.2. Scheme Characteristics
Step37: 12.3. Conserved Quantities
Step38: 12.4. Conservation Method
Step39: 13. Dynamical Core --> Advection Momentum
Step40: 13.2. Scheme Characteristics
Step41: 13.3. Scheme Staggering Type
Step42: 13.4. Conserved Quantities
Step43: 13.5. Conservation Method
Step44: 14. Radiation
Step45: 15. Radiation --> Shortwave Radiation
Step46: 15.2. Name
Step47: 15.3. Spectral Integration
Step48: 15.4. Transport Calculation
Step49: 15.5. Spectral Intervals
Step50: 16. Radiation --> Shortwave GHG
Step51: 16.2. ODS
Step52: 16.3. Other Fluorinated Gases
Step53: 17. Radiation --> Shortwave Cloud Ice
Step54: 17.2. Physical Representation
Step55: 17.3. Optical Methods
Step56: 18. Radiation --> Shortwave Cloud Liquid
Step57: 18.2. Physical Representation
Step58: 18.3. Optical Methods
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Step60: 20. Radiation --> Shortwave Aerosols
Step61: 20.2. Physical Representation
Step62: 20.3. Optical Methods
Step63: 21. Radiation --> Shortwave Gases
Step64: 22. Radiation --> Longwave Radiation
Step65: 22.2. Name
Step66: 22.3. Spectral Integration
Step67: 22.4. Transport Calculation
Step68: 22.5. Spectral Intervals
Step69: 23. Radiation --> Longwave GHG
Step70: 23.2. ODS
Step71: 23.3. Other Fluorinated Gases
Step72: 24. Radiation --> Longwave Cloud Ice
Step73: 24.2. Physical Representation
Step74: 24.3. Optical Methods
Step75: 25. Radiation --> Longwave Cloud Liquid
Step76: 25.2. Physical Representation
Step77: 25.3. Optical Methods
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Step79: 27. Radiation --> Longwave Aerosols
Step80: 27.2. Physical Representation
Step81: 27.3. Optical Methods
Step82: 28. Radiation --> Longwave Gases
Step83: 29. Turbulence Convection
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Step85: 30.2. Scheme Type
Step86: 30.3. Closure Order
Step87: 30.4. Counter Gradient
Step88: 31. Turbulence Convection --> Deep Convection
Step89: 31.2. Scheme Type
Step90: 31.3. Scheme Method
Step91: 31.4. Processes
Step92: 31.5. Microphysics
Step93: 32. Turbulence Convection --> Shallow Convection
Step94: 32.2. Scheme Type
Step95: 32.3. Scheme Method
Step96: 32.4. Processes
Step97: 32.5. Microphysics
Step98: 33. Microphysics Precipitation
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Step100: 34.2. Hydrometeors
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Step102: 35.2. Processes
Step103: 36. Cloud Scheme
Step104: 36.2. Name
Step105: 36.3. Atmos Coupling
Step106: 36.4. Uses Separate Treatment
Step107: 36.5. Processes
Step108: 36.6. Prognostic Scheme
Step109: 36.7. Diagnostic Scheme
Step110: 36.8. Prognostic Variables
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Step112: 37.2. Cloud Inhomogeneity
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Step114: 38.2. Function Name
Step115: 38.3. Function Order
Step116: 38.4. Convection Coupling
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Step118: 39.2. Function Name
Step119: 39.3. Function Order
Step120: 39.4. Convection Coupling
Step121: 40. Observation Simulation
Step122: 41. Observation Simulation --> Isccp Attributes
Step123: 41.2. Top Height Direction
Step124: 42. Observation Simulation --> Cosp Attributes
Step125: 42.2. Number Of Grid Points
Step126: 42.3. Number Of Sub Columns
Step127: 42.4. Number Of Levels
Step128: 43. Observation Simulation --> Radar Inputs
Step129: 43.2. Type
Step130: 43.3. Gas Absorption
Step131: 43.4. Effective Radius
Step132: 44. Observation Simulation --> Lidar Inputs
Step133: 44.2. Overlap
Step134: 45. Gravity Waves
Step135: 45.2. Sponge Layer
Step136: 45.3. Background
Step137: 45.4. Subgrid Scale Orography
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Step139: 46.2. Source Mechanisms
Step140: 46.3. Calculation Method
Step141: 46.4. Propagation Scheme
Step142: 46.5. Dissipation Scheme
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Step144: 47.2. Source Mechanisms
Step145: 47.3. Calculation Method
Step146: 47.4. Propagation Scheme
Step147: 47.5. Dissipation Scheme
Step148: 48. Solar
Step149: 49. Solar --> Solar Pathways
Step150: 50. Solar --> Solar Constant
Step151: 50.2. Fixed Value
Step152: 50.3. Transient Characteristics
Step153: 51. Solar --> Orbital Parameters
Step154: 51.2. Fixed Reference Date
Step155: 51.3. Transient Method
Step156: 51.4. Computation Method
Step157: 52. Solar --> Insolation Ozone
Step158: 53. Volcanos
Step159: 54. Volcanos --> Volcanoes Treatment
|
12,266
|
<ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
def normalize(x):
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
x_min = np.min(x)
x_max = np.max(x)
return (x - x_min) / (x_max - x_min)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_normalize(normalize)
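# Quick illustrative check (numpy only, separate from the project tests):
# normalize should map arbitrary pixel values into [0, 1].
_sample = np.random.randint(0, 256, size=(2, 32, 32, 3))
_normed = normalize(_sample)
print(_normed.min(), _normed.max())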
import tensorflow as tf
def one_hot_encode(x):
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
encoded_x = np.zeros((len(x), 10))
for i, label in enumerate(x):
encoded_x[i][label] = 1.0
return encoded_x
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_one_hot_encode(one_hot_encode)
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
DON'T MODIFY ANYTHING IN THIS CELL
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
import tensorflow as tf
def neural_net_image_input(image_shape):
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
    # build the placeholder shape from the supplied argument rather than hardcoding it
    return tf.placeholder(tf.float32, [None] + list(image_shape), name='x')
def neural_net_label_input(n_classes):
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
    return tf.placeholder(tf.float32, [None, n_classes], name='y')
def neural_net_keep_prob_input():
Return a Tensor for keep probability
: return: Tensor for keep probability.
# TODO: Implement Function
return tf.placeholder(tf.float32, name='keep_prob')
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
    :param conv_ksize: kernel size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
    :param pool_ksize: kernel size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
# convolution layer
conv_weights = tf.Variable(tf.truncated_normal([conv_ksize[0], conv_ksize[1], x_tensor.shape[3].value, conv_num_outputs]))
conv_bias = tf.Variable(tf.random_normal([conv_num_outputs]))
x_tensor = tf.nn.conv2d(x_tensor, conv_weights, strides=[1, conv_strides[0], conv_strides[1], 1], padding='SAME')
x_tensor = tf.nn.bias_add(x_tensor, conv_bias)
x_tensor = tf.nn.relu(x_tensor)
# maxpool layer
return tf.nn.max_pool(
x_tensor,
ksize=[1, pool_ksize[0], pool_ksize[1], 1],
strides=[1, pool_strides[0], pool_strides[1], 1],
padding='SAME')
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_con_pool(conv2d_maxpool)
def flatten(x_tensor):
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
# TODO: Implement Function
shape = lambda i: x_tensor.shape[i].value
flattened_size = shape(1) * shape(2) * shape(3)
return tf.reshape(x_tensor, [-1, flattened_size])
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_flatten(flatten)
def fully_conn(x_tensor, num_outputs):
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of outputs that the new tensor should have.
: return: A 2-D tensor where the second dimension is num_outputs.
weights = tf.Variable(tf.truncated_normal([x_tensor.shape[1].value, num_outputs]))
bias = tf.Variable(tf.truncated_normal([num_outputs]))
return tf.nn.relu(tf.add(tf.matmul(x_tensor, weights), bias))
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_fully_conn(fully_conn)
def output(x_tensor, num_outputs):
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of outputs that the new tensor should have.
: return: A 2-D tensor where the second dimension is num_outputs.
weights = tf.Variable(tf.truncated_normal([x_tensor.shape[1].value, num_outputs]))
bias = tf.Variable(tf.truncated_normal([num_outputs]))
return tf.add(tf.matmul(x_tensor, weights), bias)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_output(output)
def conv_net(x, keep_prob):
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
x = conv2d_maxpool(
x,
conv_num_outputs=54,
conv_ksize=(4, 4),
conv_strides=(4, 4),
pool_ksize=(2, 2),
pool_strides=(2, 2))
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
x = flatten(x)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
x = fully_conn(x, 1500)
x = tf.nn.dropout(x, keep_prob)
x = fully_conn(x, 1500)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
x = output(x, 10)
# TODO: return output
return x
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that it can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
session.run(optimizer, feed_dict={
x: feature_batch,
y: label_batch,
keep_prob: keep_probability
})
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_train_nn(train_neural_network)
def print_stats(session, feature_batch, label_batch, cost, accuracy):
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
nn_cost = session.run(cost, feed_dict={
x: feature_batch,
y: label_batch,
keep_prob: 1.0
})
nn_accuracy = session.run(accuracy, feed_dict={
x: feature_batch,
y: label_batch,
keep_prob: 1.0
})
print('Cost: {:.4f}, accuracy: {:.4f}'.format(nn_cost, nn_accuracy))
# TODO: Tune Parameters
epochs = 6
batch_size = 256
keep_probability = 0.9
DON'T MODIFY ANYTHING IN THIS CELL
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
DON'T MODIFY ANYTHING IN THIS CELL
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
Test the saved model against the test dataset
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Image Classification
Step2: Explore the Data
Step5: Implement Preprocess Functions
Step8: One-hot encode
Step10: Randomize Data
Step12: Check Point
Step17: Build the network
Step20: Convolution and Max Pooling Layer
Step23: Flatten Layer
Step26: Fully-Connected Layer
Step29: Output Layer
Step32: Create Convolutional Model
Step35: Train the Neural Network
Step37: Show Stats
Step38: Hyperparameters
Step40: Train on a Single CIFAR-10 Batch
Step42: Fully Train the Model
Step45: Checkpoint
|
12,267
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import YouTubeVideo
YouTubeVideo(id="JjpbztqP9_0", width="100%")
from nams import load_data as cf
G = cf.load_sociopatterns_network()
from nams.solutions.paths import bfs_algorithm
# UNCOMMENT NEXT LINE TO GET THE ANSWER.
# bfs_algorithm()
# FILL IN THE BLANKS BELOW
def path_exists(node1, node2, G):
This function checks whether a path exists between two nodes (node1,
node2) in graph G.
visited_nodes = _____
queue = [_____]
while len(queue) > 0:
node = ___________
neighbors = list(_________________)
if _____ in _________:
# print('Path exists between nodes {0} and {1}'.format(node1, node2))
return True
else:
visited_nodes.___(____)
nbrs = [_ for _ in _________ if _ not in _____________]
queue = ____ + _____
# print('Path does not exist between nodes {0} and {1}'.format(node1, node2))
return False
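# A hedged completion of the blanks above (one possible answer, not
# necessarily the official nams solution; queue.pop() from the end makes
# this depth-first flavored, pop(0) would make it breadth-first):
def path_exists_filled(node1, node2, G):
    visited_nodes = set()
    queue = [node1]
    while len(queue) > 0:
        node = queue.pop()
        neighbors = list(G.neighbors(node))
        if node2 in neighbors:
            return True
        else:
            visited_nodes.add(node)
            nbrs = [n for n in neighbors if n not in visited_nodes]
            queue = queue + nbrs
    return False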
# UNCOMMENT THE FOLLOWING TWO LINES TO SEE THE ANSWER
from nams.solutions.paths import path_exists
# path_exists??
# CHECK YOUR ANSWER AGAINST THE TEST FUNCTION BELOW
from random import sample
import networkx as nx
def test_path_exists(N):
N: The number of times to spot-check.
for i in range(N):
        n1, n2 = sample(list(G.nodes()), 2)  # random.sample needs a sequence, not a NodeView
assert path_exists(n1, n2, G) == bool(nx.shortest_path(G, n1, n2))
return True
assert test_path_exists(10)
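# Cross-check sketch (not part of the exercise): networkx's built-in
# nx.has_path should agree with the implementation above.
print(nx.has_path(G, 7, 400), path_exists(7, 400, G))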
path = nx.shortest_path(G, 7, 400)
path
import nxviz as nv
g = G.subgraph(path)
nv.matrix(g, sort_by="order")
from nams.solutions.paths import plot_path_with_neighbors
### YOUR SOLUTION BELOW
plot_path_with_neighbors(G, 7, 400)
import pandas as pd
pd.Series(nx.betweenness_centrality(G))
import matplotlib.pyplot as plt
import seaborn as sns
# YOUR ANSWER HERE:
from nams.solutions.paths import plot_degree_betweenness
plot_degree_betweenness(G)
nx.draw(nx.barbell_graph(5, 1))
from nams.solutions import paths
import inspect
print(inspect.getsource(paths))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Graph traversal is akin to walking along the graph, node by node,
Step4: Exercise
Step5: Visualizing Paths
Step6: As you can see, it returns the nodes along the shortest path,
Step7: Voila! Now we have the subgraph (1) extracted and (2) drawn to screen!
Step9: In this case, we opted for an Arc plot because we only have one grouping of nodes but have a logical way to order them.
Step10: Exercise
Step11: Think about it...
Step12: Recap
|
12,268
|
<ASSISTANT_TASK:>
Python Code:
import facebook # for connecting to Facebook Graph API
import pprint
import datetime
import pandas as pd
import logging
logger = logging.getLogger('catch_all')  # getLogger wires the logger into the logging hierarchy
# send request to Facebook Graph API, fetching last 50 posts of each page :
def collector(page, token, lim) :
graph = facebook.GraphAPI(access_token = token, version = '2.7')
# fetch and transform JSON to dict :
posts = graph.get_connections(id = page, connection_name = 'posts',
fields = 'id,message,link,shares,from,type,created_time,updated_time,'
'comments.limit(0).summary(true),likes.limit(0).summary(true),reactions.limit(0).summary(true)',
limit = lim)
data = posts['data'] # dict
loop = 0
mylist = []
# parse data from dict :
for el in data:
mydict = {}
for key in el:
if key == 'comments':
commentcount = el[key]['summary']['total_count']
mydict['comment_count'] = commentcount
elif key == 'from':
creator = el[key]['name']
mydict['from'] = creator
elif key == 'likes':
likecount = el[key]['summary']['total_count']
mydict['like_count'] = likecount
elif key == 'reactions':
reactcount = el[key]['summary']['total_count']
mydict['reaction_count'] = reactcount
elif key == 'shares':
share = el[key]['count']
mydict['share_count'] = share
else:
mydict[key] = el[key]
time = datetime.datetime.now() # add timestamp
mydict['time_checked'] = str(time)
mylist.append(mydict)
loop += 1
df = pd.DataFrame(mylist) # pandas dataframe
return df
# create a new csv file of each page :
def initiator(page, token, lim) :
df = collector(page, token, lim)
df.to_csv(page + '.csv', index = False, encoding = 'utf_8_sig')
# receive post's ids and collect new data of these extra posts :
def collector_post(post_ids, token) :
graph = facebook.GraphAPI(access_token = token, version = '2.7')
    # filter out sentinel values first (removing items from a list while
    # iterating over it skips elements)
    bad_ids = {'post deleted', 'ERROR in exception',
               'Something wrong', 'Unexpected condition in appending()'}
    post_ids = [idvalue for idvalue in post_ids if idvalue not in bad_ids]
try:
posts = graph.get_objects(ids = post_ids,
fields = 'id,message,link,shares,from,type,created_time,updated_time,'
'comments.limit(0).summary(true),likes.limit(0).summary(true),reactions.limit(0).summary(true)')
except Exception as e:
logger.error(e, exc_info=True)
time = str(datetime.datetime.now())
return pd.DataFrame([{'id' : 'ERROR in exception', 'time_checked' : time}])
#--------------------------------------------
if len(posts) > 0:
loop = 0
mylist = []
for id_key in posts:
mydict = {}
for key in posts[id_key]:
if key == 'comments':
commentcount = posts[id_key][key]['summary']['total_count']
mydict['comment_count'] = commentcount
elif key == 'from':
creator = posts[id_key][key]['name']
mydict['from'] = creator
elif key == 'likes':
likecount = posts[id_key][key]['summary']['total_count']
mydict['like_count'] = likecount
elif key == 'reactions':
reactcount = posts[id_key][key]['summary']['total_count']
mydict['reaction_count'] = reactcount
elif key == 'shares':
share = posts[id_key][key]['count']
mydict['share_count'] = share
else:
mydict[key] = posts[id_key][key]
time = datetime.datetime.now()
mydict['time_checked'] = str(time)
mylist.append(mydict)
loop += 1
df = pd.DataFrame(mylist)
return df
elif len(posts) == 0:
return pd.DataFrame([{'id' : 'post deleted', 'time_checked' : str(datetime.datetime.now())}])
else:
return pd.DataFrame([{'id' : 'Something wrong', 'time_checked' : str(datetime.datetime.now())}])
# fetch new data and update the existing csv files when this function is called :
def appending(page, token, lim) :
df = collector(page, token, lim) # fetch last 50 posts
df_old = pd.read_csv(page + '.csv', encoding='utf_8_sig')
df_new = pd.concat([df_old, df], axis = 0)
# find extra post's ids that are not included in the last 50 posts but existed in the csv file :
id_old = df_old['id'].unique()
id_extra = set(id_old) - set(df['id'])
id_extra = list(id_extra)
# slice all extra ids into chunks and send each chunk to collector_post()
if len(id_extra) > 50:
chunks = [id_extra[x:x+50] for x in range(0, len(id_extra), 50)]
for ids in chunks:
extrarow = collector_post(ids, token)
df_new = pd.concat([df_new, extrarow], axis = 0)
elif len(id_extra) > 0:
extrarow = collector_post(id_extra, token)
df_new = pd.concat([df_new, extrarow], axis = 0)
elif len(id_extra) == 0:
pass
else:
time = str(datetime.datetime.now())
extrarow = pd.DataFrame([{'id' : 'Unexpected condition in appending()', 'time_checked' : time}])
df_new = pd.concat([df_new, extrarow], axis = 0)
# update csv
df_new.to_csv(page + '.csv', index = False, encoding = 'utf_8_sig')
print('to csv page : {} done !'.format(page))
# all page ids that I want to collect data :
pages = ['DramaAdd', 'ejeab', 'cartooneggcat', 'BBCThai', 'in.one.zaroop', 'HighlightsHD.tv', 'khobsanam', '1447102878929950',
'powerofhusbands', 'basementkaraoke', 'cartoon5natee', 'AjahnBuddhadasa', 'Toodsdiary', 'ceclip', 'beargirlfriend',
'jaytherabbitofficial', 'Darlingboredom', 'v.vajiramedhi', '334236760084743', 'kingdomoftigers', 'underbedstar', 'pantipded',
'Pantip.KratooDed', 'nut.ped', '9gaginthai']
# a Facebook secret token :
token = 'Your Secret Token'
# amount of posts you want to collect at the first time
lim = 50
print(str(datetime.datetime.now()))
for page in pages:
initiator(page, token, lim)
print(str(datetime.datetime.now()))
df = pd.read_csv('BBCThai.csv', encoding = 'utf_8_sig')
df.head(5)
print(str(datetime.datetime.now()))
for page in pages:
appending(page, token, lim)
print(str(datetime.datetime.now()))
df = pd.read_csv('BBCThai.csv', encoding = 'utf_8_sig')
df.head(5)
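# Optional automation sketch (assumption: the third-party 'schedule' package,
# which is not used elsewhere in this notebook; a cron job would work equally
# well). Note this loop blocks the running process:
import time
import schedule

def hourly_job():
    for page in pages:
        appending(page, token, lim)

schedule.every().hour.do(hourly_job)
while True:
    schedule.run_pending()
    time.sleep(60)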
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set up variables
Step2: First, run this code to create new csv files of all pages
Step3: A csv file example
Step4: Then, run this code hourly to update all of the csv files
Step5: An updated csv example
|
12,269
|
<ASSISTANT_TASK:>
Python Code:
import my_util as my_util; from my_util import *
HOME_DIR = 'd:/larc_projects/job_analytics/'
DATA_DIR = HOME_DIR + 'data/clean/'
title_df = pd.read_csv(DATA_DIR + 'new_titles_2posts_up.csv')
def distTitle(agg_df, for_domain=False, for_func=False):
fig = plt.figure()
plt.hist(agg_df.n_title)
mean_n_title = round(agg_df.n_title.mean(), 1)
xl = '# job titles' + r'$(\mu = {})$'.format(mean_n_title)
plt.xlabel(xl, fontsize=16);
if for_domain: plt.ylabel('# domains', fontsize=16)
if for_func: plt.ylabel('# functions', fontsize=16)
plt.grid(True)
return fig
def aggBy(col, title_df):
by_col = title_df.groupby(col)
print('# {}: {}'.format(col, by_col.ngroups) )
agg_df = by_col.agg({'title': 'nunique','non_std_title': 'nunique','n_post': sum})
agg_df = agg_df.rename(columns={'title': 'n_title',
'std_title': 'n_std_title'}).reset_index()
return agg_df
title_stats = pd.read_csv(DATA_DIR + 'stats_job_titles.csv')
titles = title_stats['title']
print('# titles: %d' %len(titles))
by_n_post = pd.read_csv(DATA_DIR + 'stats_job_post_dist.csv')
by_n_post.head()
by_n_post_after_std = title_stats.groupby('n_post').agg({'title': len})
by_n_post_after_std = by_n_post_after_std.rename(columns={'title': 'n_title_after_std'}).reset_index()
quantile(by_n_post_after_std.n_post)
fig = vizJobPostDist(by_n_post)
plt.savefig(RES_DIR + 'fig/dist_job_post_by_title.pdf')
plt.show(); plt.close()
print('# job titles with >= 2 posts: {}'.format(title_df.shape[0]) )
by_domain_agg = aggBy('domain', title_df)
by_domain_agg.sort_values('n_title', ascending=False, inplace=True)
by_domain_agg.to_csv(DATA_DIR + 'stats_domains.csv', index=False)
by_domain_agg.describe().round(1).to_csv(DATA_DIR + 'tmp/domain_desc.csv')
by_domain_agg.describe().round(1)
plt.close('all')
fig = distTitle(by_domain_agg, for_domain=True)
fig.set_tight_layout(True)
plt.savefig(DATA_DIR + 'title_dist_by_domain.pdf')
plt.show(); plt.close()
title_df.query('domain == "information technology"').sort_values('std_title')
by_func_agg = aggBy('pri_func', title_df)
by_func_agg.sort_values('n_title', ascending=False, inplace=True)
by_func_agg.to_csv(DATA_DIR + 'stats_pri_funcs.csv', index=False)
by_func_agg.describe().round(1).to_csv(DATA_DIR + 'tmp/func_desc.csv')
by_func_agg.describe().round(1)
by_func_agg.head(10)
fig = distTitle(by_func_agg, for_func=True)
fig.set_tight_layout(True)
plt.savefig(DATA_DIR + 'title_dist_by_func.pdf')
plt.show(); plt.close()
sum(title_df.domain == 'information technology')
title_df.std_title[title_df.pri_func == 'technician'].nunique()
job_df = pd.read_csv(DATA_DIR + 'jobs.csv')
print job_df.shape
job_df.head(1)
full_job_df = pd.read_csv(DATA_DIR + 'job_posts.csv')
print full_job_df.shape
full_job_df.head(1)
full_job_df = pd.merge(full_job_df, job_df[['job_id', 'doc']])
print full_job_df.shape
print('# job ids including dups: %d' %len(full_job_df.job_id))
print('# unique job ids: %d' % full_job_df.job_id.nunique())
full_job_df.head(1)
full_job_df.to_csv(DATA_DIR + 'job_posts.csv', index=False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Helpers
Step2: Distribution of job posts among job titles
Step3: Job posts distribution among standard job titles
Step4: Statistics for Domains
Step5: Why no. of job titles in IT is reduced a lot after std?
Step6: Statistics for functions
|
12,270
|
<ASSISTANT_TASK:>
Python Code:
# You will need these things!
import numpy as np
import pandas as pd
# the structure of a function is like this:
def dir2cart(dec,inc,R): # first line starts with 'def', has the name and the input parameters (data)
# all subsequent lines are indented
# continue this function here.......
pass # this line does nothing - replace it with something that does!
cart=[1.,1.,1.] # obviously this is not what you want....
return cart # returns the stuff you calculated (x,y,z) or (n,e,d)
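# A hedged worked version (an assumption of the intended answer: dec/inc in
# degrees, R the magnitude; returns [north, east, down], the standard
# paleomagnetic convention):
def dir2cart_example(dec, inc, R):
    rad = np.pi / 180.  # degrees to radians
    north = R * np.cos(rad * inc) * np.cos(rad * dec)
    east = R * np.cos(rad * inc) * np.sin(rad * dec)
    down = R * np.sin(rad * inc)
    return [north, east, down]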
# read in the data and transpose it to rows of dec, inc, int
# you have to change the file name to reflect where you put the data....
data=np.loadtxt('ps2_prob1_data.txt').transpose() # this line will read in data
# now send these data to your function.... and print out the x,y,z
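# A hedged sketch using dir2cart_example from above (assumes the transposed
# rows are declination, inclination, intensity, in that order):
for dec, inc, R in zip(data[0], data[1], data[2]):
    print(dir2cart_example(dec, inc, R))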
import pmagpy.pmag as pmag # this makes the PmagPy module pmag.py available to you
print pmag.get_unf.__doc__
pmag.get_unf(10) # now you need to assign this to an array variable name and use it in the following.
# write your code here.
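# one possible answer (per the docstring above, get_unf returns an array of
# [longitude, latitude] rows):
places = pmag.get_unf(10)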
import pmagpy.ipmag as ipmag # this makes the PmagPy module ipmag.py available to you
print ipmag.igrf.__doc__
# figure out how to send your places to ipmag.igrf. do the calculation for 2015.
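# Hedged sketch: per its docstring, ipmag.igrf takes [date, altitude(km), lat, lon]
# and returns [declination, inclination, intensity]; evaluate at 2015:
directions = np.array([ipmag.igrf([2015., 0., lat, lon]) for lon, lat in places])
print(directions)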
#
# this line lets you make plots inside the notebook:
%matplotlib inline
ipmag.plot_net(1) # make an equal angle net
# figure out how to use ipmag.plot_di() and plot the points.
# code it up here!
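# Hedged sketch (check ipmag.plot_di's docstring for the exact keywords):
# plot the field directions computed above on the net.
ipmag.plot_di(dec=directions[:, 0], inc=directions[:, 1])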
print pmag.dia_vgp.__doc__
# you figure it out.
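# Hedged sketch with made-up direction/site values (per the docstring above,
# dia_vgp takes dec, inc, a95, site lat, site lon; a95 isn't given, so pass 0,
# and westward longitudes get a minus sign):
vgp = pmag.dia_vgp(350., 65., 0., 33., -117.)
print(vgp)  # (VGP longitude, VGP latitude, dp, dm)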
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's write a little function to do the conversion.
Step2: Now let's read in a data file with some geomagnetic field vectors in it.
Step3: Problem 1b
Step4: use that function to generate a list of random points on the Earth's surface.
Step5: Now let's find out about ipmag.igrf()
Step6: Problem 1c
Step7: Problem 2
Step8: Problem 3
Step9: Let's use the pmag function dia_vgp. First let's figure out what it does
Step10: Now we can use it to convert our directions to VGPs. Note that alpha95 is required but is not given so supply a zero in its place. Note also that westward longitudes are indicated by minus signs...
|
12,271
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import sympy as sp
from devito import Grid, TimeFunction
# Create our grid (computational domain)
Lx = 10
Ly = Lx
Nx = 11
Ny = Nx
dx = Lx/(Nx-1)
dy = dx
grid = Grid(shape=(Nx,Ny), extent=(Lx,Ly))
# Define u(x,y,t) on this grid
u = TimeFunction(name='u', grid=grid, time_order=2, space_order=2)
# Define symbol for laplacian replacement
H = sp.symbols('H')
print(u.dx.evaluate)
u = TimeFunction(name='u', grid=grid, time_order=2, space_order=2, coefficients='symbolic')
print(u.dx.evaluate)
from devito import Coefficient, Substitutions # Import the Devito Coefficient and Substitutions objects
# Grab the grid spatial dimensions: Note x[0] will correspond to the x-direction and x[1] to y-direction
x = grid.dimensions
# Form a Coefficient object and then a replacement rules object (to pass to a Devito equation):
u_x_coeffs = Coefficient(1, u, x[0], np.array([-0.6, 0.1, 0.6]))
coeffs = Substitutions(u_x_coeffs)
from devito import Eq
eq = Eq(u.dt+u.dx, coefficients=coeffs)
print(eq.evaluate)
#NBVAL_IGNORE_OUTPUT
from examples.seismic import Model, plot_velocity
%matplotlib inline
# Define a physical size
Lx = 2000
Lz = Lx
h = 10
Nx = int(Lx/h)+1
Nz = Nx
shape = (Nx, Nz) # Number of grid point
spacing = (h, h) # Grid spacing in m. The domain size is now 2km by 2km
origin = (0., 0.)
# Define a velocity profile. The velocity is in km/s
v = np.empty(shape, dtype=np.float32)
v[:, :121] = 1.5
v[:, 121:] = 4.0
# With the velocity and model size defined, we can create the seismic model that
# encapsulates these properties. We also define the size of the absorbing layer as 10 grid points
nbl = 10
model = Model(vp=v, origin=origin, shape=shape, spacing=spacing,
space_order=20, nbl=nbl, bcs="damp")
plot_velocity(model)
from examples.seismic import TimeAxis
t0 = 0.  # Simulation starts at t=0
tn = 500.  # Simulation lasts 0.5 seconds (500 ms)
dt = 1.0  # Time step of 1.0 ms
time_range = TimeAxis(start=t0, stop=tn, step=dt)
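# Optional check: the seismic Model exposes a CFL-limited critical time step;
# our chosen dt should sit below it for a stable reference solution.
print("critical dt: {:.4f} ms, chosen dt: {:.4f} ms".format(model.critical_dt, dt))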
#NBVAL_IGNORE_OUTPUT
from examples.seismic import RickerSource
f0 = 0.015  # Source peak frequency is 15Hz (0.015 kHz)
src = RickerSource(name='src', grid=model.grid, f0=f0,
npoint=1, time_range=time_range)
# First, position source centrally in all dimensions, then set depth
src.coordinates.data[0, :] = np.array(model.domain_size) * .5
src.coordinates.data[0, -1] = 800. # Depth is 800m
# We can plot the time signature to see the wavelet
src.show()
# Define the wavefield with the size of the model and the time dimension
u = TimeFunction(name="u", grid=model.grid, time_order=2, space_order=20)
# We can now write the PDE
pde = model.m * u.dt2 - H + model.damp * u.dt
# This discrete PDE can be solved in a time-marching way updating u(t+dt) from the previous time step
# Devito as a shortcut for u(t+dt) which is u.forward. We can then rewrite the PDE as
# a time marching updating equation known as a stencil using customized SymPy functions
from devito import solve
stencil = Eq(u.forward, solve(pde, u.forward).subs({H: u.laplace}))
# Finally we define the source injection and receiver read function to generate the corresponding code
src_term = src.inject(field=u.forward, expr=src * dt**2 / model.m)
from devito import Operator
op = Operator([stencil] + src_term, subs=model.spacing_map)
#NBVAL_IGNORE_OUTPUT
op(time=time_range.num-1, dt=dt)
import matplotlib.pyplot as plt
from matplotlib import cm
Lx = 2000
Lz = 2000
abs_lay = nbl*h
dx = h
dz = dx
X, Z = np.mgrid[-abs_lay: Lx+abs_lay+1e-10: dx, -abs_lay: Lz+abs_lay+1e-10: dz]
levels = 100
fig = plt.figure(figsize=(14, 7))
ax1 = fig.add_subplot(111)
cont = ax1.contourf(X,Z,u.data[0,:,:], levels, cmap=cm.binary)
fig.colorbar(cont)
ax1.axis([0, Lx, 0, Lz])
ax1.set_xlabel('$x$')
ax1.set_ylabel('$z$')
ax1.set_title('$u(x,z,500)$')
plt.gca().invert_yaxis()
plt.show()
from devito import SubDomain
# Define our 'upper' and 'lower' SubDomains:
class Upper(SubDomain):
name = 'upper'
def define(self, dimensions):
x, z = dimensions
# We want our upper layer to span the entire x-dimension and all
# but the bottom 80 (+boundary layer) cells in the z-direction, which is achieved via
# the following notation:
return {x: x, z: ('left', 80+nbl)}
class Lower(SubDomain):
name = 'lower'
def define(self, dimensions):
x, z = dimensions
# We want our lower layer to span the entire x-dimension and all
# but the top 121 (+boundary layer) cells in the z-direction.
return {x: x, z: ('right', 121+nbl)}
# Create these subdomains:
ur = Upper()
lr = Lower()
#NBVAL_IGNORE_OUTPUT
# Our scheme will now be 10th order (or less) in space.
order = 10
# Create our model passing it our 'upper' and 'lower' subdomains:
model = Model(vp=v, origin=origin, shape=shape, spacing=spacing,
space_order=order, nbl=nbl, subdomains=(ur,lr), bcs="damp")
t0 = 0.  # Simulation starts at t=0
tn = 500.  # Simulation lasts 0.5 seconds (500 ms)
dt = 1.0  # Time step of 1.0 ms
time_range = TimeAxis(start=t0, stop=tn, step=dt)
f0 = 0.025 # Source peak frequency is 25Hz (0.025 kHz)
src = RickerSource(name='src', grid=model.grid, f0=f0,
npoint=1, time_range=time_range)
src.coordinates.data[0, :] = np.array(model.domain_size) * .5
src.coordinates.data[0, -1] = 800. # Depth is 800m
# New wave-field
u_DRP = TimeFunction(name="u_DRP", grid=model.grid, time_order=2, space_order=order, coefficients='symbolic')
# The underlying pde is the same in both subdomains
pde_DRP = model.m * u_DRP.dt2 - H + model.damp * u_DRP.dt
# Define our custom FD coefficients:
x, z = model.grid.dimensions
# Upper layer
weights_u = np.array([ 2.00462e-03, -1.63274e-02, 7.72781e-02,
-3.15476e-01, 1.77768e+00, -3.05033e+00,
1.77768e+00, -3.15476e-01, 7.72781e-02,
-1.63274e-02, 2.00462e-03])
# Lower layer
weights_l = np.array([ 0. , 0. , 0.0274017,
-0.223818, 1.64875 , -2.90467,
1.64875 , -0.223818, 0.0274017,
0. , 0. ])
# Create the Devito Coefficient objects:
ux_u_coeffs = Coefficient(2, u_DRP, x, weights_u/x.spacing**2)
uz_u_coeffs = Coefficient(2, u_DRP, z, weights_u/z.spacing**2)
ux_l_coeffs = Coefficient(2, u_DRP, x, weights_l/x.spacing**2)
uz_l_coeffs = Coefficient(2, u_DRP, z, weights_l/z.spacing**2)
# And the replacement rules:
coeffs_u = Substitutions(ux_u_coeffs,uz_u_coeffs)
coeffs_l = Substitutions(ux_l_coeffs,uz_l_coeffs)
# Create a stencil for each subdomain:
stencil_u = Eq(u_DRP.forward, solve(pde_DRP, u_DRP.forward).subs({H: u_DRP.laplace}),
subdomain = model.grid.subdomains['upper'], coefficients=coeffs_u)
stencil_l = Eq(u_DRP.forward, solve(pde_DRP, u_DRP.forward).subs({H: u_DRP.laplace}),
subdomain = model.grid.subdomains['lower'], coefficients=coeffs_l)
# Source term:
src_term = src.inject(field=u_DRP.forward, expr=src * dt**2 / model.m)
# Create the operator, incoporating both upper and lower stencils:
op = Operator([stencil_u, stencil_l] + src_term, subs=model.spacing_map)
#NBVAL_IGNORE_OUTPUT
op(time=time_range.num-1, dt=dt)
fig = plt.figure(figsize=(14, 7))
ax1 = fig.add_subplot(111)
cont = ax1.contourf(X,Z,u_DRP.data[0,:,:], levels, cmap=cm.binary)
fig.colorbar(cont)
ax1.axis([0, Lx, 0, Lz])
ax1.set_xlabel('$x$')
ax1.set_ylabel('$z$')
ax1.set_title('$u_{DRP}(x,z,500)$')
plt.gca().invert_yaxis()
plt.show()
fig = plt.figure(figsize=(14, 7))
ax1 = fig.add_subplot(111)
cont = ax1.contourf(X,Z,abs(u_DRP.data[0,:,:]-u.data[0,:,:]), levels, cmap=cm.binary)
fig.colorbar(cont)
ax1.axis([0, Lx, 0, Lz])
ax1.set_xlabel('$x$')
ax1.set_ylabel('$z$')
plt.gca().invert_yaxis()
plt.show()
#NBVAL_IGNORE_OUTPUT
# Wavefield norm checks
assert np.isclose(np.linalg.norm(u.data[-1]), 139.108, atol=0, rtol=1e-4)
assert np.isclose(np.linalg.norm(u_DRP.data[-1]), 83.636, atol=0, rtol=1e-4)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now, let's look at the output of $\partial u/\partial x$
Step2: By default the 'standard' Taylor series expansion result, where h_x represents the $x$-direction grid spacing, is returned. However, there may be instances when a user wishes to use 'non-standard' weights when, for example, implementing a dispersion-relation-preserving (DRP) scheme. See e.g.
Step3: Note the addition of the coefficients='symbolic' keyword. Now, when printing $\partial u/\partial x$ we obtain
Step4: Owing to the addition of the coefficients='symbolic' keyword the weights have been replaced by sympy functions. Now, take for example the weight W(x - h_x, 1, u(t, x, y), x), the notation is as follows
Step5: Devito Coefficient ojects take arguments in the following order
Step6: We see that in the above equation the standard weights for the first derivative of u in the $x$-direction have now been replaced with our user defined weights. Note that since no replacement rules were defined for the time derivative (u.dt) standard weights have replaced the symbolic weights.
Step7: The seismic wave source term will be modelled as a Ricker Wavelet with a peak-frequency of $25$Hz located at $(1000m,800m)$. Before applying the DRP scheme, we begin by generating a 'reference' solution using a spatially high-order standard finite difference scheme and time step well below the model's critical time-step. The scheme will be 2nd order in time.
Step8: Now let us define our wavefield and PDE
Step9: Now, let's create the operator and execute the time marching scheme
Step10: And plot the result
Step11: We will now reimplement the above model applying the DRP scheme presented in [2].
Step12: We now create our model incoporating these subdomains
Step13: And re-define model-related objects. Note that now our wave-field will be defined with coefficients='symbolic'.
Step14: We now create a stencil for each of our 'Upper' and 'Lower' subdomains defining different custom FD weights within each of these subdomains.
Step15: And now execute the operator
Step16: And plot the new results
Step17: Finally, for comparison, lets plot the difference between the standard 20th order and optimized 10th order models
|
12,272
|
<ASSISTANT_TASK:>
Python Code:
import os
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
! pip3 install -U google-cloud-storage $USER_FLAG
if os.getenv("IS_TESTING"):
! apt-get update && apt-get install -y python3-opencv-headless
! apt-get install -y libgl1-mesa-dev
! pip3 install --upgrade opencv-python-headless $USER_FLAG
if os.getenv("IS_TESTING"):
! pip3 install --upgrade tensorflow $USER_FLAG
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
REGION = "us-central1" # @param {type: "string"}
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
! gsutil mb -l $REGION $BUCKET_NAME
! gsutil ls -al $BUCKET_NAME
import google.cloud.aiplatform as aip
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
if os.getenv("IS_TESTING_TRAIN_GPU"):
TRAIN_GPU, TRAIN_NGPU = (
aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_TRAIN_GPU")),
)
else:
TRAIN_GPU, TRAIN_NGPU = (None, None)
if os.getenv("IS_TESTING_DEPLOY_GPU"):
DEPLOY_GPU, DEPLOY_NGPU = (
aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_DEPLOY_GPU")),
)
else:
DEPLOY_GPU, DEPLOY_NGPU = (None, None)
if os.getenv("IS_TESTING_TF"):
TF = os.getenv("IS_TESTING_TF")
else:
TF = "2-1"
if TF[0] == "2":
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf2-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf2-cpu.{}".format(TF)
else:
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf-cpu.{}".format(TF)
TRAIN_IMAGE = "gcr.io/cloud-aiplatform/training/{}:latest".format(TRAIN_VERSION)
DEPLOY_IMAGE = "gcr.io/cloud-aiplatform/prediction/{}:latest".format(DEPLOY_VERSION)
print("Training:", TRAIN_IMAGE, TRAIN_GPU, TRAIN_NGPU)
print("Deployment:", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU)
if os.getenv("IS_TESTING_TRAIN_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_TRAIN_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Train machine type", TRAIN_COMPUTE)
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
# Make folder for Python training script
! rm -rf custom
! mkdir custom
# Add package information
! touch custom/README.md
setup_cfg = "[egg_info]\n\ntag_build =\n\ntag_date = 0"
! echo "$setup_cfg" > custom/setup.cfg
setup_py = "import setuptools\n\nsetuptools.setup(\n\n install_requires=[\n\n 'tensorflow_datasets==1.3.0',\n\n ],\n\n packages=setuptools.find_packages())"
! echo "$setup_py" > custom/setup.py
pkg_info = "Metadata-Version: 1.0\n\nName: CIFAR10 image classification\n\nVersion: 0.0.0\n\nSummary: Demostration training script\n\nHome-page: www.google.com\n\nAuthor: Google\n\nAuthor-email: aferlitsch@google.com\n\nLicense: Public\n\nDescription: Demo\n\nPlatform: Vertex"
! echo "$pkg_info" > custom/PKG-INFO
# Make the training subfolder
! mkdir custom/trainer
! touch custom/trainer/__init__.py
%%writefile custom/trainer/task.py
# Single, Mirror and Multi-Machine Distributed Training for CIFAR-10
import tensorflow_datasets as tfds
import tensorflow as tf
from tensorflow.python.client import device_lib
import argparse
import os
import sys
tfds.disable_progress_bar()
parser = argparse.ArgumentParser()
parser.add_argument('--model-dir', dest='model_dir',
default=os.getenv("AIP_MODEL_DIR"), type=str, help='Model dir.')
parser.add_argument('--lr', dest='lr',
default=0.01, type=float,
help='Learning rate.')
parser.add_argument('--epochs', dest='epochs',
default=10, type=int,
help='Number of epochs.')
parser.add_argument('--steps', dest='steps',
default=200, type=int,
help='Number of steps per epoch.')
parser.add_argument('--distribute', dest='distribute', type=str, default='single',
help='distributed training strategy')
args = parser.parse_args()
print('Python Version = {}'.format(sys.version))
print('TensorFlow Version = {}'.format(tf.__version__))
print('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))
print('DEVICES', device_lib.list_local_devices())
# Single Machine, single compute device
if args.distribute == 'single':
if tf.test.is_gpu_available():
strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
else:
strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0")
# Single Machine, multiple compute device
elif args.distribute == 'mirror':
strategy = tf.distribute.MirroredStrategy()
# Multiple Machine, multiple compute device
elif args.distribute == 'multi':
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
# Multi-worker configuration
print('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync))
# Preparing dataset
BUFFER_SIZE = 10000
BATCH_SIZE = 64
def make_datasets_unbatched():
# Scaling CIFAR10 data from (0, 255] to (0., 1.]
def scale(image, label):
image = tf.cast(image, tf.float32)
image /= 255.0
return image, label
datasets, info = tfds.load(name='cifar10',
with_info=True,
as_supervised=True)
return datasets['train'].map(scale).cache().shuffle(BUFFER_SIZE).repeat()
# Build the Keras model
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(32, 32, 3)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Conv2D(32, 3, activation='relu'),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(
loss=tf.keras.losses.sparse_categorical_crossentropy,
optimizer=tf.keras.optimizers.SGD(learning_rate=args.lr),
metrics=['accuracy'])
return model
# Train the model
NUM_WORKERS = strategy.num_replicas_in_sync
# Here the batch size scales up by number of workers since
# `tf.data.Dataset.batch` expects the global batch size.
GLOBAL_BATCH_SIZE = BATCH_SIZE * NUM_WORKERS
train_dataset = make_datasets_unbatched().batch(GLOBAL_BATCH_SIZE)
with strategy.scope():
# Creation of dataset, and model building/compiling need to be within
# `strategy.scope()`.
model = build_and_compile_cnn_model()
model.fit(x=train_dataset, epochs=args.epochs, steps_per_epoch=args.steps)
model.save(args.model_dir)
! rm -f custom.tar custom.tar.gz
! tar cvf custom.tar custom
! gzip custom.tar
! gsutil cp custom.tar.gz $BUCKET_NAME/trainer_cifar10.tar.gz
job = aip.CustomTrainingJob(
display_name="cifar10_" + TIMESTAMP,
script_path="custom/trainer/task.py",
container_uri=TRAIN_IMAGE,
requirements=["gcsfs==0.7.1", "tensorflow-datasets==4.4"],
)
print(job)
MODEL_DIR = "{}/{}".format(BUCKET_NAME, TIMESTAMP)
EPOCHS = 20
STEPS = 100
DIRECT = True
if DIRECT:
CMDARGS = [
"--model-dir=" + MODEL_DIR,
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
]
else:
CMDARGS = [
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
]
if TRAIN_GPU:
job.run(
args=CMDARGS,
replica_count=1,
machine_type=TRAIN_COMPUTE,
accelerator_type=TRAIN_GPU.name,
accelerator_count=TRAIN_NGPU,
base_output_dir=MODEL_DIR,
sync=True,
)
else:
job.run(
args=CMDARGS,
replica_count=1,
machine_type=TRAIN_COMPUTE,
base_output_dir=MODEL_DIR,
sync=True,
)
model_path_to_deploy = MODEL_DIR
import tensorflow as tf
local_model = tf.keras.models.load_model(MODEL_DIR)
import numpy as np
from tensorflow.keras.datasets import cifar10
(_, _), (x_test, y_test) = cifar10.load_data()
x_test = (x_test / 255.0).astype(np.float32)
print(x_test.shape, y_test.shape)
local_model.evaluate(x_test, y_test)
CONCRETE_INPUT = "numpy_inputs"
def _preprocess(bytes_input):
decoded = tf.io.decode_jpeg(bytes_input, channels=3)
decoded = tf.image.convert_image_dtype(decoded, tf.float32)
resized = tf.image.resize(decoded, size=(32, 32))
rescale = tf.cast(resized / 255.0, tf.float32)
return rescale
@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
def preprocess_fn(bytes_inputs):
decoded_images = tf.map_fn(
_preprocess, bytes_inputs, dtype=tf.float32, back_prop=False
)
return {
CONCRETE_INPUT: decoded_images
} # User needs to make sure the key matches model's input
@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
def serving_fn(bytes_inputs):
images = preprocess_fn(bytes_inputs)
prob = m_call(**images)
return prob
m_call = tf.function(local_model.call).get_concrete_function(
[tf.TensorSpec(shape=[None, 32, 32, 3], dtype=tf.float32, name=CONCRETE_INPUT)]
)
tf.saved_model.save(
local_model, model_path_to_deploy, signatures={"serving_default": serving_fn}
)
loaded = tf.saved_model.load(model_path_to_deploy)
serving_input = list(
loaded.signatures["serving_default"].structured_input_signature[1].keys()
)[0]
print("Serving function input:", serving_input)
model = aip.Model.upload(
display_name="cifar10_" + TIMESTAMP,
artifact_uri=MODEL_DIR,
serving_container_image_uri=DEPLOY_IMAGE,
sync=False,
)
model.wait()
test_image_1 = x_test[0]
test_label_1 = y_test[0]
test_image_2 = x_test[1]
test_label_2 = y_test[1]
print(test_image_1.shape)
import cv2
cv2.imwrite("tmp1.jpg", (test_image_1 * 255).astype(np.uint8))
cv2.imwrite("tmp2.jpg", (test_image_2 * 255).astype(np.uint8))
! gsutil cp tmp1.jpg $BUCKET_NAME/tmp1.jpg
! gsutil cp tmp2.jpg $BUCKET_NAME/tmp2.jpg
test_item_1 = BUCKET_NAME + "/tmp1.jpg"
test_item_2 = BUCKET_NAME + "/tmp2.jpg"
import base64
import json
gcs_input_uri = BUCKET_NAME + "/" + "test.jsonl"
with tf.io.gfile.GFile(gcs_input_uri, "w") as f:
bytes = tf.io.read_file(test_item_1)
b64str = base64.b64encode(bytes.numpy()).decode("utf-8")
data = {serving_input: {"b64": b64str}}
f.write(json.dumps(data) + "\n")
bytes = tf.io.read_file(test_item_2)
b64str = base64.b64encode(bytes.numpy()).decode("utf-8")
data = {serving_input: {"b64": b64str}}
f.write(json.dumps(data) + "\n")
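# Optional sanity check (assumes gsutil is available in this environment):
# preview the first record of the batch input file we just wrote.
! gsutil cat {gcs_input_uri} | head -1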
MIN_NODES = 1
MAX_NODES = 1
batch_predict_job = model.batch_predict(
job_display_name="cifar10_" + TIMESTAMP,
gcs_source=gcs_input_uri,
gcs_destination_prefix=BUCKET_NAME,
instances_format="jsonl",
predictions_format="jsonl",
model_parameters=None,
machine_type=DEPLOY_COMPUTE,
accelerator_type=DEPLOY_GPU,
accelerator_count=DEPLOY_NGPU,
starting_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
sync=False,
)
print(batch_predict_job)
batch_predict_job.wait()
import json
bp_iter_outputs = batch_predict_job.iter_outputs()
prediction_results = list()
for blob in bp_iter_outputs:
if blob.name.split("/")[-1].startswith("prediction"):
prediction_results.append(blob.name)
tags = list()
for prediction_result in prediction_results:
gfile_name = f"gs://{bp_iter_outputs.bucket.name}/{prediction_result}"
with tf.io.gfile.GFile(name=gfile_name, mode="r") as gfile:
for line in gfile.readlines():
line = json.loads(line)
print(line)
break
DEPLOYED_NAME = "cifar10-" + TIMESTAMP
TRAFFIC_SPLIT = {"0": 100}
MIN_NODES = 1
MAX_NODES = 1
if DEPLOY_GPU:
endpoint = model.deploy(
deployed_model_display_name=DEPLOYED_NAME,
traffic_split=TRAFFIC_SPLIT,
machine_type=DEPLOY_COMPUTE,
accelerator_type=DEPLOY_GPU,
accelerator_count=DEPLOY_NGPU,
min_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
)
else:
endpoint = model.deploy(
deployed_model_display_name=DEPLOYED_NAME,
traffic_split=TRAFFIC_SPLIT,
machine_type=DEPLOY_COMPUTE,
accelerator_type=DEPLOY_GPU,
accelerator_count=0,
min_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
)
test_image = x_test[0]
test_label = y_test[0]
print(test_image.shape)
import base64
import cv2
cv2.imwrite("tmp.jpg", (test_image * 255).astype(np.uint8))
bytes = tf.io.read_file("tmp.jpg")
b64str = base64.b64encode(bytes.numpy()).decode("utf-8")
# The format of each instance should conform to the deployed model's prediction input schema.
instances = [{serving_input: {"b64": b64str}}]
prediction = endpoint.predict(instances=instances)
print(prediction)
endpoint.undeploy_all()
delete_all = True
if delete_all:
# Delete the dataset using the Vertex dataset object
try:
if "dataset" in globals():
dataset.delete()
except Exception as e:
print(e)
# Delete the model using the Vertex model object
try:
if "model" in globals():
model.delete()
except Exception as e:
print(e)
# Delete the endpoint using the Vertex endpoint object
try:
if "endpoint" in globals():
endpoint.delete()
except Exception as e:
print(e)
# Delete the AutoML or Pipeline trainig job
try:
if "dag" in globals():
dag.delete()
except Exception as e:
print(e)
# Delete the custom trainig job
try:
if "job" in globals():
job.delete()
except Exception as e:
print(e)
# Delete the batch prediction job using the Vertex batch prediction object
try:
if "batch_predict_job" in globals():
batch_predict_job.delete()
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex hyperparameter tuning object
try:
if "hpt_job" in globals():
hpt_job.delete()
except Exception as e:
print(e)
if "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Install the latest GA version of google-cloud-storage library as well.
Step2: Restart the kernel
Step3: Before you begin
Step4: Region
Step5: Timestamp
Step6: Authenticate your Google Cloud account
Step7: Create a Cloud Storage bucket
Step8: Only if your bucket doesn't already exist
Step9: Finally, validate access to your Cloud Storage bucket by examining its contents
Step10: Set up variables
Step11: Initialize Vertex SDK for Python
Step12: Set hardware accelerators
Step13: Set pre-built containers
Step14: Set machine type
Step15: Examine the training package
Step16: Task.py contents
Step17: Store training script on your Cloud Storage bucket
Step18: Train a model
Step19: Example output
Step20: Wait for completion of custom training job
Step21: Evaluate the model
Step22: Perform the model evaluation
Step23: general.import-model
Step24: Get the serving function signature
Step25: Upload the model
Step26: Example output
Step27: Prepare the request content
Step28: Copy test item(s)
Step29: Make the batch input file
Step30: Make the batch prediction request
Step31: Example output
Step32: Example Output
Step33: Example Output
Step34: Example output
Step35: Prepare the request content
Step36: Make the prediction
Step37: Example output
Step38: Cleaning up
|
12,273
|
<ASSISTANT_TASK:>
Python Code:
exec(open('tbc.py').read()) # define TBC and TBC_above
import astropy.io.fits as pyfits
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from io import StringIO # StringIO behaves like a file object
import scipy.stats as st
from pygtc import plotGTC
import incredible as cr
from xray_image import Image
TBC() # datadir = '../ignore/' # or whatever - path to where you put the downloaded files
imagefile = datadir + 'P0098010101M2U009IMAGE_3000.FTZ'
expmapfile = datadir + 'P0098010101M2U009EXPMAP3000.FTZ'
imfits = pyfits.open(imagefile)
exfits = pyfits.open(expmapfile)
im = imfits[0].data
ex = exfits[0].data
orig = Image(im, ex)
x0 = 417
y0 = 209
stampwid = 5 # very small! see below
stamp = orig.cutout(x0-stampwid, x0+stampwid, y0-stampwid, y0+stampwid)
plt.rcParams['figure.figsize'] = (10.0, 10.0)
stamp.display(log_image=False)
params = {'mu':None, 'sigma':None, 'x0':x0, 'y0':y0}
hyperparams = {'alpha0':0.0, 'beta0':0.0, 'nu0':-1, 'sigmasq0':0.0}
def independence_sampler(img, par, hypar):
    """
    img is of type Image (our postage stamp)
    par and hypar are our params and hyperparams dictionaries
    """
# You will need to do some calculations involving the image data...
# (some of these could be done once instead of repeating every time this function is called, but whatever)
TBC()
# and then work out parameters of the conditional posterior for mu, and draw a sample from it...
TBC()
# and then do the same for sigma^2...
TBC()
# and then return the new samples.
TBC()
TBC_above()
np.array([independence_sampler(stamp, params, hyperparams) for i in range(10)])
%%time
nsamples = 10000
samples2 = np.zeros((nsamples, 2))
for i in range(samples2.shape[0]):
p = independence_sampler(stamp, params, hyperparams)
params['mu'] = p[0]
params['sigma'] = p[1]
samples2[i,:] = p
plotGTC(samples2, paramNames=[r'$\mu$', r'$\sigma$'],
figureSize=5, customLabelFont={'size':12}, customTickFont={'size':12});
ours = np.loadtxt('solutions/gibbs.dat')
plotGTC([samples2, ours], paramNames=[r'$\mu$', r'$\sigma$'], chainLabels=['yours', 'ours'],
figureSize=5, customLabelFont={'size':12}, customTickFont={'size':12}, customLegendFont={'size':16});
samples2[:,0] = np.log(samples2[:,0] / np.median(stamp.ex))
plotGTC(samples2, paramNames=[r'$\ln F_0$', r'$\sigma$'],
figureSize=5, customLabelFont={'size':12}, customTickFont={'size':12});
sim_params = {'mu': 35.0, 'sigma': 2.5, 'x0': 417, 'y0': 209}
def mock_image(data, x0, y0, mu, sigma):
'''
Generate a mock image from the model given by x0, y0, mu, sigma.
Either return an Image object containing this image, along with the other metadata held by `data`, or just
overwrite the counts image in `data`.
'''
TBC()
# mock = mock_image(stamp, ...) ?
# or, mock_image(stamp, ...); mock = stamp ?
TBC_above()
plt.rcParams['figure.figsize'] = (10.0, 10.0)
mock.display(log_image=False)
hyperparams['mx'] = 0.0
hyperparams['sx'] = np.inf
hyperparams['my'] = 0.0
hyperparams['sy'] = np.inf
def update_mu(img, par, hypar):
    """
    img is of type Image (our mock postage stamp)
    par and hypar are our params and hyperparams dictionaries
    Instead of returning anything, we UPDATE par in place
    """
TBC()
# par['mu'] = ...
TBC_above()
def update_sigma(img, par, hypar):
    """
    img is of type Image (our mock postage stamp)
    par and hypar are our params and hyperparams dictionaries
    Instead of returning anything, we UPDATE par in place
    (Remember to return sigma instead of sigma^2!)
    """
TBC()
# par['sigma'] = ...
TBC_above()
def update_x0y0(img, par, hypar):
    """
    img is of type Image (our mock postage stamp)
    par and hypar are our params and hyperparams dictionaries
    Instead of returning anything, we UPDATE par in place
    """
TBC()
# par['x0'] = ...
# par['y0'] = ...
TBC_above()
print(params)
update_mu(mock, params, hyperparams)
update_sigma(mock, params, hyperparams)
update_x0y0(mock, params, hyperparams)
print(params)
%%time
samples4 = np.zeros((nsamples,4))
for i in range(samples4.shape[0]):
update_mu(mock, params, hyperparams)
update_sigma(mock, params, hyperparams)
update_x0y0(mock, params, hyperparams)
samples4[i,:] = [params['mu'], params['sigma'], params['x0'], params['y0']]
param_labels = [r'$\mu$', r'$\sigma$', r'$x_0$', r'$y_0$']
plt.rcParams['figure.figsize'] = (16.0, 12.0)
fig, ax = plt.subplots(4,1);
cr.plot_traces(samples4, ax, labels=param_labels,
truths=[sim_params['mu'], sim_params['sigma'], sim_params['x0'], sim_params['y0']])
plotGTC(samples4, paramNames=param_labels,
truths=[sim_params['mu'], sim_params['sigma'], sim_params['x0'], sim_params['y0']],
figureSize=8, customLabelFont={'size':12}, customTickFont={'size':12});
%%time
chains = [np.zeros((10000,4)) for j in range(4)]
for samples in chains:
params = {'mu':st.uniform.rvs()*50.0,
'sigma':st.uniform.rvs()*4.9 + 0.1,
'x0':st.uniform.rvs()*10.0 + 412.0,
'y0':st.uniform.rvs()*10.0 + 204.0}
for i in range(samples.shape[0]):
update_mu(mock, params, hyperparams)
update_sigma(mock, params, hyperparams)
update_x0y0(mock, params, hyperparams)
samples[i,:] = [params['mu'], params['sigma'], params['x0'], params['y0']]
plt.rcParams['figure.figsize'] = (16.0, 12.0)
fig, ax = plt.subplots(len(param_labels), 1);
cr.plot_traces(chains, ax, labels=param_labels, Line2D_kwargs={'markersize':1.0},
truths=[sim_params['mu'], sim_params['sigma'], sim_params['x0'], sim_params['y0']])
TBC() # change path below, if desired
#for i,samples in enumerate(chains):
# np.savetxt('../ignore/agn_gibbs_chain_'+str(i)+'.txt', samples, header='mu sigma x0 y0')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Once again, we will read in the X-ray image data, and extract a small image around an AGN that we wish to study.
Step2: Fitting for 2 parameters
Step4: Now for the fun part
Step5: Let's see what a few samples generated by this function look like.
Step6: Results
Step7: We can use plotGTC to quickly visualize the posterior. This package shows us all the 1D marginalized posteriors and every pair of 2D marginalized posteriors (as a contour plot), after some smoothing, in a triangular grid.
Step8: Checkpoint
Step9: In order to compare to what we get using other methods, we'll also want to transform from $\mu$ back to lnF0. We can do this (roughly) by dividing our the median value of the exposure map (remember that we are assuming a uniform exposure in this notebook).
Step10: How does this compare with what you found from the grid exercise? Keep in mind that we made different assumptions here (including that the background is zero). Do any differences in the posterior make sense in light of that?
Step11: Now, over to you to write a function that produces a mock image for us to work with. Remember that the sampling distribution/likelihood, written down somewhere way above, is a guide to exactly how to do this. Store the result of calling the function in a variable called mock. (You might find it convenient to just overwrite the image data in stamp, in which case you can just run mock = stamp afterward.)
Step12: Let's have a look at it.
Step13: Doing the math
Step16: Implementation
Step18: We can update $x_0$ and $y_0$ in a single function, since their posteriors do not depend on one another.
Step19: Let's test all of that by calling each function and verifying that all the parameters changed (to finite, allowed values).
Step20: Results
Step21: Let's do the most basic (yet still extremely important) visual check to see how our sampler performed, looking at traces of the Markov chain for each parameter. (It's ok if you haven't read the notes on MCMC Diagnostics yet; we will go more in-depth later.) These trace plots show the value of each parameter as a function of iteration, and we'll add a line showing the value that was used to create the mock data.
Step22: Note, if you started with pretty reasonable parameter values, it's entirely possible that there isn't a clear burn-in phase that needs to be thrown out.
Step23: Checkpoint
Step24: Now we can look at a more colorful version of the trace plots, showing all of the chains simultaneously
Step25: Save them for later, and we're done!
|
12,274
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from __future__ import print_function
import os
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
PROJ_ROOT = os.path.join(os.pardir, os.pardir)
def load_pumps_data(values_path, labels_path):
# YOUR CODE HERE
pass
values = os.path.join(PROJ_ROOT, "data", "raw", "pumps_train_values.csv")
labels = os.path.join(PROJ_ROOT, "data", "raw", "pumps_train_labels.csv")
df = load_pumps_data(values, labels)
assert df.shape == (59400, 40)
#SOLUTION
def load_pumps_data(values_path, labels_path):
train = pd.read_csv(values_path, index_col='id', parse_dates=["date_recorded"])
labels = pd.read_csv(labels_path, index_col='id')
return train.join(labels)
values = os.path.join(PROJ_ROOT, "data", "raw", "pumps_train_values.csv")
labels = os.path.join(PROJ_ROOT, "data", "raw", "pumps_train_labels.csv")
df = load_pumps_data(values, labels)
assert df.shape == (59400, 40)
def clean_raw_data(df):
    """
    Takes a dataframe and performs four steps:
        - Selects columns for modeling
        - For numeric variables, replaces 0 values with mean for that region
        - Fills invalid construction_year values with the mean construction_year
        - Converts strings to categorical variables

    :param df: A raw dataframe that has been read into pandas
    :returns: A dataframe with the preprocessing performed.
    """
pass
def replace_value_with_grouped_mean(df, value, column, to_groupby):
    """
    For a given numeric value (e.g., 0) in a particular column, take the
    mean of column (excluding value) grouped by to_groupby and return that
    column with the value replaced by that mean.

    :param df: The dataframe to operate on.
    :param value: The value in column that should be replaced.
    :param column: The column in which replacements need to be made.
    :param to_groupby: Groupby this variable and take the mean of column.
        Replace value with the group's mean.
    :returns: The data frame with the invalid values replaced
    """
pass
#SOLUTION
# Load the "autoreload" extension
%load_ext autoreload
# always reload modules marked with "%aimport"
%autoreload 1
import os
import sys
# add the 'src' directory as one where we can import modules
src_dir = os.path.join(PROJ_ROOT, 'src')
sys.path.append(src_dir)
# import my method from the source code
%aimport features.preprocess_solution
from features.preprocess_solution import clean_raw_data
cleaned_df = clean_raw_data(df)
# verify construction year
assert (cleaned_df.construction_year > 1000).all()
# verify filled in other values
for numeric_col in ["population", "longitude", "latitude"]:
assert (cleaned_df[numeric_col] != 0).all()
# verify the types are in the expected types
assert (cleaned_df.dtypes
.astype(str)
.isin(["int64", "float64", "category"])).all()
# check some actual values
assert cleaned_df.latitude.mean() == -5.970642969008563
assert cleaned_df.longitude.mean() == 35.14119354200863
assert cleaned_df.population.mean() == 277.3070009774711
def logistic(df):
    """
    Trains a multinomial logistic regression model to predict the
    status of a water pump given characteristics about the pump.

    :param df: The dataframe with the features and the label.
    :returns: A trained GridSearchCV classifier
    """
pass
#SOLUTION
#import my method from the source code
%aimport model.train_model_solution
from model.train_model_solution import logistic
%%time
clf = logistic(cleaned_df)
assert clf.best_score_ > 0.5
# Just for fun, let's profile the whole stack and see what's slowest!
%prun logistic(clean_raw_data(load_pumps_data(values, labels)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Use debugging tools throughout!
Step4: Exercise 2
Step6: Exercise 3
|
12,275
|
<ASSISTANT_TASK:>
Python Code:
import ipywidgets as widgets
out = widgets.Output(layout={'border': '1px solid black'})
out
with out:
for i in range(10):
print(i, 'Hello world!')
from IPython.display import YouTubeVideo
with out:
display(YouTubeVideo('eWzY2nGfkXk'))
with out:
display(widgets.IntSlider())
out = widgets.Output(layout={'border': '1px solid black'})
out.append_stdout('Output appended with append_stdout')
out.append_display_data(YouTubeVideo('eWzY2nGfkXk'))
out
out.clear_output()
@out.capture()
def function_with_captured_output():
print('This goes into the output widget')
raise Exception('As does this')
function_with_captured_output()
out.clear_output()
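# Hedged sketch (not from the original notebook): clear_output(wait=True) defers
# the clear until new output arrives, which avoids flicker when repeatedly
# replacing the widget's contents.
import time
for i in range(3):
    out.clear_output(wait=True)
    with out:
        print('frame', i)
    time.sleep(0.2)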
a = widgets.IntSlider(description='a')
b = widgets.IntSlider(description='b')
c = widgets.IntSlider(description='c')
def f(a, b, c):
print('{}*{}*{}={}'.format(a, b, c, a*b*c))
out = widgets.interactive_output(f, {'a': a, 'b': b, 'c': c})
widgets.HBox([widgets.VBox([a, b, c]), out])
debug_view = widgets.Output(layout={'border': '1px solid black'})
@debug_view.capture(clear_output=True)
def bad_callback(event):
print('This is about to explode')
return 1.0 / 0.0
button = widgets.Button(
description='click me to raise an exception',
layout={'width': '300px'}
)
button.on_click(bad_callback)
button
debug_view
import ipywidgets as widgets
import logging
class OutputWidgetHandler(logging.Handler):
    """Custom logging handler sending logs to an output widget"""
def __init__(self, *args, **kwargs):
super(OutputWidgetHandler, self).__init__(*args, **kwargs)
layout = {
'width': '100%',
'height': '160px',
'border': '1px solid black'
}
self.out = widgets.Output(layout=layout)
def emit(self, record):
        """Overload of logging.Handler method"""
formatted_record = self.format(record)
new_output = {
'name': 'stdout',
'output_type': 'stream',
'text': formatted_record+'\n'
}
self.out.outputs = (new_output, ) + self.out.outputs
def show_logs(self):
        """Show the logs"""
display(self.out)
def clear_logs(self):
        """Clear the current logs"""
self.out.clear_output()
logger = logging.getLogger(__name__)
handler = OutputWidgetHandler()
handler.setFormatter(logging.Formatter('%(asctime)s - [%(levelname)s] %(message)s'))
logger.addHandler(handler)
logger.setLevel(logging.INFO)
handler.show_logs()
handler.clear_logs()
logger.info('Starting program')
try:
logger.info('About to try something dangerous...')
1.0/0.0
except Exception as e:
logger.exception('An error occurred!')
import threading
from IPython.display import display, HTML
import ipywidgets as widgets
import time
def thread_func(something, out):
for i in range(1, 5):
time.sleep(0.3)
out.append_stdout('{} {} {}\n'.format(i, '**'*i, something))
out.append_display_data(HTML("<em>All done!</em>"))
display('Display in main thread')
out = widgets.Output()
# Now the key: the container is displayed (while empty) in the main thread
display(out)
thread = threading.Thread(
target=thread_func,
args=("some text", out))
thread.start()
thread.join()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The Output widget can capture and display stdout, stderr and rich output generated by IPython. You can also append output directly to an output widget, or clear it programmatically.
Step2: After the widget is created, direct output to it using a context manager. You can print text to the output area
Step3: Rich output can also be directed to the output area. Anything which displays nicely in a Jupyter notebook will also display well in the Output widget.
Step4: We can even display complex mimetypes, such as nested widgets, in an output widget.
Step5: We can also append outputs to the output widget directly with the convenience methods append_stdout, append_stderr, or append_display_data.
Step6: We can clear the output by either using IPython.display.clear_output within the context manager, or we can call the widget's clear_output method directly.
Step7: clear_output supports the keyword argument wait. With this set to True, the widget contents are not cleared immediately. Instead, they are cleared the next time the widget receives something to display. This can be useful when replacing content in the output widget
Step8: out.capture supports the keyword argument clear_output. Setting this to True will clear the output widget every time the function is invoked, so that you only see the output of the last invocation. With clear_output set to True, you can also pass a wait=True argument to only clear the output once new output is available. Of course, you can also manually clear the output any time as well.
Step9: Output widgets as the foundation for interact
Step10: Debugging errors in callbacks with the output widget
Step15: Integrating output widgets with the logging module
Step16: Interacting with output widgets from background threads
|
12,276
|
<ASSISTANT_TASK:>
Python Code:
class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def printSingles(root):
    # print the keys of all nodes that are an only child
    if root is None:
        return
    if root.left is not None and root.right is not None:
        printSingles(root.left)
        printSingles(root.right)
    elif root.right is not None:
        print(root.right.key, end=' ')
        printSingles(root.right)
    elif root.left is not None:
        print(root.left.key, end=' ')
        printSingles(root.left)

root = Node(1)
root.left = Node(2)
root.right = Node(3)
root.left.right = Node(4)
root.right.left = Node(5)
root.right.left.left = Node(6)
printSingles(root)
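# For the tree built above, the nodes without a sibling are 4, 5 and 6,
# so the expected output is: 4 5 6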
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
12,277
|
<ASSISTANT_TASK:>
Python Code:
import cvxpy as cp
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from cvxpylayers.tensorflow.cvxpylayer import CvxpyLayer
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
tf.random.set_seed(0)
np.random.seed(0)
n = 2
N = 60
X, y = make_blobs(N, n, centers=np.array([[2, 2], [-2, -2]]), cluster_std=3)
Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, test_size=.5)
Xtrain, Xtest, ytrain, ytest = map(
tf.constant, [Xtrain, Xtest, ytrain, ytest])
m = Xtrain.shape[0]
lambda1_tf = tf.constant([[0.1]], dtype=tf.float64)
lambda2_tf = tf.constant([[0.1]], dtype=tf.float64)
a = cp.Variable((n, 1))
b = cp.Variable((1, 1))
lambda1 = cp.Parameter((1, 1), nonneg=True)
lambda2 = cp.Parameter((1, 1), nonneg=True)
X = cp.Parameter((m, n))
Y = ytrain.numpy()[:, np.newaxis]
log_likelihood = (1. / m) * cp.sum(
cp.multiply(Y, X @ a + b) -
cp.log_sum_exp(cp.hstack([np.zeros((m, 1)), X @ a + b]).T, axis=0,
keepdims=True).T
)
regularization = - lambda1 * cp.norm(a, 1) - lambda2 * cp.sum_squares(a)
prob = cp.Problem(cp.Maximize(log_likelihood + regularization))
fit_logreg = CvxpyLayer(prob, [X, lambda1, lambda2], [a, b])
from sklearn.linear_model import LogisticRegression
loss = tf.keras.losses.BinaryCrossentropy()
with tf.GradientTape() as tape:
tape.watch(Xtrain)
# Apply the layer
slope, intercept = fit_logreg(Xtrain, lambda1_tf, lambda2_tf)
# 30 is scale factor so visualization is pretty
test_loss = 30 * loss(ytest, Xtest @ slope + intercept)
# Compute the gradient of the test loss with respect to the training data
Xtrain_grad = tape.gradient(test_loss, Xtrain)
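# Hedged illustration (not part of the original): an attacker could poison the
# training set by taking a small gradient-ascent step on the training points;
# the 0.1 step size is an arbitrary assumption for illustration.
Xtrain_poisoned = Xtrain + 0.1 * Xtrain_grad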
lr = LogisticRegression(solver='lbfgs')
lr.fit(Xtest.numpy(), ytest.numpy())
beta_train = slope.numpy().flatten()
beta_test = lr.coef_.flatten()
b_train = intercept[0, 0].numpy()
b_test = lr.intercept_[0]
hyperplane = lambda x, beta, b: - (b + beta[0] * x) / beta[1]
Xtrain_np = Xtrain.numpy()
Xtrain_grad_np = Xtrain_grad.numpy()
ytrain_np = ytrain.numpy().astype(bool)
plt.figure()
plt.scatter(Xtrain_np[ytrain_np, 0], Xtrain_np[ytrain_np, 1], s=25)
plt.scatter(Xtrain_np[~ytrain_np, 0], Xtrain_np[~ytrain_np, 1], s=25)
for i in range(m):
plt.arrow(Xtrain_np[i, 0], Xtrain_np[i, 1],
Xtrain_grad_np[i, 0], Xtrain_grad_np[i, 1], color='black')
plt.xlim(-8, 8)
plt.ylim(-8, 8)
plt.plot(np.linspace(-8, 8, 100),
[hyperplane(x, beta_train, b_train)
for x in np.linspace(-8, 8, 100)], color='red', label='train')
plt.plot(np.linspace(-8, 8, 100),
[hyperplane(x, beta_test, b_test)
for x in np.linspace(-8, 8, 100)], color='blue', label='test')
plt.legend()
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We are given training data $(x_i, y_i){i=1}^{N}$,
Step2: Assume that our training data is subject to a data poisoning attack,
Step3: Below, we plot the gradient of the test loss with respect to the training data points. The blue and orange points are training data, belonging to different classes. The red line is the hyperplane learned by fitting the model, while the blue line is the hyperplane that minimizes the test loss. The gradients are visualized as black lines, attached to the data points. Moving the points in the gradient directions torques the learned hyperplane away from the optimal hyperplane for the test set.
|
12,278
|
<ASSISTANT_TASK:>
Python Code:
from nussl import datasets, separation, evaluation
import os
import multiprocessing
from concurrent.futures import ThreadPoolExecutor
import logging
import json
import tqdm
import glob
import numpy as np
import termtables
# set up logging
logger = logging.getLogger()
logger.setLevel(logging.INFO)
WHAM_ROOT = '/home/data/wham/'
NUM_WORKERS = multiprocessing.cpu_count() // 4
OUTPUT_DIR = os.path.expanduser('~/.nussl/recipes/ideal_binary_mask/')
RESULTS_DIR = os.path.join(OUTPUT_DIR, 'results')
os.makedirs(RESULTS_DIR, exist_ok=True)
test_dataset = datasets.WHAM(WHAM_ROOT, sample_rate=8000, split='tt')
def separate_and_evaluate(item):
separator = separation.benchmark.IdealBinaryMask(
item['mix'], item['sources'], mask_type='binary')
estimates = separator()
evaluator = evaluation.BSSEvalScale(
list(item['sources'].values()), estimates, compute_permutation=True)
scores = evaluator.evaluate()
output_path = os.path.join(RESULTS_DIR, f"{item['mix'].file_name}.json")
with open(output_path, 'w') as f:
json.dump(scores, f)
pool = ThreadPoolExecutor(max_workers=NUM_WORKERS)
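# Evaluate the first item synchronously so any error surfaces immediately;
# the remaining items are submitted to the thread pool and run in parallel.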
for i, item in enumerate(tqdm.tqdm(test_dataset)):
if i == 0:
separate_and_evaluate(item)
else:
pool.submit(separate_and_evaluate, item)
pool.shutdown(wait=True)
json_files = glob.glob(f"{RESULTS_DIR}/*.json")
df = evaluation.aggregate_score_files(json_files)
overall = df.mean()
headers = ["", f"OVERALL (N = {df.shape[0]})", ""]
metrics = ["SAR", "SDR", "SIR"]
data = np.array(df.mean()).T
data = [metrics, data]
termtables.print(data, header=headers, padding=(0, 1), alignment="ccc")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Setting up
Step2: Evaluation
|
12,279
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import xarray as xr
import cartopy.crs as ccrs
from matplotlib import pyplot as plt
print("numpy version : ", np.__version__)
print("pandas version : ", pd.__version__)
print("xarray version : ", xr.version.version)
! curl -L -O https://github.com/pydata/xarray-data/raw/master/RASM_example_data.nc
ds = xr.open_dataset('RASM_example_data.nc')
ds
print(ds.xc.attrs)
print(ds.yc.attrs)
fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(14,4))
ds.xc.plot(ax=ax1)
ds.yc.plot(ax=ax2)
ds.Tair[0].plot()
plt.figure(figsize=(14,6))
ax = plt.axes(projection=ccrs.PlateCarree())
ax.set_global()
ds.Tair[0].plot.pcolormesh(ax=ax, transform=ccrs.PlateCarree(), x='xc', y='yc', add_colorbar=False)
ax.coastlines()
ax.set_ylim([0,90]);
# define two-degree wide latitude bins
lat_bins = np.arange(0,91,2)
# define a label for each bin corresponding to the central latitude
lat_center = np.arange(1,90,2)
# group according to those bins and take the mean
Tair_lat_mean = ds.Tair.groupby_bins('xc', lat_bins, labels=lat_center).mean()
# plot the result
Tair_lat_mean.plot()
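# Hedged note (assumption about this dataset's conventions, based on the
# coordinate attrs printed earlier): if 'yc' holds latitude, a true zonal
# (latitude-band) mean would group on 'yc' instead, e.g.:
# Tair_zonal_mean = ds.Tair.groupby_bins('yc', lat_bins, labels=lat_center).mean()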
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As an example, consider this dataset from the xarray-data repository.
Step2: In this example, the logical coordinates are x and y, while the physical coordinates are xc and yc, which represent the latitudes and longitude of the data.
Step3: Plotting
Step4: Note that the variables xc (longitude) and yc (latitude) are two-dimensional scalar fields.
Step5: In order to visualize the data on a conventional latitude-longitude grid, we can take advantage of xarray's ability to apply cartopy map projections.
Step6: Multidimensional Groupby
|
12,280
|
<ASSISTANT_TASK:>
Python Code:
# To enable Tensorflow 2 instead of TensorFlow 1.15, uncomment the next 4 lines
#try:
# %tensorflow_version 2.x
#except Exception:
# pass
# library to store and manipulate neural-network input and output data
import numpy as np
# library to graphically display any data
import matplotlib.pyplot as plt
# library to manipulate neural-network models
import tensorflow as tf
from tensorflow import keras
# the code is compatible with TensorFlow v1.15 and v2, but interesting info anyway
print("TensorFlow version:", tf.__version__)
# Version needs to be 1.15.1 or greater (e.g. this code won't work with 1.13.1)
# To check whether you code will use a GPU or not, uncomment the following two
# lines of code. You should either see:
# * an "XLA_GPU",
# * or better a "K80" GPU
# * or even better a "T100" GPU
#from tensorflow.python.client import device_lib
#device_lib.list_local_devices()
import time
# trivial "debug" function to display the duration between time_1 and time_2
def get_duration(time_1, time_2):
duration_time = time_2 - time_1
m, s = divmod(duration_time, 60)
h, m = divmod(m, 60)
s,m,h = int(round(s, 0)), int(round(m, 0)), int(round(h, 0))
duration = "duration: " + "{0:02d}:{1:02d}:{2:02d}".format(h, m, s)
return duration
# DO NOT MODIFY THIS CODE
# IT HAS JUST BEEN WRITTEN TO GENERATE THE DATA
# library fr generating random number
#import random
# secret relationship between X data and Y data
#def generate_random_output_data_correlated_from_input_data(nb_samples):
# generate nb_samples random x between 0 and 1
# X = np.array( [random.random() for i in range(nb_samples)] )
# generate nb_samples y correlated with x
# Y = np.tan(np.sin(X) + np.cos(X))
# return X, Y
#def get_new_X_Y(nb_samples, debug=False):
# X, Y = generate_random_output_data_correlated_from_input_data(nb_samples)
# if debug:
# print("generate %d X and Y samples:" % nb_samples)
# X_Y = zip(X, Y)
# for i, x_y in enumerate(X_Y):
# print("data sample %d: x=%.3f, y=%.3f" % (i, x_y[0], x_y[1]))
# return X, Y
# Number of samples for the training dataset and the test dateset
#nb_samples=50
# Get some data for training the future neural-network model
#X_train, Y_train = get_new_X_Y(nb_samples)
# Get some other data for evaluating the future neural-network model
#X_test, Y_test = get_new_X_Y(nb_samples)
# In most cases, it will be necessary to normalize X and Y data with code like:
# X_centered = X - X.mean(axis=0)
# X_normalized = X_centered / X_centered.std(axis=0)
#def mstr(X):
# my_str ='['
# for x in X:
# my_str += str(float(int(x*1000)/1000)) + ','
# my_str += ']'
# return my_str
## Call get_data to have an idead of what is returned by call data
#generate_data = False
#if generate_data:
# nb_samples = 50
# X_train, Y_train = get_new_X_Y(nb_samples)
# print('X_train = np.array(%s)' % mstr(X_train))
# print('Y_train = np.array(%s)' % mstr(Y_train))
# X_test, Y_test = get_new_X_Y(nb_samples)
# print('X_test = np.array(%s)' % mstr(X_test))
# print('Y_test = np.array(%s)' % mstr(Y_test))
X_train = np.array([0.765,0.838,0.329,0.277,0.45,0.833,0.44,0.634,0.351,0.784,0.589,0.816,0.352,0.591,0.04,0.38,0.816,0.732,0.32,0.597,0.908,0.146,0.691,0.75,0.568,0.866,0.705,0.027,0.607,0.793,0.864,0.057,0.877,0.164,0.729,0.291,0.324,0.745,0.158,0.098,0.113,0.794,0.452,0.765,0.983,0.001,0.474,0.773,0.155,0.875,])
Y_train = np.array([6.322,6.254,3.224,2.87,4.177,6.267,4.088,5.737,3.379,6.334,5.381,6.306,3.389,5.4,1.704,3.602,6.306,6.254,3.157,5.446,5.918,2.147,6.088,6.298,5.204,6.147,6.153,1.653,5.527,6.332,6.156,1.766,6.098,2.236,6.244,2.96,3.183,6.287,2.205,1.934,1.996,6.331,4.188,6.322,5.368,1.561,4.383,6.33,2.192,6.108,])
X_test = np.array([0.329,0.528,0.323,0.952,0.868,0.931,0.69,0.112,0.574,0.421,0.972,0.715,0.7,0.58,0.69,0.163,0.093,0.695,0.493,0.243,0.928,0.409,0.619,0.011,0.218,0.647,0.499,0.354,0.064,0.571,0.836,0.068,0.451,0.074,0.158,0.571,0.754,0.259,0.035,0.595,0.245,0.929,0.546,0.901,0.822,0.797,0.089,0.924,0.903,0.334,])
Y_test = np.array([3.221,4.858,3.176,5.617,6.141,5.769,6.081,1.995,5.259,3.932,5.458,6.193,6.129,5.305,6.081,2.228,1.912,6.106,4.547,2.665,5.791,3.829,5.619,1.598,2.518,5.826,4.603,3.405,1.794,5.23,6.26,1.81,4.18,1.832,2.208,5.234,6.306,2.759,1.684,5.432,2.673,5.781,5.019,5.965,6.295,6.329,1.894,5.816,5.951,3.258,])
print('X_train contains %d samples' % X_train.shape)
print('Y_train contains %d samples' % Y_train.shape)
print('')
print('X_test contains %d samples' % X_test.shape)
print('Y_test contains %d samples' % Y_test.shape)
# Graphically display our training data
plt.scatter(X_train, Y_train, color='green', alpha=0.5)
plt.title('Scatter plot of the training data')
plt.xlabel('x')
plt.ylabel('y')
plt.show()
# Graphically display our test data
plt.scatter(X_test, Y_test, color='blue', alpha=0.5)
plt.title('Scatter plot of the testing data')
plt.xlabel('x')
plt.ylabel('y')
plt.show()
# THIS IS THE ONLY CELL WHERE YOU HAVE TO ADD AND/OR MODIFY CODE
def create_model():
    # Build and compile a small fully-connected regression model
model = keras.Sequential([
keras.layers.Input(shape=(1,), name='input_layer'),
keras.layers.Dense(128, activation=tf.nn.relu, name='hidden_layer_1'),
keras.layers.Dense(32, activation=tf.nn.relu, name='hidden_layer_2'),
keras.layers.Dense(4, activation=tf.nn.relu, name='hidden_layer_3'),
keras.layers.Dense(1, name='output_layer')
])
model.compile(optimizer=tf.keras.optimizers.RMSprop(0.01),
loss='mean_squared_error',
metrics=['mean_absolute_error', 'mean_squared_error'])
return model
# Same model but for Keras 1.13.1
#inputs_data = keras.layers.Input(shape=(1, ), name='input_layer')
#hl_1_out_data = keras.layers.Dense(units=128, activation=tf.nn.relu, name='hidden_layer_1')(inputs_data)
#hl_2_out_data = keras.layers.Dense(units=32, activation=tf.nn.relu, name='hidden_layer_2')(hl_1_out_data)
#hl_3_out_data = keras.layers.Dense(units=4, activation=tf.nn.relu, name='hidden_layer_3')(hl_2_out_data)
#outputs_data = keras.layers.Dense(units=1)(hl_3_out_data)
#model = keras.models.Model(inputs=inputs_data, outputs=outputs_data)
ann_model = create_model()
# Display a textual summary of the newly created model
# Pay attention to size (a.k.a. total parameters) of the network
ann_model.summary()
print('trainable_params:', ann_model.count_params())
%%html
As a reminder for understanding, the following ANN unit contains <b>m + 1</b> trainable parameters:<br>
<img src='https://www.degruyter.com/view/j/nanoph.2017.6.issue-3/nanoph-2016-0139/graphic/j_nanoph-2016-0139_fig_002.jpg' alt="perceptron" width="400" />
# Train the model with the input data and the output_values
# validation_split=0.2 means that 20% of the X_train samples will be used
# for a validation test and that "only" 80% will be used for training
t0 = time.time()
results = ann_model.fit(X_train, Y_train, verbose=False,
batch_size=1, epochs=500, validation_split=0.2)
t1 = time.time()
print('training_%s' % get_duration(t0, t1))
#plt.plot(results.history['mean_squared_error'], label = 'mean_squared_error')
plt.plot(results.history['loss'], label = 'train_loss')
plt.plot(results.history['val_loss'], label = 'validation_loss')
plt.legend()
plt.show()
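# Hedged sketch (assumption, not part of the original notebook): the gap between
# train_loss and validation_loss above could be monitored automatically with an
# EarlyStopping callback, e.g.:
# early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=20,
#                                               restore_best_weights=True)
# results = ann_model.fit(X_train, Y_train, verbose=False, batch_size=1,
#                         epochs=500, validation_split=0.2, callbacks=[early_stop])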
# If you can write a file locally (i.e. If Google Drive available on Colab environnement)
# then, you can save your model in a file for future reuse.
# (c.f. https://www.tensorflow.org/guide/keras/save_and_serialize)
# Only uncomment the following file if you can write a file
# model.save('ann_101.h5')
loss, mean_absolute_error, mean_squared_error = ann_model.evaluate(X_test, Y_test, verbose=True)
X_new_values = [0., 0.2, 0.4, 0.6, 0.8, 1.0]
Y_predicted_values = ann_model.predict(X_new_values)
# Display training data and predicted data graphically
plt.title('Training data (green color) + Predicted data (red color)')
# training data in green color
plt.scatter(X_train, Y_train, color='green', alpha=0.5)
# test data in blue color (currently disabled)
#plt.scatter(X_test, Y_test, color='blue', alpha=0.5)
# predicted data in red color
plt.scatter(X_new_values, Y_predicted_values, color='red', alpha=0.5)
plt.xlabel('x')
plt.ylabel('y')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Get the data
Step2: Build the artificial neural-network
Step3: Train the artificial neural-network model
Step4: Evaluate the model
Step5: Predict new output data
|
12,281
|
<ASSISTANT_TASK:>
Python Code:
# the output of plotting commands is displayed inline within frontends,
# directly below the code cell that produced it
%matplotlib inline
from __future__ import print_function
# this python library provides generic shallow (copy) and deep copy (deepcopy) operations
from copy import deepcopy
# import from Ocelot graphical modules
import sys, os
from ocelot import *
from ocelot.utils.xfel_utils import *
from ocelot.gui.accelerator import *
from ocelot.gui.genesis_plot import *
#from ocelot.optics.elements import Filter_freq
import numpy as np
from copy import copy
#import matplotlib.pyplot as plt
# load beam distribution
proj_dir = 'materials/invtap_tdp'
out_file = proj_dir + '/run_0' + '/run.0.s1.gout'
out = readGenesisOutput(out_file)
gen_outplot_e(out)
plt.show()
gen_outplot_ph(out)
plt.show()
gen_outplot_z(out,z=inf)
plt.show()
dfl=readRadiationFile(out_file+'.dfl', out.ncar)
gen_outplot_dfl(dfl, out)
plt.show()
gen_outplot_dfl(dfl, out, freq_domain=True,far_field=True)
plt.show()
dpa=read_particle_file(out_file+'.dpa',out('nbins'),out('npart'))
dist=dpa2dist(out,dpa)
plot_dist(dist, showfig=True)
# this function plots the averaged properties of FEL radiation
gen_stat_plot(proj_dir,run_inp=[],stage_inp=[1],param_inp=[],s_param_inp=['p_int'],z_param_inp=['p_int','spec'],dfl_param_inp=[],s_inp=['max'],z_inp=['end'], savefig=0, saveval=0, showfig=1)
gen_corr_plot(proj_dir,run_inp=[],p1=('rad_t_size_weighted',1,inf,'max'),p2=('p_int',4,inf,'max'),saveval=1, showfig=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Setting project directory
Step2: Genesis Field file dfl
Step3: Statistical properties postprocessing
|
12,282
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
import math
x = np.linspace(.25,1,num=201)
obj = []
for i in range(len(x)):
obj.append(math.sqrt(1/x[i]**2-1))
plt.plot(x,obj)
import cvxpy as cp
x = cp.Variable(pos=True)
obj = cp.sqrt(cp.inv_pos(cp.square(x))-1)
print("This objective function is", obj.curvature)
a = .05 # USER INPUT: height of rectangle, should be at most b
b = .65 # USER INPUT: width of rectangle
constraint = [a*cp.inv_pos(x)-(1-b)*cp.sqrt(1-cp.square(x))<=0]
print(constraint)
prob = cp.Problem(cp.Minimize(obj), constraint)
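# The objective is quasiconvex rather than convex, so prob.solve(qcp=True) below
# tells CVXPY to solve the problem as a DQCP via bisection on convex subproblems.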
prob.solve(qcp=True, verbose=True)
print('Final L/D Ratio = ', 1/obj.value)
print('Final width of wedge = ', x.value)
print('Final height of wedge = ', math.sqrt(1-x.value**2))
y = math.sqrt(1-x.value**2)
lambda1 = a*x.value/y
lambda2 = a*x.value**2/y+a*y
lambda3 = a*x.value-y*(a*x.value/y-b)
plt.plot([0,x.value],[0,0],'b.-')
plt.plot([0,x.value],[0,-y],'b.-')
plt.plot([x.value,x.value],[0,-y],'b.-')
pt1 = [lambda1*x.value,-lambda1*y]
pt2 = [(lambda1+b)*x.value,-(lambda1+b)*y]
pt3 = [(lambda1+b)*x.value+a*y,-(lambda1+b)*y+a*x.value]
pt4 = [lambda1*x.value+a*y,-lambda1*y+a*x.value]
plt.plot([pt1[0],pt2[0]],[pt1[1],pt2[1]],'r.-')
plt.plot([pt2[0],pt3[0]],[pt2[1],pt3[1]],'r.-')
plt.plot([pt3[0],pt4[0]],[pt3[1],pt4[1]],'r.-')
plt.plot([pt4[0],pt1[0]],[pt4[1],pt1[1]],'r.-')
plt.axis('equal')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Minimizing this objective function subject to constraints representing payload requirements is a standard aerospace design problem. In this case we will consider the constraint that the wedge must be able to contain a rectangle of given length and width internally along its hypotenuse. This is representable as a convex constraint.
Step2: Once the solution has been found, we can create a plot to verify that the rectangle is inscribed within the wedge.
|
12,283
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import animation, rc
from IPython.display import HTML
# first set up the figure, the axes and the plot element we want to animate
fig, ax = plt.subplots()
ax.set_xlim( 0, 2)
ax.set_ylim(-1, 2)
line, = ax.plot([],[], lw=2)
# initialization function: plot the background of each frame
def init():
line.set_data([], [])
return (line,)
# animation function. This is called sequentially
def animate(i):
x = np.linspace(0, 2, 1000)
y = np.sin(2 * np.pi * (x - 0.01 * i))
line.set_data(x, y)
return (line,)
# call the animator. blit=True means: only re-draw the parts that have changed.
anim = animation.FuncAnimation(fig, animate, init_func=init,
frames=100, interval=20, blit=True)
# Show the animation.
HTML(anim.to_html5_video())
anim._repr_html_() is None
anim
# equivalent to rcParams['animation.html'] = 'html5'
rc('animation', html='html5')
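# An alternative in newer matplotlib versions (assumption, not in the original):
# rc('animation', html='jshtml')  # interactive JavaScript player, no ffmpeg needed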
anim
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To show the animation, we convert anim to HTML5 video with its to_html5_video() method and display the result through the HTML() function.
Step2: Note that Animation instances now have a _repr_html_ method. However, it returns None by default.
Step3: This means that we won't get any animation from the inline display.
Step4: The method used to display is controlled by the animation.html rc parameter, which currently supports values of None and html5. None is the default, performing no display. We simply need to set it to html5
|
12,284
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
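# A DFT F is conjugate-symmetric (equivalently, its inverse transform is
# real-valued) iff F[k] == conj(F[-k mod N]) for every index k; both
# implementations below test this property within a small numerical tolerance.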
def isccsym2(F):
if len(F.shape) == 1: F = F[np.newaxis,np.newaxis,:]
if len(F.shape) == 2: F = F[np.newaxis,:,:]
n,m,p = F.shape
x,y,z = np.indices((n,m,p))
Xnovo = np.mod(-1*x,n)
Ynovo = np.mod(-1*y,m)
Znovo = np.mod(-1*z,p)
aux = np.conjugate(F[Xnovo,Ynovo,Znovo])
return (abs(F-aux)<10E-4).all()
def isccsym(F):
import ia898.src as ia
if len(F.shape) == 1: F = F[np.newaxis,np.newaxis,:]
if len(F.shape) == 2: F = F[np.newaxis,:,:]
n,m,p = F.shape
return(abs(F-np.conjugate(ia.ptrans(F[::-1,::-1,::-1],(1,1,1))))<10E-4).all()
testing = (__name__ == "__main__")
if testing:
! jupyter nbconvert --to python isccsym.ipynb
import numpy as np
import sys,os
ia898path = os.path.abspath('../../')
if ia898path not in sys.path:
sys.path.append(ia898path)
import ia898.src as ia
import matplotlib.image as mpimg
if testing:
F = np.arange(5)
    print('Is 1d odd dimension vector symmetric?',ia.isccsym(F),'\n')
    F = np.arange(6)
    print('Is 1d even dimension vector symmetric?',ia.isccsym(F),'\n')
    F = np.array( [1j,1j,0,1j,1j] )
    print('Is 1d complex vector symmetric?',ia.isccsym(F),'\n')
if testing:
F = np.array( [ [0,1,1],
[2,4,3],
[2,3,4]] )
print('Is function F symmetric?',ia.isccsym(F),'\n')
if testing:
F = np.array([ [ 0j,1j,-1j],
[ 2j,4j,-3j],
[-2j,3j,-4j]] )
print('Is function F symmetric?',ia.isccsym(F),'\n')
F = np.array( [ [ 2j,1j,-1j],
[ 2j,4j,-3j],
[-2j,3j,-4j]] )
print('Is function F symmetric?',ia.isccsym(F),'\n')
if testing:
print('Is this function symmetric?')
print(ia.isccsym(np.fft.fft2(np.random.rand(100,100)))) # dimension variation
print(ia.isccsym(np.fft.fft2(np.random.rand(101,100))))
print(ia.isccsym(np.fft.fft2(np.random.rand(101,101))))
if testing:
img = mpimg.imread('../data/cameraman.tif')
F = ia.dft(img)
imgc = 1 * ia.circle(img.shape, 50, [img.shape[0]/2, img.shape[1]/2])
imgct = ia.ptrans(imgc, np.array(imgc.shape)//2)
ia.adshow(ia.normalize(imgct),'circular filter')
res = F * imgct
ia.adshow(ia.dftview(res))
print('Is this filter symmetric?', ia.isccsym(res))
if False: # testing:
mquadra = ia.rectangle(img.shape, [50,50], [img.shape[0]/2, img.shape[1]/2])
    ia.adshow(mquadra, 'rectangle')
    mquadra = ia.ptrans(mquadra, np.array(mquadra.shape)//2)
ia.adshow(ia.normalize(mquadra),'retangular filter')
mfiltrada = F * mquadra
print('Is this filter symmetric?', ia.isccsym(mfiltrada))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Examples
Step2: Numeric Example
Step3: Numeric Example
Step4: Numeric Example
Step5: Numeric Example
Step6: Image Example
Step7: Image Example 2
|
12,285
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pyiast
import pandas as pd
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
%config InlineBackend.rc = {'font.size': 13, 'lines.linewidth':3,\
'axes.facecolor':'w', 'legend.numpoints':1,\
'figure.figsize': (6.0, 4.0)}
%matplotlib inline
colors = ['b', 'g', 'r'] # for representing each component
component_names = {0: 'A', 1: 'B', 2:'C'}
M = 1.0
langmuirKs = [2.0, 10.0, 20.0] # K_i
pressure = np.logspace(-3, np.log10(10), 20)
dfs = [pd.DataFrame({'P': pressure,
'L': M * langmuirKs[i] * pressure / (
1.0 + langmuirKs[i] * pressure)})
for i in range(3)]
isotherms = [pyiast.ModelIsotherm(dfs[i], pressure_key='P',
loading_key='L', model='Langmuir')
for i in range(3)]
for i in range(len(isotherms)):
isotherms[i].print_params()
pyiast.plot_isotherm(isotherms[i])
p_plot = np.logspace(-3, np.log10(11)) # for plotting
fig = plt.figure(facecolor='w')
for i in range(len(isotherms)):
plt.scatter(dfs[i]['P'], dfs[i]['L'], color=colors[i],
s=50, label=None)
plt.plot(p_plot, M * langmuirKs[i] * p_plot / (1.0 + langmuirKs[i] * p_plot),
color=colors[i], linewidth=2, label=r'$N_%s(P) = \frac{%d P}{1+%dP}$' % (
component_names[i], langmuirKs[i], langmuirKs[i]))
plt.xlim([-.05 * 10, 1.05 * 10])
plt.ylim([-.05 * M, M * 1.05])
plt.xlabel('Pressure (bar)')
plt.ylabel('Gas uptake (mmol/g)')
plt.legend(loc='lower right')
plt.tight_layout()
plt.savefig('pure_component_Langmuir.pdf', format='pdf',
facecolor=fig.get_facecolor())
plt.show()
total_pressure = 1.0
dx = 0.0001
num_tests = 100
compositions = []
test_no = 0
while test_no < num_tests:
    # generate random compositions
y1 = np.random.uniform(dx, 1.0 - dx)
y2 = np.random.uniform(dx, 1.0 - dx)
y3 = 1.0 - y2 - y1
# check that composition is within the triangle
if y3 < dx:
continue
# viable composition
compositions.append([y1, y2, y3])
# keep generating until we have num_tests
test_no += 1
def competitive_langmuir_loading(partial_pressures, i):
    """Calculate loading of component i according to competitive Langmuir"""
return M * langmuirKs[i] * partial_pressures[i] / (
1.0 + np.dot(langmuirKs, partial_pressures))
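# For reference, the multicomponent (competitive) Langmuir form implemented above is:
#   N_i = M * K_i * p_i / (1 + sum_j K_j * p_j)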
def iast_loading(partial_pressures, i):
    """
    Calculate loading of component i according to IAST

    partial_pressures: Array, partial pressures of each component
    i: component in the mixture
    """
component_loadings = pyiast.iast(partial_pressures, isotherms)
return component_loadings[i]
for i in range(num_tests):
partial_pressure = np.array(compositions[i]) * total_pressure
# for each component...
for c in range(len(langmuirKs)):
np.testing.assert_almost_equal(
competitive_langmuir_loading(partial_pressure, c),
iast_loading(partial_pressure, c), decimal=4)
import ternary
scale = 10 # resolution in triangle
axis_colors = {'l':colors[1], 'r':colors[0], 'b':colors[2]}
cmaps = ["Blues", "Greens", "Reds"]
iast_or_lang = 'iast' # plot results for IAST or for Langmuir isotherm?
for c in range(3):
if iast_or_lang == 'lang':
f = lambda p: competitive_langmuir_loading(p, c)
else:
f = lambda p: iast_loading(p, c)
# loop over component
fig, ax = plt.subplots(facecolor='w')
ax.axis("off")
figure, tax = ternary.figure(ax=ax, scale=scale)
tax.heatmapf(f, boundary=False,
style="hexagonal", cmap=plt.cm.get_cmap(cmaps[c]),
vmax=M, vmin=0.0,
cbarlabel="%s uptake (mmol/g)" % component_names[c])
tax.boundary(linewidth=2.0, axes_colors=axis_colors)
tax.left_axis_label("$p_B$ (bar)", color=axis_colors['l'], offset=0.16)
tax.right_axis_label("$p_A$ (bar)", color=axis_colors['r'], offset=0.16)
tax.bottom_axis_label("$p_C$ (bar)", color=axis_colors['b'], offset=-0.06)
tax.gridlines(color="blue", multiple=1, linewidth=2,
horizontal_kwargs={'color':axis_colors['b']},
left_kwargs={'color':axis_colors['l']},
right_kwargs={'color':axis_colors['r']},
alpha=0.7) # Every 5th gridline, can be a float
tax.ticks(axis='rlb', linewidth=1, locations=np.arange(scale+1), clockwise=True,
axes_colors=axis_colors,
ticks=["%.1f" % (1.0 * i / scale) for i in range(scale+1)], offset=0.03)
tax.clear_matplotlib_ticks()
tax._redraw_labels()
# if iast_or_lang == 'iast':
# tax.set_title("IAST uptake, component %d" % c, y=1.08, fontsize=14)
# if iast_or_lang == 'lang':
# tax.set_title("Competitive Langmuir uptake, component %d" % c, y=1.08, fontsize=14)
plt.tight_layout()
if iast_or_lang == 'iast':
plt.savefig("Tertiary_diagram_IAST_component_%d.pdf" % c, format='pdf',
facecolor=fig.get_facecolor())
if iast_or_lang == 'lang':
plt.savefig("Tertiary_diagram_Langmuir_component_%d.pdf" % c, format='pdf',
facecolor=fig.get_facecolor())
tax.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Generate synthetic pure-component isotherm data, fit Langmuir models to them.
Step2: Generate data according to Langmuir model, store in list of Pandas DataFrames
Step3: Use pyIAST to fit Langmuir models to the data, then plot fits
Step4: Plot synthetic data all in one plot for paper
Step5: Compare pyIAST predicted component loadings to that of competitive Langmuir
Step6: We will explore gas phase composition space ($\{y_i\}$) by generating random compositions and checking that they are within the triangle. We do not want to get too close to a pure phase boundary because of numerical instability, so we keep a distance dx away from pure phases. We will perform num_tests tests.
Step7: Generate the compositions and store in list compositions
Step9: Next, we assert that pyIAST gives the same result as the competitive Langmuir isotherm for each of these compositions.
Step11: Function to compute loading according to pyIAST
Step12: Loop over compositions, assert pyIAST agrees with competitive Langmuir for each component. If this runs, then there is agreement!
Step13: This is using a custom library to plot the phase diagrams for the paper.
|
12,286
|
<ASSISTANT_TASK:>
Python Code:
# Case Study : Predicting Housing Price
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
# Importing Boston housing data
from sklearn.datasets import load_boston
boston = load_boston()
boston.keys()
boston.feature_names
X = boston.data
y = boston.target
features= pd.DataFrame(X,columns=boston.feature_names)
# For simplicity, we use only the numeric variables (excluding the address variable)
# Validating assumption 2
# Multicollinearity - present of correlation b/w independent variables
# The dependent variable and the error terms should possess a normal distribution
#sns.pairplot(features)
sns.heatmap(features.corr(),annot=True)
# Findings: 1. For simplicity, we assume multicollinearity is not present
# Variance Inflation Factor (VIF) - This metric is used to check multicollinearity.
# VIF <=4 implies no multicollinearity but VIF >=10 suggests high multicollinearity.
# Alternatively, you can also look at the tolerance (1/VIF) value to determine correlation in IVs.
# In addition, you can also create a correlation matrix to determine collinear variables.
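# Hedged sketch (assumption, not part of the original notebook): computing the VIF
# per feature with statsmodels, as described in the comments above.
from statsmodels.stats.outliers_influence import variance_inflation_factor
vif = pd.Series([variance_inflation_factor(features.values, i)
                 for i in range(features.shape[1])],
                index=features.columns)
print(vif)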
# Validating assumption 5
# The dependent variable and the error terms should possess a normal distribution
sns.distplot(y)
# Findings: 1. The dependent variable is normally distributed
# Splitting the dataset into training & test split
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=2)
print(len(X_train))
print(len(X_test))
print(len(y_train))
print(len(y_test))
from sklearn.linear_model import LinearRegression
lm = LinearRegression()
fit= lm.fit(X_train,y_train)
print('% Response variable variation explained by linear model :',fit.score(X_train,y_train) * 100)
'''
import statsmodels.api as sm
model = sm.OLS(y_train,X_train).fit()
predictions = model.predict(X_test)
# Print out the statistics
model.summary()
# 1. Residual vs. Fitted Values Plot to check heteroskedasticity - if any pattern exists, heteroskedasticity is present; otherwise it is not
# 2. Breusch-Pagan / Cook Weisberg Test - This test is used to determine presence of heteroskedasticity.
# If you find p < 0.05, you reject the null hypothesis and infer that heteroskedasticity is present.
# 2. Durbin Watson Statistic (DW) - This test is used to check autocorrelation. Its value lies between 0 and 4.
# A DW=2 value shows no autocorrelation. However, a value between 0 < DW < 2 implies positive autocorrelation, while 2 < DW < 4 implies negative autocorrelation.
# Findings: We don't have autocorrelation
'''
## Predictions
from sklearn import metrics
predictions = lm.predict(X_test)
print('Predicted R2 determines how well the model predicts responses for new observations :',metrics.r2_score(y_test,predictions)*100)
# Scatter plot to determine how well the model predicts responses from new observations
plt.scatter(y_test,predictions)
# Checking whether the error terms possess a normal distribution
import seaborn as sns
sns.distplot((y_test-predictions))
# Findings: Yes, the error term is normally distributed
from sklearn import metrics
print('MAE = ', metrics.mean_absolute_error(y_test,predictions))
print('MSE = ',metrics.mean_squared_error(y_test,predictions))
print('RMSE = ',np.sqrt(metrics.mean_squared_error(y_test,predictions)))
# Error metrics are the crucial evaluation numbers we must check. Since all of these are errors, the lower the number, the better the model.
# Findings: From the metrics below, the model is performing pretty well.
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Validating the assumptions made in regression
Step2: How can you improve the accuracy of a regression model?
Step3: Evaluation Metrics
|
12,287
|
<ASSISTANT_TASK:>
Python Code:
import graphlab
people = graphlab.SFrame('people_wiki.gl/')
people.head()
len(people)
obama = people[people['name'] == 'Barack Obama']
obama
obama['text']
obama['word_count'] = graphlab.text_analytics.count_words(obama['text'])
print(obama['word_count'])
obama_word_count_table = obama[['word_count']].stack('word_count', new_column_name = ['word','count'])
obama_word_count_table.head()
obama_word_count_table.sort('count',ascending=False)
people['word_count'] = graphlab.text_analytics.count_words(people['text'])
people.head()
people['tfidf'] = graphlab.text_analytics.tf_idf(people['word_count'])
obama = people[people['name'] == 'Barack Obama']
obama[['tfidf']].stack('tfidf',new_column_name=['word','tfidf']).sort('tfidf',ascending=False)
knn_model = graphlab.nearest_neighbors.create(people,features=['tfidf'],label='name')
knn_model.query(obama)
swift = people[people['name'] == 'Taylor Swift']
knn_model.query(swift)
jolie = people[people['name'] == 'Angelina Jolie']
knn_model.query(jolie)
arnold = people[people['name'] == 'Arnold Schwarzenegger']
knn_model.query(arnold)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the dataset
Step2: The data contains Wikipedia articles about different people.
Step3: We will look up former president Barack Obama
Step4: Count the words in Obama's article
Step5: Convert the dictionary into a table
Step6: Sort the most frequent words.
Step7: The most common words do not give us any information.
Step8: Examine the TF-IDF of Obama's article
Step9: The TF-IDF algorithm gives us more information.
Step10: Which person is most closely related to Obama?
Step11: Other examples
|
12,288
|
<ASSISTANT_TASK:>
Python Code:
"""DON'T MODIFY ANYTHING IN THIS CELL"""
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
view_sentence_range = (8, 100)
"""DON'T MODIFY ANYTHING IN THIS CELL"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
    """
    Create lookup tables for vocabulary
    :param text: The text of tv scripts split into words
    :return: A tuple of dicts (vocab_to_int, int_to_vocab)
    """
# TODO: Implement Function
words = set()
index_to_word = {}
word_to_index = {}
for word in text:
words.add(word)
for index, word in enumerate(words):
#print (word,index)
index_to_word[index] = word
word_to_index[word] = index
return word_to_index, index_to_word
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_create_lookup_tables(create_lookup_tables)
def token_lookup():
    """
    Generate a dict to turn punctuation into a token.
    :return: Tokenize dictionary where the key is the punctuation and the value is the token
    """
# TODO: Implement Function
ret = {}
ret['.'] = "||Period||" #( . )
ret[','] = "||Comma||" #( , )
ret['"'] = "||Quotation_Mark||" # ( " )
ret[';'] = "||Semicolon||" #( ; )
ret['!'] = "||Exclamation_mark||" #( ! )
ret['?'] = "||Question_mark||" #( ? )
ret['('] = "||Left_Parentheses||" #( ( )
ret[')'] = "||Right_Parentheses||" #( ) )
ret['--'] = "||Dash||" # ( -- )
ret['\n'] = "||Return||" # ( \n )
return ret
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_tokenize(token_lookup)
"""DON'T MODIFY ANYTHING IN THIS CELL"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
"""DON'T MODIFY ANYTHING IN THIS CELL"""
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
"""DON'T MODIFY ANYTHING IN THIS CELL"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def get_inputs():
    """
    Create TF Placeholders for input, targets, and learning rate.
    :return: Tuple (input, targets, learning rate)
    """
# TODO: Implement Function
inputs = tf.placeholder(tf.int32, [None, None ], name="input")
targets = tf.placeholder(tf.int32, [None, None ], name="targets")
learning_rate = tf.placeholder(tf.float32, None, name="LearningRate")
return inputs, targets, learning_rate
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_get_inputs(get_inputs)
def get_init_cell(batch_size, rnn_size):
    """
    Create an RNN Cell and initialize it.
    :param batch_size: Size of batches
    :param rnn_size: Size of RNNs
    :return: Tuple (cell, initialize state)
    """
# TODO: Implement Function
layer_count = 2
keep_prob = tf.constant(0.7,tf.float32, name="keep_prob")
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size, state_is_tuple=True)
lstm2 = tf.contrib.rnn.BasicLSTMCell(rnn_size, state_is_tuple=True)
dropout = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
cell = tf.contrib.rnn.MultiRNNCell([lstm, lstm2], state_is_tuple=True)
initial_state = cell.zero_state( batch_size, tf.float32)
initial_state = tf.identity(initial_state, name="initial_state" )
#_outputs, final_state = tf.nn.rnn(cell, rnn_inputs, initial_state=init_state)
return cell, initial_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
import random
import math
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
# TODO: Implement Function
ret = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
ret = tf.nn.embedding_lookup(ret, input_data)
print("shape {}".format(ret.get_shape().as_list()))
return ret
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
# TODO: Implement Function
output, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype = tf.float32)
final_state = tf.identity (final_state, "final_state")
return output, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
# TODO: Implement Function
embedded = get_embed(input_data, vocab_size, rnn_size)
out, fin = build_rnn(cell, embedded)
out = tf.contrib.layers.fully_connected(out,vocab_size, activation_fn=None)
out_shape = out.get_shape().as_list()
print("build_nn embedded{}, out:{}, fin:{}".format(embedded.get_shape().as_list(),out_shape, fin.get_shape().as_list()))
print()
return out, fin
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
# TODO: Implement Function
text = int_text
ret = np.array([])
inputs = []
targets = []
text_len = len(text) - len(text) % (seq_length*batch_size)
print ("get_batches text:{}, batch:{}, seq:{}".format(text_len, batch_size, seq_length))
ret=[]
for i in range(0, text_len-1, seq_length):
seq = list(int_text[i:i+seq_length])
inputs.append(list(int_text[i:i+seq_length]))
targets.append(list(int_text[i+1:i+seq_length+1]))
for i in range(0,len(inputs),batch_size):
pos=batch_size
#batch_pair = n
ret.append([inputs[i:i+batch_size], targets[i:i+batch_size]])
ret = np.asanyarray(ret)
print("batch test ", ret.shape, ret[3,:,2])
return ret
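# Shape note (a sketch of what the loops above produce): the returned array is
# (num_batches, 2, batch_size, seq_length), where axis 1 holds [inputs, targets].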
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
# Number of Epochs
num_epochs = 300 # previously 150, but want to get lower loss.
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 1024
# Embedding Dimension Size (note: unused -- build_nn above embeds with rnn_size instead)
embed_dim = None
# Sequence Length
seq_length = 12 # already discouraged from using 6 and 16, avg sentence length being 10-12
# I'm favoring this formula, which treats the learning rate as a function of the parameter count.
# This is empirical guesswork, but it gives good results.
learning_rate = 1/np.sqrt(rnn_size*seq_length*6700)
print( "learning rate {}, vocab_size {}".format(learning_rate,6700))
# Training-loss notes from earlier runs (learning rate -- loss; batch range: loss):
# 100 inf
# 0.0012 -- 1.666, 860-1210: 1.259
# 0.00012 -- 5.878, 1920-2190: 1.070
# 0.000012 -- 7.4, 3000: 2.107
# 0.00012 -- 6.047, 3000: 0.964 -- embedding w/ truncated normal
# 1024
# 0.00812 -- 1.182, stuck
# 0.00612 -- 0.961, stuck
# Show stats for every n number of batches
show_every_n_batches = 20
tf.set_random_seed(42)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
# TODO: Implement Function
inputs = loaded_graph.get_tensor_by_name("input:0")
initials = loaded_graph.get_tensor_by_name("initial_state:0")
finals = loaded_graph.get_tensor_by_name("final_state:0")
probs = loaded_graph.get_tensor_by_name("probs:0")
return inputs, initials, finals, probs
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
def pick_word(probabilities, int_to_vocab):
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
# TODO: Implement Function
# As suggested by the last reviewer - tuning randomness
#print("probabs:{}, - {}".format(probabilities.shape, int_to_vocab[np.argmax(probabilities)]))
mostprobable = np.argsort(probabilities)
ret = np.random.choice(mostprobable[-3:],1, p=[0.1, 0.2, 0.7])
return int_to_vocab[ret[0]]
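# Alternative sampling sketch (an assumption, not the implementation used above):
# draw from the full softmax distribution instead of only the top three words.
# return int_to_vocab[np.random.choice(len(probabilities), p=probabilities)]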
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TV Script Generation
Step3: Explore the Data
Step6: Implement Preprocessing Functions
Step9: Tokenize Punctuation
Step11: Preprocess all the data and save it
Step13: Check Point
Step15: Build the Neural Network
Step18: Input
Step21: Build RNN Cell and Initialize
Step24: Word Embedding
Step27: Build RNN
Step30: Build the Neural Network
Step33: Batches
Step36: Neural Network Training
Step38: Build the Graph
Step40: Train
Step42: Save Parameters
Step44: Checkpoint
Step47: Implement Generate Functions
Step50: Choose Word
Step52: Generate TV Script
|
12,289
|
<ASSISTANT_TASK:>
Python Code:
from goatools.base import get_godag
godag = get_godag("go-basic.obo", optional_attrs={'relationship'})
go_leafs = set(o.item_id for o in godag.values() if not o.children)
virion = 'GO:0019012'
from goatools.gosubdag.gosubdag import GoSubDag
gosubdag_r0 = GoSubDag(go_leafs, godag)
nt_virion = gosubdag_r0.go2nt[virion]
print(nt_virion)
print('THE VALUE OF dcnt IS: {dcnt}'.format(dcnt=nt_virion.dcnt))
gosubdag_r1 = GoSubDag(go_leafs, godag, relationships=True)
nt_virion = gosubdag_r1.go2nt[virion]
print(nt_virion)
print('THE VALUE OF dcnt IS: {dcnt}'.format(dcnt=nt_virion.dcnt))
gosubdag_partof = GoSubDag(go_leafs, godag, relationships={'part_of'})
nt_virion = gosubdag_partof.go2nt[virion]
print(nt_virion)
print('THE VALUE OF dcnt IS: {dcnt}'.format(dcnt=nt_virion.dcnt))
virion_descendants = gosubdag_partof.rcntobj.go2descendants[virion]
print('{N} descendants of virion were found'.format(N=len(virion_descendants)))
from goatools.gosubdag.plot.gosubdag_plot import GoSubDagPlot
# Limit plot of descendants to get a smaller plot
virion_capsid_fiber = {'GO:0098033', 'GO:0098032'}
nts = gosubdag_partof.prt_goids(virion_capsid_fiber,
'{NS} {GO} dcnt({dcnt}) D-{depth:02} {GO_name}')
# Limit plot size by choosing just two virion descendants
# Get a subset containing only a couple virion descendants and their ancestors
pltdag = GoSubDag(virion_capsid_fiber, godag, relationships={'part_of'})
pltobj = GoSubDagPlot(pltdag)
pltobj.plt_dag('virion_capsid_fiber.png')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2) Depth-01 term, GO
Step2: Notice that dcnt=0 for GO
Step3: 3) Depth-01 term, GO
Step4: 4) Depth-01 term, GO
Step5: 5) Descendants under GO
Step6: 6) Plot descendants of virion
|
12,290
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
import calin.simulation.detector_efficiency
import calin.simulation.atmosphere
import calin.provenance.system_info
data_dir = calin.provenance.system_info.build_info().data_install_dir() + "/simulation/"
print("Simulation data directory:",data_dir)
det_eff = calin.simulation.detector_efficiency.DetectionEfficiency()
det_eff.scaleEffFromFile(data_dir + 'qe_R12992-100-05.dat')
det_eff.scaleEffFromFile(data_dir + 'ref_AlSiO2HfO2.dat')
det_eff.scaleEffFromFile(data_dir + 'Aclylite8_tra_v2013ref.dat')
cone_eff = calin.simulation.detector_efficiency.AngularEfficiency(data_dir + 'CTA-LST_lightguide_eff.dat')
max_cone_eff = max(cone_eff.all_yi())
det_eff.scaleEffByConst(max_cone_eff)
cone_eff.scaleEffByConst(1.0/max_cone_eff)
print("Low energy limit: %.3f eV"%det_eff.xmin())
print("High energy limit: %.3f eV"%det_eff.xmax())
print("Effective bandwidth: %.3f eV"%det_eff.integrate())
x_eff = frange(det_eff.xmin(),det_eff.xmax(),npts=100,closed=True)
y_eff = asarray(list(map(lambda xx: det_eff.y(xx),x_eff)))
plot(x_eff,y_eff)
xlabel('Photon energy [eV]')
ylabel('Detection efficiency [1]')
grid()
x_cone = frange(cone_eff.xmin(),cone_eff.xmax(),npts=100,closed=True)
y_cone = asarray(list(map(lambda xx: cone_eff.y(xx),x_cone)))
plot(arccos(x_cone)/pi*180,y_cone)
xlabel('Photon entry angle [deg]')
ylabel('Normalized cone efficiency [1]')
grid()
a=list(axis())
a[3]=1.02
axis(a)
atm_abs = calin.simulation.detector_efficiency.AtmosphericAbsorption(data_dir + 'atm_trans_2147_1_10_0_0_2147.dat')
h0 = 2.147
tau0 = atm_abs.opticalDepthForAltitude(h0 * 1e5)
figure(2)
plot(x_eff,y_eff,'k--',label='Ground')#%g km'%(h0/1e3))
for h in frange(3,15.0,2.0):
tau = atm_abs.opticalDepthForAltitude(h * 1e5) - tau0
y_abs = asarray(list(map(lambda xx: tau.y(xx),x_eff)))
figure(1)
plot(x_eff,exp(-y_abs),label='%g km'%h)
hold('on')
figure(2)
plot(x_eff,exp(-y_abs)*y_eff,label='%g km'%h)
hold('on')
figure(1)
xlabel('Photon energy [eV]')
ylabel('Transmission fraction')
legend()
figure(2)
xlabel('Photon energy [eV]')
ylabel('Transmission and detection efficiency')
legend()
zn0 = 35
bandwidth_calc = atm_abs.integrateBandwidth(h0 * 1e5, cos(zn0/180.0*pi), det_eff)
x_bandwidth = frange(h0,20.0,0.01)
y_bandwidth = asarray(list(map(lambda h: bandwidth_calc.bandwidth(h*1e5,cos(zn0/180.0*pi)), x_bandwidth)))
zn1 = zn0+5
y_bandwidth_1 = asarray(list(map(lambda h: bandwidth_calc.bandwidth(h*1e5,cos(zn1/180.0*pi)), x_bandwidth)))
zn2 = zn0-5
y_bandwidth_2 = asarray(list(map(lambda h: bandwidth_calc.bandwidth(h*1e5,cos(zn2/180.0*pi)), x_bandwidth)))
plot(x_bandwidth, y_bandwidth_2, 'g', label='zn=$%g^\circ$'%zn2)
plot(x_bandwidth, y_bandwidth, 'b', label='zn=$%g^\circ$'%zn0)
plot(x_bandwidth, y_bandwidth_1, 'r', label='zn=$%g^\circ$'%zn1)
xlabel('Altitude of Cherenkov emission [km]')
ylabel('Total effective bandwidth [eV]')
legend()
#atm = calin.simulation.atmosphere.LayeredAtmosphere.us76()
atm = calin.simulation.atmosphere.LayeredAtmosphere(data_dir + 'atmprof36.dat')
y_n = asarray(list(map(lambda h: 1.0+atm.n_minus_one(h*1e5), x_bandwidth)))
plot(x_bandwidth,arccos(1.0/y_n)/pi*180.0)
xlabel('Altitude [km]')
ylabel('Cherenkov angle [deg]')
ct0 = atm.propagation_ct_correction(h0*1e5)
y_prop_delay = asarray(list(map(lambda h: atm.propagation_ct_correction(h*1e5)-ct0, x_bandwidth)))
plot(x_bandwidth,y_prop_delay/30) # c = 30 cm/ns
xlabel('Altitude [km]')
ylabel('Time delay [ns]')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1 - Get location of calin simulation data files
Step2: 2 - Construct detector efficiency
Step3: 3 - Load lightcone efficiency
Step4: 4 - Rescale effective detector and cone efficiencies
Step5: 5 - Calculate bandwidth and plot efficiencies
Step6: 6 - Load an atmospheric absorption model
Step7: 7 - Plot transmission and effective Cherenkov "ground" spectrum
Step8: 8 - Integrate the atmospheric transmission times the detector efficiency
Step9: 9 - Plot total effective bandwidth as function of height for three zenith angles
Step10: 10 - Load atmospheric profile
Step11: 11 - Calculate and plot Cherenkov angle
Step12: 12 - Calculate and plot propagation delay
|
12,291
|
<ASSISTANT_TASK:>
Python Code:
import random
import time
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from fct import normalize_min_max, plot_2d, plot_clusters
def build_d(datas, centers):
Return a 2D-numpy array of the distances between each
point in the dataset and the centers. The distance used is
the euclidian distance.
# the distance matrix is d
d = []
for center in centers:
# the list of distances from one center to all the points in the
# dataset
dist = []
for i in range(datas.shape[0]):
dist.append(np.linalg.norm(datas[i] - center))
d.append(dist)
return np.array(d)
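# Note: the same distance matrix can be built in one vectorized call; a minimal
# equivalent sketch (assuming scipy is available):
# from scipy.spatial.distance import cdist
# d = cdist(centers, datas) # shape (n_centers, n_points), matches build_d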
def build_g(distances):
Return a 2D-numpy array of 0s and 1s that determines
to which center belong each point in the dataset.
# k is the number of clusters we look for
k = distances.shape[0]
# g is the matrix of affiliation
g = []
for i in range(distances.shape[1]):
# gg elements is 1 only if the point belongs to the
# corresponding center, else it is 0
gg = [0] * k
# computes which center is the closest to the point
gg[distances[:,i].argmin()] = 1
g.append(gg)
return np.array(g).T
def build_clusters(datas, G):
Return a list of clusters (lists as well) of points from the dataset.
k = G.shape[0]
clusters = [[] for _ in range(k)]
# i is the index of the centers, j of the datapoints
for i in range(G.shape[0]):
for j in range(G.shape[1]):
if G[i][j] == 1:
clusters[i].append(datas[j])
return clusters
def new_centers(clusters):
Return a list of points defined as the barycenter of each new cluster.
centers = []
for cluster in clusters:
# the center of each cluster is its barycenter
center = np.mean(cluster, axis=0)
centers.append(center)
return centers
def k_means(datas, k):
Return the centers of the clusters found after the iterative process.
# The initial centers are taken at random without replacement within the
# dataset
centers = random.sample(list(datas), k)
D = build_d(datas, centers)
G = build_g(D)
clusters = build_clusters(datas, G)
centers_new = new_centers(clusters)
# while the new centers are not equal to the previous ones (it means the
# situation is not stationary) then we keep iterating
while not np.array_equal(np.array(centers), np.array(centers_new)):
# plot the clusters with different colors. The centers are plotted
# in blue
plt.clf()
plot_clusters(clusters, k)
X = [center[0] for center in centers]
Y = [center[1] for center in centers]
plt.scatter(X,Y)
plt.show(block=False)
plt.pause(0.01)
# Build the new clusters from the past centers
centers = np.copy(centers_new)
D = build_d(datas, centers)
G = build_g(D)
clusters = build_clusters(datas, G)
# Build the new centers
centers_new = new_centers(clusters)
plt.close()
return centers
dimension = 2
datas = pd.read_csv('datasets/data_clustering.csv')
datas = np.array(datas)
normalize_min_max(datas, dimension)
# You can play with the number of clusters K to
# see how it affects the result.
K = 4
centers = k_means(datas, K)
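# To recover the final assignment of points to the returned centers, the
# helper functions above can be chained (a sketch reusing them as defined):
# final_clusters = build_clusters(datas, build_g(build_d(datas, centers)))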
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step6: Algorithm
Step7: Application
|
12,292
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
l = [('A', '1', 'a'), ('A', '1', 'b'), ('A', '2', 'a'), ('A', '2', 'b'), ('B', '1','a'), ('B', '1','b')]
np.random.seed(1)
df = pd.DataFrame(np.random.randn(5, 6), columns=l)
def g(df):
df.columns = pd.MultiIndex.from_tuples(df.columns, names=['Caps','Middle','Lower'])
return df
df = g(df.copy())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
12,293
|
<ASSISTANT_TASK:>
Python Code:
class BinaryTree():
def __init__(self, children = None):
A binary tree is either a leaf or a node with two subtrees.
INPUT:
- children, either None (for a leaf), or a list of size excatly 2
of either two binary trees or 2 objects that can be made into binary trees
self._isleaf = (children is None)
if not self._isleaf:
if len(children) != 2:
raise ValueError("A binary tree needs exactly two children")
self._children = tuple(c if isinstance(c,BinaryTree) else BinaryTree(c) for c in children)
self._size = None
def __repr__(self):
if self.is_leaf():
return "leaf"
return str(self._children)
def __eq__(self, other):
Return true if other represents the same binary tree as self
if not isinstance(other, BinaryTree):
return False
if self.is_leaf():
return other.is_leaf()
return self.left() == other.left() and self.right() == other.right()
def left(self):
Return the left subtree of self
return self._children[0]
def right(self):
Return the right subtree of self
return self._children[1]
def is_leaf(self):
Return true is self is a leaf
return self._isleaf
def _compute_size(self):
Recursively computes the size of self
if self.is_leaf():
self._size = 0
else:
self._size = self.left().size() + self.right().size() + 1
def size(self):
Return the number of non leaf nodes in the binary tree
if self._size is None:
self._compute_size()
return self._size
leaf = BinaryTree()
t = BinaryTree()
t
t.size()
t = BinaryTree([[leaf,leaf], leaf]) # a tree of size 2
t
t.size()
t = BinaryTree([leaf, [leaf,leaf]]) # a different tree of size 2
t
t.size()
t = BinaryTree([[leaf, leaf], [leaf, leaf]]) # a tree of size 3
t
t.size()
# t1 = BinaryTree(...)
# t2 = BinaryTree(...)
# t3 = BinaryTree(...)
# t4 = BinaryTree(...)
import math
import random
class BinaryTrees():
def __init__(self, size):
The combinatorial set of binary trees of size `size`
INPUT:
- size a non negative integers
self._size = size
def size(self):
Return the size of the binary trees of the set
return self._size
def __repr__(self):
Default string repr of ``self``
return "Binary Trees of size " + str(self._size)
def cardinality(self):
Return the cardinality of the set
# This is given to you
n = self._size
f = math.factorial(n)
return math.factorial(2*n)/(f*f*(n+1))
def __iter__(self):
Iterator on the elements of the set
# write code here
def first(self):
Return the first element of the set
for t in self:
return t
def rank(self,t):
Return the rank of the binary tree t in the generation order of the set (starting at 0)
INPUT:
- t, a binary tree
# write code here
def unrank(self,i):
Return the binary tree corresponding to the rank ``i``
INPUT:
- i, a integer between 0 and the cardinality minus 1
# write code here
def next(self,t):
Return the next element following t in self
INPUT :
- t a binary tree
OUTPUT :
The next binary tree, or None if t is the last binary tree of self
# write code here
def random_element(self):
Return a random element of ``self`` with uniform probability
# write code here
BinaryTrees(0)
list(BinaryTrees(0))
BinaryTrees(1)
list(BinaryTrees(1))
BinaryTrees(2)
list(BinaryTrees(2))
BT3 = BinaryTrees(3)
BT3
list(BT3)
t = BinaryTree(((leaf, leaf), (leaf, leaf)))
BT3.rank(t)
BT3.unrank(2)
BT3.next(t)
BT3.random_element()
def test_cardinality_iter(S):
assert(len(list(S)) == S.cardinality())
def test_rank(S):
assert([S.rank(p) for p in S] == range(S.cardinality()))
def test_unrank(S):
assert(list(S) == [S.unrank(i) for i in xrange(S.cardinality())])
def test_next(S):
L = [S.first()]
while True:
p = S.next(L[-1])
if p == None:
break
L.append(p)
assert(L == list(S))
def all_tests(S):
tests = {"Cardinality / iter": test_cardinality_iter, "Rank": test_rank, "Unrank": test_unrank, "Next": test_next}
for k in tests:
print "Testsing: "+ k
try:
tests[k](S)
print "Passed"
except AssertionError:
print "Not passed"
all_tests(BinaryTrees(3))
all_tests(BinaryTrees(4))
all_tests(BinaryTrees(5))
all_tests(BinaryTrees(6))
import random
def random_grow(t):
Randomly grows a binary tree
INPUT:
- t, a binary tree of size n
OUTPUT: a binary tree of size n+1
if t.is_leaf():
return BinaryTree([leaf,leaf])
c = [t.left(),t.right()]
i = random.randint(0,1)
c[i] = random_grow(c[i])
return BinaryTree(c)
def random_binary_tree(n):
Return a random binary tree of size n
t = leaf
for i in xrange(n):
t = random_grow(t)
return t
random_binary_tree(10)
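# The assertions below exercise a height() method that the BinaryTree class
# above does not define. A minimal recursive sketch (assuming a leaf has
# height 0, consistent with the expected values below):
def binary_tree_height(self):
    if self.is_leaf():
        return 0
    return 1 + max(self.left().height(), self.right().height())
BinaryTree.height = binary_tree_height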
assert BinaryTree([[leaf,leaf], leaf]).height() == 2
assert BinaryTree([leaf,[leaf, leaf]]).height() == 2
assert BinaryTree([[leaf,leaf], [leaf,leaf]]).height() == 2
assert BinaryTree([[leaf,[leaf,leaf]], [leaf,leaf]]).height() == 3
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step7: Binary trees
Step8: There are 5 binary trees of size 3. One of them is the one we have just built.
Step19: The goal of this lab is to implement the functions of the BinaryTrees class below (with an "s" at the end), which represents the set of binary trees of a given size. The structure of the class is given to you, along with the basic methods.
Step20: The test suite we defined for permutations can also be applied to binary trees.
Step23: Here is a function that computes a random binary tree. We ask whether each tree is obtained with uniform probability.
Step24: The height of a tree is computed recursively
|
12,294
|
<ASSISTANT_TASK:>
Python Code:
import copy
try:
import ujson as json
except ImportError:
import json
import math
import operator
import random
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
from numpy.linalg import norm as np_norm
import matplotlib.pyplot as plt
import pandas as pd
from scipy.spatial import distance as spd
import seaborn as sns
from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer
sns.set_style('whitegrid')
%matplotlib inline
# number of data points
n = 1000
# array of continuous values, randomly drawn from standard normal in two dimensions
X = np.array(np.random.normal(size=(n,2)))
# seaborn plays really nicely with pandas
df = pd.DataFrame(X, columns=['x0','x1'])
df.tail()
sns.jointplot(data=df, x='x0', y='x1', alpha=0.5)
def in_sample_dist(X, n):
Create a histogram of pairwise distances in array X, using n bins.
plt.figure(figsize=(15,6))
# use scipy's pairwise distance function for efficiency
plt.hist(spd.pdist(X), bins=n, alpha=0.6)
plt.xlabel('inter-sample distance')
plt.ylabel('count')
in_sample_dist(X,n)
def radius(vector):
Calculate the euclidean norm for the given coordinate vector.
origin = np.zeros(len(vector))
# use scipy's distance functions again!
return spd.euclidean(origin, vector)
# use our function to create a new 'r' column in the dataframe
df['r'] = df.apply(radius, axis=1)
df.head()
def kde_cdf_plot(df, norm=False, vol=False):
Display stacked KDE and CDF plots.
assert 'r' in df, 'This method only works for dataframes that include a radial distance in an "r" column!'
if norm:
# overwrite df.r with normalized version
df['r'] = df['r'] / max(df['r'])
fig, (ax1, ax2) = plt.subplots(2,1,
sharex=True,
figsize=(15,8)
)
# subplot 1
sns.distplot(df['r'],
hist=False,
rug=True,
ax=ax1
)
ax1.set_ylabel('KDE')
ax1.set_title('n={} in {}-d space'.format(len(df), df.shape[1] - 1) )
# subplot 2
if vol:
raise NotImplementedError("Didn't finish implementing this volume normalization!")
dim = df.shape[1] - 1
df['r'].apply(lambda x: x**dim).plot(kind='hist',
cumulative=True,
normed=1,
bins=len(df['r']),
histtype='step',
linewidth=2,
ax=ax2
)
ax2.set_ylabel('CDF')
plt.xlim(0, .99*max(df['r'])**dim)
xlab = 'volume fraction'
else:
df['r'].plot(kind='hist',
cumulative=True,
normed=1,
bins=len(df['r']),
histtype='step',
linewidth=2,
ax=ax2
)
ax2.set_ylabel('CDF')
plt.xlim(0, .99*max(df['r']))
xlab = 'radial distance'
if norm:
xlab += ' (%)'
plt.xlabel(xlab)
kde_cdf_plot(df)
# data points, dimensions, blob count
n = 1000
dims = 2
blobs = 5
# note: default bounding space is +/- 10.0 in each dimension
X, y = make_blobs(n_samples=n, n_features=dims, centers=blobs)
# convert np arrays to a df, auto-label the columns
X_df = pd.DataFrame(X, columns=['x{}'.format(i) for i in range(X.shape[1])])
X_df.head()
sns.jointplot(data=X_df, x='x0', y='x1')
X_df['r'] = X_df.apply(radius, axis=1)
#X_df.head()
kde_cdf_plot(X_df, norm=True)
in_sample_dist(X,n)
def make_blob_df(n_points=1000, dims=2, blobs=5, bounding_box=(-10.0, 10.0)):
Function to automate the np.array blob => pd.df creation and r calculation.
# nb: default bounding space is +/- 10.0 in each dimension
X, y = make_blobs(n_samples=n_points, n_features=dims, centers=blobs, center_box=bounding_box)
# make a df, auto-label the columns
X_df = pd.DataFrame(X, columns=['x{}'.format(i) for i in range(X.shape[1])])
X_df_no_r = copy.deepcopy(X_df)
# add a radial distance column
X_df['r'] = X_df.apply(radius, axis=1)
return X, X_df, X_df_no_r, y
n = 1000
dims = 3
blobs = 5
X, X_df, X_df_no_r, y = make_blob_df(n, dims, blobs)
X_df.head()
#X_df_no_r.head()
fig = plt.figure(figsize=(12,7))
ax = fig.add_subplot(111, projection='3d')
ax.plot(X_df['x0'],X_df['x1'],X_df['x2'],'o', alpha=0.3)
ax.set_xlabel('x0'); ax.set_ylabel('x1'); ax.set_zlabel('x2')
sns.pairplot(X_df_no_r, plot_kws=dict(alpha=0.3), diag_kind='kde')
kde_cdf_plot(X_df, norm=True)
in_sample_dist(X,n)
n = 1000
dims = 10
blobs = 5
X, X_df, X_df_no_r, y = make_blob_df(n, dims, blobs)
X_df.head()
# this starts to take a few seconds when d~10
sns.pairplot(X_df_no_r, diag_kind='kde', plot_kws=dict(alpha=0.3))
kde_cdf_plot(X_df, norm=True)
in_sample_dist(X,n)
n_points = 1000
dim_range = [2, 100, 10000]
blob_count = 5
fig, (ax1, ax2) = plt.subplots(2,1, sharex=True, figsize=(15,8))
for d in dim_range:
## data generation
# random gaussian blobs in d-dims
X, y = make_blobs(n_samples=n_points, n_features=d, centers=blob_count)
##
## calculation
# create a labeled df from X
X_df = pd.DataFrame(X, columns=['x{}'.format(i) for i in range(X.shape[1])])
# add an 'r' column
#X_df_no_r = copy.deepcopy(X_df)
X_df['r'] = X_df.apply(radius, axis=1)
# normalize r value to % of max?
X_df['r'] = X_df['r'] / max(X_df['r'])
##
## plotting
# subplot 1 - KDE
sns.distplot(X_df['r'],
kde=True,
hist=False,
rug=True,
ax=ax1,
label='{}-dims'.format(d)
)
# subplot 2 - CDF
X_df['r'].plot(kind='hist',
cumulative=True,
normed=1,
bins=len(X_df['r']),
histtype='step',
linewidth=2,
ax=ax2
)
##
ax1.set_ylabel('KDE')
ax1.set_title('n={} in {}-d space'.format(len(X_df), dim_range) )
ax2.set_ylabel('CDF')
plt.xlim(0, .999*max(X_df['r']))
plt.xlabel('radial distance (%)')
fig, (ax1, ax2, ax3) = plt.subplots(3,1, figsize=(15,9))
for i,d in enumerate(dim_range):
X, y = make_blobs(n_samples=n_points, n_features=d, centers=blob_count)
# loop through the subplots
plt.subplot('31{}'.format(i+1))
# plot the data
plt.hist(spd.pdist(X), bins=n_points, alpha=0.6)
plt.ylabel('count (d={})'.format(d))
ax3.set_xlabel('inter-sample distance')
small_corpus = [
'The dog likes cats.',
'The blue cat eats brown sharks.',
'Why not, blue?'
]
vec = CountVectorizer()
X = vec.fit_transform(small_corpus)
X.todense()
vec.vocabulary_
terms = [x for x,_ in sorted(vec.vocabulary_.items(), key=operator.itemgetter(1))]
text_df = pd.DataFrame(X.todense(), columns=terms)
text_df
text_df['r'] = text_df.apply(radius, axis=1)
text_df
kde_cdf_plot(text_df, norm=True)
text_array = []
with open('twitter_2016-04-06_2030.jsonl.body.txt', 'r') as infile:
for line in infile:
text_array.append(line.replace('\n', ' '))
print( len(text_array) )
print( text_array[0] )
vec = CountVectorizer(
#binary=1,
## add dimensionality reduction?
#stop_words='english',
#lowercase=True,
#min_df=10
)
dtm = vec.fit_transform(text_array)
dtm
# what fraction of the feature space is full?
3051924 / ( 374941*523498 )
# (element-wise sq) (row sum) (flatten) (sqrt)
dtm_r = dtm.multiply(dtm).sum(axis=1).A1**0.5
#print(len(dtm_r))
#print(dtm_r)
#print(min(dtm_r), np.median(dtm_r), max(dtm_r))
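# Sanity-check sketch: the sparse row-norm trick above should agree with
# numpy's norm on any single densified row (np_norm was imported earlier).
print(np.isclose(dtm_r[0], np_norm(dtm[0].toarray())))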
s = pd.Series(dtm_r)
plt.figure(figsize=(15,6))
s.plot(kind='hist',
cumulative=True,
normed=1,
bins=len(dtm_r),
histtype='step',
linewidth=2
)
plt.ylabel('CDF')
#plt.xlim(0, .99*max(dtm_r))
plt.xlim(0, 6)
plt.xlabel('radial distance')
# This is a super interesting side note: some tweets can totally throw off your distribution.
# This one Tweet had 114 repetitions of a single character. If you swap the xlim() commands
# above, you'll see that r extends to over 100. This is why:
#text_array[ s[ s > 114 ].index[0] ]
n = 2000
dims = 10000
blobs = 10
X, X_df, X_df_no_r, y = make_blob_df(n, dims, blobs)
#X_df_no_r.head()
kde_cdf_plot(X_df, norm=True)
plt.xlim(0,1)
in_sample_dist(X,n)
# now apply PCA and reduce the dimension down to 3
pca = PCA(n_components=3)
X_df_3d = pd.DataFrame(pca.fit_transform(X_df_no_r), columns=['x0','x1','x2'])
# add in that radial distance column
X_df_3d['r'] = X_df_3d.apply(radius, axis=1)
X_df_3d.head()
# add in the labels so we can color by them
X_df_3d['y'] = y
# nb: using the vars kwarg seems to remove the ability to include KDE
sns.pairplot(X_df_3d,
vars=['x0','x1','x2'],
hue='y',
palette="husl",
diag_kind='kde',
plot_kws=dict(s=50, alpha=0.7)
)
kde_cdf_plot(X_df_3d, norm=True)
#in_sample_dist(X_df_3d[['x0','x1','x2']],n)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Simple, visualizable spaces
Step2: We have a 2-dimensional feature space containing 1000 pieces of data. Each coordinate is orthogonal, and we can equivalently think about each data point being represented by a vector from the origin [ (0,0) in 2-dimensional space ], to the point defined by [x0, x1].
Step4: Another distribution that can provide some hints about the structure of data in a multi-dimensional vector space, is the pairwise inter-point distance distribution for all points in the data. Here's a function that makes this a little cleaner.
Step6: In unsupervised statistical learning, we're often interested in the existence of "clusters" in data. Our intuition in low dimensions can be helpful here. In order to identify and label a grouping of points as being unique from some other grouping of points, there needs to be a similarity or "sameness" metric that we can compare. One such measure is simply the distance between all of the points. If a group of points are all qualitatively closer to each other than another group of points, then we might call those two groups unique clusters.
Step8: There are a couple of ways that I want to visualize this radial distance. First, I'd like to see the univariate distribution (from 0 to max(r)), and second, I'd like to see how much of the data is at a radius less than or equal to a particular value of r. To do this, I'll define a plotting function that takes a dataframe as shown above, and returns plots of these two distributions as described.
Step9: Now, let's see these distributions for the 2-dimensional array we created earlier.
Step10: As a reminder
Step11: This time, we'll incorporate one extra kwarg in the kde_cdf_plot function
Step12: As a start, notice that the radius CDF for this data has shifted to the right. At larger r, we're closer to the "edge" of the space containing our data. The graph will vary with iterations of the data generation, but should consistently be shifted to the right relative to the 0-centered standard normal distribution.
Step14: Sure enough, we can see that there are in fact some peaks in the inter-sample distance. This makes sense, because we know that the data generation process encoded that exact idea. Since we're intentionally using a data generation process that builds in clusters, we'll always see a peak on the low end of the x axis... each cluster is created with a low (and similar) intra-cluster distance. The other, larger peaks, will illustrate the relationships between the clusters.
Step15: Again, compare this CDF to the 2-d case above; note that the data is closer to the "edge" of the space.
Step16: Higher-dimensional blobs
Step17: Having seen the way these plots vary individually, let's compare, side-by-side, a similar data generation process (same number of points and clusters) in a range of dimensions.
Step18: Text data
Step19: It's good to remember how to map the matrix-like data onto the words that go into it...
Step20: With a tiny little corpus, these plots aren't very useful. Let's use a bigger one
Step21: We have to do the radius math slightly differently now, because we're dealing with a scipy CSR matrix instead of a dense numpy array.
Step22: <record-stopping screeching noise>
Step23: Now, we know that the data generation process built in the notion of identifiable clusters. Let's see if we can surface that information by projecting our high-dimensional data and space down into a smaller number using principal component analysis.
|
12,295
|
<ASSISTANT_TASK:>
Python Code:
from owslib.csw import CatalogueServiceWeb
from owslib import fes
import numpy as np
#endpoint = 'https://dev-catalog.ioos.us/csw'
#endpoint = 'http://gamone.whoi.edu/csw'
endpoint = 'https://data.ioos.us/csw'
#endpoint = 'https://ngdc.noaa.gov/geoportal/csw'
csw = CatalogueServiceWeb(endpoint,timeout=60)
print csw.version
csw.get_operation_by_name('GetRecords').constraints
val = 'COAWST'
filter1 = fes.PropertyIsLike(propertyname='apiso:AnyText',literal=('*%s*' % val),
escapeChar='\\',wildCard='*',singleChar='?')
filter_list = [ filter1 ]
val = 'USGS'
filter2 = fes.PropertyIsLike(propertyname='apiso:AnyText',literal=('*%s*' % val),
escapeChar='\\',wildCard='*',singleChar='?')
filter_list = [fes.And([filter1, filter2])]
endpoint = 'http://gamone.whoi.edu/csw'
csw = CatalogueServiceWeb(endpoint,timeout=60,version="2.0.2")
print csw.version
csw.getrecords2(constraints=filter_list,maxrecords=100,esn='full')
print len(csw.records.keys())
for rec in list(csw.records.keys()):
print csw.records[rec].title
endpoint = 'http://gamone.whoi.edu/csw/'
csw = CatalogueServiceWeb(endpoint,timeout=60,version="2.0.2")
print csw.version
csw.getrecords2(constraints=filter_list,maxrecords=100,esn='full')
print len(csw.records.keys())
for rec in list(csw.records.keys()):
print csw.records[rec].title
val = 'G1SST'
filter1 = fes.PropertyIsLike(propertyname='apiso:AnyText',literal=('*%s*' % val),
escapeChar='\\',wildCard='*',singleChar='?')
filter_list = [ filter1 ]
val = 'GHRSST'
filter2 = fes.PropertyIsLike(propertyname='apiso:AnyText',literal=('*%s*' % val),
escapeChar='\\',wildCard='*',singleChar='?')
filter_list = [fes.And([filter1, filter2])]
endpoint = 'https://ngdc.noaa.gov/geoportal/csw'
csw = CatalogueServiceWeb(endpoint,timeout=60)
print csw.version
csw.getrecords2(constraints=filter_list,maxrecords=100,esn='full')
print len(csw.records.keys())
for rec in list(csw.records.keys()):
print csw.records[rec].title
endpoint = 'https://dev-catalog.ioos.us/csw'
csw = CatalogueServiceWeb(endpoint,timeout=60)
print csw.version
csw.getrecords2(constraints=filter_list,maxrecords=100,esn='full')
print len(csw.records.keys())
for rec in list(csw.records.keys()):
print csw.records[rec].title
def fes_date_filter(start, stop, constraint='overlaps'):
Take datetime-like objects and returns a fes filter for date range
(begin and end inclusive).
NOTE: Truncates the minutes!!!
Examples
--------
>>> from datetime import datetime, timedelta
>>> stop = datetime(2010, 1, 1, 12, 30, 59).replace(tzinfo=pytz.utc)
>>> start = stop - timedelta(days=7)
>>> begin, end = fes_date_filter(start, stop, constraint='overlaps')
>>> begin.literal, end.literal
('2010-01-01 12:00', '2009-12-25 12:00')
>>> begin.propertyoperator, end.propertyoperator
('ogc:PropertyIsLessThanOrEqualTo', 'ogc:PropertyIsGreaterThanOrEqualTo')
>>> begin, end = fes_date_filter(start, stop, constraint='within')
>>> begin.literal, end.literal
('2009-12-25 12:00', '2010-01-01 12:00')
>>> begin.propertyoperator, end.propertyoperator
('ogc:PropertyIsGreaterThanOrEqualTo', 'ogc:PropertyIsLessThanOrEqualTo')
start = start.strftime('%Y-%m-%d %H:00')
stop = stop.strftime('%Y-%m-%d %H:00')
if constraint == 'overlaps':
propertyname = 'apiso:TempExtent_begin'
begin = fes.PropertyIsLessThanOrEqualTo(propertyname=propertyname,
literal=stop)
propertyname = 'apiso:TempExtent_end'
end = fes.PropertyIsGreaterThanOrEqualTo(propertyname=propertyname,
literal=start)
elif constraint == 'within':
propertyname = 'apiso:TempExtent_begin'
begin = fes.PropertyIsGreaterThanOrEqualTo(propertyname=propertyname,
literal=start)
propertyname = 'apiso:TempExtent_end'
end = fes.PropertyIsLessThanOrEqualTo(propertyname=propertyname,
literal=stop)
else:
raise NameError('Unrecognized constraint {}'.format(constraint))
return begin, end
bbox = [-71.3, 42.03, -70.57, 42.63]
bbox_filter = fes.BBox(bbox,crs='urn:ogc:def:crs:OGC:1.3:CRS84')
from datetime import datetime, timedelta
import pytz
stop = datetime(2016, 8, 24, 0, 0, 0).replace(tzinfo=pytz.utc)
start = stop - timedelta(days=7)
begin, end = fes_date_filter(start, stop, constraint='overlaps')
print(start)
print(stop)
filter_list = [fes.And([filter1, filter2, bbox_filter, begin, end])]
csw.getrecords2(constraints=filter_list, maxrecords=1000)
print len(csw.records.keys())
for rec in list(csw.records.keys()):
print csw.records[rec].title
choice=np.random.choice(list(csw.records.keys()))
print(csw.records[choice].title)
csw.records[choice].references
csw.records[choice].xml
val = 'OPeNDAP'
filter3 = fes.PropertyIsLike(propertyname='apiso:ServiceType',literal=('*%s*' % val),
escapeChar='\\',wildCard='*',singleChar='?')
filter_list = [fes.And([filter1, filter2, filter3])]
csw.getrecords2(constraints=filter_list, maxrecords=1000)
print(len(csw.records.keys()))
for rec in list(csw.records.keys()):
print('title:'+csw.records[rec].title)
print('identifier:'+csw.records[rec].identifier)
print('modified:'+csw.records[rec].modified)
print(' ')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Search first for records containing the two text strings
Step3: Now let's print out the references (service endpoints) to see what types of services are available
Step4: We see that the OPeNDAP service is available, so let's see if we can add that to the query, returning only datasets that match both text filters and that have an "opendap" service available.
|
12,296
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
plt.rcParams['figure.figsize'] = (20.0, 10.0)
plt.rcParams['font.family'] = "serif"
df = pd.read_csv('../../../datasets/movie_metadata.csv')
df.head()
# split each movie's genre list, then form a set from the unwrapped list of all genres
categories = set([s for genre_list in df.genres.unique() for s in genre_list.split("|")])
# one-hot encode each movie's classification
for cat in categories:
df[cat] = df.genres.transform(lambda s: int(cat in s))
# drop other columns
df = df[['director_name','genres','duration'] + list(categories)]
df.head()
# convert from wide to long format and remove null classificaitons
df = pd.melt(df,
id_vars=['duration'],
value_vars = list(categories),
var_name = 'Category',
value_name = 'Count')
df = df.loc[df.Count>0]
top_categories = df.groupby('Category').aggregate(sum).sort_values('Count', ascending=False).index
howmany=10
df = df.loc[df.Category.isin(top_categories[:howmany])]
df.rename(columns={"duration":"Duration"},inplace=True)
df.head()
p = sns.boxplot(data=df,
x = 'Category',
y = 'Duration')
df = df.loc[df.Duration < 250]
p = sns.boxplot(data=df,
x = 'Category',
y = 'Duration')
p = sns.boxplot(data=df,
x = 'Category',
y = 'Duration',
order = sorted(df.Category.unique()))
p = sns.boxplot(data=df,
y = 'Category',
x = 'Duration',
order = sorted(df.Category.unique()),
orient="h")
p = sns.boxplot(data=df,
x = 'Category',
y = 'Duration',
order = sorted(df.Category.unique()),
saturation=.25)
p = sns.boxplot(data=df,
x = 'Category',
y = 'Duration',
order = sorted(df.Category.unique()),
width=.25)
p = sns.boxplot(data=df,
x = 'Category',
y = 'Duration',
order = sorted(df.Category.unique()),
fliersize=20)
p = sns.boxplot(data=df,
x = 'Category',
y = 'Duration',
order = sorted(df.Category.unique()),
whis=.2)
p = sns.boxplot(data=df,
x = 'Category',
y = 'Duration',
order = sorted(df.Category.unique()),
notch=True)
p = sns.boxplot(data=df,
x = 'Category',
y = 'Duration',
order = sorted(df.Category.unique()),
notch=False,
linewidth=2.5)
sns.set(rc={"axes.facecolor":"#ccddff",
"axes.grid":False,
'axes.labelsize':30,
'figure.figsize':(20.0, 10.0),
'xtick.labelsize':25,
'ytick.labelsize':20})
p = sns.boxplot(data=df,
x = 'Category',
y = 'Duration',
palette = 'Paired',
order = sorted(df.Category.unique()),
notch=True)
plt.xticks(rotation=45)
l = plt.xlabel('')
plt.ylabel('Duration (min)')
plt.text(5.4,200, "Box Plot", fontsize = 95, color="black", fontstyle='italic')
p.get_figure().savefig('../../figures/boxplot.png')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: For the box plot, let's look at the number of movies in each category, allowing each movie to be counted more than once.
Step2: Basic plot
Step3: The outliers here are making things a bit squished, so I'll remove them since I am just interested in demonstrating the visualization tool.
Step4: Change the order of categories
Step5: Flip the plot to a horizontal orientation
Step6: Desaturate
Step7: Adjust width of boxes
Step8: Change the size of outlier markers
Step9: Adjust the position of the whiskers as a fraction of IQR
Step10: Add a notch to the box indicating a confidence interval for the median
Step11: Finalize
|
12,297
|
<ASSISTANT_TASK:>
Python Code:
def generator1():
yield 1
yield 2
yield 3
for value in generator1():
print(value)
def generator2():
yield "Hello"
yield "World"
my_gen = generator2()
print(next(my_gen))
print(next(my_gen))
def check_for_value(num):
try:
while True:
received_value = (yield) #data sent to this co is stored here
if num == received_value: #check the value that was sent
print("Correct!")
else:
print("Incorrect")
except GeneratorExit: #catches coroutine.close()
print("Closing Coroutine.")
coroutine = check_for_value(42) #instantiating the coroutine w/42
coroutine.__next__() #starts co and pauses at 1st yield
coroutine.send(7)
coroutine.send(42)
coroutine.close() #close coroutine
def producer(data_in, next_coroutine):
tokens = data_in.split(" ")
for token in tokens:
next_coroutine.send(token)
next_coroutine.close()
def middle_coroutine(filter_str, next_coroutine):
print("Search for words containing: ", filter_str)
try:
while True:
token = (yield)
if filter_str in token:
next_coroutine.send(token)
except GeneratorExit:
print("Done filtering")
next_coroutine.close()
def output_coroutine():
try:
while True:
token = (yield)
print(token)
except GeneratorExit:
print("Done printing")
sink = output_coroutine()
sink.__next__()
middle = middle_coroutine("t", sink)
middle.__next__()
input_str = "This is a sentence with some words containing the letter t."
producer(input_str, middle)
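# Expected trace (a hand check, given the input string above): the tokens that
# contain a lowercase 't' are printed one per line -- "sentence", "with",
# "containing", "the", "letter", "t." -- followed by "Done filtering" and
# "Done printing".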
import asyncio
# Borrowed from http://curio.readthedocs.org/en/latest/tutorial.html.
@asyncio.coroutine
def countdown(number, n):
while n > 0:
print('T-minus', n, '({})'.format(number))
yield from asyncio.sleep(1)
n -= 1
asyncio.set_event_loop(asyncio.new_event_loop())
loop = asyncio.get_event_loop()
tasks = [asyncio.ensure_future(countdown("A", 2)), asyncio.ensure_future(countdown("B", 3))]
loop.run_until_complete(asyncio.wait(tasks))
loop.close()
import asyncio
# Borrowed from http://curio.readthedocs.org/en/latest/tutorial.html.
async def countdown(number, n):
while n > 0:
print('T-minus', n, '({})'.format(number))
await asyncio.sleep(1)
n -= 1
asyncio.set_event_loop(asyncio.new_event_loop())
loop = asyncio.get_event_loop()
tasks = [asyncio.ensure_future(countdown("A", 2)), asyncio.ensure_future(countdown("B", 3))]
loop.run_until_complete(asyncio.wait(tasks))
loop.close()
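# On Python 3.7+ the explicit loop management above can be condensed with
# asyncio.run (a sketch; calling it inside Jupyter would conflict with the
# already-running event loop):
# async def main():
#     await asyncio.gather(countdown("A", 2), countdown("B", 3))
# asyncio.run(main())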
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Coroutines
Step2: Coroutine Pipelines
Step3: Asynchronous Python
Step4: Async and Await (python 3.5+)
|
12,298
|
<ASSISTANT_TASK:>
Python Code:
%%capture
!wget https://github.com/cltl/python-for-text-analysis/raw/master/zips/Data.zip
!wget https://github.com/cltl/python-for-text-analysis/raw/master/zips/images.zip
!wget https://github.com/cltl/python-for-text-analysis/raw/master/zips/Extra_Material.zip
!unzip Data.zip -d ../
!unzip images.zip -d ./
!unzip Extra_Material.zip -d ../
!rm Data.zip
!rm Extra_Material.zip
!rm images.zip
friends = ['John', 'Bob', 'Mary']
stuff_to_pack = ['socks','shirt','toothbrush']
print(friends)
print(stuff_to_pack)
#list of integers
print([1, 24, 76])
#list of strings
print(['red', 'yellow', 'blue'])
#mixed list
print(['red', 24, 98.6])
#list with a list included
print([1, [5, 6], 7])
#empty list
print([])
one_way = []
print(one_way)
another_way = list()
print(another_way)
help(list.append)
a_list = [1, 3, 4]
a_list.append(5)
print(a_list)
a_list = [1, 3, 4]
a_list = a_list.append(5)
print(a_list)
help(str.replace)
a_string = 'hello'
a_string.replace('l', 'b')
print(a_string) # a_string has not been changed
a_string = 'hello'
a_new_string = a_string.replace('l', 'b')
print(a_string)
print(a_new_string)
fruit_basket = ['apple', 'banana', 'cherry']
fruit = 'banana'
print(fruit_basket[1])
print(fruit[1])
friend_list = ['John', 'Bob', 'Marry']
print(friend_list[0])
print(friend_list[1])
print(friend_list[2])
friend_list = ['John', 'Bob', 'Marry']
print(friend_list[-2])
friend_list = ['John', 'Bob', 'Marry']
list_with_fewer_friends = friend_list[:2]
print(list_with_fewer_friends)
print(friend_list[5])
help(list.count)
friend_list = ['John', 'Bob', 'John', 'Marry', 'Bob']
number_of_bobs = friend_list.count('Bob')
print(number_of_bobs)
friend_list = ['John', 'Bob', 'John', 'Marry', 'Bob']
number_of_franks = friend_list.count('Frank')
print(number_of_franks)
help(list.index)
friend_list = ['John', 'Bob', 'John', 'Marry', 'Bob']
first_index_with_john = friend_list.index('Bob')
print(first_index_with_john)
friend_list = ['John', 'Bob', 'John', 'Marry', 'Bob']
friend_list.index('Frank')
one_list = ['where', 'is']
another_list = ['the', 'rest', '?']
print(one_list + another_list)
a_list = ['Hello', 'world']
print(a_list * 3)
life = ['a', 'lot', 'of', 'stuff']
print('meaning' in life)
print([3, 2] == [2, 3])
print([2,3] == [2,3])
nums = [3, 41, 12, 9, 74, 15]
print(len(nums)) # number of items in a list
print(max(nums)) # highest value in a list
print(min(nums)) # lowest value in a list
print(sum(nums)) # sum of all values in a list
#define some lists and variables
a = [1,2,3]
b = 4
c = [5,6,7]
x = 1
i = 2
#do some operations
a.append(b) # Add item b to the end of a
a.extend(c) # Add the elements of list c at the end of a
a.insert(i,b) # Insert item b at position i
a.pop(i) # Remove from a the i'th element and return it. If i is not specified, remove the last element
a.index(x) # Return the index of the first element of a with value x. Error if it does not exist
a.count(x) # Return how often value x is found in a
a.remove(x) # Remove from a the first element with value x. Error if it does not exist
a.sort() # Sort the elements of list a
a.reverse() # Reverses list a (no return value!)
print(a)
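# Tracing the operations above by hand (a worked check of the printed result):
# [1,2,3] -> append 4 -> extend [5,6,7] -> insert 4 at index 2 -> pop index 2
# -> remove the first 1 -> sort -> reverse, giving [7, 6, 5, 4, 3, 2]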
dir(list)
# your code here
friend_list = ['John', 'Bob', 'John', 'Marry', 'Bob']
# your code here
friend_list = ['John', 'Bob', 'John', 'Marry', 'Bob']
# your code here
friend_list = ['John', 'Bob', 'John', 'Marry', 'Bob']
# your code here
a_string = 'hello'
a_list = ['hello']
# your code here
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Chapter 6 - Lists
Step2: Square brackets surround lists, and commas separate the elements in the list
Step3: Please note that there are two ways of creating an empty list.
Step4: 2. How to add items to a list
Step5: We learn that append has one positional parameter object. Remember that you should ignore 'self' (it just means that append is a list method). append returns None. It might be a bit confusing at first that a list method returns None. Please carefully look at the difference between the two following examples. Please predict what will be printed in each code snippet below
Step6: The reason why the first example is the correct one is that lists are mutable, which means that you can change the contents of a list. You can hence change the items in a list without assigning it to a new variable.
Step7: 3. How to extract/inspect items in a list
Step8: We can hence use this index number to extract items from a list (just as with strings).
Step9: Obviously, we can also use negative indices
Step10: And we can extract one part of a list using slicing
Step11: If you insert an index that is higher than what is present in the list, you will get an IndexError
Step12: Two additional methods are useful for inspecting lists
Step13: The count method has one positional parameter value and returns an integer. As the name already indicates, the method returns how often the value occurs in the list.
Step14: The index method has one positional parameter value, and two keyword parameters, and returns the first index of the value. It is hence similar to the count method, but now the first index is returned of the value instead of the count.
Step15: We get a ValueError when the value is not in the list.
Step16: 4. Basic List Operations
Step17: The '*' sign makes it possible to repeat a list
Step18: Of course, you can use lists in membership boolean expressions
Step19: And you can use lists in comparison boolean expressions
Step20: 5. Use built-in functions on lists
Step21: 6. An overview of list methods
Step22: In order to have a complete overview of all list methods, you can use the dir built-in function
Step23: Exercises
Step24: Exercise 2
Step25: Exercise 3
Step26: Exercise 4
Step27: Exercise 4
|
12,299
|
<ASSISTANT_TASK:>
Python Code:
df['Age'].describe()
df.groupby('Gender')['Income'].describe()
df['Income'].describe()
df['SchoolMajor'].value_counts()
df['SchoolDegree'].value_counts()
df.sort_values(by='StudentDebtOwe', ascending=False).head()
df[(df['BootcampFullJobAfter']==1) & (df['BootcampLoanYesNo']==1)].head()
df[(df['IsEthnicMinority']==1)& (df['CountryCitizen']=='United States of America')]
df.groupby('CountryCitizen')['Income'].sum().sort_values(ascending=False)
df['BootcampMonthsAgo'].value_counts()
df[(df['EmploymentStatus']== 'Employed for wages')& (df['BootcampMonthsAgo'] <=4)].head()
df['Age'].hist()
plt.style.available
plt.style.use('fivethirtyeight')
df['Age'].hist()
df['Age'].hist(bins=20, xlabelsize=10, ylabelsize=10, range=(0,50))
plt.style.use('fivethirtyeight')
df['StudentDebtOwe'].head(10).plot(kind='bar')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. What is the maximum income for female programmers?
Step2: 3. How much does a programmer make on average per year?
Step3: 4. What is the most common major programmers studied?
Step4: 5. What is the highest degree these programmers have completed?
Step5: 6. Rank programmers by the amount of debt
Step6: 7. Show programmers who took loans to attend bootcamp and have a full-time job after attending bootcamp
Step7: 8. How many programmers are citizens of the United States of America and an ethnic minority?
Step8: 9. Programmers from which country earn the most?
Step9: 10. How many programmers found jobs within 4 months or less of graduating from bootcamp?
Step10: This is a histogram of programmers' ages
Step11: Change the style of the previous plot to fivethirtyeight
Step12: Customize the chart by changing the bin size, xlabelsize, ylabelsize and range
Step13: Make a graph that shows the amount of student debt the first 10 programmers from the dataframe owe.
|