<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
A=np.asarray([[1,1,1], [1,1,2], [1,1,3], [1,1,4]])
B=np.asarray([[0,0,0], [1,0,2], [1,0,3], [1,0,4], [1,1,0], [1,1,1], [1,1,4]])
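# Encode each row as a single flat index so whole rows can be compared with np.in1d;
# the final output is the set of rows that appear in exactly one of A and B (symmetric difference).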
dims = np.maximum(B.max(0),A.max(0))+1
result = A[~np.in1d(np.ravel_multi_index(A.T,dims),np.ravel_multi_index(B.T,dims))]
output = np.append(result, B[~np.in1d(np.ravel_multi_index(B.T,dims),np.ravel_multi_index(A.T,dims))], axis = 0)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Getting the data
Step2: Cleaning the data
Step3: Remove the <br /> HTML line breaks from the values in the 'WindDirDegrees' column.
Step4: Change the values in the 'WindDirDegrees' column to float64
Step5: We definitely need to change the values in the 'GMT' column into values of the datetime64 date type.
Step6: We also need to change the index from the default to the datetime64 values in the 'Date' column so that it is easier to pull out rows between particular dates and display more meaningful graphs
Step7: Finding a summer break
Step8: I now look for the days with warm temperatures.
Step9: Summer 2014 was rather cool in London
Step10: Now let's plot the 'Mean TemperatureC' for the summer
Step11: Looking at the graph, the second half of July looks good, with mean temperatures over 20 degrees C, so let's also put precipitation on the graph
Step12: The second half of July is still looking good, with just a couple of peaks showing heavy rain. Let's have a closer look by just plotting mean temperature and precipitation for July.
|
<ASSISTANT_TASK:>
Python Code:
import sys
sys.version
import warnings
warnings.simplefilter('ignore', FutureWarning)
from pandas import *
from datetime import datetime
show_versions()
delhi = read_csv('Delhi_DEL_2014.csv', skipinitialspace=True)
delhi.head()
delhi = delhi.rename(columns={'WindDirDegrees<br />' : 'WindDirDegrees'})
delhi['WindDirDegrees'] = delhi['WindDirDegrees'].str.replace('<br />', '')  # drop the trailing HTML tag
delhi['WindDirDegrees'] = delhi['WindDirDegrees'].astype('float64')
delhi['Date'] = to_datetime(delhi['Date'])
delhi.index = delhi['Date']
delhi.head()
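# with a DatetimeIndex in place, .loc can slice rows between two dates directly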
summer = delhi.loc[datetime(2014,5,1) : datetime(2014,8,31)]
summer[summer['Mean TemperatureC'] >= 25].head()
%matplotlib inline
summer[['Mean TemperatureC']].plot(grid=True, figsize=(20,8));
summer[['Mean TemperatureC', 'Precipitationmm']].plot(grid=True, figsize=(20,8));
july = summer.loc[datetime(2014,7,1) : datetime(2014,7,31)]
july[['Mean TemperatureC', 'Precipitationmm']].plot(grid=True, figsize=(20,8));
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercise 2
|
<ASSISTANT_TASK:>
Python Code:
#!/usr/bin/env python
#
# This project will collect temperature and humidity information using a DHT22 sensor
# and send this information to a MySQL database.
#
import Adafruit_DHT
import time
import RPi.GPIO as GPIO
import datetime
import MySQLdb
# General settings
prog_name = "pilogger2.py"
# Settings for database connection
hostname = '172.20.101.81'
username = 'piuser3'
password = 'logger'
database = 'pidata'
dht_sensor_port = 4 # Connect the DHT sensor to port D
dht_sensor_type = Adafruit_DHT.DHT11 # Sensor type
device = 'pi-003' # Host name of the Pi
GPIO.setmode(GPIO.BCM) # Use the Broadcom pin numbering
led = 18                               # GPIO pin for a status LED (example value; adjust to your wiring)
GPIO.setup(led, GPIO.OUT)              # LED pin set as output
GPIO.setup(dht_sensor_port, GPIO.IN) # DHT sensor port as input
# Routine to insert temperature records into the pidata.temps table:
def insert_record( device, datetime, temp, hum ):
query = "INSERT INTO temps3 (device,datetime,temp,hum) " \
"VALUES (%s,%s,%s,%s)"
args = (device,datetime,temp,hum)
    conn = None
    cursor = None
    try:
        conn = MySQLdb.connect( host=hostname, user=username, passwd=password, db=database )
        cursor = conn.cursor()
        cursor.execute(query, args)
        conn.commit()
    except Exception as error:
        print(error)
    finally:
        # close only what was actually opened
        if cursor is not None:
            cursor.close()
        if conn is not None:
            conn.close()
# Print welcome
print('[{0:s}] starting on {1:s}...'.format(prog_name, datetime.datetime.today().strftime('%Y-%m-%d %H:%M:%S')))
# Main loop
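# Read the sensor every 3 minutes, convert the temperature to Fahrenheit, and insert a row into MySQL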
try:
while True:
hum, temp = Adafruit_DHT.read_retry(dht_sensor_type, dht_sensor_port)
        temp = temp * 9/5.0 + 32                       # convert Celsius to Fahrenheit
now = datetime.datetime.now()
date = now.strftime('%Y-%m-%d %H:%M:%S')
insert_record(device,str(date),format(temp,'.2f'),format(hum,'.2f'))
time.sleep(180)
except (IOError,TypeError) as e:
print("Exiting...")
except KeyboardInterrupt:
# here you put any code you want to run before the program
# exits when you press CTRL+C
print("Stopping...")
finally:
print("Cleaning up...")
GPIO.cleanup() # this ensures a clean exit
%load_ext sql
%%sql mysql://piuser3:logger@172.20.101.81/pidata
select * from temps3;
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the model
Step2: Call hanlp.load to load it; the model is downloaded to the local cache automatically:
Step3: Semantic textual similarity
|
<ASSISTANT_TASK:>
Python Code:
!pip install hanlp -U
import hanlp
hanlp.pretrained.sts.ALL  # the language is given by the last field of the identifier or by its corpus
sts = hanlp.load(hanlp.pretrained.sts.STS_ELECTRA_BASE_ZH)
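# the loaded model scores a list of sentence pairs; higher scores mean the two sentences are more similar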
sts([
('看图猜一电影名', '看图猜电影'),
('无线路由器怎么无线上网', '无线上网卡和无线路由器怎么用'),
('北京到上海的动车票', '上海到北京的动车票'),
])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <hr>
Step2: Add Pos Tags
Step3: data frame
Step4: Tri Gram POS Tags
Step5: Function
Step6: <hr>
Step7: Test on Row
Step8: <hr>
|
<ASSISTANT_TASK:>
Python Code:
import pyspark as ps
from sentimentAnalysis import dataProcessing as dp
# create spark session
spark = ps.sql.SparkSession(sc)
# get dataframes
# specify S3 as the source with s3a://
#df = spark.read.json("s3a://amazon-review-data/user_dedup.json.gz")
#df_meta = spark.read.json("s3a://amazon-review-data/metadata.json.gz")
# get shard
df_raw_data = spark.read.json("s3a://amazon-review-data/reviews_Musical_Instruments_5.json.gz")
# subset asin, reviewText
df_subset = df_raw_data.select("asin", "reviewText")
df_tokens = dp.add_tokens(df_subset)
from pyspark.ml.feature import NGram
# instantiate ngram object
ngram = NGram(n=3, inputCol="rawTokens", outputCol="triGrams")
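# NGram joins each group of 3 consecutive tokens into a single space-separated string,
# which is why tag_triGrams below splits each triGram before tagging it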
# add ngrams
df_triGrams = ngram.transform(df_tokens)
df_triGrams.show(3)
import nltk
# get test row
test_row = df_triGrams.first()
type(test_row["triGrams"])
# test the POS tagger on the tokens of the first row
nltk.pos_tag(test_row["tokens"])
from pyspark.sql.types import ArrayType, StringType
# create udf
pos_udf = ps.sql.functions.udf(lambda x: nltk.pos_tag(x), ArrayType(ArrayType(StringType())))
# apply udf, create new column
df_posTag = df_tokens.withColumn("posTags", pos_udf(df_tokens["tokens"]))
df_posTag.show(3)
df_posTag.select("posTags").first()
test_row["triGrams"][:10]
def tag_triGrams(triGrams):
tagged = []
for triGram in triGrams:
tagged.append(nltk.pos_tag(triGram.split()))
return tagged
test_row["triGrams"][0].split()
tag_triGrams(test_row["triGrams"])[:10]
# create udf
pos_triTag_udf = ps.sql.functions.udf(lambda x: tag_triGrams(x), ArrayType(ArrayType(ArrayType(StringType()))))
# apply udf, create new column
df_triPosTags = df_triGrams.withColumn("triPosTags", pos_triTag_udf(df_triGrams["triGrams"]))
df_triPosTags.show(3)
test_row = df_triPosTags.first()
test_row["triPosTags"]
# import nltk
# from pyspark.sql.types import ArrayType, StringType
def addPosTags(df_tokens):
# create udf
pos_udf = ps.sql.functions.udf(lambda x: nltk.pos_tag(x), ArrayType(ArrayType(StringType())))
# apply udf, create new column
df_posTag = df_tokens.withColumn("posTags", pos_udf(df_tokens["tokens"]))
df_posTag = df_posTag.withColumn("raw_posTags", pos_udf(df_tokens["rawTokens"]))
return df_posTag
# test
df_posTag = addPosTags(df_tokens)
df_posTag.show(3)
tag_seqs_re = [('JJ', '^(NN|NS)', '.*'),
('^(RB|RBR|RBS)', 'JJ', '^(?!(NN|NS)).*'),
('JJ', 'JJ', '^(?!(NN|NS)).*'),
('^(NN|NS)', 'JJ', '^(?!(NN|NS)).*'),
('^(RB|RBR|RBS)', '^(VB|VBN|VBD|VBG)', '.*')
]
# get python regex
import re
# get test row
test_row = df_triPosTags.first()
# check triGram tags- want tagged raw tokens (stopwords not removed)
test_row["triPosTags"][:10]
# function to check if a tagged triGram matches a single sequence
def is_match(triPosTag, seq):
# iterate over tags in triPosTag
for i,el in enumerate(triPosTag):
print(el[1]+" match "+seq[i])
# return False if tag does not match sequence
        if not re.match(seq[i], el[1]):   # pattern first, then the tag being tested
return False
# returns true if no mismatches found
return True
def match_pos_seq(taggedTriGram):
    # True if the tagged triGram matches any of the target tag sequences defined above
    for seq in tag_seqs_re:
        if is_match(taggedTriGram, seq):
            return True
    return False
# get test tag
test_triPosTag = test_row["triPosTags"][0]
# create test match tag
test_triPosTag_match = [["a", "NN"], ["b", "JJ"], ["c", "RR"]]
# test regex match works
tag_seqs_re[3]
re.match(tag_seqs_re[3][0], "NN")
# test is_match()
is_match(test_triPosTag_match, tag_seqs_re[3])
#df_obj_only.write.json("s3a://amazon-review-data/review-data")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The initial visualization already shows an overall upward trend, seasonality with peaks in December and drops in January (annual bonuses), and growing variance. Each of these will need to be dealt with separately
Step2: Stabilizing the variance
Step3: Stationarity
Step4: Victory: we have obtained a stationary series. Let's rock
Step5: SARIMA is our go-to model
Step6: Initial approximations
Step7: The residuals are unbiased (confirmed by Student's t-test), stationary (confirmed by the Dickey-Fuller test and visually), and not autocorrelated (confirmed by the Ljung-Box test and the correlogram). Let's see how well the model describes the data
Step8: The model describes the actual data quite well
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
from scipy import stats
import statsmodels.api as sm
import matplotlib.pyplot as plt
import seaborn
import warnings
from itertools import product
import numpy as np
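# invboxcox undoes the Box-Cox transform, so model output can be mapped back to the original scale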
def invboxcox(y,lmbda):
if lmbda == 0:
return(np.exp(y))
else:
return(np.exp(np.log(lmbda*y+1)/lmbda))
%matplotlib inline
salary = pd.read_csv('WAG_C_M.csv', sep=';', index_col=['month'], parse_dates=['month'], dayfirst=True)
# load the data
salary.head()
salary.rename(columns={'WAG_C_M': 'salary_rub'}, inplace=True)
salary.salary_rub.plot(figsize=(15, 10), title='Average salary in Russia', fontsize=12);
plt.xlabel('month', fontsize=12)
plt.ylabel('average salary', fontsize=12)
plt.show()
# Check stationarity and STL-decompose the series:
plt.rcParams["figure.figsize"] = (10,15)
sm.tsa.seasonal_decompose(salary.salary_rub).plot( )
print("Dickey-Fuller test: p=%f" % sm.tsa.stattools.adfuller(salary.salary_rub)[1])
# The Dickey-Fuller test does not confirm the stationarity hypothesis. But wait, there are still trend and seasonality to deal with
plt.rcParams["figure.figsize"] = (10,8)
salary['salary_box'], lmbda = stats.boxcox(salary.salary_rub)
salary.salary_box.plot()
plt.ylabel('Transformed salary')
print("Optimal Box-Cox transformation parameter: %f" % lmbda)
print("Dickey-Fuller test: p=%f" % sm.tsa.stattools.adfuller(salary.salary_box)[1])
salary['salary_box_diff'] = salary.salary_box - salary.salary_box.shift(12)
sm.tsa.seasonal_decompose(salary.salary_box_diff[12:]).plot()
print("Dickey-Fuller test: p=%f" % sm.tsa.stattools.adfuller(salary.salary_box_diff[12:])[1])
# Seasonal (yearly) differencing was not enough, difference once more
salary['salary_box_diff12_1'] = salary.salary_box_diff - salary.salary_box_diff.shift(1)
sm.tsa.seasonal_decompose(salary.salary_box_diff12_1[13:]).plot()
print("Dickey-Fuller test: p=%f" % sm.tsa.stattools.adfuller(salary.salary_box_diff12_1[13:])[1])
salary.salary_box_diff = salary.salary_box_diff12_1
salary.drop('salary_box_diff12_1', axis=1, inplace=True)
ax = plt.subplot(211)
sm.graphics.tsa.plot_acf(salary.salary_box_diff[13:].values.squeeze(), lags=48, ax=ax)
plt.show()
ax = plt.subplot(212)
sm.graphics.tsa.plot_pacf(salary.salary_box_diff[13:].values.squeeze(), lags=48, ax=ax)
plt.show()
ps = range(0, 19)
d = 1
qs = range(0, 2)
Ps = range(0, 2)
D = 1
Qs = range(0, 1)
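# d and D are fixed at 1 because the series needed one seasonal (lag 12) and one regular differencing above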
parameters = product(ps, qs, Ps, Qs)
parameters_list = list(parameters)
len(parameters_list)
%%time
results = []
best_aic = float("inf")
warnings.filterwarnings('ignore')
for param in parameters_list:
    # try/except is needed because the model fails to fit for some parameter sets
try:
model=sm.tsa.statespace.SARIMAX(salary.salary_box, order=(param[0], d, param[1]),
seasonal_order=(param[2], D, param[3], 12)).fit(disp=-1)
    # report parameter sets the model cannot be fitted on and move to the next one
except ValueError:
print('wrong parameters:', param)
continue
aic = model.aic
    # keep the best model, its AIC and its parameters
if aic < best_aic:
best_model = model
best_aic = aic
best_param = param
results.append([param, model.aic])
warnings.filterwarnings('default')
result_table = pd.DataFrame(results)
result_table.columns = ['parameters', 'aic']
print(result_table.sort_values(by = 'aic', ascending=True).head())
# The best model:
print(best_model.summary())
# Let's look at the model residuals:
plt.subplot(211)
best_model.resid[13:].plot()
plt.ylabel(u'Residuals')
ax = plt.subplot(212)
sm.graphics.tsa.plot_acf(best_model.resid[13:].values.squeeze(), lags=48, ax=ax)
print("Student's t-test: p=%f" % stats.ttest_1samp(best_model.resid[13:], 0)[1])
print("Dickey-Fuller test: p=%f" % sm.tsa.stattools.adfuller(best_model.resid[13:])[1])
salary['model'] = invboxcox(best_model.fittedvalues, lmbda)
plt.rcParams["figure.figsize"] = (10,8)
salary.salary_rub.plot()
salary.model[13:].plot(color='r')
plt.ylabel('Salary in Russia')
plt.show()
import datetime
from dateutil.relativedelta import *
salary2 = salary[['salary_rub']]
date_list = [datetime.datetime.strptime("2016-09-01", "%Y-%m-%d") + relativedelta(months=x) for x in range(0,36)]
future = pd.DataFrame(index=date_list, columns=salary2.columns)
salary2 = pd.concat([salary2, future])
salary2['forecast'] = invboxcox(best_model.predict(start=284, end=325), lmbda)
salary2.salary_rub.plot()
salary2.forecast.plot(color='r')
plt.ylabel('Average salary in Russia (rub)')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Here print is the function name, and 'abc' is the function's argument.
Step2: When non-keyword arguments are used together with keyword arguments, the keyword arguments must come at the end.
Step3: Now we have a function named fibonacci which takes a single argument N, does something with this argument, and returns a value; in this case, a list of the first N Fibonacci numbers
Step4: If you're familiar with strongly-typed languages like C, you'll immediately notice that there is no type information associated with the function inputs or outputs.
Step5: Default Argument Values
Step6: With a single argument, the result of the function call is identical to before
Step7: But now we can use the function to explore new things, such as the effect of new starting values
Step8: The values can also be specified by name if desired, in which case the order of the named values does not matter
Step9: *args and **kwargs
Step10: Here it is not the names args and kwargs that are important, but the * characters preceding them.
Step11: Anonymous (lambda) Functions
Step12: This lambda function is roughly equivalent to
Step13: So why would you ever want to use such a thing?
Step14: Now suppose we want to sort this data.
Step15: But dictionaries are not orderable
|
<ASSISTANT_TASK:>
Python Code:
print('abc')
print(1, 2, 3)
print(1, 2, 3, sep='--')
def fibonacci(N):
L = []
a, b = 0, 1
while len(L) < N:
a, b = b, a + b
L.append(a)
return L
fibonacci(10)
def real_imag_conj(val):
return val.real, val.imag, val.conjugate()
r, i, c = real_imag_conj(3 + 4j)
print(r, i, c)
def fibonacci(N, a=0, b=1):
L = []
while len(L) < N:
a, b = b, a + b
L.append(a)
return L
fibonacci(10)
fibonacci(10, 0, 2)
fibonacci(10, b=3, a=1)
def catch_all(*args, **kwargs):
print("args =", args)
print("kwargs = ", kwargs)
catch_all(1, 2, 3, a=4, b=5)
catch_all('a', keyword=2)
inputs = (1, 2, 3)
keywords = {'pi': 3.14}
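# the * and ** operators unpack the tuple into positional arguments and the dict into keyword arguments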
catch_all(*inputs, **keywords)
add = lambda x, y: x + y
add(1, 2)
def add(x, y):
return x + y
data = [{'first':'Guido', 'last':'Van Rossum', 'YOB':1956},
{'first':'Grace', 'last':'Hopper', 'YOB':1906},
{'first':'Alan', 'last':'Turing', 'YOB':1912}]
sorted([2,4,3,5,1,6])
# sort alphabetically by first name
sorted(data, key=lambda item: item['first'])
# sort by year of birth
sorted(data, key=lambda item: item['YOB'])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Interact basics
Step3: Use the interact function to interact with the print_sum function.
Step5: Write a function named print_string that prints a string and additionally prints the length of that string if a boolean parameter is True.
Step6: Use the interact function to interact with the print_string function.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display
def print_sum(a, b):
    """Print the sum of the arguments a and b."""
# YOUR CODE HERE
c = a+b
print(c)
# YOUR CODE HERE
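# interact infers a widget for each argument: numeric ranges become sliders, booleans checkboxes, strings text boxes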
interact(print_sum, a=[-10, 10, 0.1], b=[-8, 8, 2]);
assert True # leave this for grading the print_sum exercise
def print_string(s, length=False):
    """Print the string s and optionally its length."""
# YOUR CODE HERE
if length == False:
print(s)
elif length == True:
length = len(s)
print(s, length)
# YOUR CODE HERE
interact(print_string, s='Hello World!', length=True);
assert True # leave this for grading the print_string exercise
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Helper function to check if a channel is weakly symmetric (and hence whether the closed-form capacity expression applies)
Step2: Compute the capacity of the channel. If the channel is weakly symmetric, use the direct equation
Step3: Compute the capacity for the weakly symmetric channel used in the lecture
Step4: Compute the capacity for a non-symmetric channel. Observe that the input distribution is not uniform
Step5: Compute the capacity for a Z-channel with error probability $q=0.1$
Step6: Compute capacities of the Z-channel with a varying range of input parameters
|
<ASSISTANT_TASK:>
Python Code:
import cvxpy as cp
import numpy as np
import matplotlib.pyplot as plt
# check if channel is weakly symmetric
def is_weakly_symmetric(P):
V = P.shape[1]
W = P.shape[0]
# check if matrix P is weakly symmetric
col1 = np.sort(P[:,0])
permutation_test = [np.array_equal(np.sort(P[:,k]), col1) for k in range(1,V)]
retval = all(permutation_test)
if retval == True:
row_sums = np.sum(P,axis=1)
if not all(row_sums == row_sums[0]):
retval = False;
return retval
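# For a weakly symmetric channel the capacity has the closed form log2(W) + sum(col * log2(col));
# otherwise it is found by maximizing the mutual information over the input distribution px with cvxpy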
def channel_capacity(P):
# input and output dimensions
V = P.shape[1]
W = P.shape[0]
if is_weakly_symmetric(P):
col = P[:,0]
C = np.log2(W) + np.sum(col * np.log2(col))
px = np.ones(V)/V
return C,px
else:
# compute Ptilde, multiplication is element-wise here (not matrix multiplication!)
Ptilde = np.zeros_like(P)
Ptilde[P > 0] = P[P > 0] * np.log2(P[P > 0])
Ptilde[np.isnan(Ptilde)] = 0 # case 0*log2(0) = 0 (and not inf/nan)
# optimize Px
px = cp.Variable(shape=V)
objective = cp.Maximize(np.sum(Ptilde,axis=0)@px + cp.sum(cp.entr(P@px))/np.log(2.0))
constraints = [cp.sum(px) == 1.0, px >= 0]
problem = cp.Problem(objective, constraints)
problem.solve()
return problem.value, px.value
# channel transition matrix of a symmetric channel
P = np.array([[1/3, 1/3], [1/2, 1/6], [1/6, 1/2]])
print(channel_capacity(P))
# channel transition matrix of an arbitrary channel
P = np.array([[1/2, 1/8], [1/3, 5/8], [1/6, 1/4]])
print(channel_capacity(P))
# Z-channel
q = 0.1
P = np.array([[1, q], [0, 1-q]])
print(channel_capacity(P))
qs = np.linspace(0.00001,0.99999,100)
Cs = np.zeros_like(qs)
pxs = np.empty((0,2))
for k in range(len(qs)):
P = np.array([[1, qs[k]], [0, 1-qs[k]]])
C,px = channel_capacity(P)
Cs[k] = C
pxs = np.vstack((pxs,px))
plt.figure(figsize=(12,7))
plt.plot(qs, Cs)
plt.xlim((0,1))
plt.ylim((0,1))
plt.xlabel('$q$',fontsize=14)
plt.ylabel('C (bit/channel use)',fontsize=14)
plt.figure(figsize=(12,3.5))
font = {'size' : 18}
plt.rc('font', **font)
#plt.rc('text', usetex=True)
plt.imshow(pxs.T, extent=[0, qs[-1], -1, 1], aspect='auto', vmin=0.2, vmax=0.8)
plt.xlim(0,1)
plt.xlabel('$q$')
plt.yticks([-0.5, 0.5], ('$P(X=1)$', '$P(X=0)$'))
plt.colorbar();
#plt.savefig('Zchannel_input_distribution.pdf',bbox_inches='tight')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Multiple Keys and Values
Step2: JSON and Arrays
Step3: More JSON + Arrays
Step4: Nested JSON Objects
Step5: From Python Dictionaries to JSON
Step6: Reading Twitter JSON
Step7: Twitter JSON Fields
|
<ASSISTANT_TASK:>
Python Code:
import json
import os
import codecs

jsonString = '{"key": "value"}'
# Parse the JSON string
dictFromJson = json.loads(jsonString)
# Python now has a dictionary representing this data
print ("Resulting dictionary object:\n", dictFromJson)
# Will print the value
print ("Data stored in \"key\":\n", dictFromJson["key"])
# This will cause an error!
print ("Data stored in \"value\":\n", dictFromJson["value"])
jsonString = '{ "name": "Cody", "occupation": "PostDoc", "goal": "Tenure" }'
# Parse the JSON string
dictFromJson = json.loads(jsonString)
# Python now has a dictionary representing this data
print ("Resulting dictionary object:\n", dictFromJson)
jsonString = '{"students": [{"name": "Cody", "occupation": "PostDoc", "goal": "Tenure"}, {"name": "Scott", "occupation": "Student", "goal": "Masters"}]}'
# Parse the JSON string
dictFromJson = json.loads(jsonString)
# Python now has a dictionary representing this data
print ("Resulting array:\n", dictFromJson)
print ("Each student:")
for student in dictFromJson["students"]:
print (student)
jsonString = '[{"name": "Cody","occupation": "PostDoc","goal": "Tenure"},{"name": "Scott","occupation": "Student","goal": "Masters","completed": true}]'
# Parse the JSON string
arrFromJson = json.loads(jsonString)
# Python now has an array representing this data
print ("Resulting array:\n", arrFromJson)
print ("Each student:")
for student in arrFromJson:
print (student)
jsonString = '{"disasters" : [{"event": "Nepal Earthquake","date": "25 April 2015","casualties": 8964,"magnitude": 7.8,"affectedAreas": [{"country": "Nepal","capital": "Kathmandu","population": 26494504},{"country": "India","capital": "New Dehli","population": 1276267000},{"country": "China","capital": "Beijing","population": 1376049000},{"country": "Bangladesh","capital": "Dhaka","population": 168957745}]}]}'
disasters = json.loads(jsonString)
for disaster in disasters["disasters"]:
print (disaster["event"])
print (disaster["date"])
for country in disaster["affectedAreas"]:
print (country["country"])
exObj = {
"event": "Nepal Earthquake",
"date": "25 April 2015",
"casualties": 8964,
"magnitude": 7.8
}
print ("Python Object:", exObj, "\n")
# now we can convert to JSON
print ("Object JSON:")
print (json.dumps(exObj), "\n")
# We can also pretty-print the JSON
print ("Readable JSON:")
print (json.dumps(exObj, indent=4)) # Indent adds space
tweetFilename = "first_BlackLivesMatter.json"
# Use Python's os.path.join to account for Windows, OSX/Linux differences
tweetFilePath = os.path.join("..", "00_data", "ferguson", tweetFilename)
print ("Opening", tweetFilePath)
# We use codecs to ensure we open the file in Unicode format,
# which supports larger character encodings
tweetFile = codecs.open(tweetFilePath, "r", "utf8")
# Read in the whole file, which contains ONE tweet and close
tweetFileContent = tweetFile.read()
tweetFile.close()
# Print the raw json
print ("Raw Tweet JSON:\n")
print (tweetFileContent)
# Convert the JSON to a Python object
tweet = json.loads(tweetFileContent)
print ("Tweet Object:\n")
print (tweet)
# We could have done this in one step with json.load()
# called on the open file, but our data files have
# a single tweet JSON per line, so this is more consistent
# What fields can we see?
print ("Keys:")
for k in sorted(tweet.keys()):
print ("\t", k)
print ("Tweet Text:", tweet["text"])
print ("User Name:", tweet["user"]["screen_name"])
print ("Author:", tweet["user"]["name"])
print("Source:", tweet["source"])
print("Retweets:", tweet["retweet_count"])
print("Favorited:", tweet["favorite_count"])
print("Tweet Location:", tweet["place"])
print("Tweet GPS Coordinates:", tweet["coordinates"])
print("Twitter's Guessed Language:", tweet["lang"])
# Tweets have a list of hashtags, mentions, URLs, and other
# attachments in "entities" field
print ("\n", "Entities:")
for eType in tweet["entities"]:
print ("\t", eType)
for e in tweet["entities"][eType]:
print ("\t\t", e)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: List RunSessions, ordered by descending timestamp
Step2: Print latest RunSession
Step3: We have overridden the __repr__ function in the base class for SqlAlchemy tables to print out the type, a dictionary of contents, and a list of relations.
Step4: Not much information here. Let's print out information on the builds associated with the build executions
Step5: Not very readable. Let's use prettyprint
Step6: Better, but not good. The overridden repr implementation is supposed to be evalable. Note that we need to import datetime and call repr explicitly. Let's try
Step7: Much better. We can see the 0hash and icepole implementations succeeded but the 0hash implementation failed. The log is mostly empty since we've rebuilt and thus there's not much makefile output.
Step8: Too much stuff. Let's clean it up
|
<ASSISTANT_TASK:>
Python Code:
# %load startup.ipy
#! /usr/bin/env python3
import sys
sys.path.append('./python')
import logging.config
import os
import xbx.database as xbxdb
import xbx.util as xbxu
import xbx.config as xbxc
import xbx.build as xbxb
import xbx.run as xbxr
logging.config.fileConfig("logging.ini", disable_existing_loggers=False)
CONFIG_PATH="config.ini"
xbxdb.init(xbxu.get_db_path(CONFIG_PATH))
config = xbxc.Config(CONFIG_PATH)
dbsession = xbxdb.scoped_session()
s=dbsession
l = s.query(xbxr.RunSession).order_by(xbxr.RunSession.timestamp.desc())
print([i.timestamp for i in l])
rs=l.first()
print(rs)
print([i for i in rs.build_execs])
print([(i, i.build) for i in rs.build_execs])
import pprint
pp = pprint.PrettyPrinter(indent=4)
pp.pprint([(i, i.build) for i in rs.build_execs])
import datetime
pp.pprint([(eval(repr(i)), eval(repr(i.build))) for i in rs.build_execs])
import datetime
for i in rs.build_execs:
if not i.test_ok:
for j in i.runs:
print(j)
import datetime
for i in rs.build_execs:
if not i.test_ok:
for j in i.runs:
print('{}: {}'.format(i.build.buildid,j.checksumfail_cause))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Plot helper functions
Step2: Reusability for new data
Step3: Training and testing a machine learning model for classification
Step4: Training and testing a machine learning model for regression
|
<ASSISTANT_TASK:>
Python Code:
# read the frequencies and get a pandas Series
frequency = pd.read_csv('data/freq.csv')['freqs']
# read all data for training
filenames = ['data/spectra_{}.csv'.format(i)
for i in range(4)]
spectra, concentration, molecule = [], [], []
for filename in filenames:
spectra_file, concentration_file, molecule_file = read_spectra(filename)
spectra.append(spectra_file)
concentration.append(concentration_file)
molecule.append(molecule_file)
# Concatenate into a single DataFrame and Series
spectra = pd.concat(spectra)
concentration = pd.concat(concentration)
molecule = pd.concat(molecule)
fig, ax = plot_spectra(frequency, spectra, 'All training spectra')
fig, ax = plot_spectra_by_type(frequency, spectra, molecule)
ax.set_title('Mean spectra in function of the molecules')
fig, ax = plot_spectra_by_type(frequency, spectra, concentration,
'Mean spectra in function of the concentrations')
spectra_test, concentration_test, molecule_test = read_spectra('data/spectra_4.csv')
plot_spectra(frequency, spectra_test,
'All training spectra')
plot_spectra_by_type(frequency, spectra_test, molecule_test,
'Mean spectra in function of the molecules')
plot_spectra_by_type(frequency, spectra_test, concentration_test,
'Mean spectra in function of the concentrations');
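# standardize, project onto 100 principal components, then fit each classifier and evaluate it on the held-out file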
for clf in [RandomForestClassifier(random_state=0),
LinearSVC(random_state=0)]:
pipeline = make_pipeline(StandardScaler(),
PCA(n_components=100, random_state=0),
clf)
y_pred = pipeline.fit(spectra, molecule).predict(spectra_test)
fig, ax = plot_cm(
confusion_matrix(molecule_test, y_pred),
pipeline.classes_,
'Confusion matrix using {}'.format(clf.__class__.__name__))
print('Accuracy score: {0:.2f}'.format(pipeline.score(spectra_test,
molecule_test)))
regression_experiment(spectra, spectra_test,
concentration, concentration_test)
# compute the statistics on the training data
med, var = fit_params(spectra)
# transform the training and testing data
spectra_scaled = transform(spectra, med, var)
spectra_test_scaled = transform(spectra_test, med, var)
regression_experiment(spectra_scaled, spectra_test_scaled,
concentration, concentration_test)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Split data into training/test data
Step2: Fit
Step3: Control variance of observed/unobserved data
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import logging
import itertools
from scipy.sparse import csr_matrix
import rescal
from almc.bayesian_rescal import BayesianRescal
%matplotlib inline
logger = logging.getLogger()
logger.setLevel(logging.INFO)
max_iter = 20
n_entity = 10
n_dim = 5
n_relation = 20
var_e = 1.
var_r = 1.
var_x = 1.
e_mean = np.zeros(n_dim)
r_mean = np.zeros(n_dim**2)
E = np.random.multivariate_normal(e_mean, np.identity(n_dim) * var_e, size = n_entity)
R = np.zeros([n_relation, n_dim, n_dim])
for k in range(n_relation):
R[k] = np.random.multivariate_normal(r_mean, np.identity(n_dim**2)*var_r).reshape(n_dim,n_dim)
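# Build the observed tensor: each slice X[k] is E R_k E^T with element-wise Gaussian noise added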
X = np.zeros([n_relation, n_entity, n_entity])
for k in range(n_relation):
ERET = np.dot(np.dot(E, R[k]), E.T)
for i,j in itertools.product(range(n_entity), repeat=2):
X[k,i,j] = np.random.normal(ERET[i,j], var_x)
import itertools
trainX = np.zeros_like(X)
p = 1
for k in range(n_relation):
for i,j in itertools.product(range(n_entity),repeat=2):
if X[k, i, j] and np.random.binomial(1, p):
trainX[k, i, j] = X[k, i, j]
model = BayesianRescal(n_dim, var_e=var_e, var_x=var_x, var_r=var_r)
model.fit(trainX, max_iter=max_iter)
csr_X = list()
for k in range(n_relation):
csr_X.append(csr_matrix(trainX[k]))
E, R, f, itr, exectimes = rescal.rescal_als(csr_X, n_dim)
_X = model._reconstruct()
print("BayesRESCAL:Element-wise squared error: %.3f" %(np.sum((X-_X)**2)))
_X = np.zeros_like(X)
for k in range(n_relation):
_X[k] = np.dot(np.dot(E, R[k]), E.T)
print("RESCAL:Element-wise squared error: %.3f" %(np.sum((X-_X)**2)))
import itertools
trainX = np.zeros_like(X)
p = 0.5 # proportion of training data points
for k in range(n_relation):
for i,j in itertools.product(range(n_entity),repeat=2):
if X[k, i, j] and np.random.binomial(1, p):
trainX[k, i, j] = X[k, i, j]
model = BayesianRescal(n_dim, var_e=var_e, var_x=var_x, var_r=var_r)
model.fit(trainX, max_iter=max_iter)
csr_X = list()
for k in range(n_relation):
csr_X.append(csr_matrix(trainX[k]))
E, R, f, itr, exectimes = rescal.rescal_als(csr_X, n_dim)
_bX = model._reconstruct()
print("BayesRESCAL:Element-wise squared error on training data: %.3f" %(np.sum((trainX-_bX)**2)))
print("BayesRESCAL:Element-wise squared error on test data: %.3f\n" %(np.sum((X-_bX)[trainX==0]**2)))
_X = np.zeros_like(X)
for k in range(n_relation):
_X[k] = np.dot(np.dot(E, R[k]), E.T)
print("RESCAL:Element-wise squared error on training data: %.3f" %(np.sum((trainX-_X)**2)))
print("RESCAL:Element-wise squared error on test data: %.3f" %(np.sum((X-_X)[trainX==0]**2)))
A = np.sum((trainX-_X)**2)
B = np.sum(trainX**2)
fit = 1.-A/B
print(fit)
model = BayesianRescal(n_dim, var_e=var_e, var_x=var_x, var_r=var_r, controlled_var=True, obs_var=1., unobs_var=100.)
model.fit(trainX, max_iter=20)
_cX = model._reconstruct()
print("BayesRESCAL:Element-wise squared error on training data: %.3f" %(np.sum((trainX[trainX!=0]-_bX[trainX!=0])**2)))
print("BayesRESCAL:Element-wise squared error on test data: %.3f\n" %(np.sum((X-_bX)[trainX==0]**2)))
print("RESCAL:Element-wise squared error on training data: %.3f" %(np.sum((trainX[trainX!=0]-_X[trainX!=0])**2)))
print("RESCAL:Element-wise squared error on test data: %.3f\n" %(np.sum((X-_X)[trainX==0]**2)))
print("CV_BayesRESCAL:Element-wise squared error on training data: %.3f" %(np.sum((trainX[trainX!=0]-_cX[trainX!=0])**2)))
print("CV_BayesRESCAL:Element-wise squared error on test data: %.3f" %(np.sum((X-_cX)[trainX==0]**2)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Helper plot functions
Step2: Plot the time series
Step3: Train ARIMA model
Step4: We can get the forecast data using
Step5: Forecasting a bunch of series
Step6: Note that instead of training the series on 45 days (May 1 to June 15), I'm now training on a longer time period.
Step7: As you would expect, the aggregated time series over all the stations is much smoother and more predictable than the time series of just one station (the one station data will be more noisy). So, some forecasts will be better than others.
Step8: Evaluation
|
<ASSISTANT_TASK:>
Python Code:
#!pip install google-cloud-bigquery
%load_ext google.cloud.bigquery
import matplotlib.pyplot as plt
import pandas as pd
def plot_historical_and_forecast(input_timeseries, timestamp_col_name, data_col_name, forecast_output=None, actual=None):
input_timeseries = input_timeseries.sort_values(timestamp_col_name)
plt.figure(figsize=(20,6))
plt.plot(input_timeseries[timestamp_col_name], input_timeseries[data_col_name], label = 'Historical')
plt.xlabel(timestamp_col_name)
plt.ylabel(data_col_name)
if forecast_output is not None:
forecast_output = forecast_output.sort_values('forecast_timestamp')
forecast_output['forecast_timestamp'] = pd.to_datetime(forecast_output['forecast_timestamp'])
x_data = forecast_output['forecast_timestamp']
y_data = forecast_output['forecast_value']
confidence_level = forecast_output['confidence_level'].iloc[0] * 100
low_CI = forecast_output['confidence_interval_lower_bound']
upper_CI = forecast_output['confidence_interval_upper_bound']
# Plot the data, set the linewidth, color and transparency of the
# line, provide a label for the legend
plt.plot(x_data, y_data, alpha = 1, label = 'Forecast', linestyle='--')
# Shade the confidence interval
plt.fill_between(x_data, low_CI, upper_CI, color = '#539caf', alpha = 0.4, label = str(confidence_level) + '% confidence interval')
# actual
if actual is not None:
actual = actual.sort_values(timestamp_col_name)
plt.plot(actual[timestamp_col_name], actual[data_col_name], label = 'Actual', linestyle='--')
# Display legend
plt.legend(loc = 'upper center', prop={'size': 16})
%%bigquery df
SELECT
CAST(EXTRACT(date from start_date) AS TIMESTAMP) AS date
, COUNT(*) AS numrentals
FROM
`bigquery-public-data`.london_bicycles.cycle_hire
WHERE start_station_name LIKE '%Hyde%' -- all stations in Hyde Park
GROUP BY date
HAVING date BETWEEN '2015-05-01' AND '2015-06-15'
ORDER BY date
plot_historical_and_forecast(df, 'date', 'numrentals');
!bq ls ch09eu || bq mk --location EU ch09eu
%%bigquery
CREATE OR REPLACE MODEL ch09eu.numrentals_forecast
OPTIONS(model_type='ARIMA',
time_series_data_col='numrentals',
time_series_timestamp_col='date') AS
SELECT
CAST(EXTRACT(date from start_date) AS TIMESTAMP) AS date
, COUNT(*) AS numrentals
FROM
`bigquery-public-data`.london_bicycles.cycle_hire
WHERE start_station_name LIKE '%Hyde%' -- all stations in Hyde Park
GROUP BY date
HAVING date BETWEEN '2015-05-01' AND '2015-06-15'
%%bigquery fcst
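-- horizon sets how many future time points to forecast; confidence_level sets the prediction-interval width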
SELECT * FROM ML.FORECAST(MODEL ch09eu.numrentals_forecast,
STRUCT(14 AS horizon, 0.9 AS confidence_level))
plot_historical_and_forecast(df, 'date', 'numrentals', fcst);
%%bigquery actual
SELECT
CAST(EXTRACT(date from start_date) AS TIMESTAMP) AS date
, COUNT(*) AS numrentals
FROM
`bigquery-public-data`.london_bicycles.cycle_hire
WHERE start_station_name LIKE '%Hyde%' -- all stations in Hyde Park
GROUP BY date
HAVING date BETWEEN '2015-06-16' AND '2015-07-01'
ORDER BY date
plot_historical_and_forecast(df, 'date', 'numrentals', fcst, actual);
%%bigquery
CREATE OR REPLACE MODEL ch09eu.numrentals_forecast
OPTIONS(model_type='ARIMA',
time_series_data_col='numrentals',
time_series_timestamp_col='date',
time_series_id_col='start_station_name') AS
SELECT
start_station_name
, CAST(EXTRACT(date from start_date) AS TIMESTAMP) AS date
, COUNT(*) AS numrentals
FROM
`bigquery-public-data`.london_bicycles.cycle_hire
WHERE start_station_name LIKE '%Hyde%' -- all stations in Hyde Park
GROUP BY start_station_name, date
HAVING date BETWEEN '2015-01-01' AND '2015-06-15'
%%bigquery
SELECT *
FROM ML.ARIMA_COEFFICIENTS(MODEL ch09eu.numrentals_forecast)
ORDER BY start_station_name
%%bigquery fcst
SELECT
*
FROM ML.FORECAST(MODEL ch09eu.numrentals_forecast,
STRUCT(14 AS horizon, 0.9 AS confidence_level))
ORDER By start_station_name, forecast_timestamp
%%bigquery df
SELECT
start_station_name
, CAST(EXTRACT(date from start_date) AS TIMESTAMP) AS date
, COUNT(*) AS numrentals
FROM
`bigquery-public-data`.london_bicycles.cycle_hire
WHERE start_station_name LIKE '%Hyde%' -- all stations in Hyde Park
GROUP BY start_station_name, date
HAVING date BETWEEN '2015-05-01' AND '2015-06-15' -- this is just for plotting, hence we'll keep this 45 days.
%%bigquery actual
SELECT
start_station_name
, CAST(EXTRACT(date from start_date) AS TIMESTAMP) AS date
, COUNT(*) AS numrentals
FROM
`bigquery-public-data`.london_bicycles.cycle_hire
WHERE start_station_name LIKE '%Hyde%' -- all stations in Hyde Park
GROUP BY start_station_name, date
HAVING date BETWEEN '2015-06-16' AND '2015-07-01'
%%bigquery stations
SELECT DISTINCT start_station_name
FROM
`bigquery-public-data`.london_bicycles.cycle_hire
WHERE start_station_name LIKE '%Hyde%' -- all stations in Hyde Park
ORDER by start_station_name ASC
stations
station = stations['start_station_name'].iloc[3] # Hyde Park Corner
print(station)
plot_historical_and_forecast(df[df['start_station_name']==station],
'date', 'numrentals',
fcst[fcst['start_station_name']==station],
actual[actual['start_station_name']==station]);
station = stations['start_station_name'].iloc[6] # Serpentine Car Park,
print(station)
plot_historical_and_forecast(df[df['start_station_name']==station],
'date', 'numrentals',
fcst[fcst['start_station_name']==station],
actual[actual['start_station_name']==station]);
station = stations['start_station_name'].iloc[4] # Knightsbridge
print(station)
plot_historical_and_forecast(df[df['start_station_name']==station],
'date', 'numrentals',
fcst[fcst['start_station_name']==station],
actual[actual['start_station_name']==station]);
%%bigquery
SELECT * FROM ML.EVALUATE(MODEL ch09eu.numrentals_forecast)
ORDER BY variance DESC
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Multilayer Perceptron in Theano
Step6: A softmax class for the output
Step9: The MLP class
Step10: Training Procedure
Step11: Testing function
Step12: Training the Model
Step13: We then load our data set.
Step14: Build the Model
Step15: Launch the training phase
Step16: How can we make it better?
Step17: Momentum training (Adadelta, RMSProp, ...)
|
<ASSISTANT_TASK:>
Python Code:
import numpy
import theano
from theano import tensor
# Set lower precision float, otherwise the notebook will take too long to run
theano.config.floatX = 'float32'
class HiddenLayer(object):
def __init__(self, rng, input, n_in, n_out, W=None, b=None,
activation=tensor.tanh):
        """Typical hidden layer of a MLP: units are fully-connected and have
sigmoidal activation function. Weight matrix W is of shape (n_in,n_out)
and the bias vector b is of shape (n_out,).
NOTE : The nonlinearity used here is tanh
Hidden unit activation is given by: tanh(dot(input,W) + b)
:type rng: numpy.random.RandomState
:param rng: a random number generator used to initialize weights
:type input: theano.tensor.dmatrix
:param input: a symbolic tensor of shape (n_examples, n_in)
:type n_in: int
:param n_in: dimensionality of input
:type n_out: int
:param n_out: number of hidden units
:type activation: theano.Op or function
        :param activation: Non linearity to be applied in the hidden layer
        """
self.input = input
# `W` is initialized with `W_values` which is uniformely sampled
# from sqrt(-6./(n_in+n_hidden)) and sqrt(6./(n_in+n_hidden))
# for tanh activation function
# the output of uniform if converted using asarray to dtype
# theano.config.floatX so that the code is runable on GPU
# Note : optimal initialization of weights is dependent on the
# activation function used (among other things).
# For example, results presented in Glorot & Bengio (2010)
# suggest that you should use 4 times larger initial weights
# for sigmoid compared to tanh
if W is None:
W_values = numpy.asarray(
rng.uniform(
low=-numpy.sqrt(6. / (n_in + n_out)),
high=numpy.sqrt(6. / (n_in + n_out)),
size=(n_in, n_out)
),
dtype=theano.config.floatX
)
if activation == tensor.nnet.sigmoid:
W_values *= 4
W = theano.shared(value=W_values, name='W', borrow=True)
if b is None:
b_values = numpy.zeros((n_out,), dtype=theano.config.floatX)
b = theano.shared(value=b_values, name='b', borrow=True)
self.W = W
self.b = b
lin_output = tensor.dot(input, self.W) + self.b
self.output = (
lin_output if activation is None
else activation(lin_output)
)
# parameters of the model
self.params = [self.W, self.b]
class LogisticRegression(object):
    """Multi-class Logistic Regression Class
The logistic regression is fully described by a weight matrix :math:`W`
and bias vector :math:`b`. Classification is done by projecting data
points onto a set of hyperplanes, the distance to which is used to
    determine a class membership probability.
    """
def __init__(self, input, target, n_in, n_out):
        """Initialize the parameters of the logistic regression
:type input: theano.tensor.TensorType
:param input: symbolic variable that describes the input of the
architecture (one minibatch)
:type target: theano.tensor.TensorType
:type target: column tensor that describes the target for training
:type n_in: int
:param n_in: number of input units, the dimension of the space in
which the datapoints lie
:type n_out: int
:param n_out: number of output units, the dimension of the space in
        which the labels lie
        """
# keep track of model input and target.
# We store a flattened (vector) version of target as y, which is easier to handle
self.input = input
self.target = target
self.y = target.flatten()
self.W = theano.shared(value=numpy.zeros((n_in, n_out), dtype=theano.config.floatX),
name='W',
borrow=True)
self.b = theano.shared(value=numpy.zeros((n_out,), dtype=theano.config.floatX),
name='b',
borrow=True)
# class-membership probabilities
self.p_y_given_x = tensor.nnet.softmax(tensor.dot(input, self.W) + self.b)
# class whose probability is maximal
self.y_pred = tensor.argmax(self.p_y_given_x, axis=1)
# parameters of the model
self.params = [self.W, self.b]
def negative_log_likelihood(self):
        """Return the mean of the negative log-likelihood of the prediction
of this model under a given target distribution.
Note: we use the mean instead of the sum so that
        the learning rate is less dependent on the batch size
        """
log_prob = tensor.log(self.p_y_given_x)
log_likelihood = log_prob[tensor.arange(self.y.shape[0]), self.y]
loss = - log_likelihood.mean()
return loss
def errors(self):
        """Return a float representing the number of errors in the minibatch
        over the total number of examples of the minibatch
        """
misclass_nb = tensor.neq(self.y_pred, self.y)
misclass_rate = misclass_nb.mean()
return misclass_rate
class MLP(object):
    """Multi-Layer Perceptron Class
A multilayer perceptron is a feedforward artificial neural network model
that has one layer or more of hidden units and nonlinear activations.
Intermediate layers usually have as activation function tanh or the
sigmoid function (defined here by a ``HiddenLayer`` class) while the
top layer is a softmax layer (defined here by a ``LogisticRegression``
    class).
    """
def __init__(self, rng, input, target, n_in, n_hidden, n_out, activation=tensor.tanh):
        """Initialize the parameters for the multilayer perceptron
:type rng: numpy.random.RandomState
:param rng: a random number generator used to initialize weights
:type input: theano.tensor.TensorType
:param input: symbolic variable that describes the input of the
architecture (one minibatch)
:type target: theano.tensor.TensorType
:type target: column tensor that describes the target for training
:type n_in: int
:param n_in: number of input units, the dimension of the space in
which the datapoints lie
:type n_hidden: list of int
:param n_hidden: number of hidden units in each hidden layer
:type n_out: int
:param n_out: number of output units, the dimension of the space in
which the labels lie
:type activation: theano.Op or function
        :param activation: Non linearity to be applied in all hidden layers
        """
# keep track of model input and target.
# We store a flattened (vector) version of target as y, which is easier to handle
self.input = input
self.target = target
self.y = target.flatten()
# Build all necessary hidden layers and chain them
self.hidden_layers = []
layer_input = input
layer_n_in = n_in
for nh in n_hidden:
hidden_layer = HiddenLayer(
rng=rng,
input=layer_input,
n_in=layer_n_in,
n_out=nh,
activation=activation)
self.hidden_layers.append(hidden_layer)
# prepare variables for next layer
layer_input = hidden_layer.output
layer_n_in = nh
# The logistic regression layer gets as input the hidden units of the hidden layer,
# and the target
self.log_reg_layer = LogisticRegression(
input=layer_input,
target=target,
n_in=layer_n_in,
n_out=n_out)
# self.params has all the parameters of the model,
# self.weights contains only the `W` variables.
# We also give unique name to the parameters, this will be useful to save them.
self.params = []
self.weights = []
layer_idx = 0
for hl in self.hidden_layers:
self.params.extend(hl.params)
self.weights.append(hl.W)
for hlp in hl.params:
prev_name = hlp.name
hlp.name = 'layer' + str(layer_idx) + '.' + prev_name
layer_idx += 1
self.params.extend(self.log_reg_layer.params)
self.weights.append(self.log_reg_layer.W)
for lrp in self.log_reg_layer.params:
prev_name = lrp.name
lrp.name = 'layer' + str(layer_idx) + '.' + prev_name
# L1 norm ; one regularization option is to enforce L1 norm to be small
self.L1 = sum(abs(W).sum() for W in self.weights)
# square of L2 norm ; one regularization option is to enforce square of L2 norm to be small
self.L2_sqr = sum((W ** 2).sum() for W in self.weights)
def negative_log_likelihood(self):
# negative log likelihood of the MLP is given by the negative
# log likelihood of the output of the model, computed in the
# logistic regression layer
return self.log_reg_layer.negative_log_likelihood()
def errors(self):
# same holds for the function computing the number of errors
return self.log_reg_layer.errors()
def nll_grad(mlp_model):
loss = mlp_model.negative_log_likelihood()
params = mlp_model.params
grads = theano.grad(loss, wrt=params)
# Return (param, grad) pairs
    return list(zip(params, grads))   # a list, so the pairs can be iterated more than once
def sgd_updates(params_and_grads, learning_rate):
return [(param, param - learning_rate * grad)
for param, grad in params_and_grads]
def get_simple_training_fn(mlp_model, learning_rate):
inputs = [mlp_model.input, mlp_model.target]
params_and_grads = nll_grad(mlp_model)
    updates = sgd_updates(params_and_grads, learning_rate=learning_rate)
return theano.function(inputs=inputs, outputs=[], updates=updates)
def regularized_cost_grad(mlp_model, L1_reg, L2_reg):
loss = (mlp_model.negative_log_likelihood() +
L1_reg * mlp_model.L1 +
L2_reg * mlp_model.L2_sqr)
params = mlp_model.params
grads = theano.grad(loss, wrt=params)
# Return (param, grad) pairs
    return list(zip(params, grads))   # a list, so the pairs can be iterated more than once
def get_regularized_training_fn(mlp_model, L1_reg, L2_reg, learning_rate):
inputs = [mlp_model.input, mlp_model.target]
params_and_grads = regularized_cost_grad(mlp_model, L1_reg, L2_reg)
    updates = sgd_updates(params_and_grads, learning_rate=learning_rate)
return theano.function(inputs, updates=updates)
def get_test_fn(mlp_model):
return theano.function([mlp_model.input, mlp_model.target], mlp_model.errors())
import timeit
from fuel.streams import DataStream
from fuel.schemes import SequentialScheme
from fuel.transformers import Flatten
## early-stopping parameters tuned for 1-2 min runtime
def sgd_training(train_model, test_model, train_set, valid_set, test_set, model_name='mlp_model',
# maximum number of epochs
n_epochs=20,
# look at this many examples regardless
patience=5000,
# wait this much longer when a new best is found
patience_increase=2,
# a relative improvement of this much is considered significant
improvement_threshold=0.995,
batch_size=20):
n_train_batches = train_set.num_examples // batch_size
# Create data streams to iterate through the data.
train_stream = Flatten(DataStream.default_stream(
train_set, iteration_scheme=SequentialScheme(train_set.num_examples, batch_size)))
valid_stream = Flatten(DataStream.default_stream(
valid_set, iteration_scheme=SequentialScheme(valid_set.num_examples, batch_size)))
test_stream = Flatten(DataStream.default_stream(
test_set, iteration_scheme=SequentialScheme(test_set.num_examples, batch_size)))
# go through this many minibatches before checking the network on the validation set;
# in this case we check every epoch
validation_frequency = min(n_train_batches, patience / 2)
best_validation_loss = numpy.inf
test_score = 0.
start_time = timeit.default_timer()
done_looping = False
epoch = 0
while (epoch < n_epochs) and (not done_looping):
epoch = epoch + 1
minibatch_index = 0
for minibatch_x, minibatch_y in train_stream.get_epoch_iterator():
train_model(minibatch_x, minibatch_y)
# iteration number
iter = (epoch - 1) * n_train_batches + minibatch_index
if (iter + 1) % validation_frequency == 0:
# compute zero-one loss on validation set
validation_losses = []
for valid_xi, valid_yi in valid_stream.get_epoch_iterator():
validation_losses.append(test_model(valid_xi, valid_yi))
this_validation_loss = numpy.mean(validation_losses)
print('epoch %i, minibatch %i/%i, validation error %f %%' %
(epoch,
minibatch_index + 1,
n_train_batches,
this_validation_loss * 100.))
# if we got the best validation score until now
if this_validation_loss < best_validation_loss:
# improve patience if loss improvement is good enough
if this_validation_loss < best_validation_loss * improvement_threshold:
patience = max(patience, iter * patience_increase)
best_validation_loss = this_validation_loss
# test it on the test set
test_losses = []
for test_xi, test_yi in test_stream.get_epoch_iterator():
test_losses.append(test_model(test_xi, test_yi))
test_score = numpy.mean(test_losses)
print(' epoch %i, minibatch %i/%i, test error of best model %f %%' %
(epoch,
minibatch_index + 1,
n_train_batches,
test_score * 100.))
# save the best parameters
# build a name -> value dictionary
best = {param.name: param.get_value() for param in mlp_model.params}
numpy.savez('best_{}.npz'.format(model_name), **best)
minibatch_index += 1
if patience <= iter:
done_looping = True
break
end_time = timeit.default_timer()
print('Optimization complete with best validation score of %f %%, '
'with test performance %f %%' %
(best_validation_loss * 100., test_score * 100.))
print('The code ran for %d epochs, with %f epochs/sec (%.2fm total time)' %
(epoch, 1. * epoch / (end_time - start_time), (end_time - start_time) / 60.))
from fuel.datasets import MNIST
# the full set is usually (0, 50000) for train, (50000, 60000) for valid and no slice for test.
# We only selected a subset to go faster.
train_set = MNIST(which_sets=('train',), sources=('features', 'targets'), subset=slice(0, 20000))
valid_set = MNIST(which_sets=('train',), sources=('features', 'targets'), subset=slice(20000, 24000))
test_set = MNIST(which_sets=('test',), sources=('features', 'targets'))
rng = numpy.random.RandomState(1234)
x = tensor.matrix('x')
# The labels coming from Fuel are in a "column" format
y = tensor.icol('y')
n_in = 28 * 28
n_out = 10
mlp_model = MLP(
rng=rng,
input=x,
target=y,
n_in=n_in,
n_hidden=[500],
n_out=n_out,
activation=tensor.tanh)
lr = numpy.float32(0.1)
L1_reg = numpy.float32(0)
L2_reg = numpy.float32(0.0001)
train_model = get_regularized_training_fn(mlp_model, L1_reg, L2_reg, lr)
test_model = get_test_fn(mlp_model)
sgd_training(train_model, test_model, train_set, valid_set, test_set)
def relu(x):
return x * (x > 0)
rng = numpy.random.RandomState(1234)
mlp_relu = MLP(
rng=rng,
input=x,
target=y,
n_in=n_in,
n_hidden=[500],
n_out=n_out,
activation=relu)
lr = numpy.float32(0.1)
L1_reg = numpy.float32(0)
L2_reg = numpy.float32(0.0001)
train_relu = get_regularized_training_fn(mlp_relu, L1_reg, L2_reg, lr)
test_relu = get_test_fn(mlp_relu)
sgd_training(train_relu, test_relu, train_set, valid_set, test_set, model_name='mlp_relu')
# This implements simple momentum
def get_momentum_updates(params_and_grads, lr, rho):
res = []
# numpy will promote (1 - rho) to float64 otherwise
one = numpy.float32(1.)
for p, g in params_and_grads:
up = theano.shared(p.get_value() * 0)
res.append((p, p - lr * up))
res.append((up, rho * up + (one - rho) * g))
return res
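# Note: Theano applies all updates of a function simultaneously, so p is moved using the
# velocity from the previous call while up is refreshed with the new gradient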
# This implements the parameter updates for Adadelta
def get_adadelta_updates(params_and_grads, rho):
up2 = [theano.shared(p.get_value() * 0, name="up2 for " + p.name) for p, g in params_and_grads]
grads2 = [theano.shared(p.get_value() * 0, name="grads2 for " + p.name) for p, g in params_and_grads]
# This is dumb but numpy will promote (1 - rho) to float64 otherwise
one = numpy.float32(1.)
rg2up = [(rg2, rho * rg2 + (one - rho) * (g ** 2))
for rg2, (p, g) in zip(grads2, params_and_grads)]
updir = [-(tensor.sqrt(ru2 + 1e-6) / tensor.sqrt(rg2 + 1e-6)) * g
for (p, g), ru2, rg2 in zip(params_and_grads, up2, grads2)]
ru2up = [(ru2, rho * ru2 + (one - rho) * (ud ** 2))
for ru2, ud in zip(up2, updir)]
param_up = [(p, p + ud) for (p, g), ud in zip(params_and_grads, updir)]
return rg2up + ru2up + param_up
# You can try to write an RMSProp function and train the model with it.
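# A possible RMSProp helper, written in the same style as the momentum and Adadelta
# functions above. This is only a sketch: the decay rate rho and the epsilon default
# are common choices, not values taken from this notebook.
def get_rmsprop_updates(params_and_grads, lr, rho, epsilon=numpy.float32(1e-6)):
    res = []
    # keep everything in float32, as done for the other update rules
    one = numpy.float32(1.)
    for p, g in params_and_grads:
        # running average of the squared gradient
        acc = theano.shared(p.get_value() * 0)
        new_acc = rho * acc + (one - rho) * (g ** 2)
        res.append((acc, new_acc))
        # scale the gradient step by the root of the running average
        res.append((p, p - lr * g / tensor.sqrt(new_acc + epsilon)))
    return res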
def get_momentum_training_fn(mlp_model, L1_reg, L2_reg, lr, rho):
inputs = [mlp_model.input, mlp_model.target]
params_and_grads = regularized_cost_grad(mlp_model, L1_reg, L2_reg)
updates = get_momentum_updates(params_and_grads, lr=lr, rho=rho)
return theano.function(inputs, updates=updates)
rng = numpy.random.RandomState(1234)
x = tensor.matrix('x')
# The labels coming from Fuel are in a "column" format
y = tensor.icol('y')
n_in = 28 * 28
n_out = 10
mlp_model = MLP(
rng=rng,
input=x,
target=y,
n_in=n_in,
n_hidden=[500],
n_out=n_out,
activation=tensor.tanh)
lr = numpy.float32(0.1)
L1_reg = numpy.float32(0)
L2_reg = numpy.float32(0.0001)
rho = numpy.float32(0.95)
momentum_train = get_momentum_training_fn(mlp_model, L1_reg, L2_reg, lr=lr, rho=rho)
test_fn = get_test_fn(mlp_model)
sgd_training(momentum_train, test_fn, train_set, valid_set, test_set, n_epochs=20, model_name='mlp_momentum')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The exploratory modeling workbench comes with a separate analysis package. This analysis package contains prim, so let's import it. The workbench also has its own logging functionality, which we can turn on to get some more insight into prim while it is running.
Step2: Next, we need to instantiate the prim algorithm. To mimic the original work of Ben Bryant and Rob Lempert, we set the peeling alpha to 0.1. The peeling alpha determines how much data is peeled off in each iteration of the algorithm: the lower the value, the less data is removed in each iteration. The minimum coverage threshold that a box should meet is set to 0.8. Next, we can use the instantiated algorithm to find a first box.
Step3: Let's investigate this first box in some detail. A first thing to look at is the trade-off between coverage and density. The box has a convenience function for this called show_tradeoff.
Step4: Since we are doing this analysis in a notebook, we can take advantage of the interactivity that the browser offers. A relatively recent addition to the python ecosystem is the library altair. Altair can be used to create interactive plots for use in a browser. Altair is an optional dependency for the workbench. If available, we can create the following visual.
Step5: Here we can interactively explore the boxes associated with each point in the density-coverage trade-off. It also offers mouse-overs for the various points on the trade-off curve. Given the id of each point, we can also use the workbench to manually inspect the peeling trajectory. Following Bryant & Lempert, we inspect box 21.
Step6: If one were to do a detailed comparison with the results reported in the original article, one would see small numerical differences. These differences arise out of subtle differences in implementation. The most important difference is that the exploratory modeling workbench uses a custom objective function inside prim which is different from the one used in the scenario discovery toolkit. Other differences have to do with details of the hill-climbing optimization used in prim, in particular how ties are handled when selecting the next step. The differences between the two implementations are only numerical, and don't affect the overarching conclusions drawn from the analysis.
Step7: Because the last restriction is not significant, we can choose to drop this restriction from the box.
Step8: We have now found a first box that explains over 75% of the cases of interest. Let's see if we can find a second box that explains the remainder of the cases.
Step9: As we can see, we are unable to find a second box. The best coverage we can achieve is 0.35, which is well below the specified 0.8 threshold. Let's look at the final overall results from interactively fitting PRIM to the data. For this, we can use two convenience functions that transform the stats and boxes to pandas data frames.
Step10: CART
Step11: Now that we have trained CART on the data, we can investigate its results. Just like PRIM, we can use stats_to_dataframe and boxes_to_dataframe to get an overview.
Step12: Alternatively, we might want to look at the classification tree directly. For this, we can use the show_tree method.
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import matplotlib.pyplot as plt  # needed for the plt.show() calls below
data = pd.read_csv("./data/bryant et al 2010 data.csv", index_col=False)
x = data.iloc[:, 2:11]
y = data.iloc[:, 15].values
from ema_workbench.analysis import prim
from ema_workbench.util import ema_logging
ema_logging.log_to_stderr(ema_logging.INFO);
prim_alg = prim.Prim(x, y, threshold=0.8, peel_alpha=0.1)
box1 = prim_alg.find_box()
box1.show_tradeoff()
plt.show()
box1.inspect_tradeoff()
box1.resample(21)
box1.inspect(21)
box1.inspect(21, style="graph")
plt.show()
box1.select(21)
fig = box1.show_pairs_scatter(21)
plt.show()
box1.drop_restriction("Cellulosic cost")
box1.inspect(style="graph")
plt.show()
box2 = prim_alg.find_box()
prim_alg.stats_to_dataframe()
prim_alg.boxes_to_dataframe()
from ema_workbench.analysis import cart
cart_alg = cart.CART(x, y, 0.05)
cart_alg.build_tree()
cart_alg.stats_to_dataframe()
cart_alg.boxes_to_dataframe()
fig = cart_alg.show_tree()
fig.set_size_inches((18, 12))
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Do some preliminary text pre-processing
Step2: Say we're interested in cleaning up the "Country" column to make sure there are no data entry inconsistencies in it. We could go through and check each row by hand, of course, and hand-correct inconsistencies when we find them. There's a more efficient way to do this, though!
Step3: Just looking at this, I can see some problems due to inconsistent data entry
Step4: Next we're going to tackle more difficult inconsistencies.
Step5: It does look like there is another inconsistency
Step6: We can see that two of the items in the list of countries are very close to "south korea"
Step7: Now that we have a function, we can put it to the test!
Step8: And now let's check the unique values in our "Country" column again and make sure we've tidied up "south korea" correctly.
|
<ASSISTANT_TASK:>
Python Code:
# modules we'll use
import pandas as pd
import numpy as np
# helpful modules
import fuzzywuzzy
from fuzzywuzzy import process
import chardet
# read in all our data
professors = pd.read_csv("../input/pakistan-intellectual-capital/pakistan_intellectual_capital.csv")
# set seed for reproducibility
np.random.seed(0)
professors.head()
# get all the unique values in the 'Country' column
countries = professors['Country'].unique()
# sort them alphabetically and then take a closer look
countries.sort()
countries
# convert to lower case
professors['Country'] = professors['Country'].str.lower()
# remove trailing white spaces
professors['Country'] = professors['Country'].str.strip()
# get all the unique values in the 'Country' column
countries = professors['Country'].unique()
# sort them alphabetically and then take a closer look
countries.sort()
countries
# get the top 10 closest matches to "south korea"
matches = fuzzywuzzy.process.extract("south korea", countries, limit=10, scorer=fuzzywuzzy.fuzz.token_sort_ratio)
# take a look at them
matches
# function to replace rows in the provided column of the provided dataframe
# that match the provided string above the provided ratio with the provided string
def replace_matches_in_column(df, column, string_to_match, min_ratio = 47):
# get a list of unique strings
strings = df[column].unique()
# get the top 10 closest matches to our input string
matches = fuzzywuzzy.process.extract(string_to_match, strings,
limit=10, scorer=fuzzywuzzy.fuzz.token_sort_ratio)
    # only keep matches with a similarity ratio of at least min_ratio
close_matches = [matches[0] for matches in matches if matches[1] >= min_ratio]
# get the rows of all the close matches in our dataframe
rows_with_matches = df[column].isin(close_matches)
# replace all rows with close matches with the input matches
df.loc[rows_with_matches, column] = string_to_match
# let us know the function's done
print("All done!")
# use the function we just wrote to replace close matches to "south korea" with "south korea"
replace_matches_in_column(df=professors, column='Country', string_to_match="south korea")
# get all the unique values in the 'Country' column
countries = professors['Country'].unique()
# sort them alphabetically and then take a closer look
countries.sort()
countries
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The equation of motion
Step2: Assuming $E_z = 0$ and $E_y = 0$
Step3: Motion is uniform along the $z$-axis
Step4: The constants of integration can be found from the initial conditions $z(0) = 0$ and $v_z(0) = v$
Step5: So that
Step6: To solve an equation for $x(t)$, it is necessary to determine $E_x$ and substitute it into the equation.
Step7: Its solution is given by
Step8: From initial conditions $x(0) = x_0, v_0 = 0$
Step9: So that
Step10: From the laws of motion for $x(t)$ and $z(t)$
Step11: it is possible to obtain a trajectory equation
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
from sympy import *
init_printing()
Ex, Ey, Ez = symbols("E_x, E_y, E_z")
x, y, z = symbols("x, y, z")
vx, vy, vz, v = symbols("v_x, v_y, v_z, v")
t = symbols("t")
q, m = symbols("q, m")
c, eps0 = symbols("c, epsilon_0")
eq_x = Eq( diff(x(t), t, 2), q / m * Ex )
eq_y = Eq( diff(y(t), t, 2), q / m * Ey )
eq_z = Eq( diff(z(t), t, 2), q / m * Ez )
display( eq_x, eq_y, eq_z )
zero_EyEz_subs = [ (Ey, 0), (Ez, 0) ]
eq_x = eq_x.subs(zero_EyEz_subs)
eq_y = eq_y.subs(zero_EyEz_subs)
eq_z = eq_z.subs(zero_EyEz_subs)
display( eq_x, eq_y, eq_z )
z_eq = dsolve( eq_z, z(t) )
vz_eq = Eq( z_eq.lhs.diff(t), z_eq.rhs.diff(t) )
display( z_eq, vz_eq )
z_0 = 0
v_0 = v
c1_c2_system = []
initial_cond_subs = [(t, 0), (z(0), z_0), (diff(z(t),t).subs(t,0), v_0) ]
c1_c2_system.append( z_eq.subs( initial_cond_subs ) )
c1_c2_system.append( vz_eq.subs( initial_cond_subs ) )
c1, c2 = symbols("C1, C2")
c1_c2 = solve( c1_c2_system, [c1, c2] )
c1_c2
z_sol = z_eq.subs( c1_c2 )
vz_sol = vz_eq.subs( c1_c2 )
display( z_sol, vz_sol )
I0 = symbols('I_0')
Ex_subs = [ (Ex, 2 * pi * I0 / v) ]
eq_x = eq_x.subs( Ex_subs )
eq_x
x_eqn_sol = dsolve( eq_x )
x_eqn_sol
x_0 = symbols( 'x_0' )
v_0 = 0
c1_c2_system = []
initial_cond_subs = [(t, 0), (x(0), x_0), (diff(x(t),t).subs(t,0), v_0) ]
c1_c2_system.append( x_eqn_sol.subs( initial_cond_subs ) )
x_eqn_sol_diff = Eq( x_eqn_sol.lhs.diff(t), x_eqn_sol.rhs.diff(t) )
c1_c2_system.append( x_eqn_sol_diff.subs( initial_cond_subs ) )
c1, c2 = symbols("C1, C2")
c1_c2 = solve( c1_c2_system, [c1, c2] )
c1_c2
x_sol = x_eqn_sol.subs( c1_c2 )
x_sol
display( x_sol, z_sol )
t_from_z = solve( z_sol.subs(z(t),z), t )[0]
x_z_traj = Eq( x_sol.lhs.subs( t, z ), x_sol.rhs.subs( [(t, t_from_z)] ) )
display( x_z_traj )
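# Optional numeric check (a sketch): substitute illustrative values into the trajectory
# and plot x(z) with the matplotlib import from the top of this notebook. The values
# q = m = I_0 = x_0 = 1 and v = 2 are arbitrary assumptions chosen only for the plot.
import numpy as np
numeric_traj = x_z_traj.rhs.subs([(q, 1), (m, 1), (I0, 1), (v, 2), (x_0, 1)])
x_of_z = lambdify(z, numeric_traj, 'numpy')
z_vals = np.linspace(0, 10, 200)
plt.plot(z_vals, x_of_z(z_vals))
plt.xlabel("z")
plt.ylabel("x(z)")
plt.title("Trajectory x(z) for illustrative parameter values")
plt.show()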
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The data used as the row index and the column index must act as keys; that is, each combination of these values must uniquely determine a data point.
Step2: Group operations
Step3: Exercise
Step4: Exercise
Step5: Arguments of the groupby command
Step6: Special group-wise operations
Step7: Group-wise statistics
Step8: Group-wise operations
Step9: Replacing group values with a representative value
Step10: Replacing the group itself
Step11: pivot_table
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd

data = {
'state': ['Ohio', 'Ohio', 'Ohio', 'Nevada', 'Nevada'],
'year': [2000, 2001, 2002, 2001, 2002],
'pop': [1.5, 2.5, 3.0, 2.5, 3.5]
}
df = pd.DataFrame(data, columns=["state", "year", "pop"])
df
df.pivot("state", "year", "pop")
df.pivot("year", "pop", "state")
df.set_index(["state", "year"])
df.set_index(["state", "year"]).unstack()
np.random.seed(0)
df = pd.DataFrame({'key1': ['a', 'a', 'b', 'b', 'a'],
'key2': ['one', 'two', 'one', 'two', 'one'],
'data1': np.random.randn(5),
'data2': np.random.randn(5)})
df
df.data1.groupby(df.key1).mean()
gs = df.data1.groupby(df.key1)
gs
print("="*50)
for n, g in gs:
print("[key]:", n)
print("[group]:", type(g))
print("-"*50)
print(g)
print("-"*50)
print("[mean]:", g.mean())
print("="*50)
gs.mean()
means = df.data1.groupby([df.key1, df.key2]).mean()
means
means = df.data1.groupby([df.key1, df.key2]).mean()
means
np.random.seed(0)
people = pd.DataFrame(np.random.randn(5,5),
columns=['a','b','c','d','e'],
index=['Joe','Steve','Wes','Jim','Travis'])
people.ix[2:3, ['b', 'c']] = np.nan
people
print("="*80)
for n, g in people.groupby(people.index):
print("[key]:", n)
print("[group]:", type(g))
print("-"*80)
print(g)
print("="*80)
mapping = {'Joe': 'J', 'Jim': 'J', 'Steve': 'S', 'Wes': 'S', 'Travis': 'S'}
print("="*80)
for n, g in people.groupby(mapping):
print("[key]:", n)
print("[group]:", type(g))
print("-"*80)
print(g)
print("="*80)
cap1 = lambda x: x[0].upper()
print("="*80)
for n, g in people.groupby(cap1):
print("[key]:", n)
print("[group]:", type(g))
print("-"*80)
print(g)
print("="*80)
print("="*80)
for n, g in people.groupby(people.columns, axis=1):
print("[key]:", n)
print("[group]:", type(g))
print("-"*80)
print(g)
print("="*80)
mapping = {'a': 'red', 'b': 'red', 'c': 'blue', 'd': 'blue', 'e': 'red', 'f' : 'orange'}
for n, g in people.groupby(mapping, axis=1):
print("[key]:", n)
print("[group]:", type(g))
print("-"*80)
print(g)
print("="*80)
%cd /home/dockeruser/data/pydata-book-master
tips = pd.read_csv('../../pydata-book-master/ch08/tips.csv')
tips.head()
tips['tip_pct'] = tips['tip'] / tips['total_bill']
tips.tail()
tips.describe()
tips.groupby(["sex", "smoker"])[["tip", "tip_pct"]].describe()
gs = tips.groupby(["sex", "smoker"])
gs_pct = gs["tip_pct"]
gs_pct.mean()
gs_pct.agg('mean')
def peak_to_peak(arr):
return arr.max() - arr.min()
gs_pct.agg(['mean', 'std', peak_to_peak])
gs.agg({'tip_pct': 'mean', 'total_bill': peak_to_peak})
gs.agg("mean")
tips2 = tips.copy()
tips2["tips"] = gs.transform("mean")["tip_pct"]
tips2.tail(15)
def top(df, n=5, column='tip_pct'):
return df.sort_values(by=column)[-n:]
top(tips, n=6)
tips.groupby('smoker').apply(top)
tips.groupby(['smoker', 'day']).apply(top, n=1, column='total_bill')
f = lambda x: x.describe()
tips.groupby(['smoker']).apply(f)
tips.pivot_table(index=['sex', 'smoker'])
tips.pivot_table(['tip_pct', 'size'], index=['sex', 'day'], columns='smoker')
tips.pivot_table(['tip_pct', 'size'], index=['sex', 'day'],
columns='smoker', margins=True)
tips.pivot_table('tip_pct', index=['sex', 'smoker'], columns='day',
aggfunc=len, margins=True)
tips.pivot_table('size', index=['time', 'sex', 'smoker'],
columns='day', aggfunc='sum', fill_value=0)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Description
Step7: 1.4. Land Atmosphere Flux Exchanges
Step8: 1.5. Atmospheric Coupling Treatment
Step9: 1.6. Land Cover
Step10: 1.7. Land Cover Change
Step11: 1.8. Tiling
Step12: 2. Key Properties --> Conservation Properties
Step13: 2.2. Water
Step14: 2.3. Carbon
Step15: 3. Key Properties --> Timestepping Framework
Step16: 3.2. Time Step
Step17: 3.3. Timestepping Method
Step18: 4. Key Properties --> Software Properties
Step19: 4.2. Code Version
Step20: 4.3. Code Languages
Step21: 5. Grid
Step22: 6. Grid --> Horizontal
Step23: 6.2. Matches Atmosphere Grid
Step24: 7. Grid --> Vertical
Step25: 7.2. Total Depth
Step26: 8. Soil
Step27: 8.2. Heat Water Coupling
Step28: 8.3. Number Of Soil layers
Step29: 8.4. Prognostic Variables
Step30: 9. Soil --> Soil Map
Step31: 9.2. Structure
Step32: 9.3. Texture
Step33: 9.4. Organic Matter
Step34: 9.5. Albedo
Step35: 9.6. Water Table
Step36: 9.7. Continuously Varying Soil Depth
Step37: 9.8. Soil Depth
Step38: 10. Soil --> Snow Free Albedo
Step39: 10.2. Functions
Step40: 10.3. Direct Diffuse
Step41: 10.4. Number Of Wavelength Bands
Step42: 11. Soil --> Hydrology
Step43: 11.2. Time Step
Step44: 11.3. Tiling
Step45: 11.4. Vertical Discretisation
Step46: 11.5. Number Of Ground Water Layers
Step47: 11.6. Lateral Connectivity
Step48: 11.7. Method
Step49: 12. Soil --> Hydrology --> Freezing
Step50: 12.2. Ice Storage Method
Step51: 12.3. Permafrost
Step52: 13. Soil --> Hydrology --> Drainage
Step53: 13.2. Types
Step54: 14. Soil --> Heat Treatment
Step55: 14.2. Time Step
Step56: 14.3. Tiling
Step57: 14.4. Vertical Discretisation
Step58: 14.5. Heat Storage
Step59: 14.6. Processes
Step60: 15. Snow
Step61: 15.2. Tiling
Step62: 15.3. Number Of Snow Layers
Step63: 15.4. Density
Step64: 15.5. Water Equivalent
Step65: 15.6. Heat Content
Step66: 15.7. Temperature
Step67: 15.8. Liquid Water Content
Step68: 15.9. Snow Cover Fractions
Step69: 15.10. Processes
Step70: 15.11. Prognostic Variables
Step71: 16. Snow --> Snow Albedo
Step72: 16.2. Functions
Step73: 17. Vegetation
Step74: 17.2. Time Step
Step75: 17.3. Dynamic Vegetation
Step76: 17.4. Tiling
Step77: 17.5. Vegetation Representation
Step78: 17.6. Vegetation Types
Step79: 17.7. Biome Types
Step80: 17.8. Vegetation Time Variation
Step81: 17.9. Vegetation Map
Step82: 17.10. Interception
Step83: 17.11. Phenology
Step84: 17.12. Phenology Description
Step85: 17.13. Leaf Area Index
Step86: 17.14. Leaf Area Index Description
Step87: 17.15. Biomass
Step88: 17.16. Biomass Description
Step89: 17.17. Biogeography
Step90: 17.18. Biogeography Description
Step91: 17.19. Stomatal Resistance
Step92: 17.20. Stomatal Resistance Description
Step93: 17.21. Prognostic Variables
Step94: 18. Energy Balance
Step95: 18.2. Tiling
Step96: 18.3. Number Of Surface Temperatures
Step97: 18.4. Evaporation
Step98: 18.5. Processes
Step99: 19. Carbon Cycle
Step100: 19.2. Tiling
Step101: 19.3. Time Step
Step102: 19.4. Anthropogenic Carbon
Step103: 19.5. Prognostic Variables
Step104: 20. Carbon Cycle --> Vegetation
Step105: 20.2. Carbon Pools
Step106: 20.3. Forest Stand Dynamics
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
Step109: 22.2. Growth Respiration
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
Step111: 23.2. Allocation Bins
Step112: 23.3. Allocation Fractions
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
Step115: 26. Carbon Cycle --> Litter
Step116: 26.2. Carbon Pools
Step117: 26.3. Decomposition
Step118: 26.4. Method
Step119: 27. Carbon Cycle --> Soil
Step120: 27.2. Carbon Pools
Step121: 27.3. Decomposition
Step122: 27.4. Method
Step123: 28. Carbon Cycle --> Permafrost Carbon
Step124: 28.2. Emitted Greenhouse Gases
Step125: 28.3. Decomposition
Step126: 28.4. Impact On Soil Properties
Step127: 29. Nitrogen Cycle
Step128: 29.2. Tiling
Step129: 29.3. Time Step
Step130: 29.4. Prognostic Variables
Step131: 30. River Routing
Step132: 30.2. Tiling
Step133: 30.3. Time Step
Step134: 30.4. Grid Inherited From Land Surface
Step135: 30.5. Grid Description
Step136: 30.6. Number Of Reservoirs
Step137: 30.7. Water Re Evaporation
Step138: 30.8. Coupled To Atmosphere
Step139: 30.9. Coupled To Land
Step140: 30.10. Quantities Exchanged With Atmosphere
Step141: 30.11. Basin Flow Direction Map
Step142: 30.12. Flooding
Step143: 30.13. Prognostic Variables
Step144: 31. River Routing --> Oceanic Discharge
Step145: 31.2. Quantities Transported
Step146: 32. Lakes
Step147: 32.2. Coupling With Rivers
Step148: 32.3. Time Step
Step149: 32.4. Quantities Exchanged With Rivers
Step150: 32.5. Vertical Grid
Step151: 32.6. Prognostic Variables
Step152: 33. Lakes --> Method
Step153: 33.2. Albedo
Step154: 33.3. Dynamics
Step155: 33.4. Dynamic Lake Extent
Step156: 33.5. Endorheic Basins
Step157: 34. Lakes --> Wetlands
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cccma', 'sandbox-2', 'land')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: pyGAPS attempts to calculate the applicable BET region on its own by using the
Step2: It looks like the correlation is reasonably good. A warning is emitted if this
Step3: Now let's do the analysis on all of the nitrogen samples. We'll assume the
Step4: We also have isotherms which were measured with $CO_2$ at room temperature.
Step5: The surface area obtained with carbon dioxide is around 740 $m^2$. Compared to
Step6: The correlation is not very good due to condensation in mesopores of MCM-41,
Step7: The fit is now better and the calculated area is also realistic. Comparing it to
|
<ASSISTANT_TASK:>
Python Code:
# import isotherms
%run import.ipynb
# import the characterisation module
import pygaps.characterisation as pgc
isotherm = next(i for i in isotherms_n2_77k if i.material == 'MCM-41')
print(isotherm.material)
results = pgc.area_BET(isotherm, verbose=True)
results = pgc.area_BET(isotherm, p_limits=(0.05, 0.2), verbose=True)
results = []
for isotherm in isotherms_n2_77k:
results.append((isotherm.material, pgc.area_BET(isotherm)))
[(x, f"{y['area']:.2f}") for (x, y) in results]
isotherm = next(i for i in isotherms_calorimetry if i.material == 'Takeda 5A')
print(isotherm.material)
results = pgc.area_BET(isotherm, verbose=True)
isotherm = next(i for i in isotherms_n2_77k if i.material == 'MCM-41')
print(isotherm.material)
results = pgc.area_langmuir(isotherm, verbose=True)
print(isotherm.material)
results = pgc.area_langmuir(isotherm, p_limits=(0.05, 0.3), verbose=True)
import matplotlib.pyplot as plt
area_langmuir = []
area_langmuir_lim = []
area_bet = []
for isotherm in isotherms_n2_77k:
area_bet.append(pgc.area_BET(isotherm)['area'])
area_langmuir.append(pgc.area_langmuir(isotherm)['area'])
area_langmuir_lim.append(pgc.area_langmuir(isotherm, p_limits=(0.01, 0.3))['area'])
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 5))
ax1.scatter(area_langmuir, area_bet)
ax2.scatter(area_langmuir_lim, area_bet)
ax1.set_title('BET v. Langmuir area, full range')
ax2.set_title('BET v. Langmuir area, LP range')
ax1.plot([0, 2000], [0, 2000], 'k--')
ax2.plot([0, 2000], [0, 2000], 'k--')
ax1.set_xlim(left=0, right=2000)
ax1.set_ylim(bottom=0, top=2000)
ax2.set_xlim(left=0, right=2000)
ax2.set_ylim(bottom=0, top=2000)
ax1.set_xlabel('Langmuir surface area [m2/g]')
ax1.set_ylabel('BET surface area [m2/g]')
ax2.set_xlabel('Langmuir surface area [m2/g]')
ax2.set_ylabel('BET surface area [m2/g]')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Sparse 2d interpolation
Step2: The following plot should show the points on the boundary and the single point in the interior
Step3: Use meshgrid and griddata to interpolate the function $f(x,y)$ on the entire square domain
Step4: Plot the values of the interpolated scalar field using a contour plot. Customize your plot to make it effective and beautiful.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
sns.set_style('white')
from scipy.interpolate import griddata
# 40 integer-spaced points on the boundary of the square [-5, 5] x [-5, 5],
# plus a single interior point at the origin (41 points in total)
edge = np.arange(-5, 5)
x = np.concatenate([edge, 5 * np.ones(10), -edge, -5 * np.ones(10), [0]])
y = np.concatenate([-5 * np.ones(10), edge, 5 * np.ones(10), -edge, [0]])
f = np.zeros(41)
f[-1] = 1.0  # the interior point carries the only nonzero value of the field
f
plt.scatter(x, y);
assert x.shape==(41,)
assert y.shape==(41,)
assert f.shape==(41,)
assert np.count_nonzero(f)==1
# YOUR CODE HERE
raise NotImplementedError()
assert xnew.shape==(100,)
assert ynew.shape==(100,)
assert Xnew.shape==(100,100)
assert Ynew.shape==(100,100)
assert Fnew.shape==(100,100)
# YOUR CODE HERE
raise NotImplementedError()
assert True # leave this to grade the plot
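# A possible approach for the two graded cells above, shown here only as a sketch.
# The variable names with the _s suffix are illustrative and chosen so they don't
# clobber the graded xnew/ynew/Xnew/Ynew/Fnew names.
xnew_s = np.linspace(-5, 5, 100)
ynew_s = np.linspace(-5, 5, 100)
Xnew_s, Ynew_s = np.meshgrid(xnew_s, ynew_s)
# cubic interpolation of the scattered (x, y, f) samples onto the regular grid
Fnew_s = griddata((x, y), f, (Xnew_s, Ynew_s), method='cubic')
plt.figure(figsize=(8, 6))
plt.contourf(Xnew_s, Ynew_s, Fnew_s, 20, cmap='viridis')
plt.colorbar(label='f(x, y)')
plt.xlabel('x')
plt.ylabel('y')
plt.title('Interpolated scalar field')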
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Downloading
Step2: Like many commands, Python won’t return anything unless something went wrong. In this case, the In [*] should change to a number like In [2]. Next, store the web address that you want to access in a string. You don’t have to do this, but it’s the type of thing that makes your code easier to read and allows you to scale up quickly when you want to download thousands of urls.
Step3: You can also create a string with the name you want the file to have on your hard drive
Step4: To download and save the file
Step5: This will download the file into your current directory. If you want it to go somewhere else, you can put the full path in the file_name string. You didn’t have to enter the url and the file name in the prior lines. Something like the following would have worked exactly the same
Step6: Note that the location and filename are both surrounded by quotation marks because you want Python to use this information literally; they aren’t referring to a string object, like in our previous code. This line of code is actually quite readable, and in most circumstances this would be the most efficient thing to do. But there are actually three files that we want to get
Step7: The first line creates a new list with three items, the names of the three files to be downloaded. The second line creates a string object that stores the url path that they all share. The third line starts a loop over each of the items in the files list using file_name to reference each item in turn. The fourth line is indented, because it happens once for each item in the list as a result of the loop, and downloads the file. This is the same as the original download line, except the URL is now the combination of two strings, path and file_name. As noted previously, Python can combine strings with a plus sign, so the result from the first pass through the loop will be http
Step8: As you might have guessed, this line is actually doing double duty. It opens the file and reads it into memory before it is stored in tweets. Since the file has one tweet on each line, we can turn it into a list of tweets by splitting it at the end of line character. The file was originally created on a Mac, so the end of line character is an \n (think \n for new line). On a Windows computer, the end of line character is an \r\n (think \r for return and \n for new line). So if the file was created on a Windows computer, you might need to strip out the extra character with something like windows_file=windows_file.replace('\r','') before you split the lines, but you don’t need to worry about that here, no matter what operating system you are using. The end of line character comes from the computer that made the file, not the computer you are currently using. To split the tweets into a list
Step9: As always, you can check how many items are in the list
Step10: You can print the entire list by typing print(tweets_list), but it will be very long. A more useful way to look at it is to print just some of the items. Since it’s a list, we can loop through the first few items so they each print on their own line.
Step11: Note the new [0
Step12: OR
Step13: This slices the list from index 1 up to (but not including) index 2, so the result is a one-element list containing the second item. Unless you have a computer science background, this may be confusing as it’s not the common way to think of items in lists.
Step14: Like the tweet list, this file contained each entry on its own line, so it loads exactly the same way. If you typed len(positive_words) you would find out that this list has 2,230 entries.
Step15: Do the next part with your partner
Step16: Then, instead of printing the proportion, we can append it to the list using the following command
Step17: The next time we run through the loop, it shouldn't produce any output, but it will create a list of the proportions. Lets do a quick check to see how many positive words there are in the entire set of tweets
Step18: The next step is to plot a histogram of the data to see the distribution of positive texts
Step19: Step 3
Step20: Step 4
Step21: Another way to model the "bag of words" is to evaluate if the tweet has only positive words, only negative words, both positive and negative words or neither positive nor negative words. Rewrite your code to keep track of all four totals.
Step 5
Step22: Step 6
Step23: Step 7
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
from string import punctuation
import urllib.request
url='http://www.unc.edu/~ncaren/haphazard/negative.txt'
file_name='negative.txt'
urllib.request.urlretrieve(url, file_name)
urllib.request.urlretrieve('http://www.unc.edu/~ncaren/haphazard/negative.txt','negative.txt')
files=['negative.txt','positive.txt','obama_tweets.txt']
path='http://www.unc.edu/~ncaren/haphazard/'
for file_name in files:
urllib.request.urlretrieve(path+file_name,file_name)
tweets = open("obama_tweets.txt").read()
tweets_list = tweets.split('\n')
len(tweets_list)
for tweet in tweets_list[0:5]:
print(tweet)
print(tweets_list[1:2])
print(tweets_list[1])
pos_sent = open("positive.txt").read()
positive_words=pos_sent.split('\n')
print(positive_words[:10])
for tweet in tweets_list:
positive_counter=0
tweet_processed=tweet.lower()
for p in punctuation:
tweet_processed=tweet_processed.replace(p,'')
words=tweet_processed.split(' ')
for word in words:
if word in positive_words:
positive_counter=positive_counter+1
print(positive_counter/len(words))
positive_counts=[]
#Put your code here
for tweet in tweets_list:
positive_counter=0
tweet_processed=tweet.lower()
for p in punctuation:
tweet_processed=tweet_processed.replace(p,'')
words=tweet_processed.split(' ')
word_count = len(words)
for word in words:
if word in positive_words:
positive_counter=positive_counter+1
positive_counts.append(positive_counter/word_count)
len(positive_counts)
#Put your code here
plt.hist(positive_counts, 100, facecolor='green');
#Put your code here
neg_sent = open("negative.txt").read()
negative_words=neg_sent.split('\n')
positive_counts=[]  # reset the list so it only holds the new net scores
for tweet in tweets_list:
positive_counter=0
tweet_processed=tweet.lower()
for p in punctuation:
tweet_processed=tweet_processed.replace(p,'')
words=tweet_processed.split(' ')
word_count = len(words)
for word in words:
if word in positive_words:
positive_counter=positive_counter+1
if word in negative_words:
positive_counter=positive_counter-1
positive_counts.append(positive_counter/word_count)
#Put your code here
plt.hist(positive_counts, 20, facecolor='green', range=[-5, 5]);
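# For reuse, the counting logic above could be wrapped in a small helper
# function (an illustrative sketch; positive_words and negative_words are the
# word lists loaded earlier in this notebook):
def net_sentiment(tweet):
    text = tweet.lower()
    for p in punctuation:
        text = text.replace(p, '')
    words = text.split(' ')
    score = sum((w in positive_words) - (w in negative_words) for w in words)
    return score / len(words)
print(net_sentiment(tweets_list[0]))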
only_positive=0;
only_negative=0;
both_pos_and_neg=0;
neither_pos_nor_neg=0;
#Put your code here.
for tweet in tweets_list:
positive_counter=0
negative_counter=0
tweet_processed=tweet.lower()
for p in punctuation:
tweet_processed=tweet_processed.replace(p,'')
words=tweet_processed.split(' ')
word_count = len(words)
for word in words:
if word in positive_words:
positive_counter=positive_counter+1
if word in negative_words:
negative_counter=negative_counter+1
if(positive_counter > 0):
if(negative_counter > 0):
both_pos_and_neg=both_pos_and_neg+1
else:
only_positive=only_positive+1;
else:
if(negative_counter > 0):
only_negative=only_negative+1;
else:
neither_pos_nor_neg=neither_pos_nor_neg+1;
#Run this code. It should output True.
print(only_positive)
print(only_negative)
print(both_pos_and_neg)
print(neither_pos_nor_neg)
only_positive + only_negative + both_pos_and_neg + neither_pos_nor_neg == len(tweets_list)
# The slices will be ordered and plotted counter-clockwise.
labels = 'positive', 'both', 'negative', 'neither'
sizes = [only_positive, both_pos_and_neg, only_negative, neither_pos_nor_neg]
colors = ['yellowgreen', 'yellow','red', 'lightcyan']
explode = (0.1, 0, 0.1, 0)
plt.pie(sizes, explode=explode, labels=labels, colors=colors,
autopct='%1.1f%%', shadow=True, startangle=90);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In this lab, we're going to cluster documents by the similarity of their text content. For this, we'll need to download some documents to cluster. The following dictionary maps the names of various texts to their corresponding URLs at Project Gutenberg.
Step2: Next, we need to download the texts located at the URLs. We can do this using Python's urllib2 package, which is part of the standard Python library. The following code will download the content of each URL and store it in the documents dictionary
Step3: Finally, we can create a pandas data frame to represent our document data
Step4: Data modelling
Step5: You can find a more detailed description of each parameter in the scikit-learn documentation.
Step6: Once we've fitted the data to the pipeline, we can extract the fitted agglomerative clustering model to see what clusters were formed. To extract the model, we can use the named_steps attribute of the pipeline, which is a dictionary mapping the names (in lowercase) of each stage in the pipeline to the corresponding models.
Step7: As can be seen, our clustering model is stored under the key 'agglomerativeclustering', and so we can extract it as follows
Step8: Currently, scikit-learn does not support plotting dendrograms out of the box. However, the authors have provided the following code snippet for anyone who wants to do so
Step9: Finally, we can call the plot_dendrogram function to plot a dendrogram of our model, as follows
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
import urllib2
from sklearn.cluster import AgglomerativeClustering
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer
urls = {
'The Iliad - Homer': 'https://www.gutenberg.org/cache/epub/1727/pg1727.txt',
'The Odyssey - Homer': 'https://www.gutenberg.org/cache/epub/1727/pg1727.txt',
'Romeo and Juliet - William Shakespeare': 'https://www.gutenberg.org/cache/epub/1112/pg1112.txt',
'Hamlet - William Shakespeare': 'https://www.gutenberg.org/files/1524/1524-0.txt',
'Adventures of Huckleberry Finn - Mark Twain': 'https://www.gutenberg.org/files/76/76-0.txt',
'The Adventures of Tom Sawyer - Mark Twain': 'https://www.gutenberg.org/files/74/74-0.txt',
'A Tale of Two Cities - Charles Dickens': 'https://www.gutenberg.org/files/98/98-0.txt',
'Great Expectations - Charles Dickens': 'https://www.gutenberg.org/files/1400/1400-0.txt',
'Oliver Twist - Charles Dickens': 'https://www.gutenberg.org/cache/epub/730/pg730.txt',
'The Adventures of Sherlock Holmes - Arthur Conan Doyle': 'https://www.gutenberg.org/cache/epub/1661/pg1661.txt'
}
documents = {}
for name, url in urls.items():
response = urllib2.urlopen(url)
document = response.read()
documents[name] = document
df = pd.DataFrame([documents[name] for name in sorted(documents)], index=sorted(documents), columns=['text'])
df.head(10)
AgglomerativeClustering().get_params()
X = df['text']
# Construct a pipeline: TF-IDF -> Sparse to Dense -> Clustering
pipeline = make_pipeline(
TfidfVectorizer(stop_words='english'),
FunctionTransformer(lambda x: x.todense(), accept_sparse=True),
AgglomerativeClustering(linkage='average') # Use average linkage
)
pipeline = pipeline.fit(X)
pipeline.named_steps
model = pipeline.named_steps['agglomerativeclustering']
# Original source: https://github.com/scikit-learn/scikit-learn/blob/70cf4a676caa2d2dad2e3f6e4478d64bcb0506f7/examples/cluster/plot_hierarchical_clustering_dendrogram.py
import numpy as np
from scipy.cluster.hierarchy import dendrogram
def plot_dendrogram(model, **kwargs):
# Children of hierarchical clustering
children = model.children_
# Distances between each pair of children
# Since we don't have this information, we can use a uniform one for plotting
distance = np.arange(children.shape[0])
# The number of observations contained in each cluster level
no_of_observations = np.arange(2, children.shape[0] + 2)
# Create linkage matrix and then plot the dendrogram
linkage_matrix = np.column_stack([children, distance, no_of_observations]).astype(float)
# Plot the corresponding dendrogram
dendrogram(linkage_matrix, **kwargs)
plot_dendrogram(model, labels=X.index, orientation='right')
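# To see which document ended up in which cluster, the fitted model's labels_
# attribute can be paired with the document titles (by default
# AgglomerativeClustering forms two clusters):
for title, label in zip(X.index, model.labels_):
    print(label, title)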
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load in the training set of data
Step2: Note that the target names are not in the same order as in the categories array
Step3: Show the first 8 lines of text from one of the documents, formatted with line breaks
Step4: Path to file on your machine
Step5: Show the target categories of the first 10 documents as a list, and show their names.
Step6: Let's look at a document in the training data.
Step7: Extracting features from text files
Step8: Using a CountVectorizer method we can get the integer identifier of a word.
Step9: With this identifier we can get the count of the word in a given document.
Step10: What are two problems with using a word count in a document?
Step11: .fit(..) to fit estimator to the data
Step12: Training a classifier
Step13: Here tfidf_transformer is used to classify
Step14: We can see it gets some right but not all.
Step15: Evaluation
Step16: Try a support vector machine instead
Step17: We can see the support vector machine got a higher number than naïve Bayes. What does it mean? We move on to metrics.
Step18: Here we can see that the predictions
Step19: In the confusion_matrix the labels give the order of the rows.
Step20: Back to '20 newsgroups dataset'
Step21: We can see where the 91% score came from.
Step22: Now let's see the confusion matrix.
Step23: So we can see the naïve Bayes classifier got a lot more correct in some cases but also included a higher proportion in the last category.
Step24: We can see that atheism is misclassified as Christian, and science and medicine as computer graphics, a high proportion of the time using the support vector machine.
Step25: Running the search on all the data will take a little while, 10-30 seconds on a newish desktop with 8 cores. If you don't want to wait that long uncomment the line with
Step26: Well that is a significant improvement. Let's use these new parameters.
Step27: Why has this only given 0.93 instead of 0.97?
Step28: Moving on from that, let's see where the improvements were made.
|
<ASSISTANT_TASK:>
Python Code:
categories = ['alt.atheism', 'soc.religion.christian','comp.graphics', 'sci.med']
from sklearn.datasets import fetch_20newsgroups
twenty_train = fetch_20newsgroups(subset='train',categories=categories, shuffle=True, random_state=42)
twenty_train.target_names
len(twenty_train.data)
print("\n".join(twenty_train.data[0].split("\n")[:8]))
twenty_train.filenames[0]
print(twenty_train.target[:10])
for t in twenty_train.target[:10]:
print(twenty_train.target_names[t])
print("\n".join(twenty_train.data[0].split("\n")))
from sklearn.feature_extraction.text import CountVectorizer
count_vect = CountVectorizer()
X_train_counts = count_vect.fit_transform(twenty_train.data)
X_train_counts.shape
X_train_counts.__class__
count_vect.vocabulary_.get(u'application')
print("Word count for application in first document: {0} and last document: {1} ").format(
X_train_counts[0, 5285], X_train_counts[2256, 5285])
count_vect.vocabulary_.get(u'subject')
print("Word count for email in first document: {0} and last document: {1} ").format(
X_train_counts[0, 31077], X_train_counts[2256, 31077])
count_vect.vocabulary_.get(u'to')
print("Word count for email in first document: {0} and last document: {1} ").format(
X_train_counts[0, 32493], X_train_counts[2256, 32493])
from sklearn.feature_extraction.text import TfidfTransformer
tf_transformer = TfidfTransformer(use_idf=False).fit(X_train_counts)
X_train_tfidf_2stage = tf_transformer.transform(X_train_counts)
X_train_tfidf_2stage.shape
tfidf_transformer = TfidfTransformer()
X_train_tfidf = tfidf_transformer.fit_transform(X_train_counts)
X_train_tfidf.shape
print("In first document tf-idf for application: {0} subject: {1} to: {2}").format(
X_train_tfidf[0, 5285], X_train_tfidf[0, 31077], X_train_tfidf[0, 32493])
from sklearn.naive_bayes import MultinomialNB
clf = MultinomialNB().fit(X_train_tfidf, twenty_train.target)
docs_new = ['God is love', 'Heart attacks are common', 'Disbelief in a proposition', 'Disbelief in a proposition means that one does not believe it to be true', 'OpenGL on the GPU is fast']
X_new_counts = count_vect.transform(docs_new)
X_new_tfidf = tfidf_transformer.transform(X_new_counts)
predicted = clf.predict(X_new_tfidf)
for doc, category in zip(docs_new, predicted):
print('%r => %s' % (doc, twenty_train.target_names[category]))
from sklearn.pipeline import Pipeline
text_clf_bayes = Pipeline([('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('clf', MultinomialNB()),
])
text_clf_bayes_fit = text_clf_bayes.fit(twenty_train.data, twenty_train.target)
import numpy as np
twenty_test = fetch_20newsgroups(subset='test',
categories=categories, shuffle=True, random_state=42)
docs_test = twenty_test.data
predicted_bayes = text_clf_bayes_fit.predict(docs_test)
np.mean(predicted_bayes == twenty_test.target)
from sklearn.linear_model import SGDClassifier
text_clf_svm = Pipeline([('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('clf', SGDClassifier(loss='hinge', penalty='l2',
alpha=1e-3, n_iter=5, random_state=42)),])
text_clf_svm_fit = text_clf_svm.fit(twenty_train.data, twenty_train.target)
predicted_svm = text_clf_svm_fit.predict(docs_test)
np.mean(predicted_svm == twenty_test.target)
from sklearn import metrics
y_true = ["cat", "ant", "cat", "cat", "ant", "bird", "bird"]
y_pred = ["ant", "ant", "cat", "cat", "ant", "cat", "bird"]
print(metrics.classification_report(y_true, y_pred,
target_names=["ant", "bird", "cat"]))
metrics.confusion_matrix(y_true, y_pred, labels=["ant", "bird", "cat"])
metrics.accuracy_score(y_true, y_pred, normalize=True, sample_weight=None)
print(metrics.classification_report(twenty_test.target, predicted_svm,
target_names=twenty_test.target_names))
# We got the evaluation score this way before:
print(np.mean(predicted_svm == twenty_test.target))
# We get the same results using metrics.accuracy_score
print(metrics.accuracy_score(twenty_test.target, predicted_svm, normalize=True, sample_weight=None))
print(twenty_train.target_names)
metrics.confusion_matrix(twenty_test.target, predicted_bayes)
metrics.confusion_matrix(twenty_test.target, predicted_svm)
from sklearn.grid_search import GridSearchCV
parameters = {'vect__ngram_range': [(1, 1), (1, 2)],
'tfidf__use_idf': (True, False),
'clf__alpha': (1e-3, 1e-4),
}
gs_clf = GridSearchCV(text_clf_svm_fit, parameters, n_jobs=-1)
#gs_clf_fit = gs_clf.fit(twenty_train.data[:400], twenty_train.target[:400])
gs_clf_fit = gs_clf.fit(twenty_train.data, twenty_train.target)
best_parameters, score, _ = max(gs_clf_fit.grid_scores_, key=lambda x: x[1])
for param_name in sorted(parameters.keys()):
print("%s: %r" % (param_name, best_parameters[param_name]))
score
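# The best combination is also exposed directly on the fitted search object
# via best_score_ and best_params_ (note that in newer scikit-learn versions
# grid_scores_ has been replaced by cv_results_):
print(gs_clf_fit.best_score_)
print(gs_clf_fit.best_params_)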
text_clf_svm_tuned = Pipeline([('vect', CountVectorizer(ngram_range=(1, 2))),
('tfidf', TfidfTransformer(use_idf=True)),
('clf', SGDClassifier(loss='hinge', penalty='l2',
alpha=0.0001, n_iter=5, random_state=42)),
])
text_clf_svm_tuned_fit = text_clf_svm_tuned.fit(twenty_train.data, twenty_train.target)
predicted_tuned = text_clf_svm_tuned_fit.predict(docs_test)
metrics.accuracy_score(twenty_test.target, predicted_tuned, normalize=True, sample_weight=None)
for x in gs_clf_fit.grid_scores_:
print x[0], x[1], x[2]
print(metrics.classification_report(twenty_test.target, predicted_svm,
target_names=twenty_test.target_names))
metrics.confusion_matrix(twenty_test.target, predicted_svm)
print(metrics.classification_report(twenty_test.target, predicted_tuned,
target_names=twenty_test.target_names))
metrics.confusion_matrix(twenty_test.target, predicted_tuned)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Choose your project
Step2: Retrieve and describe the frame
Step3: Change type of a native feature (from the central table)
Step4: Change the type of your feature using set_categorical() or set_numerical() methods.
Step5: Type is modified.
Step6: Same for features from a peripheral table
Step7: The type has been changed; a new model has to be built from the default schema. To take the change in the peripheral table into account, a number of aggregates has to be requested.
|
<ASSISTANT_TASK:>
Python Code:
# Load PredicSis.ai SDK
from predicsis import PredicSis
pj = PredicSis.project('Outbound Mail Campaign')
dflt_schm = pj.default_schema()
dflt_schm.describe()
master_frame=dflt_schm.frame('Customers')
master_frame.describe()
master_frame.set_categorical('region_code')
master_frame.describe()
mdl = dflt_schm.fit('model with categorical region_code')
mdl.central().describe()
email = dflt_schm.frame('Email')
email.describe()
email.set_categorical('nb_of_days_since_event')
email.describe()
mdl2 = dflt_schm.fit('Model with type change in email frame',nb_aggregates=50)
mdl2.central().describe()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: For our first function, we see above that my_first_function does not take in any input and does nothing. The pass keyword is a kind of temporary placeholder and basically does nothing. We use pass because one cannot leave a function "body" without any code at all.
Step2: my_first_function will print the string "Hello world!" whenever it is called. Calling a function basically means instructing Python to run the code contained in the function. Notice that after defining a function and running the cell, there is no output. But that doesn't mean nothing has happened. In fact, Python has populated the global namespace with a new name, my_first_function and is ready to do what ever has been coded into this function when it is called.
Step3: It is good to understand what happens when we type my_first_function and execute a cell. Notice that the output says <function ... This means that the variable my_first_function represents an object of type function. The rest of the output indicates that this function is represented by a name my_first_function in the module __main__. We will not describe what modules are in this course, but suffices for our purposes to think of __main__ as file containing all the functions that we will define in this Jupyter Notebook session.
Step4: 9.3 Functions with arguments
Step5: When defining functions with arguments, the same variable name used in the signature must be used in the body of the function. Now there is nothing inherently special about using name to represent the argument for names to my_first_function. After all, the computer doesn't "understand" that we intend to print out a name when calling my_first_function. However, we should use recognizable variable names to improve readability of our code and to make our intentions transparent.
Step6: What happened is that the function special_product performs the said operation on inputs 1 and 3. It then outputs the answer, in this case 1 (that is, 1 - 3 + 1*3). We assign the output 1 to a variable named answer and print it.
Step7: Passing arguments in correct sequence matters. Python will pass values to arguments according to the sequence as it was declared in the signature.
Step8: What will happen if we try to display the variable prod directly?
Step9: 9.3.1 Function scope
Step10: Therefore, it becomes quite a hassle if we have to call the function in various places in our code with the same input in one of the arguments. To do that we can assign default values to particular arguments in the following manner.
Step11: 9.5 Passing arguments to functions by keyword
Step13: 9.5.1 An application
Step14: 10. Lambda expressions
Step15: Notice that this function essentially consists of one line, namely the print statement. Using lambda expressions, this can be shortened to
Step16: We use the lambda keyword to define lambda expressions. After lambda we type in the arguments to the function but without enclosing it in ( ). All arguments must be seperated by commas. Once that is done, type a
Step17: Notice that printer is of class function but is given a name <lambda>. However, we can call printer just as we called my_first_function, by passing arguments to it.
Step18: Lambda expressions can take on more than one argument. Here is the function special_product refactored as a lambda expression.
Step19: Notice that I did not need to put a return to indicate which output to pass to the global environment. This is because lambda expressions are meant to be written in one line, hence it is understood that that one line of code is the output.
Step21: Here's how this could be achieved. We first define the function that calculates the ratio and then proceed to create the new calculated column.
Step22: As you can see, a new column has been added with the calculated column named ratio_sul_to_alc. However, we had to define a function named ratio which we may or may not use again. We would like to achieve the same thing, but without populating the global namespace with unnecessary variables.
Step23: Notice that they give the same answer. We will learn how to do this in detail in the next unit. For now, the purpose of this example is to illustrate how lambda expressions are a great help in simplifying and making code more compact and readable.
Step24: As with lists, tuples can also be indexed and sliced. However, once assigned, individual components of a tuple cannot be changed. For example, the following code will raise and error
Step25: Think of tuples as lists which you wish to protect from changing by accidental assignment. Another way of thinking about tuples are also as constant lists, or as "coordinates" in $\mathbb{R}^n$.
Step26: Imagine having to write such a snippet of code every time we need to do something with elements from two lists! As you can imagine, it can cause code to be bloated and distracts from the main logic of the program.
Step27: In fact, we can do even better in terms of readibility. We can utilize what is known as list unpacking to rewrite this for loop.
Step28: Of course, zip is used in many other context other than to simplify for loops. Can you think of any other situations where you might need to use zip?
Step29: 11.3 List comprehension
Step30: Notice that the entire script needed about 1.8 ms to execute. This isn't exactly a short amount of time as far as computers go. Just imagine that we have to do this for 10 times in a row!
Step31: That's an improvement of about 10.2 %!
Step32: To refactor this into a list comprehension statement, we first identify the code that is being looped over. That is
Step33: 12. A concluding demonstration
|
<ASSISTANT_TASK:>
Python Code:
# Our first function
def my_first_function():
pass
def my_first_function():
print("Hello world!")
my_first_function
my_first_function()
def my_first_function(name):
print("Hello %s" % (name))
return None
my_first_function("Tang U-Liang")
# Passing two arguments
def special_product(x,y):
prod = x-y+x*y
return prod
answer = special_product(1,3)
print(answer)
special_product(1,3)
# x = 1, and y =3
print(special_product(1,3))
# x =3 and y = 1
print(special_product(3,1))
print(prod)
special_product(1,)
def special_product(x, y=1): # default value of y is 1
return x-y+x*y
# We don't have to pass any value to arguments with default values
print(special_product(2))
# Default values can be overriden
print(special_product(2,9))
special_product(1,3) == special_product(3,1)
special_product(x=1, y=3) == special_product( y=3, x=1)
import math
def is_prime(p):
    """This function determines if p is prime or not.

    Returns:
        bool, True if p is prime.
    """
m = int(math.floor(math.sqrt(p)))
for d in range(2, m+1):
if p%d == 0:
return False
return True
for p in range(2, 101):
if is_prime(p):
print(p)
def my_first_function(name):
print("Hello %s" % (name))
printer = lambda name: print("Hello %s" % (name))
printer
printer("Joe")
special_product = lambda x, y: x-y+x*y
special_product(10,9)
import pandas as pd # Importing the pandas library
wine = pd.read_csv("winequality-red.csv", sep=';')
wine.sample(5)
def ratio(df):
    """This function calculates the ratio of sulphates to alcohol content in the wine dataframe.

    Returns:
        Series, shape (n_samples, ): Array containing the ratio of sulphate to alcohol content for each sample.
    """
ratio_col = df.sulphates/df.alcohol
return ratio_col
(wine.assign(ratio_sul_to_alc=ratio)
.head(5))
(wine.assign(ratio_sul_to_alc=lambda df: df.sulphates/df.alcohol)
.head(5))
pair = (1,4)
print(pair)
pair[0] = 2
zipped = list() # This creates and empty list
my_colleagues = ['Andy', 'Lisa', 'Dayton']
ages = [29, 24, 50]
for i in range(0,3):
zipped.append((my_colleagues[i], ages[i]))
print(zipped)
for tup in zip(my_colleagues, ages):
name = tup[0]
age = tup[1]
print("%s's age is %d" % (name, age))
for name, age in zip(my_colleagues, ages): # The syntax name, age is what is known as list unpacking
print("%s's age is %d" % (name, age))
staff_id = dict()
for i, name in enumerate(my_colleagues):
id_no = 's2017-'+str(i) # The str function coerces and integer i into 'i'
staff_id[id_no] = name
print("A list of staff id numbers")
print(staff_id.keys())
print("and the respective staff names")
print(staff_id.values())
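# The same mapping can be built more compactly with enumerate and a dict
# comprehension (a small illustrative sketch):
staff_id_v2 = {'s2017-' + str(i): name for i, name in enumerate(my_colleagues)}
print(staff_id_v2)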
%%timeit
serial_numbers = list()
for i in range(0,5000): # 5000 staff, so we need 5000 int's
serial_numbers.append('s'+str(i)) # our serial numbers are prefixed with 's'
%%timeit
serial_numbers = ['s'+str(i) for i in range(5000)]
months = ["January", "February", "March", "April", "May", "June", "July", "August", "September", "October",
"November", "December"]
# multiline statements are allowed in Python as long as they are enclosed in some sort of braces.
short_name = []
mk_list = short_name.append # Here's a neat trick, assign the append method to a variable mk_list.
# mk_list is now a function
for month in months:
mk_list(month[0:3].upper()) # .upper() is a string method that simply capitalizes all letters in a string.
print(short_name)
short_name = [month[0:3].upper() for month in months]
print(short_name)
from datetime import datetime
DAY_OF_WEEK = {1: "MONDAY", 2: "TUESDAY", 3:"WEDNESDAY", 4:"THURSDAY", 5:"FRIDAY", 6:"SATURDAY", 0:"SUNDAY"}
def todays_date():
t0 = datetime.today()
return t0.isoweekday(), t0.day, t0.month, t0.year
# Returns day difference if target date is within same month and year
def day_diff(start_date, end_date):
return (end_date[0] - start_date[0])
# Returns day difference if target date may be in differing months but within same year.
# Remember to account for leap years!
def month_diff(start_date, end_date):
start_month, end_month, end_year = start_date[1], end_date[1], end_date[2]
total_days = 0
for m in range(min(start_month, end_month), max(start_month, end_month)):
# Enter your answer here
# End of answer
# It is quite possible that start_month exceeds end_month. In this case,
# we are actually counting days "backwards"! We then have to actually return
# the negative value so that this number of days is subtracted from the total.
if start_month < end_month:
return total_days
else:
return -1*total_days
# Returns day difference across different years
def year_diff(start_date, end_date):
start_year, end_year = start_date[2], end_date[2]
total_days = 0
# Adjusting for the fact that in a leap year, the extra day occurs on the last day of Feb.
leap_year_adj = 0
if end_date[1] >= 3 and end_date[2]%4==0:
leap_year_adj += 1
if start_date[1] >= 3 and start_date[2]%4==0:
leap_year_adj += -1
for y in range(start_year, end_year):
if y%4==0:
total_days += 366
else:
total_days += 365
return total_days + leap_year_adj
# Returns day of week for given date
def weekday_from_date(day, month, year):
curr_date = todays_date()
# Checking whether the target_date is in the future (relative to the current date)
# or not
conds = [curr_date[3] < year,
curr_date[3] == year and curr_date[2] < month,
curr_date[3] == year and curr_date[2] == month and curr_date[1] < day]
if any(conds):
start_date, end_date = curr_date[1:], (day, month, year)
is_future = True
else:
start_date, end_date = (day, month, year), curr_date[1:]
is_future = False
# Getting the difference in days between the current date and the target date
number_days = (year_diff(start_date, end_date)
+ month_diff(start_date, end_date)
+ day_diff(start_date, end_date))
if is_future:
target_weekday = curr_date[0] + number_days
else:
target_weekday = curr_date[0] - number_days
return DAY_OF_WEEK[target_weekday%7]
weekday_from_date(15,10,1984)
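# One possible way to fill in the "Enter your answer here" block inside
# month_diff (an illustrative sketch, not necessarily the intended answer):
# add the number of days of each month m, giving February an extra day in a
# leap year. As a standalone helper it could look like this:
def days_in_month(m, year):
    lengths = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    days = lengths[m - 1]
    if m == 2 and year % 4 == 0:
        days += 1
    return days
# Inside the loop of month_diff this would simply be:
#     total_days += days_in_month(m, end_year)
print(days_in_month(2, 2016))  # 29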
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The neural network accepts an input vector of length 2. It has 2 output nodes. One node is used to control whether or not to recursively run itself, the other is the real data output. We simply threshold > 0.5 to trigger a recursive call to itself.
Step2: Cost Function
Step3: Genetic Algorithm to Solve Weights
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
X = np.array([[0,0],[0,1],[1,0],[1,1]])
# dtype=object is needed because the target sequences have different lengths
y = np.array([[0],[0,0],[0,0,0],[0,0,0,0]], dtype=object)
def sigmoid(x):
return np.matrix(1.0 / (1.0 + np.exp(-x)))
def relu(x):
alpha = 0.01
return np.maximum(x, (alpha * x))
#initialize random weights
numIn, numHid, numOut = 2, 3, 2
theta1 = np.array( 0.5 * np.sqrt ( 6 / ( numIn + numHid) ) * np.random.randn( numIn + 1, numHid ), dtype="float32" )
theta2 = np.array( 0.5 * np.sqrt ( 6 / ( numHid + numOut ) ) * np.random.randn( numHid + 1, numOut ), dtype="float32" )
theta = np.append(theta1.flatten(), theta2.flatten()) #unroll vectors in a one long vector
def nn(x, theta):
i = 0
theta1 = np.array(theta[:9]).reshape(3,3)
theta2 = np.array(theta[9:]).reshape(4,2)
#print(theta1.shape)
#print(theta2.shape)
outputs = []
def comp(x):
#print(x)
a1 = np.array(np.concatenate((x.reshape(1,2), np.ones((1,1))), axis=1))
z2 = a1 @ theta1
a2 = np.concatenate((relu(z2), np.ones((1,1))), axis=1)
z3 = a2 @ theta2
a3 = sigmoid(z3)
return a3
a3 = comp(x)
outputs.append(a3[0,1])
while a3[0,0] > 0.5 and i < 3: #prevent an infinite loop; constrain output length
i += 1
input = np.array([[a3[0,1],0]])
a3 = comp(input)
outputs.append(a3[0,1])
return np.array(outputs)
###example output with random initial weights
print( nn(X[0], theta) )
print( nn(X[1], theta) )
print( nn(X[2], theta) )
print( nn(X[3], theta) )
def costFunction(X, Y, theta):
cost = 0
for i in range(len(X)):
y = Y[i]
m = float(len(X[i]))
hThetaX = nn(X[i], theta)
if len(y) != len(hThetaX):
cost += 3
else:
cost += (1/m) * np.sum(np.abs(y - hThetaX)**2)
return cost
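# Quick sanity check of the cost function with the random initial weights
# defined above (the exact value will vary from run to run):
print(costFunction(X, y, theta))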
import random as rn, numpy as np
# [Initial population size, mutation rate (=1%), num generations (500), solution length (17), # winners per gen]
initPop, mutRate, numGen, solLen, numWin = 100, 0.01, 500, 17, 20
#initialize current population to random values within range
curPop = np.random.choice(np.arange(-15,15,step=0.01),size=(initPop, solLen),replace=False)
nextPop = np.zeros((curPop.shape[0], curPop.shape[1]))
fitVec = np.zeros((initPop, 2)) #1st col is indices, 2nd col is cost
for i in range(numGen): #iterate through num generations
#Create vector of all errors from cost function for each solution
fitVec = np.array([np.array([x, np.sum(costFunction(X, y, curPop[x].T))]) for x in range(initPop)])
#plt.pyplot.scatter(i,np.sum(fitVec[:,1]))
winners = np.zeros((numWin, solLen))
for n in range(len(winners)): #for n in range(10)
        selected = np.random.choice(range(len(fitVec)), numWin//2, replace=False)
wnr = np.argmin(fitVec[selected,1])
winners[n] = curPop[int(fitVec[selected[wnr]][0])]
nextPop[:len(winners)] = winners #populate new gen with winners
duplicWin = np.zeros((((initPop - len(winners))),winners.shape[1]))
for x in range(winners.shape[1]): #for each col in winners (3 cols)
#Duplicate winners (20x3 matrix) 3 times to create 80x3 matrix, then shuffle columns
        numDups = ((initPop - len(winners))//len(winners)) #num times to duplicate to fill rest of nextPop
duplicWin[:, x] = np.repeat(winners[:, x], numDups, axis=0)#duplicate each col
duplicWin[:, x] = np.random.permutation(duplicWin[:, x]) #shuffle each col ("crossover")
#Populate the rest of the generation with offspring of mating pairs
nextPop[len(winners):] = np.matrix(duplicWin)
#Create a mutation matrix, mostly 1s, but some elements are random numbers from a normal distribution
    mutMatrix = [float(np.random.normal(0, 2)) if rn.random() < mutRate else 1 for x in range(nextPop.size)]
#randomly mutate part of the population by multiplying nextPop by our mutation matrix
nextPop = np.multiply(nextPop, np.matrix(mutMatrix).reshape(nextPop.shape))
curPop = nextPop
best_soln = curPop[np.argmin(fitVec[:,1])]
print("Best Sol'n:\n%s\nCost:%s" % (best_soln,np.sum(costFunction(X, y, best_soln.T))))
#Demonstrate variable output after training
print( np.round(nn(X[0], best_soln.reshape(17,1)), 2) )
print( np.round(nn(X[1], best_soln.reshape(17,1)), 2) )
print( np.round(nn(X[2], best_soln.reshape(17,1)), 2) )
print( np.round(nn(X[3], best_soln.reshape(17,1)), 2) )
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Make code compatible with AI Platform Training Service
Step2: Move code into python package
Step4: To use hyperparameter tuning in your training job you must perform the following steps
Step6: Modify task.py
Step7: Create config.yaml file
Step8: Report your hyperparameter metric to AI Platform Training
|
<ASSISTANT_TASK:>
Python Code:
# Change below if necessary
PROJECT = !gcloud config get-value project # noqa: E999
PROJECT = PROJECT[0]
BUCKET = PROJECT
REGION = "us-central1"
%env PROJECT=$PROJECT
%env BUCKET=$BUCKET
%env REGION=$REGION
%env TFVERSION=2.5
%%bash
gcloud config set project $PROJECT
gcloud config set ai_platform/region $REGION
!gsutil ls gs://$BUCKET/taxifare/data
!ls -la taxifare/trainer
%%writefile ./taxifare/trainer/model.py
"""Data prep, train and evaluate DNN model."""
import datetime
import logging
import os
import hypertune
import numpy as np
import tensorflow as tf
from tensorflow import feature_column as fc
from tensorflow.keras import activations, callbacks, layers, models
logging.info(tf.version.VERSION)
CSV_COLUMNS = [
"fare_amount",
"pickup_datetime",
"pickup_longitude",
"pickup_latitude",
"dropoff_longitude",
"dropoff_latitude",
"passenger_count",
"key",
]
# inputs are all float except for pickup_datetime which is a string
STRING_COLS = ["pickup_datetime"]
LABEL_COLUMN = "fare_amount"
DEFAULTS = [[0.0], ["na"], [0.0], [0.0], [0.0], [0.0], [0.0], ["na"]]
DAYS = ["Sun", "Mon", "Tue", "Wed", "Thu", "Fri", "Sat"]
def features_and_labels(row_data):
for unwanted_col in ["key"]:
row_data.pop(unwanted_col)
label = row_data.pop(LABEL_COLUMN)
return row_data, label
def load_dataset(pattern, batch_size, num_repeat):
dataset = tf.data.experimental.make_csv_dataset(
file_pattern=pattern,
batch_size=batch_size,
column_names=CSV_COLUMNS,
column_defaults=DEFAULTS,
num_epochs=num_repeat,
shuffle_buffer_size=1000000,
)
return dataset.map(features_and_labels)
def create_train_dataset(pattern, batch_size):
dataset = load_dataset(pattern, batch_size, num_repeat=None)
return dataset.prefetch(1)
def create_eval_dataset(pattern, batch_size):
dataset = load_dataset(pattern, batch_size, num_repeat=1)
return dataset.prefetch(1)
def parse_datetime(s):
if not isinstance(s, str):
s = s.numpy().decode("utf-8")
return datetime.datetime.strptime(s, "%Y-%m-%d %H:%M:%S %Z")
def euclidean(params):
lon1, lat1, lon2, lat2 = params
londiff = lon2 - lon1
latdiff = lat2 - lat1
return tf.sqrt(londiff * londiff + latdiff * latdiff)
def get_dayofweek(s):
ts = parse_datetime(s)
return DAYS[ts.weekday()]
@tf.function
def dayofweek(ts_in):
return tf.map_fn(
lambda s: tf.py_function(get_dayofweek, inp=[s], Tout=tf.string), ts_in
)
@tf.function
def fare_thresh(x):
return 60 * activations.relu(x)
def transform(inputs, numeric_cols, nbuckets):
# Pass-through columns
transformed = inputs.copy()
del transformed["pickup_datetime"]
feature_columns = {
colname: fc.numeric_column(colname) for colname in numeric_cols
}
# Scaling longitude from range [-70, -78] to [0, 1]
for lon_col in ["pickup_longitude", "dropoff_longitude"]:
transformed[lon_col] = layers.Lambda(
lambda x: (x + 78) / 8.0, name=f"scale_{lon_col}"
)(inputs[lon_col])
# Scaling latitude from range [37, 45] to [0, 1]
for lat_col in ["pickup_latitude", "dropoff_latitude"]:
transformed[lat_col] = layers.Lambda(
lambda x: (x - 37) / 8.0, name=f"scale_{lat_col}"
)(inputs[lat_col])
# Adding Euclidean dist (no need to be accurate: NN will calibrate it)
transformed["euclidean"] = layers.Lambda(euclidean, name="euclidean")(
[
inputs["pickup_longitude"],
inputs["pickup_latitude"],
inputs["dropoff_longitude"],
inputs["dropoff_latitude"],
]
)
feature_columns["euclidean"] = fc.numeric_column("euclidean")
# hour of day from timestamp of form '2010-02-08 09:17:00+00:00'
transformed["hourofday"] = layers.Lambda(
lambda x: tf.strings.to_number(
tf.strings.substr(x, 11, 2), out_type=tf.dtypes.int32
),
name="hourofday",
)(inputs["pickup_datetime"])
feature_columns["hourofday"] = fc.indicator_column(
fc.categorical_column_with_identity("hourofday", num_buckets=24)
)
latbuckets = np.linspace(0, 1, nbuckets).tolist()
lonbuckets = np.linspace(0, 1, nbuckets).tolist()
b_plat = fc.bucketized_column(
feature_columns["pickup_latitude"], latbuckets
)
b_dlat = fc.bucketized_column(
feature_columns["dropoff_latitude"], latbuckets
)
b_plon = fc.bucketized_column(
feature_columns["pickup_longitude"], lonbuckets
)
b_dlon = fc.bucketized_column(
feature_columns["dropoff_longitude"], lonbuckets
)
ploc = fc.crossed_column([b_plat, b_plon], nbuckets * nbuckets)
dloc = fc.crossed_column([b_dlat, b_dlon], nbuckets * nbuckets)
pd_pair = fc.crossed_column([ploc, dloc], nbuckets ** 4)
feature_columns["pickup_and_dropoff"] = fc.embedding_column(pd_pair, 100)
return transformed, feature_columns
def rmse(y_true, y_pred):
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
def build_dnn_model(nbuckets, nnsize, lr, string_cols):
numeric_cols = set(CSV_COLUMNS) - {LABEL_COLUMN, "key"} - set(string_cols)
inputs = {
colname: layers.Input(name=colname, shape=(), dtype="float32")
for colname in numeric_cols
}
inputs.update(
{
colname: layers.Input(name=colname, shape=(), dtype="string")
for colname in string_cols
}
)
# transforms
transformed, feature_columns = transform(inputs, numeric_cols, nbuckets)
dnn_inputs = layers.DenseFeatures(feature_columns.values())(transformed)
x = dnn_inputs
for layer, nodes in enumerate(nnsize):
x = layers.Dense(nodes, activation="relu", name=f"h{layer}")(x)
output = layers.Dense(1, name="fare")(x)
model = models.Model(inputs, output)
lr_optimizer = tf.keras.optimizers.Adam(learning_rate=lr)
model.compile(optimizer=lr_optimizer, loss="mse", metrics=[rmse, "mse"])
return model
def train_and_evaluate(hparams):
batch_size = hparams["batch_size"]
nbuckets = hparams["nbuckets"]
lr = hparams["lr"]
nnsize = hparams["nnsize"]
eval_data_path = hparams["eval_data_path"]
num_evals = hparams["num_evals"]
num_examples_to_train_on = hparams["num_examples_to_train_on"]
output_dir = hparams["output_dir"]
train_data_path = hparams["train_data_path"]
timestamp = datetime.datetime.now().strftime("%Y%m%d%H%M%S")
savedmodel_dir = os.path.join(output_dir, "savedmodel")
model_export_path = os.path.join(savedmodel_dir, timestamp)
checkpoint_path = os.path.join(output_dir, "checkpoints")
tensorboard_path = os.path.join(output_dir, "tensorboard")
if tf.io.gfile.exists(output_dir):
tf.io.gfile.rmtree(output_dir)
model = build_dnn_model(nbuckets, nnsize, lr, STRING_COLS)
logging.info(model.summary())
trainds = create_train_dataset(train_data_path, batch_size)
evalds = create_eval_dataset(eval_data_path, batch_size)
steps_per_epoch = num_examples_to_train_on // (batch_size * num_evals)
checkpoint_cb = callbacks.ModelCheckpoint(
checkpoint_path, save_weights_only=True, verbose=1
)
tensorboard_cb = callbacks.TensorBoard(tensorboard_path, histogram_freq=1)
history = model.fit(
trainds,
validation_data=evalds,
epochs=num_evals,
steps_per_epoch=max(1, steps_per_epoch),
verbose=2, # 0=silent, 1=progress bar, 2=one line per epoch
callbacks=[checkpoint_cb, tensorboard_cb],
)
# Exporting the model with default serving function.
model.save(model_export_path)
# TODO 1
hp_metric = # TODO: Your code goes here
# TODO 1
hpt = # TODO: Your code goes here
# TODO: Your code goes here
return history
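# A sketch of what the TODOs above could look like (an assumption about the
# intended answer, not the graded solution): report the final validation RMSE
# to AI Platform with the cloudml-hypertune helper so the service can compare
# trials. The metric tag must match hyperparameterMetricTag in the config:
#
#     hp_metric = history.history['val_rmse'][-1]
#     hpt = hypertune.HyperTune()
#     hpt.report_hyperparameter_tuning_metric(
#         hyperparameter_metric_tag='rmse',
#         metric_value=hp_metric,
#         global_step=num_evals)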
%%writefile taxifare/trainer/task.py
"""Argument definitions for model training code in `trainer.model`."""
import argparse
import json
import os
from trainer import model
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument(
"--batch_size",
help="Batch size for training steps",
type=int,
default=32,
)
parser.add_argument(
"--eval_data_path",
help="GCS location pattern of eval files",
required=True,
)
parser.add_argument(
"--nnsize",
help="Hidden layer sizes (provide space-separated sizes)",
nargs="+",
type=int,
default=[32, 8],
)
parser.add_argument(
"--nbuckets",
help="Number of buckets to divide lat and lon with",
type=int,
default=10,
)
parser.add_argument(
"--lr", help="learning rate for optimizer", type=float, default=0.001
)
parser.add_argument(
"--num_evals",
help="Number of times to evaluate model on eval data training.",
type=int,
default=5,
)
parser.add_argument(
"--num_examples_to_train_on",
help="Number of examples to train on.",
type=int,
default=100,
)
parser.add_argument(
"--output_dir",
help="GCS location to write checkpoints and export models",
required=True,
)
parser.add_argument(
"--train_data_path",
help="GCS location pattern of train files containing eval URLs",
required=True,
)
parser.add_argument(
"--job-dir",
help="this model ignores this field, but it is required by gcloud",
default="junk",
)
args, _ = parser.parse_known_args()
hparams = args.__dict__
hparams["output_dir"] = os.path.join(
hparams["output_dir"],
json.loads(os.environ.get("TF_CONFIG", "{}"))
.get("task", {})
.get("trial", ""),
)
print("output_dir", hparams["output_dir"])
model.train_and_evaluate(hparams)
%%writefile hptuning_config.yaml
trainingInput:
scaleTier: BASIC
hyperparameters:
goal: MINIMIZE
maxTrials: # TODO: Your code goes here
maxParallelTrials: # TODO: Your code goes here
hyperparameterMetricTag: # TODO: Your code goes here
enableTrialEarlyStopping: True
params:
- parameterName: lr
# TODO: Your code goes here
- parameterName: nbuckets
# TODO: Your code goes here
- parameterName: batch_size
# TODO: Your code goes here
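# One possible completion of the params section above (a sketch; the ranges are
# illustrative assumptions, and hyperparameterMetricTag must match the tag
# reported from the training code):
#
#     - parameterName: lr
#       type: DOUBLE
#       minValue: 0.0001
#       maxValue: 0.1
#       scaleType: UNIT_LOG_SCALE
#     - parameterName: nbuckets
#       type: INTEGER
#       minValue: 10
#       maxValue: 25
#       scaleType: UNIT_LINEAR_SCALE
#     - parameterName: batch_size
#       type: DISCRETE
#       discreteValues:
#       - 15
#       - 30
#       - 50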
%%bash
# Output directory and jobID
OUTDIR=gs://${BUCKET}/taxifare/trained_model_$(date -u +%y%m%d_%H%M%S)
JOBID=taxifare_$(date -u +%y%m%d_%H%M%S)
echo ${OUTDIR} ${REGION} ${JOBID}
gsutil -m rm -rf ${OUTDIR}
# Model and training hyperparameters
BATCH_SIZE=15
NUM_EXAMPLES_TO_TRAIN_ON=100
NUM_EVALS=10
NBUCKETS=10
LR=0.001
NNSIZE="32 8"
# GCS paths
GCS_PROJECT_PATH=gs://$BUCKET/taxifare
DATA_PATH=$GCS_PROJECT_PATH/data
TRAIN_DATA_PATH=$DATA_PATH/taxi-train*
EVAL_DATA_PATH=$DATA_PATH/taxi-valid*
# TODO
gcloud ai-platform jobs submit training $JOBID \
# TODO: Your code goes here
-- \
--eval_data_path $EVAL_DATA_PATH \
--output_dir $OUTDIR \
--train_data_path $TRAIN_DATA_PATH \
--batch_size $BATCH_SIZE \
--num_examples_to_train_on $NUM_EXAMPLES_TO_TRAIN_ON \
--num_evals $NUM_EVALS \
--nbuckets $NBUCKETS \
--lr $LR \
--nnsize $NNSIZE
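# A sketch of the flags the TODO above is asking for (the values are
# assumptions; adjust paths and versions for your environment):
#
#   --package-path=taxifare/trainer \
#   --module-name=trainer.task \
#   --job-dir=$OUTDIR \
#   --python-version=3.7 \
#   --runtime-version=$TFVERSION \
#   --region=$REGION \
#   --config=hptuning_config.yaml \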
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Masking and padding with Keras
Step2: Introduction
Step3: Masking
Step4: As the printed result shows, the mask is a 2D boolean tensor of shape (batch_size, sequence_length), where each individual False entry indicates that the corresponding timestep should be ignored during processing.
Step5: The same applies to the following Functional API model.
Step6: Passing mask tensors directly to layers
Step8: Supporting masking in your custom layers
Step9: As another example, here is a CustomEmbedding layer that can generate a mask from input values.
Step10: Opting in to mask propagation on compatible layers
Step11: You can now use this custom layer in between a mask-generating layer (like Embedding) and a mask-consuming layer (like LSTM), and it will pass the mask along so that it reaches the mask-consuming layer.
Step12: Writing layers that need mask information
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
raw_inputs = [
[711, 632, 71],
[73, 8, 3215, 55, 927],
[83, 91, 1, 645, 1253, 927],
]
# By default, this will pad using 0s; it is configurable via the
# "value" parameter.
# Note that you could "pre" padding (at the beginning) or
# "post" padding (at the end).
# We recommend using "post" padding when working with RNN layers
# (in order to be able to use the
# CuDNN implementation of the layers).
padded_inputs = tf.keras.preprocessing.sequence.pad_sequences(
raw_inputs, padding="post"
)
print(padded_inputs)
embedding = layers.Embedding(input_dim=5000, output_dim=16, mask_zero=True)
masked_output = embedding(padded_inputs)
print(masked_output._keras_mask)
masking_layer = layers.Masking()
# Simulate the embedding lookup by expanding the 2D input to 3D,
# with embedding dimension of 10.
unmasked_embedding = tf.cast(
tf.tile(tf.expand_dims(padded_inputs, axis=-1), [1, 1, 10]), tf.float32
)
masked_embedding = masking_layer(unmasked_embedding)
print(masked_embedding._keras_mask)
model = keras.Sequential(
[layers.Embedding(input_dim=5000, output_dim=16, mask_zero=True), layers.LSTM(32),]
)
inputs = keras.Input(shape=(None,), dtype="int32")
x = layers.Embedding(input_dim=5000, output_dim=16, mask_zero=True)(inputs)
outputs = layers.LSTM(32)(x)
model = keras.Model(inputs, outputs)
class MyLayer(layers.Layer):
def __init__(self, **kwargs):
super(MyLayer, self).__init__(**kwargs)
self.embedding = layers.Embedding(input_dim=5000, output_dim=16, mask_zero=True)
self.lstm = layers.LSTM(32)
def call(self, inputs):
x = self.embedding(inputs)
# Note that you could also prepare a `mask` tensor manually.
# It only needs to be a boolean tensor
# with the right shape, i.e. (batch_size, timesteps).
mask = self.embedding.compute_mask(inputs)
output = self.lstm(x, mask=mask) # The layer will ignore the masked values
return output
layer = MyLayer()
x = np.random.random((32, 10)) * 100
x = x.astype("int32")
layer(x)
class TemporalSplit(keras.layers.Layer):
    """Split the input tensor into 2 tensors along the time dimension."""
def call(self, inputs):
# Expect the input to be 3D and mask to be 2D, split the input tensor into 2
# subtensors along the time axis (axis 1).
return tf.split(inputs, 2, axis=1)
def compute_mask(self, inputs, mask=None):
# Also split the mask into 2 if it presents.
if mask is None:
return None
return tf.split(mask, 2, axis=1)
first_half, second_half = TemporalSplit()(masked_embedding)
print(first_half._keras_mask)
print(second_half._keras_mask)
class CustomEmbedding(keras.layers.Layer):
def __init__(self, input_dim, output_dim, mask_zero=False, **kwargs):
super(CustomEmbedding, self).__init__(**kwargs)
self.input_dim = input_dim
self.output_dim = output_dim
self.mask_zero = mask_zero
def build(self, input_shape):
self.embeddings = self.add_weight(
shape=(self.input_dim, self.output_dim),
initializer="random_normal",
dtype="float32",
)
def call(self, inputs):
return tf.nn.embedding_lookup(self.embeddings, inputs)
def compute_mask(self, inputs, mask=None):
if not self.mask_zero:
return None
return tf.not_equal(inputs, 0)
layer = CustomEmbedding(10, 32, mask_zero=True)
x = np.random.random((3, 10)) * 9
x = x.astype("int32")
y = layer(x)
mask = layer.compute_mask(x)
print(mask)
class MyActivation(keras.layers.Layer):
def __init__(self, **kwargs):
super(MyActivation, self).__init__(**kwargs)
# Signal that the layer is safe for mask propagation
self.supports_masking = True
def call(self, inputs):
return tf.nn.relu(inputs)
inputs = keras.Input(shape=(None,), dtype="int32")
x = layers.Embedding(input_dim=5000, output_dim=16, mask_zero=True)(inputs)
x = MyActivation()(x) # Will pass the mask along
print("Mask found:", x._keras_mask)
outputs = layers.LSTM(32)(x) # Will receive the mask
model = keras.Model(inputs, outputs)
class TemporalSoftmax(keras.layers.Layer):
def call(self, inputs, mask=None):
broadcast_float_mask = tf.expand_dims(tf.cast(mask, "float32"), -1)
inputs_exp = tf.exp(inputs) * broadcast_float_mask
inputs_sum = tf.reduce_sum(
inputs_exp * broadcast_float_mask, axis=-1, keepdims=True
)
return inputs_exp / inputs_sum
inputs = keras.Input(shape=(None,), dtype="int32")
x = layers.Embedding(input_dim=10, output_dim=32, mask_zero=True)(inputs)
x = layers.Dense(1)(x)
outputs = TemporalSoftmax()(x)
model = keras.Model(inputs, outputs)
y = model(np.random.randint(0, 10, size=(32, 100)), np.random.random((32, 100, 1)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step4: Summarized data functions
Step7: DataAnalysisWidget
Step8: Interactive Data Analysis
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import display, HTML
from ipywidgets import widgets, interactive, IntSlider
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
import qgrid # https://github.com/quantopian/qgrid
# import statsmodels.api as sm
import textwrap
import traceback
plt.style.use('ggplot')
%matplotlib notebook
def summary(data: pd.DataFrame):
# types
df = pd.DataFrame(data.dtypes).rename(columns={0: 'Types'})
# set
df = pd.merge(
df, pd.DataFrame(
data.apply(lambda se : str(sorted(set(se.dropna())))[:1000])
).rename(columns={0: 'Set Values'}),
left_index=True, right_index=True
)
# total observations
df = pd.merge(
df, pd.DataFrame(
data.count()
).rename(columns={0: '# Observations'}),
left_index=True, right_index=True
)
# total of nan
df = pd.merge(
df, pd.DataFrame(data.isnull().sum()).rename(columns={0: '# NaN'}),
left_index=True, right_index=True
)
return df
def make_chart(data: pd.DataFrame, ax: plt.Axes):
    """Example:
        k = ['Sex', 'Survived']
        df[k].groupby(by='Sex').sum()
    """
# display chart
try:
data.plot.bar(ax=ax, stacked=True)
plt.grid(True)
plt.xticks(rotation=45)
plt.tight_layout()
except:
t = '<br/>'.join(textwrap.wrap(traceback.format_exc(), 80))
display(t)
return ax
def process_query(
data: pd.DataFrame,
field_reference: str,
fields_comparison: [str],
bins: int
) -> pd.DataFrame:
labels_reference = []
labels = []
if not (fields_comparison and field_reference):
return data
_data = data[list(fields_comparison)+[field_reference]].copy()
for f in list(fields_comparison)+[field_reference]:
try:
if isinstance(data[f].dtype.type(), np.number):
_data[f], _ = pd.cut(data[f].copy(), bins=bins, retbins=True)
except:
pass
return pd.crosstab(
[_data[f] for f in fields_comparison],
_data[field_reference]
)
class DataAnalysisWidget:
def __init__(
self, data: pd.DataFrame
):
self.data = data.copy()
@staticmethod
def load(filepath: str):
return DataAnalysisWidget(pd.read_csv(filepath))
def prepare_data(self, fields: dict):
        """fields: {'field_name1': {old_value: new_value}}"""
# Survived field
_df = self.data.copy()
# iterate over fields
for i_field, v_field in fields.items():
# iterate over labels
for old_label, new_label in v_field.items():
_mask = _df[i_field]==old_label
self.data.loc[_mask, i_field] = new_label
self.data[i_field] = self.data[i_field].astype(
'category', categories=list(set(self.data[i_field].dropna()))
)
def summary(self):
return display(summary(self.data))
def _interative_show_chart(
self, field_reference: str, fields_comparison: [str], bins
):
ax = plt.figure().gca()
_data = process_query(
data=self.data,
field_reference=field_reference,
fields_comparison=fields_comparison,
bins=bins
)
display(_data)
make_chart(data=_data, ax=ax)
def show_chart(self, field_reference: str, fields_comparison: [str]):
w_bins = IntSlider(min=2, max=10, value=2)
w_fields_comparison = widgets.SelectMultiple(
description='Xs:',
options=[i for i in self.data.keys()],
selected_labels=fields_comparison
)
w_field_reference = widgets.Dropdown(
description='Y:',
options=[i for i in self.data.keys()],
selected_label=field_reference
)
return interactive(
self._interative_show_chart,
field_reference=w_field_reference,
fields_comparison=w_fields_comparison,
bins=w_bins
)
def _interative_show_panel_chart(
self, field_reference: str, fields_comparison: [str], bins
):
ax = plt.figure().gca()
_data = process_query(
data=self.data,
field_reference=field_reference,
fields_comparison=fields_comparison,
bins=bins
)
display(_data)
make_chart(data=_data, ax=ax)
def show_panel_chart(self, field_reference: str):
w_bins = IntSlider(min=2, max=10, value=2)
w_field_reference = widgets.Dropdown(
description='Y:',
options=[i for i in self.data.keys()],
selected_label=field_reference
)
w_fields_comparison = widgets.SelectMultiple(
description='Xs:',
options=[i for i in self.data.keys()],
selected_labels=[
i for i in self.data.keys() if not i == field_reference
]
)
return interactive(
self._interative_show_panel_chart,
field_reference=w_field_reference,
fields_comparison=w_fields_comparison,
bins=w_bins
)
def __repr__(self):
return ''
daw = DataAnalysisWidget.load('data/train.csv')
daw.prepare_data({
'Survived': {1: 'Survived', 0: 'Died'},
'Pclass': {1: 'Class1', 2: 'Class2', 3: 'Class3'},
'Sex': {},
'Embarked': {'C': 'Cherbourg', 'Q': 'Queenstown', 'S': 'Southampton'}
})
daw.summary()
daw.show_chart(
field_reference='Survived',
fields_comparison=['Sex']
)
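# The panel-chart variant defined on the class can be used the same way:
daw.show_panel_chart(field_reference='Survived')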
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The plt interface is what we will use most often, as we shall see throughout this chapter.
Step2: Throughout this section, we will adjust this style as needed.
Step3: After running this command (it needs to be done only once per kernel/session), any cell within the notebook that creates a plot will embed a PNG image of the resulting graphic
Step4: Saving Figures to File
Step5: We now have a file called my_figure.png in the current working directory
Step6: To confirm that it contains what we think it contains, let's use the IPython Image object to display the contents of this file
Step7: In savefig(), the file format is inferred from the extension of the given filename.
Step8: Note that when saving your figure, it's not necessary to use plt.show() or related commands discussed earlier.
Step9: It is important to note that this interface is stateful
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib as mpl
import matplotlib.pyplot as plt
plt.style.use('classic')
%matplotlib inline
import numpy as np
x = np.linspace(0, 10, 100)
fig = plt.figure()
plt.plot(x, np.sin(x), '-')
plt.plot(x, np.cos(x), '--');
fig.savefig('my_figure.png')
!ls -lh my_figure.png
from IPython.display import Image
Image('my_figure.png')
fig.canvas.get_supported_filetypes()
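# Added illustration: since savefig() infers the output format from the file
# extension, the same figure can be written to any of the supported filetypes
# listed above, for example PDF or SVG.
fig.savefig('my_figure.pdf')
fig.savefig('my_figure.svg')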
plt.figure() # create a plot figure
# create the first of two panels and set current axis
plt.subplot(2, 1, 1) # (rows, columns, panel number)
plt.plot(x, np.sin(x))
# create the second panel and set current axis
plt.subplot(2, 1, 2)
plt.plot(x, np.cos(x));
# First create a grid of plots
# ax will be an array of two Axes objects
fig, ax = plt.subplots(2)
# Call plot() method on the appropriate object
ax[0].plot(x, np.sin(x))
ax[1].plot(x, np.cos(x));
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Target Configuration
Step2: Workload Configuration and Execution
Step3: Parse Trace and Profiling Data
Step4: Trace visualization
Step5: Latency Analysis
Step6: Latency Plots
Step7: Activations Analysis
Step8: Activations Plots
Step9: Runtimes Analysis
Step10: Runtimes Plots
|
<ASSISTANT_TASK:>
Python Code:
import logging
from conf import LisaLogging
LisaLogging.setup()
# Generate plots inline
%matplotlib inline
import json
import os
# Support to access the remote target
import devlib
from env import TestEnv
# Support for workload generation
from wlgen import RTA, Ramp
# Support for trace analysis
from trace import Trace
# Support for plotting
import numpy
import pandas as pd
import matplotlib.pyplot as plt
import trappy
# Setup target configuration
my_conf = {
# Target platform and board
"platform" : 'linux',
"board" : 'juno',
"host" : '192.168.0.1',
"password" : 'juno',
# Folder where all the results will be collected
"results_dir" : "TraceAnalysis_TaskLatencies",
# Define devlib modules to load
"exclude_modules" : [ 'hwmon' ],
# FTrace events to collect for all the tests configuration which have
# the "ftrace" flag enabled
"ftrace" : {
"events" : [
"sched_switch",
"sched_wakeup",
"sched_load_avg_cpu",
"sched_load_avg_task",
],
"buffsize" : 100 * 1024,
},
# Tools required by the experiments
"tools" : [ 'trace-cmd', 'rt-app' ],
# Comment this line to calibrate RTApp in your own platform
"rtapp-calib" : {"0": 360, "1": 142, "2": 138, "3": 352, "4": 352, "5": 353},
}
# Initialize a test environment using:
te = TestEnv(my_conf, wipe=False, force_new=True)
target = te.target
def experiment(te):
# Create and RTApp RAMP task
rtapp = RTA(te.target, 'ramp', calibration=te.calibration())
rtapp.conf(kind='profile',
params={
'ramp' : Ramp(
start_pct = 60,
end_pct = 20,
delta_pct = 5,
time_s = 0.5).get()
})
# FTrace the execution of this workload
te.ftrace.start()
rtapp.run(out_dir=te.res_dir)
te.ftrace.stop()
# Collect and keep track of the trace
trace_file = os.path.join(te.res_dir, 'trace.dat')
te.ftrace.get_trace(trace_file)
# Collect and keep track of the Kernel Functions performance data
stats_file = os.path.join(te.res_dir, 'trace.stats')
te.ftrace.get_stats(stats_file)
# Dump platform descriptor
te.platform_dump(te.res_dir)
experiment(te)
# Base folder where tests folder are located
res_dir = te.res_dir
logging.info('Content of the output folder %s', res_dir)
!tree {res_dir}
with open(os.path.join(res_dir, 'platform.json'), 'r') as fh:
platform = json.load(fh)
logging.info('LITTLE cluster max capacity: %d',
platform['nrg_model']['little']['cpu']['cap_max'])
trace_file = os.path.join(res_dir, 'trace.dat')
trace = Trace(platform, trace_file, events=my_conf['ftrace']['events'])
trappy.plotter.plot_trace(trace.ftrace)
print trace.data_frame.latency_df.__doc__
# Report full set of task status informations available from the trace
trace.data_frame.latency_df('ramp').head()
# Report information on sched_switch events
df = trace.data_frame.trace_event('sched_switch')
df.head()
print trace.data_frame.latency_wakeup_df.__doc__
# Report WAKEUP events and their duration
trace.data_frame.latency_wakeup_df('ramp').head()
print trace.data_frame.latency_preemption_df.__doc__
# Report PREEMPTION events and their duration
trace.data_frame.latency_preemption_df('ramp').head()
print trace.analysis.latency.plotLatency.__doc__
# Plot latency events for a specified task
latency_stats_df = trace.analysis.latency.plotLatency('ramp')
# Plot statistics on task latencies
latency_stats_df.T
print trace.analysis.latency.plotLatencyBands.__doc__
# Plot latency events for a specified task
trace.analysis.latency.plotLatencyBands('ramp')
# Zoom into a spefific time frame
trace.setXTimeRange(4.28,4.29)
trace.analysis.latency.plotLatencyBands('ramp')
print trace.data_frame.activations_df.__doc__
# Report the sequence of activations intervals:
# Time: wakeup time
# activation_internal: time interval wrt previous wakeup
trace.data_frame.activations_df('ramp').head()
print trace.analysis.latency.plotActivations.__doc__
# Plot activation internvals for a specified task
activations_df = trace.analysis.latency.plotActivations('ramp', threshold_ms=120)
# Plot statistics on task activation intervals
activations_df.T
print trace.data_frame.runtimes_df.__doc__
# Report the sequence of running times:
# Time: task block time (i.e. sleep or exit)
# running_time: cumulative ruinning times since last wakeup event
trace.data_frame.runtimes_df('ramp').head()
print trace.analysis.latency.plotRuntimes.__doc__
# Plot activation internvals for a specified task
runtimes_df = trace.analysis.latency.plotRuntimes('ramp', threshold_ms=120)
# Plot statistics on task running times
runtimes_df.T
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Two kinds of vector products we'll see
Step2: 2) Use the function outer(vector1, vector2) to find the outer product of rv and cv. Does the order of the arguments matter?
Step3: II. Complex vectors
Step4: The complex conjugate changes the sign of the imaginary part
Step5: 3) Use dot() and .conjugate() to find the dot product of v1 and its own conjugate
Step6: III. Matrices
Step7: Can also use the matrix type which is like array but restricts to 2D. Also, matrix adds .H and .I methods for hermitian and inverse, respectively. For more information, see Stack Overflow question #4151128
Step8: Examples
Step9: Interpret this result
Step10: Signs are opposite compared to the book, but it turns out that (-) doesn't matter in the interpretation of eigenvectors
Step11: Part 2
Step12: Vector products in QuTiP
Step13: Matrix in QuTiP
Step14: Practice
Step15: Problem 1.8
|
<ASSISTANT_TASK:>
Python Code:
from numpy import array, dot, outer, sqrt, matrix
from numpy.linalg import eig, eigvals
from matplotlib.pyplot import hist
%matplotlib inline
rv = array([1,2]) # a row vector
rv
cv = array([[3],[4]]) # a column vector
cv
dot(rv,cv)
dot(cv,rv)
outer(rv,cv)
outer(cv,rv)
# Complex numbers in python have a j term:
a = 1+2j
v1 = array([1+2j, 3+2j, 5+1j, 4+0j])
v1.conjugate()
dot(v1.conjugate(),v1)
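# Added illustration: the inner product of a complex vector with its own
# conjugate is real and non-negative, so its square root gives the vector norm.
norm_v1 = sqrt(dot(v1.conjugate(), v1).real)
norm_v1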
# a two-dimensional array
m1 = array([[2,1],[2,1]])
m1
# can find transpose with the T method:
m1.T
# find the eigenvalues and eigenvectors of a matrix:
eig(m1)
m2 = matrix( [[2,1],[2,1]])
m2.H
eig(m2)
# use a question mark to get help on a command
eig?
M14 = array([[0,1],[-2,3]])
eig(M14)
1/sqrt(2) # this is the value for both entries in the first eigenvector
1/sqrt(5) # this is the first value in the second eigenvector
2/sqrt(5) # this is the second value in the second eigenvector
eigvals(M14)
M16 = array([[0,-1j],[1j,0]])
evals, evecs = eig(M16)
evecs
evecs[:,0]
evecs[:,1]
dot(evecs[:,0].conjugate(),evecs[:,1])
from qutip import *
# Create a row vector:
qv = Qobj([[1,2]])
qv
# Find the corresponding column vector
qv.dag()
qv2 = Qobj([[1+2j,4-1j]])
qv2
qv2.dag()
qv2*qv2.dag() # inner product (dot product)
qv2.dag()*qv2 # outer product
qm = Qobj([[1,2],[2,1]])
qm
qm.eigenenergies() # in quantum (as we will learn) eigenvalues often correspond to energy levels
evals, evecs = qm.eigenstates()
evecs
evecs[0]
# Solution
n, bins, patches = hist([10,13,14,14,6,8,7,9,12,14,13,11,10,7,7],bins=5,range=(5,14))
# Solution
n
# Solution
pvals = n/n.sum()
# Solution
from sympy import *
c,a,x = symbols("c a x")
Q.positive((c,a))
first = integrate(c*exp(-a*x),(x,0,oo),conds='none')
print("first = ",first)
second = integrate(a*exp(-a*x),(x,0,oo),conds='none')
print("second = ",second)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Create CFNCluster
Step2: After you have verified the project information, you can execute the pipeline. When the job is done, you will see the log information returned from the cluster.
Step3: Run the pipeline with the specific operation.
Step4: To delete the cluster, you just need to set the cluster name and call the below function.
|
<ASSISTANT_TASK:>
Python Code:
import os
import sys
sys.path.append(os.getcwd().replace("notebooks", "cfncluster"))
## S3 input and output address.
s3_input_files_address = "s3://path/to/input folder"
s3_output_files_address = "s3://path/to/output folder"
## CFNCluster name
your_cluster_name = "testonco"
## The private key pair for accessing cluster.
private_key = "/path/to/private_key.pem"
## If delete cfncluster after job is done.
delete_cfncluster = False
import CFNClusterManager, ConnectionManager
## Create a new cluster
master_ip_address = CFNClusterManager.create_cfn_cluster(cluster_name=your_cluster_name)
ssh_client = ConnectionManager.connect_master(hostname=master_ip_address,
username="ec2-user",
private_key_file=private_key)
import PipelineManager
## You can call this function to check the disease names included in the annotation.
PipelineManager.check_disease_name()
## Define the disease name from the below list of disease names.
disease_name = "BreastCancer"
import PipelineManager
## define operation
## calculate: calculate correlation;"
## oslom_cluster: clustering the gene moudules;"
## print_oslom_cluster_json: print json files;"
## all: run all operations;"
operation = "all"
## run the pipeline
PipelineManager.run_analysis(ssh_client, disease_name, operation, s3_input_files_address, s3_output_files_address)
import CFNClusterManager
if delete_cfncluster == True:
CFNClusterManager.delete_cfn_cluster(cluster_name=your_cluster_name)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A graph
|
<ASSISTANT_TASK:>
Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
# tutoriel_graphe
noeuds = {0: 'le', 1: 'silences', 2: 'quelques', 3: '\xe9crit', 4: 'non-dits.', 5: 'Et', 6: 'risque', 7: '\xe0', 8: "qu'elle,", 9: 'parfois', 10: 'aim\xe9', 11: 'lorsque', 12: 'que', 13: 'plus', 14: 'les', 15: 'Minelli,', 16: "n'oublierai", 17: 'je', 18: 'prises', 19: 'sa', 20: 'la', 21: 'jeune,', 22: "qu'elle,", 23: '\xe0', 24: 'ont', 25: "j'ai", 26: 'chemin', 27: '\xe9tranger', 28: 'lente', 29: 'de', 30: 'voir', 31: 'quand', 32: 'la', 33: 'recul,', 34: 'de', 35: 'trop', 36: 'ce', 37: 'Je', 38: 'Il', 39: "l'extr\xeame", 40: "J'ai", 41: 'silences,', 42: "qu'elle,", 43: 'le', 44: 'trace,', 45: 'avec', 46: 'seras', 47: 'dire,', 48: 'femme', 49: 'soit'}
arcs = {(3, 15): None, (46, 47): None, (42, 33): None, (35, 45): None, (1, 14): None, (22, 26): None, (26, 28): None, (43, 29): None, (40, 41): None, (29, 44): None, (17, 3): None, (32, 37): None, (24, 19): None, (46, 34): None, (11, 19): None, (34, 49): None, (22, 2): None, (37, 48): None, (14, 12): None, (3, 10): None, (5, 18): None, (12, 24): None, (34, 32): None, (45, 39): None, (37, 26): None, (33, 45): None, (34, 47): None, (36, 31): None, (29, 47): None, (13, 11): None, (12, 21): None, (2, 16): None, (5, 4): None, (33, 35): None, (28, 49): None, (25, 49): None, (21, 0): None, (3, 13): None, (18, 24): None, (12, 7): None, (13, 15): None, (11, 1): None, (16, 23): None, (37, 45): None, (27, 32): None, (32, 41): None, (8, 24): None, (10, 1): None, (2, 24): None, (24, 11): None, (2, 14): None, (47, 36): None, (48, 39): None, (30, 25): None, (30, 43): None, (15, 14): None, (26, 27): None, (6, 8): None, (20, 10): None, (19, 17): None, (5, 7): None, (44, 25): None, (27, 38): None, (2, 0): None, (3, 18): None, (3, 9): None, (25, 33): None, (42, 48): None, (2, 15): None, (26, 48): None, (26, 38): None, (7, 8): None, (8, 4): None}
from mlstatpy.graph.graphviz_helper import draw_graph_graphviz
draw_graph_graphviz(noeuds, arcs, "image.png")
from IPython.display import Image
Image("image.png", width=400)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The next step would be to create an instance of the System class. This instance is used as a handle to the simulation system. At any time, only one instance of the System class can exist.
Step2: It can be used to store and manipulate the crucial system parameters like the time step and the size of the simulation box (<tt>time_step</tt>, and <tt>box_l</tt>).
Step3: Placing and accessing particles
Step4: The particle properties can be accessed using standard numpy slicing syntax
Step5: Many objects in ESPResSo have a string representation, and thus can be displayed via python's <tt>print</tt> function
Step6: Setting up non-bonded interactions
Step7: In a periodic system it is in general not straight forward to calculate all non-bonded interactions. Due to the periodicity and to speed up calculations usually a cut-off $r_{cut}$ for infinite-range potentials like Lennard-Jones is applied, such that $V(r>r_c) = 0$. The potential can be shifted to zero at the cutoff value to ensure continuity using the <tt>shift='auto'</tt> option of espressomd.interactions.LennardJonesInteraction.
Step8: Exercise
Step9: Exercise
Step10: Choosing the thermodynamic ensemble, thermostat
Step11: Exercise
Step12: Exercise
Step13: Since the ensemble average $\langle E_\text{kin}\rangle=3/2 N k_B T$ is related to the temperature,
Step14: Exercise
Step15: We plot the autocorrelation function and the fit to visually confirm a roughly exponential decay
Step16: For statistical analysis, we only want uncorrelated samples.
Step17: For comparison to literature values we need to account for the error made by the LJ truncation.
Step18: This value differs quite strongly from the uncorrected one but agrees well with the literature value $U^i = -5.38$ given in Table 1 of Ref. <a href='#[6]'>[6]</a>.
Step19: Exercise
Step20: Exercise
Step21: We now plot the experimental radial distribution.
|
<ASSISTANT_TASK:>
Python Code:
import espressomd
required_features = ["LENNARD_JONES"]
espressomd.assert_features(required_features)
from espressomd import observables, accumulators, analyze
# Importing other relevant python modules
import numpy as np
import matplotlib.pyplot as plt
from scipy import optimize
np.random.seed(42)
plt.rcParams.update({'font.size': 22})
# System parameters
N_PART = 200
DENSITY = 0.75
BOX_L = np.power(N_PART / DENSITY, 1.0 / 3.0) * np.ones(3)
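# Added sketch of the omitted Exercise 1 solution (hedged): create the single
# System instance with the computed box length.
system = espressomd.System(box_l=BOX_L)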
# Test solution of Exercise 1
assert isinstance(system, espressomd.System)
SKIN = 0.4
TIME_STEP = 0.01
system.time_step = TIME_STEP
system.cell_system.skin = SKIN
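# Added sketch of the omitted exercise solution (hedged): place N_PART particles
# at random positions inside the simulation box.
for _ in range(N_PART):
    system.part.add(pos=np.random.random(3) * BOX_L)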
# Test that now we have indeed N_PART particles in the system
assert len(system.part) == N_PART
# Access position of a single particle
print("position of particle with id 0:", system.part[0].pos)
# Iterate over the first five particles for the purpose of demonstration.
# For accessing all particles, use a slice: system.part[:]
for i in range(5):
print("id", i, "position:", system.part[i].pos)
print("id", i, "velocity:", system.part[i].v)
# Obtain all particle positions
cur_pos = system.part[:].pos
print(system.part[0])
# use LJ units: EPS=SIG=1
LJ_EPS = 1.0
LJ_SIG = 1.0
LJ_CUT = 2.5 * LJ_SIG
assert (BOX_L - 2 * SKIN > LJ_CUT).all()
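# Added sketch of the omitted exercise solution (hedged): define the Lennard-Jones
# interaction between particles of type 0, shifted to zero at the cutoff.
system.non_bonded_inter[0, 0].lennard_jones.set_params(
    epsilon=LJ_EPS, sigma=LJ_SIG, cutoff=LJ_CUT, shift='auto')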
F_TOL = 1e-2
DAMPING = 30
MAX_STEPS = 10000
MAX_DISPLACEMENT = 0.01 * LJ_SIG
EM_STEP = 10
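# Added sketch of the omitted exercise solution (hedged): remove particle overlaps
# with the steepest descent integrator until the maximum force drops below F_TOL.
system.integrator.set_steepest_descent(f_max=0, gamma=DAMPING,
                                       max_displacement=MAX_DISPLACEMENT)
for _ in range(MAX_STEPS // EM_STEP):
    system.integrator.run(EM_STEP)
    if np.max(np.linalg.norm(system.part[:].f, axis=1)) < F_TOL:
        break
system.integrator.set_vv()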
# check that after the exercise the total energy is negative
assert system.analysis.energy()['total'] < 0
# reset clock
system.time = 0.
# Parameters for the Langevin thermostat
# reduced temperature T* = k_B T / LJ_EPS
TEMPERATURE = 0.827 # value from Tab. 1 in [6]
GAMMA = 1.0
# Integration parameters
STEPS_PER_SAMPLE = 20
N_SAMPLES = 1000
times = np.zeros(N_SAMPLES)
e_total = np.zeros_like(times)
e_kin = np.zeros_like(times)
T_inst = np.zeros_like(times)
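# Added sketch of the omitted exercise solution (hedged): switch on the Langevin
# thermostat and sample energies and the instantaneous temperature while integrating.
system.thermostat.set_langevin(kT=TEMPERATURE, gamma=GAMMA, seed=42)
for i in range(N_SAMPLES):
    energies = system.analysis.energy()
    times[i] = system.time
    e_total[i] = energies['total']
    e_kin[i] = energies['kinetic']
    T_inst[i] = 2. / 3. * e_kin[i] / N_PART
    system.integrator.run(STEPS_PER_SAMPLE)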
plt.figure(figsize=(10, 6))
plt.plot(times, T_inst, label='$T_{\\mathrm{inst}}$')
plt.plot(times, [TEMPERATURE] * len(times), label='$T$ set by thermostat')
plt.legend()
plt.xlabel('t')
plt.ylabel('T')
plt.show()
# Use only the data after the equilibration period in the beginning
warmup_time = 15
e_total = e_total[times > warmup_time]
e_kin = e_kin[times > warmup_time]
times = times[times > warmup_time]
times -= times[0]
def autocor(x):
x = np.asarray(x)
mean = x.mean()
var = np.var(x)
xp = x - mean
corr = analyze.autocorrelation(xp) / var
return corr
def fit_correlation_time(data, ts):
data = np.asarray(data)
data /= data[0]
def fitfn(t, t_corr): return np.exp(-t / t_corr)
popt, pcov = optimize.curve_fit(fitfn, ts, data)
return popt[0]
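# Added sketch of the omitted exercise solution (hedged): estimate the correlation
# time of the total energy and derive a sampling interval of a few correlation times.
e_total_autocor = autocor(e_total)
corr_time = fit_correlation_time(e_total_autocor.copy(), times)
steps_per_uncorrelated_sample = int(np.ceil(3. * corr_time / system.time_step))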
print(steps_per_uncorrelated_sample)
plt.figure(figsize=(10, 6))
plt.plot(times, e_total_autocor, label='data')
plt.plot(times, np.exp(-times / corr_time), label='exponential fit')
plt.plot(2 * [steps_per_uncorrelated_sample * system.time_step],
[min(e_total_autocor), 1], label='sampling interval')
plt.xlim(left=-2, right=50)
plt.ylim(top=1.2, bottom=-0.15)
plt.legend()
plt.xlabel('t')
plt.ylabel('total energy autocorrelation')
plt.show()
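# Added sketch of the omitted exercise solution (hedged): average the potential
# energy per particle over (approximately) uncorrelated samples and estimate the
# standard error of the mean.
e_pot_per_particle = (e_total - e_kin) / N_PART
stride = max(1, steps_per_uncorrelated_sample // STEPS_PER_SAMPLE)
uncorrelated_samples = e_pot_per_particle[::stride]
mean_pot_energy = np.mean(uncorrelated_samples)
SEM_pot_energy = np.std(uncorrelated_samples, ddof=1) / np.sqrt(len(uncorrelated_samples))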
print(f'mean potential energy = {mean_pot_energy:.2f} +- {SEM_pot_energy:.2f}')
tail_energy_per_particle = 8. / 3. * np.pi * DENSITY * LJ_EPS * \
LJ_SIG**3 * (1. / 3. * (LJ_SIG / LJ_CUT)**9 - (LJ_SIG / LJ_CUT)**3)
mean_pot_energy_corrected = mean_pot_energy + tail_energy_per_particle
print(f'corrected mean potential energy = {mean_pot_energy_corrected:.2f}')
# Parameters for the radial distribution function
N_BINS = 100
R_MIN = 0.0
R_MAX = system.box_l[0] / 2.0
system.integrator.run(N_SAMPLES * steps_per_uncorrelated_sample)
fig, ax = plt.subplots(figsize=(10, 7))
ax.plot(rs, rdf, label='simulated')
plt.legend()
plt.xlabel('r')
plt.ylabel('RDF')
# comparison to literature
def calc_literature_rdf(rs, temperature, density, LJ_eps, LJ_sig):
T_star = temperature / LJ_eps
rho_star = density * LJ_sig**3
# expression of the factors Pi from Equations 2-8 with coefficients qi from Table 1
# expression for a,g
def P(q1, q2, q3, q4, q5, q6, q7, q8, q9): return \
q1 + q2 * np.exp(-q3 * T_star) + q4 * np.exp(-q5 * T_star) + q6 / rho_star + q7 / rho_star**2 \
+ q8 * np.exp(-q3 * T_star) / rho_star**3 + q9 * \
np.exp(-q5 * T_star) / rho_star**4
a = P(9.24792, -2.64281, 0.133386, -1.35932, 1.25338,
0.45602, -0.326422, 0.045708, -0.0287681)
g = P(0.663161, -0.243089, 1.24749, -2.059, 0.04261,
1.65041, -0.343652, -0.037698, 0.008899)
# expression for c,k
def P(q1, q2, q3, q4, q5, q6, q7, q8): return \
q1 + q2 * np.exp(-q3 * T_star) + q4 * rho_star + q5 * rho_star**2 + q6 * rho_star**3 \
+ q7 * rho_star**4 + q8 * rho_star**5
c = P(-0.0677912, -1.39505, 0.512625, 36.9323, -
36.8061, 21.7353, -7.76671, 1.36342)
k = P(16.4821, -0.300612, 0.0937844, -61.744,
145.285, -168.087, 98.2181, -23.0583)
# expression for b,h
def P(q1, q2, q3): return q1 + q2 * np.exp(-q3 * rho_star)
b = P(-8.33289, 2.1714, 1.00063)
h = P(0.0325039, -1.28792, 2.5487)
# expression for d,l
def P(q1, q2, q3, q4): return q1 + q2 * \
np.exp(-q3 * rho_star) + q4 * rho_star
d = P(-26.1615, 27.4846, 1.68124, 6.74296)
l = P(-6.7293, -59.5002, 10.2466, -0.43596)
# expression for s
def P(q1, q2, q3, q4, q5, q6, q7, q8): return \
(q1 + q2 * rho_star + q3 / T_star + q4 / T_star**2 + q5 / T_star**3) \
/ (q6 + q7 * rho_star + q8 * rho_star**2)
s = P(1.25225, -1.0179, 0.358564, -0.18533,
0.0482119, 1.27592, -1.78785, 0.634741)
# expression for m
def P(q1, q2, q3, q4, q5, q6): return \
q1 + q2 * np.exp(-q3 * T_star) + q4 / T_star + \
q5 * rho_star + q6 * rho_star**2
m = P(-5.668, -3.62671, 0.680654, 0.294481, 0.186395, -0.286954)
# expression for n
def P(q1, q2, q3): return q1 + q2 * np.exp(-q3 * T_star)
n = P(6.01325, 3.84098, 0.60793)
# fitted expression (=theoretical curve)
# slightly more than 1 to smooth out the discontinuity in the range [1.0, 1.02]
theo_rdf_cutoff = 1.02
theo_rdf = 1 + 1 / rs**2 * (np.exp(-(a * rs + b)) * np.sin(c * rs + d)
+ np.exp(-(g * rs + h)) * np.cos(k * rs + l))
theo_rdf[np.nonzero(rs <= theo_rdf_cutoff)] = \
s * np.exp(-(m * rs + n)**4)[np.nonzero(rs <= theo_rdf_cutoff)]
return theo_rdf
theo_rdf = calc_literature_rdf(rs, TEMPERATURE, DENSITY, LJ_EPS, LJ_SIG)
ax.plot(rs, theo_rdf, label='literature')
ax.legend()
fig
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Ice Albedo
Step7: 1.4. Atmospheric Coupling Variables
Step8: 1.5. Oceanic Coupling Variables
Step9: 1.6. Prognostic Variables
Step10: 2. Key Properties --> Software Properties
Step11: 2.2. Code Version
Step12: 2.3. Code Languages
Step13: 3. Grid
Step14: 3.2. Adaptive Grid
Step15: 3.3. Base Resolution
Step16: 3.4. Resolution Limit
Step17: 3.5. Projection
Step18: 4. Glaciers
Step19: 4.2. Description
Step20: 4.3. Dynamic Areal Extent
Step21: 5. Ice
Step22: 5.2. Grounding Line Method
Step23: 5.3. Ice Sheet
Step24: 5.4. Ice Shelf
Step25: 6. Ice --> Mass Balance
Step26: 7. Ice --> Mass Balance --> Basal
Step27: 7.2. Ocean
Step28: 8. Ice --> Mass Balance --> Frontal
Step29: 8.2. Melting
Step30: 9. Ice --> Dynamics
Step31: 9.2. Approximation
Step32: 9.3. Adaptive Timestep
Step33: 9.4. Timestep
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'noaa-gfdl', 'gfdl-esm4', 'landice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Network Architecture
Step2: Training
Step3: Denoising
Step4: Checking out the performance
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, shape=(None, 28, 28, 1))
targets_ = tf.placeholder(tf.float32, shape=(None, 28, 28, 1))
### Encoder
conv1 = tf.layers.conv2d(inputs_, 16, (3, 3), padding='SAME', activation=tf.nn.relu)
# Now 28x28x16
assert conv1.get_shape().as_list() == [None, 28, 28, 16], print(conv1.get_shape().as_list())
maxpool1 = tf.layers.max_pooling2d(conv1, (2, 2), (2, 2), padding='SAME')
# Now 14x14x16
assert maxpool1.get_shape().as_list() == [None, 14, 14, 16], print(maxpool1.get_shape().as_list())
conv2 = tf.layers.conv2d(maxpool1, 8, (3, 3), padding='SAME', activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, (2, 2), (2, 2), padding='SAME')
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, (3, 3), padding='SAME', activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, (2, 2), (2, 2), padding='SAME')
# Now 4x4x8
assert encoded.get_shape().as_list() == [None, 4, 4, 8], print(encoded.get_shape().as_list())
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7, 7))
assert upsample1.get_shape().as_list() == [None, 7, 7, 8], print(upsample1.get_shape().as_list())
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, 8, (3, 3), padding='SAME', activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14, 14))
assert upsample2.get_shape().as_list() == [None, 14, 14, 8], print(upsample2.get_shape().as_list())
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, (3, 3), padding='SAME', activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28, 28))
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, (3, 3), padding='SAME', activation=tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(conv6, 1, (3, 3), padding='SAME', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=targets_)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
upsample2.get_shape()
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 32, (3, 3), padding='SAME', activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, (2, 2), (2, 2), padding='SAME')
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, (3, 3), padding='SAME', activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, (2, 2), (2, 2), padding='SAME')
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, (3, 3), padding='SAME', activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, (2, 2), (2, 2), padding='SAME')
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7, 7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, (3, 3), padding='SAME', activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14, 14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, (3, 3), padding='SAME', activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28, 28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, (3, 3), padding='SAME', activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, (3, 3), padding='SAME', activation=None)
# Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=targets_)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
# Set's how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Computing the RSS
Step2: Computing the response
|
<ASSISTANT_TASK:>
Python Code:
import numpy
T = numpy.array([10, 13, 17, 20, 19, 21, 14, 8, 5, 10])
wT = T * [0, 0, 0, 0, 0, 1, 1, 0, 0, 0]
X = numpy.zeros((10, 3))
X[:, 0] = numpy.ones(10).T
X[:, 1] = T.T
X[:, 2] = (wT ** 2).T
Y = numpy.array([1, 1.2, 1.5, 1.4, 1.6, 2.1, 1.7, 0.9, 0.7, 1.1]).T
Theta = numpy.linalg.inv(X.T @ X) @ X.T @ Y
print('Theta =', Theta)
RSS = (Y - X @ Theta).T @ (Y - X @ Theta)
print(RSS)
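# Optional cross-check (added illustration): numpy.linalg.lstsq solves the same
# least-squares problem without explicitly inverting X.T @ X, which is more
# numerically stable than the normal-equation form used above.
Theta_lstsq, _, _, _ = numpy.linalg.lstsq(X, Y, rcond=None)
print('Theta (lstsq) =', Theta_lstsq)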
Xa = numpy.array([1, 15, 225])
wYa = Xa.T @ Theta
print('t = 15, w = 1:\t', wYa)
Xb = numpy.array([1, 15, 0])
wYb = Xb.T @ Theta
print('t = 15, w = 0:\t', wYb)
Xc = numpy.array([1, 40, 1600])
wYc = Xc.T @ Theta
print('t = 40, w = 1:\t', wYc)
Xd = numpy.array([1, 40, 0])
wYd = Xd.T @ Theta
print('t = 40, w = 0:\t', wYd)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Building a Custom Widget - Email widget
Step2: sync=True traitlets
Step3: Define the view
Step4: Render method
Step5: Test
Step6: Making the widget stateful
Step7: Accessing the model from the view
Step8: Dynamic updates
Step9: This allows us to update the value from the Python kernel to the views. To propagate updates from the front-end back to the Python kernel (when the input is not disabled), we use the model.set method followed by model.save_changes.
Step10: Test
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
from traitlets import Unicode, Bool, validate, TraitError
from ipywidgets import DOMWidget, register
@register
class Email(DOMWidget):
_view_name = Unicode('EmailView').tag(sync=True)
_view_module = Unicode('email_widget').tag(sync=True)
_view_module_version = Unicode('0.1.0').tag(sync=True)
%%javascript
define('email_widget', ["@jupyter-widgets/base"], function(widgets) {
});
%%javascript
require.undef('email_widget');
define('email_widget', ["@jupyter-widgets/base"], function(widgets) {
// Define the EmailView
var EmailView = widgets.DOMWidgetView.extend({
});
return {
EmailView: EmailView
}
});
%%javascript
require.undef('email_widget');
define('email_widget', ["@jupyter-widgets/base"], function(widgets) {
var EmailView = widgets.DOMWidgetView.extend({
// Render the view.
render: function() {
this.email_input = document.createElement('input');
this.email_input.type = 'email';
this.email_input.value = 'example@example.com';
this.email_input.disabled = true;
this.el.appendChild(this.email_input);
},
});
return {
EmailView: EmailView
};
});
Email()
from traitlets import Unicode, Bool, validate, TraitError
from ipywidgets import DOMWidget, register
@register
class Email(DOMWidget):
_view_name = Unicode('EmailView').tag(sync=True)
_view_module = Unicode('email_widget').tag(sync=True)
_view_module_version = Unicode('0.1.0').tag(sync=True)
# Attributes
value = Unicode('example@example.com', help="The email value.").tag(sync=True)
disabled = Bool(False, help="Enable or disable user changes.").tag(sync=True)
# Basic validator for the email value
@validate('value')
def _valid_value(self, proposal):
if proposal['value'].count("@") != 1:
raise TraitError('Invalid email value: it must contain an "@" character')
if proposal['value'].count(".") == 0:
raise TraitError('Invalid email value: it must contain at least one "." character')
return proposal['value']
%%javascript
require.undef('email_widget');
define('email_widget', ["@jupyter-widgets/base"], function(widgets) {
var EmailView = widgets.DOMWidgetView.extend({
// Render the view.
render: function() {
this.email_input = document.createElement('input');
this.email_input.type = 'email';
this.email_input.value = this.model.get('value');
this.email_input.disabled = this.model.get('disabled');
this.el.appendChild(this.email_input);
},
});
return {
EmailView: EmailView
};
});
Email(value='john.doe@domain.com', disabled=True)
%%javascript
require.undef('email_widget');
define('email_widget', ["@jupyter-widgets/base"], function(widgets) {
var EmailView = widgets.DOMWidgetView.extend({
// Render the view.
render: function() {
this.email_input = document.createElement('input');
this.email_input.type = 'email';
this.email_input.value = this.model.get('value');
this.email_input.disabled = this.model.get('disabled');
this.el.appendChild(this.email_input);
// Python -> JavaScript update
this.model.on('change:value', this.value_changed, this);
this.model.on('change:disabled', this.disabled_changed, this);
},
value_changed: function() {
this.email_input.value = this.model.get('value');
},
disabled_changed: function() {
this.email_input.disabled = this.model.get('disabled');
},
});
return {
EmailView: EmailView
};
});
%%javascript
require.undef('email_widget');
define('email_widget', ["@jupyter-widgets/base"], function(widgets) {
var EmailView = widgets.DOMWidgetView.extend({
// Render the view.
render: function() {
this.email_input = document.createElement('input');
this.email_input.type = 'email';
this.email_input.value = this.model.get('value');
this.email_input.disabled = this.model.get('disabled');
this.el.appendChild(this.email_input);
// Python -> JavaScript update
this.model.on('change:value', this.value_changed, this);
this.model.on('change:disabled', this.disabled_changed, this);
// JavaScript -> Python update
this.email_input.onchange = this.input_changed.bind(this);
},
value_changed: function() {
this.email_input.value = this.model.get('value');
},
disabled_changed: function() {
this.email_input.disabled = this.model.get('disabled');
},
input_changed: function() {
this.model.set('value', this.email_input.value);
this.model.save_changes();
},
});
return {
EmailView: EmailView
};
});
email = Email(value='john.doe@domain.com', disabled=False)
email
email.value
email.value = 'jane.doe@domain.com'
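# Added illustration (handler name is illustrative): changes made in the browser
# are pushed back through model.set/save_changes and can be observed on the
# Python side with the standard traitlets observe mechanism.
def on_value_change(change):
    print('email changed from {} to {}'.format(change['old'], change['new']))
email.observe(on_value_change, names='value')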
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's investigate the data that we just loaded. A dataset contains the original data (digits.images), a 2 dimensional data array and some metadata about the dataset.
Step2: The original data is, however, always flattened to a one-dimensional array per image. This makes digits.data a two-dimensional array with one row per image in digits.images and one column per pixel of the flattened 8x8 image.
Step3: Our goal is now to train a machine learning algorithm with the given digits dataset, so that it can use what it has learned to later predict or classify new digits. In this case, we have 10 target classes (the digits 0-9).
Step4: For example, for digits.images[3], we have digits.target[3] == 3, as the digits.images[3] contains the number 3.
Step5: The algorithm that we will use to do the classification is a so-called estimator. Well-known mathematical estimators include
Step6: We now train the classifier with all but the last item in the dataset (using Python's [:-1] slicing syntax)
Step7: Now we can predict new values; in particular, we can ask the classifier what the digit of the last image in the digits dataset is, which we have not used to train the classifier
|
<ASSISTANT_TASK:>
Python Code:
from sklearn import datasets
digits = datasets.load_digits()
%matplotlib inline
from matplotlib import pyplot
# Show first 10 images
for i in xrange(10):
pyplot.figure(i+1)
ax = pyplot.gca() # gca = get current axis
ax.imshow(digits.images[i],cmap=pyplot.cm.binary)
digits.data
digits.target
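# Added illustration (shapes assume the standard scikit-learn digits dataset):
# each 8x8 image in digits.images is flattened into one 64-element row of digits.data.
print(digits.images.shape)  # e.g. (1797, 8, 8)
print(digits.data.shape)    # e.g. (1797, 64)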
%matplotlib inline
print "Class for digits.images[3] =", digits.target[3]
pyplot.imshow(digits.images[3],cmap=pyplot.cm.binary)
from sklearn import svm
clf = svm.SVC(gamma=0.001, C=100)
clf.fit(digits.data[:-1], digits.target[:-1])
result = clf.predict(digits.data[-1:])
%matplotlib inline
print "Class for digits.images[-1] =", result[0]
pyplot.imshow(digits.images[-1],cmap=pyplot.cm.binary)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First we need to define materials that will be used in the problem. We'll create three distinct materials for water, clad and fuel.
Step2: With our materials, we can now create a Materials object that can be exported to an actual XML file.
Step3: Now let's move on to the geometry. Our problem will have three regions for the fuel, the clad, and the surrounding coolant. The first step is to create the bounding surfaces -- in this case two cylinders and six reflective planes.
Step4: With the surfaces defined, we can now create cells that are defined by intersections of half-spaces created by the surfaces.
Step5: OpenMC requires that there is a "root" universe. Let us create a root cell that is filled by the pin cell universe and then assign it to the root universe.
Step6: We now must create a geometry that is assigned a root universe and export it to XML.
Step7: Next, we must define simulation parameters. In this case, we will use 10 inactive batches and 40 active batches each with 10,000 particles.
Step8: Now we are finally ready to make use of the openmc.mgxs module to generate multi-group cross sections! First, let's define "coarse" 2-group and "fine" 8-group structures using the built-in EnergyGroups class.
Step9: Now we will instantiate a variety of MGXS objects needed to run an OpenMOC simulation to verify the accuracy of our cross sections. In particular, we define transport, fission, nu-fission, nu-scatter and chi cross sections for each of the three cells in the fuel pin with the 8-group structure as our energy groups.
Step10: Next, we showcase the use of OpenMC's tally precision trigger feature in conjunction with the openmc.mgxs module. In particular, we will assign a tally trigger of 1E-2 on the standard deviation for each of the tallies used to compute multi-group cross sections.
Step11: Now, we must loop over all cells to set the cross section domains to the various cells - fuel, clad and moderator - included in the geometry. In addition, we will set each cross section to tally cross sections on a per-nuclide basis through the use of the MGXS class' boolean by_nuclide instance attribute.
Step12: Now we have a complete set of inputs, so we can go ahead and run our simulation.
Step13: Tally Data Processing
Step14: The statepoint is now ready to be analyzed by our multi-group cross sections. We simply have to load the tallies from the StatePoint into each object as follows and our MGXS objects will compute the cross sections for us under-the-hood.
Step15: That's it! Our multi-group cross sections are now ready for the big spotlight. This time we have cross sections in three distinct spatial zones - fuel, clad and moderator - on a per-nuclide basis.
Step16: Our multi-group cross sections are capable of summing across all nuclides to provide us with macroscopic cross sections as well.
Step17: Although a printed report is nice, it is not scalable or flexible. Let's extract the microscopic cross section data for the moderator as a Pandas DataFrame.
Step18: Next, we illustrate how one can easily take multi-group cross sections and condense them down to a coarser energy group structure. The MGXS class includes a get_condensed_xs(...) method which takes an EnergyGroups parameter with a coarse(r) group structure and returns a new MGXS condensed to the coarse groups. We illustrate this process below using the 2-group structure created earlier.
Step19: Group condensation is as simple as that! We now have a new coarse 2-group TransportXS in addition to our original 8-group TransportXS. Let's inspect the 2-group TransportXS by printing it to the screen and extracting a Pandas DataFrame as we have already learned how to do.
Step20: Verification with OpenMOC
Step21: Next, we can inject the multi-group cross sections into the equivalent fuel pin cell OpenMOC geometry.
Step22: We are now ready to run OpenMOC to verify our cross-sections from OpenMC.
Step23: We report the eigenvalues computed by OpenMC and OpenMOC here together to summarize our results.
Step24: As a sanity check, let's run a simulation with the coarse 2-group cross sections to ensure that they also produce a reasonable result.
Step25: There is a non-trivial bias in both the 2-group and 8-group cases. In the case of a pin cell, one can show that these biases do not converge to <100 pcm with more particle histories. For heterogeneous geometries, additional measures must be taken to address the following three sources of bias
Step26: Another useful type of illustration is scattering matrix sparsity structures. First, we extract Pandas DataFrames for the H-1 and O-16 scattering matrices.
Step27: Matplotlib's imshow routine can be used to plot the matrices to illustrate their sparsity structures.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('seaborn-dark')
import openmoc
import openmc
import openmc.mgxs as mgxs
import openmc.data
from openmc.openmoc_compatible import get_openmoc_geometry
%matplotlib inline
# 1.6% enriched fuel
fuel = openmc.Material(name='1.6% Fuel')
fuel.set_density('g/cm3', 10.31341)
fuel.add_nuclide('U235', 3.7503e-4)
fuel.add_nuclide('U238', 2.2625e-2)
fuel.add_nuclide('O16', 4.6007e-2)
# borated water
water = openmc.Material(name='Borated Water')
water.set_density('g/cm3', 0.740582)
water.add_nuclide('H1', 4.9457e-2)
water.add_nuclide('O16', 2.4732e-2)
# zircaloy
zircaloy = openmc.Material(name='Zircaloy')
zircaloy.set_density('g/cm3', 6.55)
zircaloy.add_nuclide('Zr90', 7.2758e-3)
# Instantiate a Materials collection
materials_file = openmc.Materials([fuel, water, zircaloy])
# Export to "materials.xml"
materials_file.export_to_xml()
# Create cylinders for the fuel and clad
fuel_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, R=0.39218)
clad_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, R=0.45720)
# Create boundary planes to surround the geometry
min_x = openmc.XPlane(x0=-0.63, boundary_type='reflective')
max_x = openmc.XPlane(x0=+0.63, boundary_type='reflective')
min_y = openmc.YPlane(y0=-0.63, boundary_type='reflective')
max_y = openmc.YPlane(y0=+0.63, boundary_type='reflective')
min_z = openmc.ZPlane(z0=-0.63, boundary_type='reflective')
max_z = openmc.ZPlane(z0=+0.63, boundary_type='reflective')
# Create a Universe to encapsulate a fuel pin
pin_cell_universe = openmc.Universe(name='1.6% Fuel Pin')
# Create fuel Cell
fuel_cell = openmc.Cell(name='1.6% Fuel')
fuel_cell.fill = fuel
fuel_cell.region = -fuel_outer_radius
pin_cell_universe.add_cell(fuel_cell)
# Create a clad Cell
clad_cell = openmc.Cell(name='1.6% Clad')
clad_cell.fill = zircaloy
clad_cell.region = +fuel_outer_radius & -clad_outer_radius
pin_cell_universe.add_cell(clad_cell)
# Create a moderator Cell
moderator_cell = openmc.Cell(name='1.6% Moderator')
moderator_cell.fill = water
moderator_cell.region = +clad_outer_radius
pin_cell_universe.add_cell(moderator_cell)
# Create root Cell
root_cell = openmc.Cell(name='root cell')
root_cell.region = +min_x & -max_x & +min_y & -max_y
root_cell.fill = pin_cell_universe
# Create root Universe
root_universe = openmc.Universe(universe_id=0, name='root universe')
root_universe.add_cell(root_cell)
# Create Geometry and set root Universe
openmc_geometry = openmc.Geometry(root_universe)
# Export to "geometry.xml"
openmc_geometry.export_to_xml()
# OpenMC simulation parameters
batches = 50
inactive = 10
particles = 10000
# Instantiate a Settings object
settings_file = openmc.Settings()
settings_file.batches = batches
settings_file.inactive = inactive
settings_file.particles = particles
settings_file.output = {'tallies': True}
# Create an initial uniform spatial source distribution over fissionable zones
bounds = [-0.63, -0.63, -0.63, 0.63, 0.63, 0.63]
uniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True)
settings_file.source = openmc.Source(space=uniform_dist)
# Activate tally precision triggers
settings_file.trigger_active = True
settings_file.trigger_max_batches = settings_file.batches * 4
# Export to "settings.xml"
settings_file.export_to_xml()
# Instantiate a "coarse" 2-group EnergyGroups object
coarse_groups = mgxs.EnergyGroups([0., 0.625, 20.0e6])
# Instantiate a "fine" 8-group EnergyGroups object
fine_groups = mgxs.EnergyGroups([0., 0.058, 0.14, 0.28,
0.625, 4.0, 5.53e3, 821.0e3, 20.0e6])
# Extract all Cells filled by Materials
openmc_cells = openmc_geometry.get_all_material_cells().values()
# Create dictionary to store multi-group cross sections for all cells
xs_library = {}
# Instantiate 8-group cross sections for each cell
for cell in openmc_cells:
xs_library[cell.id] = {}
xs_library[cell.id]['transport'] = mgxs.TransportXS(groups=fine_groups)
xs_library[cell.id]['fission'] = mgxs.FissionXS(groups=fine_groups)
xs_library[cell.id]['nu-fission'] = mgxs.FissionXS(groups=fine_groups, nu=True)
xs_library[cell.id]['nu-scatter'] = mgxs.ScatterMatrixXS(groups=fine_groups, nu=True)
xs_library[cell.id]['chi'] = mgxs.Chi(groups=fine_groups)
# Create a tally trigger for +/- 0.01 on each tally used to compute the multi-group cross sections
tally_trigger = openmc.Trigger('std_dev', 1E-2)
# Add the tally trigger to each of the multi-group cross section tallies
for cell in openmc_cells:
for mgxs_type in xs_library[cell.id]:
xs_library[cell.id][mgxs_type].tally_trigger = tally_trigger
# Instantiate an empty Tallies object
tallies_file = openmc.Tallies()
# Iterate over all cells and cross section types
for cell in openmc_cells:
for rxn_type in xs_library[cell.id]:
# Set the cross sections domain to the cell
xs_library[cell.id][rxn_type].domain = cell
# Tally cross sections by nuclide
xs_library[cell.id][rxn_type].by_nuclide = True
# Add OpenMC tallies to the tallies file for XML generation
for tally in xs_library[cell.id][rxn_type].tallies.values():
tallies_file.append(tally, merge=True)
# Export to "tallies.xml"
tallies_file.export_to_xml()
# Run OpenMC
openmc.run()
# Load the last statepoint file
sp = openmc.StatePoint('statepoint.082.h5')
# Iterate over all cells and cross section types
for cell in openmc_cells:
for rxn_type in xs_library[cell.id]:
xs_library[cell.id][rxn_type].load_from_statepoint(sp)
nufission = xs_library[fuel_cell.id]['nu-fission']
nufission.print_xs(xs_type='micro', nuclides=['U235', 'U238'])
nufission = xs_library[fuel_cell.id]['nu-fission']
nufission.print_xs(xs_type='macro', nuclides='sum')
nuscatter = xs_library[moderator_cell.id]['nu-scatter']
df = nuscatter.get_pandas_dataframe(xs_type='micro')
df.head(10)
# Extract the 8-group transport cross section for the fuel
fine_xs = xs_library[fuel_cell.id]['transport']
# Condense to the 2-group structure
condensed_xs = fine_xs.get_condensed_xs(coarse_groups)
condensed_xs.print_xs()
df = condensed_xs.get_pandas_dataframe(xs_type='micro')
df
# Create an OpenMOC Geometry from the OpenMC Geometry
openmoc_geometry = get_openmoc_geometry(sp.summary.geometry)
# Get all OpenMOC cells in the gometry
openmoc_cells = openmoc_geometry.getRootUniverse().getAllCells()
# Inject multi-group cross sections into OpenMOC Materials
for cell_id, cell in openmoc_cells.items():
# Ignore the root cell
if cell.getName() == 'root cell':
continue
# Get a reference to the Material filling this Cell
openmoc_material = cell.getFillMaterial()
# Set the number of energy groups for the Material
openmoc_material.setNumEnergyGroups(fine_groups.num_groups)
# Extract the appropriate cross section objects for this cell
transport = xs_library[cell_id]['transport']
nufission = xs_library[cell_id]['nu-fission']
nuscatter = xs_library[cell_id]['nu-scatter']
chi = xs_library[cell_id]['chi']
# Inject NumPy arrays of cross section data into the Material
# NOTE: Sum across nuclides to get macro cross sections needed by OpenMOC
openmoc_material.setSigmaT(transport.get_xs(nuclides='sum').flatten())
openmoc_material.setNuSigmaF(nufission.get_xs(nuclides='sum').flatten())
openmoc_material.setSigmaS(nuscatter.get_xs(nuclides='sum').flatten())
openmoc_material.setChi(chi.get_xs(nuclides='sum').flatten())
# Generate tracks for OpenMOC
track_generator = openmoc.TrackGenerator(openmoc_geometry, num_azim=128, azim_spacing=0.1)
track_generator.generateTracks()
# Run OpenMOC
solver = openmoc.CPUSolver(track_generator)
solver.computeEigenvalue()
# Print report of keff and bias with OpenMC
openmoc_keff = solver.getKeff()
openmc_keff = sp.k_combined[0]
bias = (openmoc_keff - openmc_keff) * 1e5
print('openmc keff = {0:1.6f}'.format(openmc_keff))
print('openmoc keff = {0:1.6f}'.format(openmoc_keff))
print('bias [pcm]: {0:1.1f}'.format(bias))
openmoc_geometry = get_openmoc_geometry(sp.summary.geometry)
openmoc_cells = openmoc_geometry.getRootUniverse().getAllCells()
# Inject multi-group cross sections into OpenMOC Materials
for cell_id, cell in openmoc_cells.items():
# Ignore the root cell
if cell.getName() == 'root cell':
continue
openmoc_material = cell.getFillMaterial()
openmoc_material.setNumEnergyGroups(coarse_groups.num_groups)
# Extract the appropriate cross section objects for this cell
transport = xs_library[cell_id]['transport']
nufission = xs_library[cell_id]['nu-fission']
nuscatter = xs_library[cell_id]['nu-scatter']
chi = xs_library[cell_id]['chi']
# Perform group condensation
transport = transport.get_condensed_xs(coarse_groups)
nufission = nufission.get_condensed_xs(coarse_groups)
nuscatter = nuscatter.get_condensed_xs(coarse_groups)
chi = chi.get_condensed_xs(coarse_groups)
# Inject NumPy arrays of cross section data into the Material
openmoc_material.setSigmaT(transport.get_xs(nuclides='sum').flatten())
openmoc_material.setNuSigmaF(nufission.get_xs(nuclides='sum').flatten())
openmoc_material.setSigmaS(nuscatter.get_xs(nuclides='sum').flatten())
openmoc_material.setChi(chi.get_xs(nuclides='sum').flatten())
# Generate tracks for OpenMOC
track_generator = openmoc.TrackGenerator(openmoc_geometry, num_azim=128, azim_spacing=0.1)
track_generator.generateTracks()
# Run OpenMOC
solver = openmoc.CPUSolver(track_generator)
solver.computeEigenvalue()
# Print report of keff and bias with OpenMC
openmoc_keff = solver.getKeff()
openmc_keff = sp.k_combined[0]
bias = (openmoc_keff - openmc_keff) * 1e5
print('openmc keff = {0:1.6f}'.format(openmc_keff))
print('openmoc keff = {0:1.6f}'.format(openmoc_keff))
print('bias [pcm]: {0:1.1f}'.format(bias))
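# Illustrative aside (not in the original notebook): if the fine/coarse comparison is
# repeated for more group structures, the bias formula above can be wrapped in a small
# helper so it is computed the same way every time.
def keff_bias_pcm(moc_keff, mc_keff):
    # bias in pcm between an OpenMOC eigenvalue and the reference OpenMC eigenvalue
    return (moc_keff - mc_keff) * 1e5
print('recomputed bias [pcm]: {0:1.1f}'.format(keff_bias_pcm(openmoc_keff, openmc_keff)))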
# Create a figure of the U-235 continuous-energy fission cross section
fig = openmc.plot_xs('U235', ['fission'])
# Get the axis to use for plotting the MGXS
ax = fig.gca()
# Extract energy group bounds and MGXS values to plot
fission = xs_library[fuel_cell.id]['fission']
energy_groups = fission.energy_groups
x = energy_groups.group_edges
y = fission.get_xs(nuclides=['U235'], order_groups='decreasing', xs_type='micro')
y = np.squeeze(y)
# Fix low energy bound
x[0] = 1.e-5
# Extend the mgxs values array for matplotlib's step plot
y = np.insert(y, 0, y[0])
# Create a step plot for the MGXS
ax.plot(x, y, drawstyle='steps', color='r', linewidth=3)
ax.set_title('U-235 Fission Cross Section')
ax.legend(['Continuous', 'Multi-Group'])
ax.set_xlim((x.min(), x.max()))
# Construct a Pandas DataFrame for the microscopic nu-scattering matrix
nuscatter = xs_library[moderator_cell.id]['nu-scatter']
df = nuscatter.get_pandas_dataframe(xs_type='micro')
# Slice DataFrame in two for each nuclide's mean values
h1 = df[df['nuclide'] == 'H1']['mean']
o16 = df[df['nuclide'] == 'O16']['mean']
# Cast DataFrames as NumPy arrays
h1 = h1.values
o16 = o16.values
# Reshape arrays to 2D matrix for plotting
h1.shape = (fine_groups.num_groups, fine_groups.num_groups)
o16.shape = (fine_groups.num_groups, fine_groups.num_groups)
# Create plot of the H-1 scattering matrix
fig = plt.subplot(121)
fig.imshow(h1, interpolation='nearest', cmap='jet')
plt.title('H-1 Scattering Matrix')
plt.xlabel('Group Out')
plt.ylabel('Group In')
# Create plot of the O-16 scattering matrix
fig2 = plt.subplot(122)
fig2.imshow(o16, interpolation='nearest', cmap='jet')
plt.title('O-16 Scattering Matrix')
plt.xlabel('Group Out')
plt.ylabel('Group In')
# Show the plot on screen
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Processing data
Step2: An additional variable named name_in_grossbuchstaben now contains the entered name, but written entirely in uppercase.
Step3: A (more) useful program
Step4: To count the number of letters, the function len() (short for "length") is used. Unlike upper(), which is a "method" called directly on the name, len() is a function to which a variable is passed (similar to print()).
Step5: The number of letters in the entered name can now be printed with print().
Step6: A simple calculator
Step7: These two numbers can now be added with the plus operator +
Step8: The result can again be printed with print()
Step9: 1000 + 2000 = 10002000
Step10: Now we are dealing with real numbers that we can actually calculate with
Step11: The calculation now works correctly!
|
<ASSISTANT_TASK:>
Python Code:
name = input("Wie heisst du? ")
name_in_grossbuchstaben = name.upper()
print(name_in_grossbuchstaben)
name = input("Wie heisst du? ")
anzahl_buchstaben = len(name)
print(anzahl_buchstaben)
zahl_1 = input("Bitte gib eine Zahl ein: ")
zahl_2 = input("Bitte gib noch eine Zahl ein: ")
summe = zahl_1 + zahl_2
print(summe)
zahl_1 = int(zahl_1)
zahl_2 = int(zahl_2)
summe = zahl_1 + zahl_2
print(summe)
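# Illustrative aside (not part of the original tutorial): float() works the same way
# as int() when the entered values may contain decimal points. The literals below are
# arbitrary example values standing in for user input.
zahl_mit_komma = float("2.5")
print(zahl_mit_komma + 0.5)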
zahl_1 = 5
zahl_2 = 3
# Exercise: extend the code here (the operations are filled in so that it runs):
summe = zahl_1 + zahl_2
differenz = zahl_1 - zahl_2
produkt = zahl_1 * zahl_2
quotient = zahl_1 / zahl_2
potenz = zahl_1 ** zahl_2
# From here on you can leave the code as it is:
print(f'{zahl_1} + {zahl_2} = {summe}')
print(f'{zahl_1} - {zahl_2} = {differenz}')
print(f'{zahl_1} * {zahl_2} = {produkt}')
print(f'{zahl_1} / {zahl_2} = {quotient}')
print(f'{zahl_1} ** {zahl_2} = {potenz}')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Find the groups in a molecule
Step2: Calculate thermodynamic properties of the molecule
Step3: Find the groups in a molecule
Step4: Calculate thermodynamic properties of the molecule
|
<ASSISTANT_TASK:>
Python Code:
import pgradd
print(pgradd.__file__)
from pgradd.GroupAdd import GroupLibrary
import pgradd.ThermoChem
lib = GroupLibrary.Load('GRWSurface2018')
groups = lib.GetDescriptors('C(CC([Pt])([Pt])[Pt])([Pt])([Pt])[Pt]')
print('Group Frequency')
print('----- ---------')
for x in groups:
print('{0:20s} {1:g}'.format(x, groups[x]))
thermochem = lib.Estimate(groups, 'thermochem')
S = thermochem.get_S(T=298.15, units='cal/mol/K')
H = thermochem.get_H(T=298.15, units='kcal/mol')
G = thermochem.get_G(T=298.15, units='kJ/mol')
HoRT = thermochem.get_HoRT(298.15)
print('Entropy(298 K) = {0:5.2f} [cal/mol/K]'.format(S))
print('Enthalpy(298 K) = {0:5.2f} [kcal/mol]'.format(H))
print('Gibb''s(298 K) = {0:5.2f} [kJ/mol]'.format(G))
print('Dimensionless Enthalpy(298 K) = {0:5.2f}'.format(HoRT))
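# Illustrative aside (not part of the original example): since get_H was already used
# above, the same call can be swept over a few temperatures to see the trend directly.
# The temperatures below are arbitrary example values.
for T_example in (300, 500, 800, 1200):
    print('H({0} K) = {1:6.2f} kcal/mol'.format(T_example, thermochem.get_H(T=T_example, units='kcal/mol')))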
import numpy as np
from pmutt import plot_1D
from matplotlib import pyplot as plt
T = np.linspace(300, 1500)
fig1, ax1 = plot_1D(thermochem, x_name='T', x_values=T,
methods=('get_H', 'get_S', 'get_G'),
get_H_kwargs={'units':'kcal/mol'},
get_S_kwargs={'units':'cal/mol/K'},
get_G_kwargs={'units': 'kJ/mol'})
fig1.set_figheight(10)
ax1[0].set_ylabel('H (kcal/mol)')
ax1[1].set_ylabel('S (cal/mol/K)')
ax1[2].set_ylabel('G (kJ/mol)')
ax1[0].set_xlabel('Temperature [K]')
ax1[1].set_xlabel('Temperature [K]')
ax1[2].set_xlabel('Temperature [K]')
plt.tight_layout()
plt.show()
from pmutt.empirical.nasa import Nasa
from pmutt.io.thermdat import write_thermdat
N = []
N.append(Nasa.from_model(thermochem, name='CCH2C(S)', T_low=300, T_high=1500, phase='S', elements={'C': 3, 'H': 2}))
print(write_thermdat(N))
groups = lib.GetDescriptors('C([Pt])(O)C')
print('Group Frequency')
print('----- ---------')
for x in groups:
print('{0:20s} {1:g}'.format(x, groups[x]))
thermochem = lib.Estimate(groups, 'thermochem')
S = thermochem.get_S(T=298.15, units='cal/mol/K')
H = thermochem.get_H(T=298.15, units='kcal/mol')
G = thermochem.get_G(T=298.15, units='kJ/mol')
HoRT = thermochem.get_HoRT(298.15)
print('Entropy(298 K) = {0:5.2f} [cal/mol/K]'.format(S))
print('Enthalpy(298 K) = {0:5.2f} [kcal/mol]'.format(H))
print('Gibb''s(298 K) = {0:5.2f} [kJ/mol]'.format(G))
print('Dimensionless Enthalpy(298 K) = {0:5.2f}'.format(HoRT))
import numpy as np
from pmutt import plot_1D
from matplotlib import pyplot as plt
T = np.linspace(300, 1500)
fig1, ax1 = plot_1D(thermochem, x_name='T', x_values=T,
methods=('get_H', 'get_S', 'get_G'),
get_H_kwargs={'units':'kcal/mol'},
get_S_kwargs={'units':'cal/mol/K'},
get_G_kwargs={'units': 'kJ/mol'})
fig1.set_figheight(10)
ax1[0].set_ylabel('H (kcal/mol)')
ax1[1].set_ylabel('S (cal/mol/K)')
ax1[2].set_ylabel('G (kJ/mol)')
ax1[0].set_xlabel('Temperature [K]')
ax1[1].set_xlabel('Temperature [K]')
ax1[2].set_xlabel('Temperature [K]')
plt.tight_layout()
plt.show()
from pmutt.empirical.nasa import Nasa
from pmutt.io.thermdat import write_thermdat
N = []
N.append(Nasa.from_model(thermochem, name='CH3CHOH(S)', T_low=300, T_high=1500, phase='S', elements={'C': 2, 'H': 5, 'O':1}))
print(write_thermdat(N))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Make sure to restart your kernel to ensure this change has taken place.
Step2: To start, we'll create our list of users, movies and features. While the users and movies represent elements in our database, for a content-based filtering method the features of the movies are likely hand-engineered and rely on domain knowledge to provide the best embedding space. Here we use the categories of Action, Sci-Fi, Comedy, Cartoon, and Drama to describe our movies (and thus our users).
Step3: Initialize our users, movie ratings, and features
Step4: Computing the user feature matrix
Step5: Next we normalize each user feature vector to sum to 1. Normalizing isn't strictly necessary, but it makes it so that rating magnitudes will be comparable between users.
Step6: Ranking feature relevance for each user
Step7: Determining movie recommendations.
Step8: The computation above finds the similarity measure between each user and each movie in our database. To focus only on the ratings for new (unseen) movies, we apply a mask to the users_ratings matrix.
Step9: Finally, let's grab and print out the top 2 rated movies for each user.
|
<ASSISTANT_TASK:>
Python Code:
!python3 -m pip freeze | grep tensorflow==2 || \
    python3 -m pip install tensorflow
import tensorflow as tf
users = ["Ryan", "Danielle", "Vijay", "Chris"]
movies = [
"Star Wars",
"The Dark Knight",
"Shrek",
"The Incredibles",
"Bleu",
"Memento",
]
features = ["Action", "Sci-Fi", "Comedy", "Cartoon", "Drama"]
num_users = len(users)
num_movies = len(movies)
num_feats = len(features)
num_recommendations = 2
# Each row represents a user's rating for the different movies.
users_movies = tf.constant(
[
[4, 6, 8, 0, 0, 0],
[0, 0, 10, 0, 8, 3],
[0, 6, 0, 0, 3, 7],
[10, 9, 0, 5, 0, 2],
],
dtype=tf.float32,
)
# Features of the movies one-hot encoded.
# e.g. columns could represent
# ['Action', 'Sci-Fi', 'Comedy', 'Cartoon', 'Drama']
movies_feats = tf.constant(
[
[1, 1, 0, 0, 1],
[1, 1, 0, 0, 0],
[0, 0, 1, 1, 0],
[1, 0, 1, 1, 0],
[0, 0, 0, 0, 1],
[1, 0, 0, 0, 1],
],
dtype=tf.float32,
)
users_feats = tf.matmul(users_movies, movies_feats)
users_feats
users_feats = users_feats / tf.reduce_sum(users_feats, axis=1, keepdims=True)
users_feats
top_users_features = tf.nn.top_k(users_feats, num_feats)[1]
top_users_features
for i in range(num_users):
feature_names = [features[int(index)] for index in top_users_features[i]]
print(f"{users[i]}: {feature_names}")
users_ratings = tf.matmul(users_feats, tf.transpose(movies_feats))
users_ratings
users_unseen_movies = tf.equal(users_movies, tf.zeros_like(users_movies))
ignore_matrix = tf.zeros_like(tf.cast(users_movies, tf.float32))
users_ratings_new = tf.where(users_unseen_movies, users_ratings, ignore_matrix)
users_ratings_new
top_movies = tf.nn.top_k(users_ratings_new, num_recommendations)[1]
top_movies
for i in range(num_users):
movie_names = [movies[index] for index in top_movies[i]]
print(f"{users[i]}: {movie_names}")
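# Illustrative extension (not part of the original walkthrough): the same pipeline can
# score a brand-new user against the existing movie features. The ratings vector below
# is made up purely for demonstration.
new_user_ratings = tf.constant([[0, 5, 0, 0, 7, 8]], dtype=tf.float32)
new_user_feats = tf.matmul(new_user_ratings, movies_feats)
new_user_feats = new_user_feats / tf.reduce_sum(new_user_feats, axis=1, keepdims=True)
new_user_scores = tf.matmul(new_user_feats, tf.transpose(movies_feats))
print(new_user_scores)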
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Selecting cell bags
Step2: Note
Step3: Observations and dimensions
Step4: Note that the value of h1.cellvalobs(ob) is actually a pair composed of the heading cell and its value. This is because we can over-ride its output value without actually rewriting the original table, as we shall see.
Step5: Conversion segments and output
Step6: WDA Technical CSV
Step7: Note If you were wondering what the processTIMEUNIT=False was all about in the ConversionSegment constructor, it's a feature to help the WDA output automatically set the TIMEUNIT column according to whether it should be Year, Month, or Quarter.
|
<ASSISTANT_TASK:>
Python Code:
# Load in the functions
from databaker.framework import *
# Load the spreadsheet
tabs = loadxlstabs("example1.xls")
# Select the first table
tab = tabs[0]
print("The unordered bag of cells for this table looks like:")
print(tab)
# Preview the table as a table inline
savepreviewhtml(tab)
bb = tab.is_bold()
print("The cells with bold font are", bb)
print("The", len(bb), "cells immediately below these bold font cells are", bb.shift(DOWN))
cc = tab.filter("Cars")
print("The single cell with the text 'Cars' is", cc)
cc.assert_one() # proves there is only one cell in this bag
print("Everything in the column below the 'Cars' cell is", cc.fill(DOWN))
hcc = tab.filter("Cars").expand(DOWN)
print("If you wanted to include the 'Cars' heading, then use expand", hcc)
print("You can print the cells in row-column order if you don't mind unfriendly code")
shcc = sorted(hcc.unordered_cells, key=lambda Cell:(Cell.y, Cell.x))
print(shcc)
print("It can be easier to see the set of cells coloured within the table")
savepreviewhtml(hcc)
"All the cells that have an 'o' in them:", tab.regex(".*?o")
# We get the array of observations by selecting its corner and expanding down and to the right
obs = tab.excel_ref('B4').expand(DOWN).expand(RIGHT)
savepreviewhtml(obs)
# the two main headings are in a row and a column
r1 = tab.excel_ref('B3').expand(RIGHT)
r2 = tab.excel_ref('A3').fill(DOWN)
# here we pass in a list containing two cell bags and get two colours
savepreviewhtml([r1, r2])
# HDim is made from a bag of cells, a name, and an instruction on how to look it up
# from an observation cell.
h1 = HDim(r1, "Vehicles", DIRECTLY, ABOVE)
# Here is an example cell
cc = tab.excel_ref('C5')
# You can preview a dimension as well as just a cell bag
savepreviewhtml([h1, cc])
# !!! This is the important look-up stage from a cell into a dimension
print("Cell", cc, "matches", h1.cellvalobs(cc), "in dimension", h1.label)
# You can start to see through to the final result of all this work when you
# print out the lookup values for every observation in the table at once.
for ob in obs:
print("Obs", ob, "maps to", h1.cellvalobs(ob))
# You can change an output value like this:
h1.AddCellValueOverride("Cars", "Horses")
for ob in obs:
print("Obs", ob, "maps to", h1.cellvalobs(ob))
# Alternatively, you can override by the reference to a single cell to a value
# (This will work even if the cell C3 is empty, which helps with filling in blank headings)
h1.AddCellValueOverride(tab.excel_ref('C3'), "Submarines")
for ob in obs:
print("Obs", ob, "maps to", h1.cellvalobs(ob))
# You can override the header value for an individual observation element.
b4cell = tab.excel_ref('B4')
h1.AddCellValueOverride(b4cell, "Clouds")
for ob in obs:
print("Obs", ob, "maps to", h1.cellvalobs(ob))
# The preview table shows how things have changed
savepreviewhtml([h1, obs])
wob = tab.excel_ref('A1')
print("Wrong-Obs", wob, "maps to", h1.cellvalobs(wob), " <--- ie Nothing")
h1.AddCellValueOverride(None, "Who knows?")
print("After giving a default value Wrong-Obs", wob, "now maps to", h1.cellvalobs(wob))
# The default even works if the cell bag set is empty. In which case we have a special
# constant case that maps every observation to the same value
h3 = HDimConst("Category", "Beatles")
for ob in obs:
print("Obs", ob, "maps to", h3.cellvalobs(ob))
dimensions = [
HDim(tab.excel_ref('B1'), TIME, CLOSEST, ABOVE),
HDim(r1, "Vehicles", DIRECTLY, ABOVE),
HDim(r2, "Name", DIRECTLY, LEFT),
HDimConst("Category", "Beatles")
]
c1 = ConversionSegment(obs, dimensions, processTIMEUNIT=False)
savepreviewhtml(c1)
# If the table is too big, we can preview it in another file that can be opened in another browser window.
# (It's very useful if you are using two computer screens.)
savepreviewhtml(c1, "preview.html", verbose=False)
print("Looking up all the observations against all the dimensions and print them out")
for ob in c1.segment:
print(c1.lookupobs(ob))
df = c1.topandas()
df
print(writetechnicalCSV(None, c1))
# This is how to write to a file
writetechnicalCSV("exampleWDA.csv", c1)
# We can read this file back in to a list of pandas dataframes
dfs = readtechnicalCSV("exampleWDA.csv")
print(dfs[0])
# See that the `2014` no longer ends with `.0`
c1 = ConversionSegment(obs, dimensions, processTIMEUNIT=True)
c1.topandas()
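# Illustrative aside (not in the original tutorial): the flat pandas table can also be
# written straight to disk; the file name here is an arbitrary example.
c1.topandas().to_csv("exampleFlat.csv", index=False)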
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Prepare the data
Step2: Remove the first record (because it is not a valid data example) and a trailing 'dot' character from the class labels.
Step3: We store the training and test data splits locally as CSV files.
Step4: Define dataset metadata
Step5: Create tf.data.Dataset objects for training and validation
Step6: Create model inputs
Step7: Encode input features
Step8: Deep Neural Decision Tree
Step9: Deep Neural Decision Forest
Step10: Finally, let's set up the code that will train and evaluate the model.
Step11: Experiment 1
Step12: Experiment 2
|
<ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
import numpy as np
import pandas as pd
from tensorflow import keras
from tensorflow.keras import layers
import math
CSV_HEADER = [
"age",
"workclass",
"fnlwgt",
"education",
"education_num",
"marital_status",
"occupation",
"relationship",
"race",
"gender",
"capital_gain",
"capital_loss",
"hours_per_week",
"native_country",
"income_bracket",
]
train_data_url = (
"https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data"
)
train_data = pd.read_csv(train_data_url, header=None, names=CSV_HEADER)
test_data_url = (
"https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.test"
)
test_data = pd.read_csv(test_data_url, header=None, names=CSV_HEADER)
print(f"Train dataset shape: {train_data.shape}")
print(f"Test dataset shape: {test_data.shape}")
test_data = test_data[1:]
test_data.income_bracket = test_data.income_bracket.apply(
lambda value: value.replace(".", "")
)
train_data_file = "train_data.csv"
test_data_file = "test_data.csv"
train_data.to_csv(train_data_file, index=False, header=False)
test_data.to_csv(test_data_file, index=False, header=False)
# A list of the numerical feature names.
NUMERIC_FEATURE_NAMES = [
"age",
"education_num",
"capital_gain",
"capital_loss",
"hours_per_week",
]
# A dictionary of the categorical features and their vocabulary.
CATEGORICAL_FEATURES_WITH_VOCABULARY = {
"workclass": sorted(list(train_data["workclass"].unique())),
"education": sorted(list(train_data["education"].unique())),
"marital_status": sorted(list(train_data["marital_status"].unique())),
"occupation": sorted(list(train_data["occupation"].unique())),
"relationship": sorted(list(train_data["relationship"].unique())),
"race": sorted(list(train_data["race"].unique())),
"gender": sorted(list(train_data["gender"].unique())),
"native_country": sorted(list(train_data["native_country"].unique())),
}
# A list of the columns to ignore from the dataset.
IGNORE_COLUMN_NAMES = ["fnlwgt"]
# A list of the categorical feature names.
CATEGORICAL_FEATURE_NAMES = list(CATEGORICAL_FEATURES_WITH_VOCABULARY.keys())
# A list of all the input features.
FEATURE_NAMES = NUMERIC_FEATURE_NAMES + CATEGORICAL_FEATURE_NAMES
# A list of column default values for each feature.
COLUMN_DEFAULTS = [
[0.0] if feature_name in NUMERIC_FEATURE_NAMES + IGNORE_COLUMN_NAMES else ["NA"]
for feature_name in CSV_HEADER
]
# The name of the target feature.
TARGET_FEATURE_NAME = "income_bracket"
# A list of the labels of the target features.
TARGET_LABELS = [" <=50K", " >50K"]
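# Illustrative aside (not part of the original example): printing the vocabulary sizes
# shows why sqrt-sized embedding dimensions are used for the categorical features below.
for _name, _vocab in CATEGORICAL_FEATURES_WITH_VOCABULARY.items():
    print(_name, len(_vocab), "-> embedding dims:", int(math.sqrt(len(_vocab))))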
from tensorflow.keras.layers import StringLookup
target_label_lookup = StringLookup(
vocabulary=TARGET_LABELS, mask_token=None, num_oov_indices=0
)
def get_dataset_from_csv(csv_file_path, shuffle=False, batch_size=128):
dataset = tf.data.experimental.make_csv_dataset(
csv_file_path,
batch_size=batch_size,
column_names=CSV_HEADER,
column_defaults=COLUMN_DEFAULTS,
label_name=TARGET_FEATURE_NAME,
num_epochs=1,
header=False,
na_value="?",
shuffle=shuffle,
).map(lambda features, target: (features, target_label_lookup(target)))
return dataset.cache()
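# Quick sanity check (illustrative, not part of the original example): read one small
# batch and confirm that the targets come back as integer indices from the lookup.
_example_batch = get_dataset_from_csv(train_data_file, batch_size=5)
for _features, _target in _example_batch.take(1):
    print(_target)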
def create_model_inputs():
inputs = {}
for feature_name in FEATURE_NAMES:
if feature_name in NUMERIC_FEATURE_NAMES:
inputs[feature_name] = layers.Input(
name=feature_name, shape=(), dtype=tf.float32
)
else:
inputs[feature_name] = layers.Input(
name=feature_name, shape=(), dtype=tf.string
)
return inputs
def encode_inputs(inputs):
encoded_features = []
for feature_name in inputs:
if feature_name in CATEGORICAL_FEATURE_NAMES:
vocabulary = CATEGORICAL_FEATURES_WITH_VOCABULARY[feature_name]
            # Create a lookup to convert string values to integer indices.
# Since we are not using a mask token, nor expecting any out of vocabulary
# (oov) token, we set mask_token to None and num_oov_indices to 0.
lookup = StringLookup(
vocabulary=vocabulary, mask_token=None, num_oov_indices=0
)
# Convert the string input values into integer indices.
value_index = lookup(inputs[feature_name])
embedding_dims = int(math.sqrt(lookup.vocabulary_size()))
# Create an embedding layer with the specified dimensions.
embedding = layers.Embedding(
input_dim=lookup.vocabulary_size(), output_dim=embedding_dims
)
# Convert the index values to embedding representations.
encoded_feature = embedding(value_index)
else:
# Use the numerical features as-is.
encoded_feature = inputs[feature_name]
if inputs[feature_name].shape[-1] is None:
encoded_feature = tf.expand_dims(encoded_feature, -1)
encoded_features.append(encoded_feature)
encoded_features = layers.concatenate(encoded_features)
return encoded_features
class NeuralDecisionTree(keras.Model):
def __init__(self, depth, num_features, used_features_rate, num_classes):
super(NeuralDecisionTree, self).__init__()
self.depth = depth
self.num_leaves = 2 ** depth
self.num_classes = num_classes
# Create a mask for the randomly selected features.
num_used_features = int(num_features * used_features_rate)
one_hot = np.eye(num_features)
sampled_feature_indicies = np.random.choice(
np.arange(num_features), num_used_features, replace=False
)
self.used_features_mask = one_hot[sampled_feature_indicies]
# Initialize the weights of the classes in leaves.
self.pi = tf.Variable(
initial_value=tf.random_normal_initializer()(
shape=[self.num_leaves, self.num_classes]
),
dtype="float32",
trainable=True,
)
# Initialize the stochastic routing layer.
self.decision_fn = layers.Dense(
units=self.num_leaves, activation="sigmoid", name="decision"
)
def call(self, features):
batch_size = tf.shape(features)[0]
# Apply the feature mask to the input features.
features = tf.matmul(
features, self.used_features_mask, transpose_b=True
) # [batch_size, num_used_features]
# Compute the routing probabilities.
decisions = tf.expand_dims(
self.decision_fn(features), axis=2
) # [batch_size, num_leaves, 1]
# Concatenate the routing probabilities with their complements.
decisions = layers.concatenate(
[decisions, 1 - decisions], axis=2
) # [batch_size, num_leaves, 2]
mu = tf.ones([batch_size, 1, 1])
begin_idx = 1
end_idx = 2
# Traverse the tree in breadth-first order.
for level in range(self.depth):
mu = tf.reshape(mu, [batch_size, -1, 1]) # [batch_size, 2 ** level, 1]
mu = tf.tile(mu, (1, 1, 2)) # [batch_size, 2 ** level, 2]
level_decisions = decisions[
:, begin_idx:end_idx, :
] # [batch_size, 2 ** level, 2]
mu = mu * level_decisions # [batch_size, 2**level, 2]
begin_idx = end_idx
end_idx = begin_idx + 2 ** (level + 1)
mu = tf.reshape(mu, [batch_size, self.num_leaves]) # [batch_size, num_leaves]
probabilities = keras.activations.softmax(self.pi) # [num_leaves, num_classes]
outputs = tf.matmul(mu, probabilities) # [batch_size, num_classes]
return outputs
class NeuralDecisionForest(keras.Model):
def __init__(self, num_trees, depth, num_features, used_features_rate, num_classes):
super(NeuralDecisionForest, self).__init__()
self.ensemble = []
# Initialize the ensemble by adding NeuralDecisionTree instances.
# Each tree will have its own randomly selected input features to use.
for _ in range(num_trees):
self.ensemble.append(
NeuralDecisionTree(depth, num_features, used_features_rate, num_classes)
)
def call(self, inputs):
# Initialize the outputs: a [batch_size, num_classes] matrix of zeros.
batch_size = tf.shape(inputs)[0]
outputs = tf.zeros([batch_size, num_classes])
# Aggregate the outputs of trees in the ensemble.
for tree in self.ensemble:
outputs += tree(inputs)
# Divide the outputs by the ensemble size to get the average.
outputs /= len(self.ensemble)
return outputs
learning_rate = 0.01
batch_size = 265
num_epochs = 10
hidden_units = [64, 64]
def run_experiment(model):
model.compile(
optimizer=keras.optimizers.Adam(learning_rate=learning_rate),
loss=keras.losses.SparseCategoricalCrossentropy(),
metrics=[keras.metrics.SparseCategoricalAccuracy()],
)
print("Start training the model...")
train_dataset = get_dataset_from_csv(
train_data_file, shuffle=True, batch_size=batch_size
)
model.fit(train_dataset, epochs=num_epochs)
print("Model training finished")
print("Evaluating the model on the test data...")
test_dataset = get_dataset_from_csv(test_data_file, batch_size=batch_size)
_, accuracy = model.evaluate(test_dataset)
print(f"Test accuracy: {round(accuracy * 100, 2)}%")
num_trees = 10
depth = 10
used_features_rate = 1.0
num_classes = len(TARGET_LABELS)
def create_tree_model():
inputs = create_model_inputs()
features = encode_inputs(inputs)
features = layers.BatchNormalization()(features)
num_features = features.shape[1]
tree = NeuralDecisionTree(depth, num_features, used_features_rate, num_classes)
outputs = tree(features)
model = keras.Model(inputs=inputs, outputs=outputs)
return model
tree_model = create_tree_model()
run_experiment(tree_model)
num_trees = 25
depth = 5
used_features_rate = 0.5
def create_forest_model():
inputs = create_model_inputs()
features = encode_inputs(inputs)
features = layers.BatchNormalization()(features)
num_features = features.shape[1]
forest_model = NeuralDecisionForest(
num_trees, depth, num_features, used_features_rate, num_classes
)
outputs = forest_model(features)
model = keras.Model(inputs=inputs, outputs=outputs)
return model
forest_model = create_forest_model()
run_experiment(forest_model)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: If you need greater control over the positioning of the axes within the canvas, or want to add multiple axes to one canvas, it's necessary to create the canvas and axes explicitly, then use the axes to plot your data. For example, you can use the bounds argument to specify explicit (xmin, xmax, ymin, ymax) bounds for the axes using canvas coordinates (note that canvas coordinates always increase from top to bottom, unlike cartesian coordinates)
Step2: You can also use negative values to specify values relative to the right and bottom sides of the canvas, instead of the (default) left and top sides, greatly simplifying the layout
Step3: Furthermore, the bounds parameters can use any of the units Toyplot understands, such as centimeters, inches, or percentages of the canvas size
Step4: Of course, most of the time this level of control isn't necessary. Instead, the grid argument allows us to easily position each set of axes on a regular grid that covers the canvas. Note that you can control the axes position on the grid in a variety of ways
Step5: You can also use the gutter argument to control the space between cells in the grid
Step6: Sometimes, particularly when embedding axes to produce a figure-within-a-figure, the corner argument can be used to position axes relative to one of eight "corner" positions within the canvas. The corner argument takes a (position, inset, width, height) tuple
Step7: Here are all the positions supported by the corner argument
|
<ASSISTANT_TASK:>
Python Code:
import numpy
y = numpy.linspace(0, 1, 20) ** 2
import toyplot
toyplot.plot(y, width=300);
canvas = toyplot.Canvas(width=600, height=300)
axes1 = canvas.axes(bounds=(20, 280, 20, 280))
axes1.plot(y)
axes2 = canvas.axes(bounds=(320, 580, 20, 280))
axes2.plot(1 - y);
canvas = toyplot.Canvas(width=600, height=300)
axes1 = canvas.axes(bounds=(20, 280, 20, -20))
axes1.plot(y)
axes2 = canvas.axes(bounds=(-280, -20, 20, -20))
axes2.plot(1 - y);
canvas = toyplot.Canvas(width="20cm", height="2in")
axes1 = canvas.axes(bounds=("1cm", "5cm", "10%", "90%"))
axes1.plot(y)
axes2 = canvas.axes(bounds=("6cm", "-1cm", "10%", "90%"))
axes2.plot(1 - y);
canvas = toyplot.Canvas(width=600, height=300)
axes1 = canvas.axes(grid=(1, 2, 0))
axes1.plot(y)
axes2 = canvas.axes(grid=(1, 2, 1))
axes2.plot(1 - y);
canvas = toyplot.Canvas(width=600, height=300)
axes1 = canvas.axes(grid=(1, 2, 0), gutter=15)
axes1.plot(y)
axes2 = canvas.axes(grid=(1, 2, 1), gutter=15)
axes2.plot(1 - y);
x = numpy.random.normal(size=100)
y = numpy.random.normal(size=100)
canvas = toyplot.Canvas(width="5in")
canvas.axes().plot(numpy.linspace(0, 1) ** 0.5)
canvas.axes(corner=("bottom-right", "1in", "1.5in", "1.5in")).scatterplot(x, y);
canvas = toyplot.Canvas(width="10cm")
for position in ["top-left", "top", "top-right", "right", "bottom-right", "bottom", "bottom-left", "left"]:
canvas.axes(corner=(position, "1cm", "2cm", "2cm"), label=position)
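# Extra illustration (a sketch, not from the original guide): the grid=(rows, columns, n)
# form shown earlier also scales up; here the cell index n runs row-major from 0 to 3.
canvas = toyplot.Canvas(width=400, height=400)
for n in range(4):
    canvas.axes(grid=(2, 2, n), gutter=25, label="cell %s" % n)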
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Problem statement
Step2: Next, we need to define the bounds of the dimensions of the search space we want to explore, and (optionally) the starting point
Step5: Optimize all the things!
Step7: As a baseline, let us also compare with random search in the space of hyper-parameters, which is equivalent to sklearn.model_selection.RandomizedSearchCV.
Step8: Convergence plot
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (10, 6)
from sklearn.datasets import load_boston
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score
boston = load_boston()
X, y = boston.data, boston.target
reg = GradientBoostingRegressor(n_estimators=50, random_state=0)
def objective(params):
max_depth, learning_rate, max_features, min_samples_split, min_samples_leaf = params
reg.set_params(max_depth=max_depth,
learning_rate=learning_rate,
max_features=max_features,
min_samples_split=min_samples_split,
min_samples_leaf=min_samples_leaf)
return -np.mean(cross_val_score(reg, X, y, cv=5, n_jobs=-1, scoring="mean_absolute_error"))
space = [(1, 5), # max_depth
(10**-5, 10**-1, "log-uniform"), # learning_rate
(1, X.shape[1]), # max_features
(2, 30), # min_samples_split
(1, 30)] # min_samples_leaf
x0 = [3, 0.01, 6, 2, 1]
from skopt import gp_minimize
res_gp = gp_minimize(objective, space, x0=x0, n_calls=50, random_state=0)
"Best score=%.4f" % res_gp.fun
print("""Best parameters:
- max_depth=%d
- learning_rate=%.6f
- max_features=%d
- min_samples_split=%d
- min_samples_leaf=%d""" % (res_gp.x[0], res_gp.x[1],
                            res_gp.x[2], res_gp.x[3],
                            res_gp.x[4]))
from skopt import forest_minimize
res_forest = forest_minimize(objective, space, x0=x0, n_calls=50, random_state=0)
"Best score=%.4f" % res_forest.fun
print("""Best parameters:
- max_depth=%d
- learning_rate=%.6f
- max_features=%d
- min_samples_split=%d
- min_samples_leaf=%d""" % (res_forest.x[0], res_forest.x[1],
                            res_forest.x[2], res_forest.x[3],
                            res_forest.x[4]))
from skopt import dummy_minimize
res_dummy = dummy_minimize(objective, space, x0=x0, n_calls=50, random_state=0)
"Best score=%.4f" % res_dummy.fun
print("""Best parameters:
- max_depth=%d
- learning_rate=%.4f
- max_features=%d
- min_samples_split=%d
- min_samples_leaf=%d""" % (res_dummy.x[0], res_dummy.x[1],
                            res_dummy.x[2], res_dummy.x[3],
                            res_dummy.x[4]))
from skopt.plots import plot_convergence
plot_convergence(("gp_optimize", res_gp),
("forest_optimize", res_forest),
("dummy_optimize", res_dummy))
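# Illustrative aside (not part of the original comparison): each result object also keeps
# the full trace of evaluated objective values (func_vals in recent skopt versions),
# which is handy for counting evaluations and re-checking the best score per strategy.
for name, res in [("gp", res_gp), ("forest", res_forest), ("dummy", res_dummy)]:
    print(name, len(res.func_vals), min(res.func_vals))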
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: (1b) Using LabeledPoint
Step4: Visualization 1
Step5: (1c) Shifting the labels
Step6: (1d) Training, validation and test sets
Step7: Part 2
Step10: (2b) Mean squared error
Step11: (2c) Baseline RMSE for the training, validation and test sets
Step12: Visualization 2
Step14: Part 3
Step16: (3b) Use the weights to make predictions
Step18: (3c) Gradient descent
Step19: (3d) Training the model on the dataset
Step20: Visualization 3
Step21: Part 4
Step22: (4b) Prediction
Step23: (4c) Evaluate the RMSE
Step24: (4d) Grid search
Step25: Visualization 5
Step26: (4e) Grid search over the value of alpha and the number of iterations
Step28: Part 5
Step29: (5b) Building a new model
Step30: (5c) Evaluating the interaction model
|
<ASSISTANT_TASK:>
Python Code:
sc = SparkContext.getOrCreate()
# load the dataset
from test_helper import Test
import os.path
baseDir = os.path.join('Data')
inputPath = os.path.join('millionsong.txt')
fileName = os.path.join(baseDir, inputPath)
numPartitions = 2
rawData = sc.textFile(fileName, numPartitions)
# EXERCICIO
numPoints = rawData.count()
print numPoints
samplePoints = rawData.take(5)
print samplePoints
# TEST Load and check the data (1a)
Test.assertEquals(numPoints, 6724, 'incorrect value for numPoints')
Test.assertEquals(len(samplePoints), 5, 'incorrect length for samplePoints')
from pyspark.mllib.regression import LabeledPoint
import numpy as np
# Here is a sample raw data point:
# '2001.0,0.884,0.610,0.600,0.474,0.247,0.357,0.344,0.33,0.600,0.425,0.60,0.419'
# In this raw data point, 2001.0 is the label, and the remaining values are features
# EXERCICIO
def parsePoint(line):
    """Converts a comma separated unicode string into a `LabeledPoint`.
    Args:
        line (unicode): Comma separated unicode string where the first element is the label and the
            remaining elements are features.
    Returns:
        LabeledPoint: The line is converted into a `LabeledPoint`, which consists of a label and
            features.
    """
Point = line.split(",")
return LabeledPoint(Point[0], Point[1:])
parsedSamplePoints = map(parsePoint,samplePoints)
firstPointFeatures = parsedSamplePoints[0].features
firstPointLabel = parsedSamplePoints[0].label
print firstPointFeatures, firstPointLabel
d = len(firstPointFeatures)
print d
# TEST Using LabeledPoint (1b)
Test.assertTrue(isinstance(firstPointLabel, float), 'label must be a float')
expectedX0 = [0.8841,0.6105,0.6005,0.4747,0.2472,0.3573,0.3441,0.3396,0.6009,0.4257,0.6049,0.4192]
Test.assertTrue(np.allclose(expectedX0, firstPointFeatures, 1e-4, 1e-4),
'incorrect features for firstPointFeatures')
Test.assertTrue(np.allclose(2001.0, firstPointLabel), 'incorrect label for firstPointLabel')
Test.assertTrue(d == 12, 'incorrect number of features')
#insert a graphic inline
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.cm as cm
sampleMorePoints = rawData.take(50)
parsedSampleMorePoints = map(parsePoint, sampleMorePoints)
dataValues = map(lambda lp: lp.features.toArray(), parsedSampleMorePoints)
#print dataValues
def preparePlot(xticks, yticks, figsize=(10.5, 6), hideLabels=False, gridColor='#999999',
gridWidth=1.0):
    """Template for generating the plot layout."""
plt.close()
fig, ax = plt.subplots(figsize=figsize, facecolor='white', edgecolor='white')
ax.axes.tick_params(labelcolor='#999999', labelsize='10')
for axis, ticks in [(ax.get_xaxis(), xticks), (ax.get_yaxis(), yticks)]:
axis.set_ticks_position('none')
axis.set_ticks(ticks)
axis.label.set_color('#999999')
if hideLabels: axis.set_ticklabels([])
plt.grid(color=gridColor, linewidth=gridWidth, linestyle='-')
map(lambda position: ax.spines[position].set_visible(False), ['bottom', 'top', 'left', 'right'])
return fig, ax
# generate layout and plot
fig, ax = preparePlot(np.arange(.5, 11, 1), np.arange(.5, 49, 1), figsize=(8,7), hideLabels=True,
gridColor='#eeeeee', gridWidth=1.1)
image = plt.imshow(dataValues,interpolation='nearest', aspect='auto', cmap=cm.Greys)
for x, y, s in zip(np.arange(-.125, 12, 1), np.repeat(-.75, 12), [str(x) for x in range(12)]):
plt.text(x, y, s, color='#999999', size='10')
plt.text(4.7, -3, 'Feature', color='#999999', size='11'), ax.set_ylabel('Observation')
pass
# EXERCICIO
parsedDataInit = rawData.map(lambda x: parsePoint(x))
onlyLabels = parsedDataInit.map(lambda x: x.label)
minYear = onlyLabels.min()
maxYear = onlyLabels.max()
print maxYear, minYear
# TEST Find the range (1c)
Test.assertEquals(len(parsedDataInit.take(1)[0].features), 12,
'unexpected number of features in sample point')
sumFeatTwo = parsedDataInit.map(lambda lp: lp.features[2]).sum()
Test.assertTrue(np.allclose(sumFeatTwo, 3158.96224351), 'parsedDataInit has unexpected values')
yearRange = maxYear - minYear
Test.assertTrue(yearRange == 89, 'incorrect range for minYear to maxYear')
# Debug
parsedDataInit.take(1)
# EXERCICIO
parsedData = parsedDataInit.map(lambda x: LabeledPoint(x.label - minYear, x.features))
# Should be a LabeledPoint
print type(parsedData.take(1)[0])
# View the first point
print '\n{0}'.format(parsedData.take(1))
# TEST Shift labels (1d)
oldSampleFeatures = parsedDataInit.take(1)[0].features
newSampleFeatures = parsedData.take(1)[0].features
Test.assertTrue(np.allclose(oldSampleFeatures, newSampleFeatures),
'new features do not match old features')
sumFeatTwo = parsedData.map(lambda lp: lp.features[2]).sum()
Test.assertTrue(np.allclose(sumFeatTwo, 3158.96224351), 'parsedData has unexpected values')
minYearNew = parsedData.map(lambda lp: lp.label).min()
maxYearNew = parsedData.map(lambda lp: lp.label).max()
Test.assertTrue(minYearNew == 0, 'incorrect min year in shifted data')
Test.assertTrue(maxYearNew == 89, 'incorrect max year in shifted data')
# EXERCICIO
weights = [.8, .1, .1]
seed = 42
parsedTrainData, parsedValData, parsedTestData = parsedData.randomSplit(weights, seed)
parsedTrainData.cache()
parsedValData.cache()
parsedTestData.cache()
nTrain = parsedTrainData.count()
nVal = parsedValData.count()
nTest = parsedTestData.count()
print nTrain, nVal, nTest, nTrain + nVal + nTest
print parsedData.count()
# TEST Training, validation, and test sets (1e)
Test.assertEquals(parsedTrainData.getNumPartitions(), numPartitions,
'parsedTrainData has wrong number of partitions')
Test.assertEquals(parsedValData.getNumPartitions(), numPartitions,
'parsedValData has wrong number of partitions')
Test.assertEquals(parsedTestData.getNumPartitions(), numPartitions,
'parsedTestData has wrong number of partitions')
Test.assertEquals(len(parsedTrainData.take(1)[0].features), 12,
'parsedTrainData has wrong number of features')
sumFeatTwo = (parsedTrainData
.map(lambda lp: lp.features[2])
.sum())
sumFeatThree = (parsedValData
.map(lambda lp: lp.features[3])
.reduce(lambda x, y: x + y))
sumFeatFour = (parsedTestData
.map(lambda lp: lp.features[4])
.reduce(lambda x, y: x + y))
Test.assertTrue(np.allclose([sumFeatTwo, sumFeatThree, sumFeatFour],
2526.87757656, 297.340394298, 184.235876654),
'parsed Train, Val, Test data has unexpected values')
Test.assertTrue(nTrain + nVal + nTest == 6724, 'unexpected Train, Val, Test data set size')
Test.assertEquals(nTrain, 5371, 'unexpected value for nTrain')
Test.assertEquals(nVal, 682, 'unexpected value for nVal')
Test.assertEquals(nTest, 671, 'unexpected value for nTest')
# EXERCICIO
averageTrainYear = (parsedTrainData
.map(lambda x: x.label)
.mean()
)
print averageTrainYear
# TEST Average label (2a)
Test.assertTrue(np.allclose(averageTrainYear, 53.9316700801),
'incorrect value for averageTrainYear')
# EXERCICIO
def squaredError(label, prediction):
    """Calculates the squared error for a single prediction.
    Args:
        label (float): The correct value for this observation.
        prediction (float): The predicted value for this observation.
    Returns:
        float: The difference between the `label` and `prediction` squared.
    """
return np.square(label - prediction)
def calcRMSE(labelsAndPreds):
    """Calculates the root mean squared error for an `RDD` of (label, prediction) tuples.
    Args:
        labelsAndPreds (RDD of (float, float)): An `RDD` consisting of (label, prediction) tuples.
    Returns:
        float: The square root of the mean of the squared errors.
    """
return np.sqrt(labelsAndPreds.map(lambda (x,y): squaredError(x,y)).mean())
labelsAndPreds = sc.parallelize([(3., 1.), (1., 2.), (2., 2.)])
# RMSE = sqrt[((3-1)^2 + (1-2)^2 + (2-2)^2) / 3] = 1.291
exampleRMSE = calcRMSE(labelsAndPreds)
print exampleRMSE
# TEST Root mean squared error (2b)
Test.assertTrue(np.allclose(squaredError(3, 1), 4.), 'incorrect definition of squaredError')
Test.assertTrue(np.allclose(exampleRMSE, 1.29099444874), 'incorrect value for exampleRMSE')
#Debug
parsedTrainData.take(1)
# EXERCICIO -> (label, prediction)
labelsAndPredsTrain = parsedTrainData.map(lambda x:(x.label, averageTrainYear))
rmseTrainBase = calcRMSE(labelsAndPredsTrain)
labelsAndPredsVal = parsedValData.map(lambda x:(x.label, averageTrainYear))
rmseValBase = calcRMSE(labelsAndPredsVal)
labelsAndPredsTest = parsedTestData.map(lambda x:(x.label, averageTrainYear))
rmseTestBase = calcRMSE(labelsAndPredsTest)
print 'Baseline Train RMSE = {0:.3f}'.format(rmseTrainBase)
print 'Baseline Validation RMSE = {0:.3f}'.format(rmseValBase)
print 'Baseline Test RMSE = {0:.3f}'.format(rmseTestBase)
# TEST Training, validation and test RMSE (2c)
Test.assertTrue(np.allclose([rmseTrainBase, rmseValBase, rmseTestBase],
[21.305869, 21.586452, 22.136957]), 'incorrect RMSE value')
from matplotlib.colors import ListedColormap, Normalize
from matplotlib.cm import get_cmap
cmap = get_cmap('YlOrRd')
norm = Normalize()
actual = np.asarray(parsedValData
.map(lambda lp: lp.label)
.collect())
error = np.asarray(parsedValData
.map(lambda lp: (lp.label, lp.label))
.map(lambda (l, p): squaredError(l, p))
.collect())
clrs = cmap(np.asarray(norm(error)))[:,0:3]
fig, ax = preparePlot(np.arange(0, 100, 20), np.arange(0, 100, 20))
plt.scatter(actual, actual, s=14**2, c=clrs, edgecolors='#888888', alpha=0.75, linewidths=0.5)
ax.set_xlabel('Predicted'), ax.set_ylabel('Actual')
pass
predictions = np.asarray(parsedValData
.map(lambda lp: averageTrainYear)
.collect())
error = np.asarray(parsedValData
.map(lambda lp: (lp.label, averageTrainYear))
.map(lambda (l, p): squaredError(l, p))
.collect())
norm = Normalize()
clrs = cmap(np.asarray(norm(error)))[:,0:3]
fig, ax = preparePlot(np.arange(53.0, 55.0, 0.5), np.arange(0, 100, 20))
ax.set_xlim(53, 55)
plt.scatter(predictions, actual, s=14**2, c=clrs, edgecolors='#888888', alpha=0.75, linewidths=0.3)
ax.set_xlabel('Predicted'), ax.set_ylabel('Actual')
from pyspark.mllib.linalg import DenseVector
# EXERCICIO
def gradientSummand(weights, lp):
    """Calculates the gradient summand for a given weight and `LabeledPoint`.
    Note:
        `DenseVector` behaves similarly to a `numpy.ndarray` and they can be used interchangeably
        within this function. For example, they both implement the `dot` method.
    Args:
        weights (DenseVector): An array of model weights (betas).
        lp (LabeledPoint): The `LabeledPoint` for a single observation.
    Returns:
        DenseVector: An array of values the same length as `weights`. The gradient summand.
    """
return (weights.dot(lp.features) - lp.label) * lp.features
exampleW = DenseVector([1, 1, 1])
exampleLP = LabeledPoint(2.0, [3, 1, 4])
summandOne = gradientSummand(exampleW, exampleLP)
print summandOne
exampleW = DenseVector([.24, 1.2, -1.4])
exampleLP = LabeledPoint(3.0, [-1.4, 4.2, 2.1])
summandTwo = gradientSummand(exampleW, exampleLP)
print summandTwo
# TEST Gradient summand (3a)
Test.assertTrue(np.allclose(summandOne, [18., 6., 24.]), 'incorrect value for summandOne')
Test.assertTrue(np.allclose(summandTwo, [1.7304,-5.1912,-2.5956]), 'incorrect value for summandTwo')
# EXERCICIO
def getLabeledPrediction(weights, observation):
    """Calculates predictions and returns a (label, prediction) tuple.
    Note:
        The labels should remain unchanged as we'll use this information to calculate prediction
        error later.
    Args:
        weights (np.ndarray): An array with one weight for each feature in `trainData`.
        observation (LabeledPoint): A `LabeledPoint` that contains the correct label and the
            features for the data point.
    Returns:
        tuple: A (label, prediction) tuple.
    """
return ( observation.label, weights.dot(observation.features) )
weights = np.array([1.0, 1.5])
predictionExample = sc.parallelize([LabeledPoint(2, np.array([1.0, .5])),
LabeledPoint(1.5, np.array([.5, .5]))])
labelsAndPredsExample = predictionExample.map(lambda lp: getLabeledPrediction(weights, lp))
print labelsAndPredsExample.collect()
# TEST Use weights to make predictions (3b)
Test.assertEquals(labelsAndPredsExample.collect(), [(2.0, 1.75), (1.5, 1.25)],
'incorrect definition for getLabeledPredictions')
# EXERCICIO
def linregGradientDescent(trainData, numIters):
    """Calculates the weights and error for a linear regression model trained with gradient descent.
    Note:
        `DenseVector` behaves similarly to a `numpy.ndarray` and they can be used interchangeably
        within this function. For example, they both implement the `dot` method.
    Args:
        trainData (RDD of LabeledPoint): The labeled data for use in training the model.
        numIters (int): The number of iterations of gradient descent to perform.
    Returns:
        (np.ndarray, np.ndarray): A tuple of (weights, training errors). Weights will be the
            final weights (one weight per feature) for the model, and training errors will contain
            an error (RMSE) for each iteration of the algorithm.
    """
# The length of the training data
n = trainData.count()
# The number of features in the training data
d = len(trainData.take(1)[0].features)
w = np.zeros(d)
alpha = 1.0
# We will compute and store the training error after each iteration
errorTrain = np.zeros(numIters)
for i in range(numIters):
# Use getLabeledPrediction from (3b) with trainData to obtain an RDD of (label, prediction)
# tuples. Note that the weights all equal 0 for the first iteration, so the predictions will
# have large errors to start.
labelsAndPredsTrain = trainData.map(lambda x: getLabeledPrediction(w, x))
errorTrain[i] = calcRMSE(labelsAndPredsTrain)
# Calculate the `gradient`. Make use of the `gradientSummand` function you wrote in (3a).
# Note that `gradient` sould be a `DenseVector` of length `d`.
gradient = trainData.map(lambda x: gradientSummand(w, x)).sum()
# Update the weights
alpha_i = alpha / (n * np.sqrt(i+1))
w -= alpha_i*gradient
return w, errorTrain
# create a toy dataset with n = 10, d = 3, and then run 5 iterations of gradient descent
# note: the resulting model will not be useful; the goal here is to verify that
# linregGradientDescent is working properly
exampleN = 10
exampleD = 3
exampleData = (sc
.parallelize(parsedTrainData.take(exampleN))
.map(lambda lp: LabeledPoint(lp.label, lp.features[0:exampleD])))
print exampleData.take(2)
exampleNumIters = 5
exampleWeights, exampleErrorTrain = linregGradientDescent(exampleData, exampleNumIters)
print exampleWeights
# TEST Gradient descent (3c)
expectedOutput = [48.88110449, 36.01144093, 30.25350092]
Test.assertTrue(np.allclose(exampleWeights, expectedOutput), 'value of exampleWeights is incorrect')
expectedError = [79.72013547, 30.27835699, 9.27842641, 9.20967856, 9.19446483]
Test.assertTrue(np.allclose(exampleErrorTrain, expectedError),
'value of exampleErrorTrain is incorrect')
# EXERCICIO
numIters = 50
weightsLR0, errorTrainLR0 = linregGradientDescent(parsedTrainData, numIters)
labelsAndPreds = parsedValData.map(lambda x: getLabeledPrediction(weightsLR0, x))
rmseValLR0 = calcRMSE(labelsAndPreds)
print 'Validation RMSE:\n\tBaseline = {0:.3f}\n\tLR0 = {1:.3f}'.format(rmseValBase,
rmseValLR0)
# TEST Train the model (3d)
expectedOutput = [22.64535883, 20.064699, -0.05341901, 8.2931319, 5.79155768, -4.51008084,
15.23075467, 3.8465554, 9.91992022, 5.97465933, 11.36849033, 3.86452361]
Test.assertTrue(np.allclose(weightsLR0, expectedOutput), 'incorrect value for weightsLR0')
norm = Normalize()
clrs = cmap(np.asarray(norm(np.log(errorTrainLR0))))[:,0:3]
fig, ax = preparePlot(np.arange(0, 60, 10), np.arange(2, 6, 1))
ax.set_ylim(2, 6)
plt.scatter(range(0, numIters), np.log(errorTrainLR0), s=14**2, c=clrs, edgecolors='#888888', alpha=0.75)
ax.set_xlabel('Iteration'), ax.set_ylabel(r'$\log_e(errorTrainLR0)$')
pass
norm = Normalize()
clrs = cmap(np.asarray(norm(errorTrainLR0[6:])))[:,0:3]
fig, ax = preparePlot(np.arange(0, 60, 10), np.arange(17, 22, 1))
ax.set_ylim(17.8, 21.2)
plt.scatter(range(0, numIters-6), errorTrainLR0[6:], s=14**2, c=clrs, edgecolors='#888888', alpha=0.75)
ax.set_xticklabels(map(str, range(6, 66, 10)))
ax.set_xlabel('Iteration'), ax.set_ylabel(r'Training Error')
pass
from pyspark.mllib.regression import LinearRegressionWithSGD
# Values to use when training the linear regression model
numIters = 500 # iterations
alpha = 1.0 # step
miniBatchFrac = 1.0 # miniBatchFraction
reg = 1e-1 # regParam
regType = 'l2' # regType
useIntercept = True # intercept
# EXERCICIO
firstModel = LinearRegressionWithSGD.train(parsedTrainData, iterations = numIters, step = alpha, miniBatchFraction = 1.0,
regParam=reg,regType=regType, intercept=useIntercept)
# weightsLR1 stores the model weights; interceptLR1 stores the model intercept
weightsLR1 = firstModel.weights
interceptLR1 = firstModel.intercept
print weightsLR1, interceptLR1
# TEST LinearRegressionWithSGD (4a)
expectedIntercept = 13.3335907631
expectedWeights = [16.682292427, 14.7439059559, -0.0935105608897, 6.22080088829, 4.01454261926, -3.30214858535,
11.0403027232, 2.67190962854, 7.18925791279, 4.46093254586, 8.14950409475, 2.75135810882]
Test.assertTrue(np.allclose(interceptLR1, expectedIntercept), 'incorrect value for interceptLR1')
Test.assertTrue(np.allclose(weightsLR1, expectedWeights), 'incorrect value for weightsLR1')
# EXERCICIO
samplePoint = parsedTrainData.take(1)[0]
samplePrediction = firstModel.predict(samplePoint.features)
print samplePrediction
# TEST Predict (4b)
Test.assertTrue(np.allclose(samplePrediction, 56.8013380112),
'incorrect value for samplePrediction')
# EXERCICIO
labelsAndPreds = parsedValData.map(lambda x: (x.label, firstModel.predict(x.features)))
rmseValLR1 = calcRMSE(labelsAndPreds)
print ('Validation RMSE:\n\tBaseline = {0:.3f}\n\tLR0 = {1:.3f}' +
'\n\tLR1 = {2:.3f}').format(rmseValBase, rmseValLR0, rmseValLR1)
# TEST Evaluate RMSE (4c)
Test.assertTrue(np.allclose(rmseValLR1, 19.691247), 'incorrect value for rmseValLR1')
# EXERCICIO
bestRMSE = rmseValLR1
bestRegParam = reg
bestModel = firstModel
numIters = 500
alpha = 1.0
miniBatchFrac = 1.0
for reg in [1e-10, 1e-5, 1]:
model = LinearRegressionWithSGD.train(parsedTrainData, numIters, alpha,
miniBatchFrac, regParam=reg,
regType='l2', intercept=True)
labelsAndPreds = parsedValData.map(lambda x: (x.label, model.predict(x.features)))
rmseValGrid = calcRMSE(labelsAndPreds)
print rmseValGrid
if rmseValGrid < bestRMSE:
bestRMSE = rmseValGrid
bestRegParam = reg
bestModel = model
rmseValLRGrid = bestRMSE
print ('Validation RMSE:\n\tBaseline = {0:.3f}\n\tLR0 = {1:.3f}\n\tLR1 = {2:.3f}\n' +
'\tLRGrid = {3:.3f}').format(rmseValBase, rmseValLR0, rmseValLR1, rmseValLRGrid)
# TEST Grid search (4d)
Test.assertTrue(np.allclose(17.017170, rmseValLRGrid), 'incorrect value for rmseValLRGrid')
predictions = np.asarray(parsedValData
.map(lambda lp: bestModel.predict(lp.features))
.collect())
actual = np.asarray(parsedValData
.map(lambda lp: lp.label)
.collect())
error = np.asarray(parsedValData
.map(lambda lp: (lp.label, bestModel.predict(lp.features)))
.map(lambda (l, p): squaredError(l, p))
.collect())
norm = Normalize()
clrs = cmap(np.asarray(norm(error)))[:,0:3]
fig, ax = preparePlot(np.arange(0, 120, 20), np.arange(0, 120, 20))
ax.set_xlim(15, 82), ax.set_ylim(-5, 105)
plt.scatter(predictions, actual, s=14**2, c=clrs, edgecolors='#888888', alpha=0.75, linewidths=.5)
ax.set_xlabel('Predicted'), ax.set_ylabel(r'Actual')
pass
# EXERCICIO
reg = bestRegParam
modelRMSEs = []
for alpha in [1e-5, 10]:
for numIters in [500, 5]:
model = LinearRegressionWithSGD.train(parsedTrainData, numIters, alpha,
miniBatchFrac, regParam=reg,
regType='l2', intercept=True)
labelsAndPreds = parsedValData.map(lambda lp: (lp.label, model.predict(lp.features)))
rmseVal = calcRMSE(labelsAndPreds)
print 'alpha = {0:.0e}, numIters = {1}, RMSE = {2:.3f}'.format(alpha, numIters, rmseVal)
modelRMSEs.append(rmseVal)
# TEST Vary alpha and the number of iterations (4e)
expectedResults = sorted([56.969705, 56.892949, 355124752.221221])
Test.assertTrue(np.allclose(sorted(modelRMSEs)[:3], expectedResults), 'incorrect value for modelRMSEs')
# EXERCICIO
import itertools
def twoWayInteractions(lp):
    """Creates a new `LabeledPoint` that includes two-way interactions.
    Note:
        For features [x, y] the two-way interactions would be [x^2, x*y, y*x, y^2] and these
        would be appended to the original [x, y] feature list.
    Args:
        lp (LabeledPoint): The label and features for this observation.
    Returns:
        LabeledPoint: The new `LabeledPoint` should have the same label as `lp`. Its features
            should include the features from `lp` followed by the two-way interaction features.
    """
    newfeats = list(lp.features) + [x * y for x, y in itertools.product(lp.features, repeat=2)]
    return LabeledPoint(lp.label, newfeats)
print twoWayInteractions(LabeledPoint(0.0, [2, 3]))
# Transform the existing train, validation, and test sets to include two-way interactions.
trainDataInteract = parsedTrainData.map(twoWayInteractions)
valDataInteract = parsedValData.map(twoWayInteractions)
testDataInteract = parsedTestData.map(twoWayInteractions)
# TEST Add two-way interactions (5a)
twoWayExample = twoWayInteractions(LabeledPoint(0.0, [2, 3]))
Test.assertTrue(np.allclose(sorted(twoWayExample.features),
sorted([2.0, 3.0, 4.0, 6.0, 6.0, 9.0])),
'incorrect features generatedBy twoWayInteractions')
twoWayPoint = twoWayInteractions(LabeledPoint(1.0, [1, 2, 3]))
Test.assertTrue(np.allclose(sorted(twoWayPoint.features),
sorted([1.0,2.0,3.0,1.0,2.0,3.0,2.0,4.0,6.0,3.0,6.0,9.0])),
'incorrect features generated by twoWayInteractions')
Test.assertEquals(twoWayPoint.label, 1.0, 'incorrect label generated by twoWayInteractions')
Test.assertTrue(np.allclose(sum(trainDataInteract.take(1)[0].features), 40.821870576035529),
'incorrect features in trainDataInteract')
Test.assertTrue(np.allclose(sum(valDataInteract.take(1)[0].features), 45.457719932695696),
'incorrect features in valDataInteract')
Test.assertTrue(np.allclose(sum(testDataInteract.take(1)[0].features), 35.109111632783168),
'incorrect features in testDataInteract')
# EXERCICIO
numIters = 500
alpha = 1.0
miniBatchFrac = 1.0
reg = 1e-10
modelInteract = LinearRegressionWithSGD.train(trainDataInteract, numIters, alpha,
miniBatchFrac, regParam=reg,
regType='l2', intercept=True)
labelsAndPredsInteract = valDataInteract.map(lambda lp: (lp.label, modelInteract.predict(lp.features)))
rmseValInteract = calcRMSE(labelsAndPredsInteract)
print ('Validation RMSE:\n\tBaseline = {0:.3f}\n\tLR0 = {1:.3f}\n\tLR1 = {2:.3f}\n\tLRGrid = ' +
'{3:.3f}\n\tLRInteract = {4:.3f}').format(rmseValBase, rmseValLR0, rmseValLR1,
rmseValLRGrid, rmseValInteract)
# TEST Build interaction model (5b)
Test.assertTrue(np.allclose(rmseValInteract, 15.6894664683), 'incorrect value for rmseValInteract')
# EXERCICIO
labelsAndPredsTest = testDataInteract.map(lambda lp: (lp.label, modelInteract.predict(lp.features)))
rmseTestInteract = calcRMSE(labelsAndPredsTest)
print ('Test RMSE:\n\tBaseline = {0:.3f}\n\tLRInteract = {1:.3f}'
.format(rmseTestBase, rmseTestInteract))
# TEST Evaluate interaction model on test data (5c)
Test.assertTrue(np.allclose(rmseTestInteract, 16.3272040537),
'incorrect value for rmseTestInteract')
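# Illustrative wrap-up (not part of the original lab): gather the headline test RMSEs
# computed above into a single, easy-to-scan print.
for name, rmse in [('baseline', rmseTestBase), ('interaction', rmseTestInteract)]:
    print '{0:>12s}: {1:.3f}'.format(name, rmse)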
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First algorithm
Step2: Note 1
Step3: Representing Cities and Distance
Step4: Distance between cities
Step5: A cool thing is to be able to plot a tour
Step6: We are ready to test our algorithm
Step7: Improving the algorithm
Step8: Results of the improvement
Step9: It takes a few seconds on my machine to solve this problem. In general, the function exact_non_redundant_TSP() looks at $(n-1)!$ tours for an $n$-city problem, and each tour has $n$ cities, so the time for $n$ cities should be roughly proportional to $n!$. This means that the time grows rapidly with the number of cities; we'd need longer than the age of the Universe to run exact_non_redundant_TSP() on just 24 cities
Step10: (In Python, as in the formal mathematical theory of computability, lambda is the symbol for function, so "lambda x
Step11: greedy_TSP() can handle bigger problems
Step12: But... don't be greedy!
Step13: Elements to take into account solving problems with genetic algorithms
Step14: The toolbox stored the setup of the algorithm. It describes the different elements to take into account.
Step15: Individual representation and evaluation
Step16: Let's now define our individuals as composed of indexes that refer to elements of cities and, correspondingly, the population as composed of such individuals.
Step17: Defining the crossover and mutation operators can be a challenging task.
Step18: Evaluation can be easily defined from the total_distance() definition.
Step19: We will employ tournament selection with size 3.
Step20: Let's run the algorithm with a population of 100 individuals and 400 generations.
Step21: We can now review the results
Step22: It is interesting to assess how the fitness of the population changed as the evolution process took place.
Step23: We are all set now, but let's run the genetic algorithm again, configured to collect the statistics that we want to gather
Step24: Plotting mean and minimum fitness as evolution took place.
Step25: How has the population evolved?
Step26: Note
Step27: Plotting the individuals and their fitness (color-coded)
Step28: We can now plot the population as the evolutionary process progressed. Darker blue colors imply better fitness.
Step29: Comparison with greedy_TSP()
Step30: The genetic algorithm outperformed the greedy approach at a viable computational cost.
Step31: The next step takes some time to execute. Use the video controls to see the evolution in animated form.
Step32: Embedding the previous animation in the online notebook makes it really big. I have removed the result of the previous cell and created a .gif version of the animation for online viewing.
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import matplotlib.colors as colors
import matplotlib.cm as cmx
import random, operator
import time
import itertools
import numpy
import math
%matplotlib inline
random.seed(time.time()) # planting a random seed
def exact_TSP(cities):
"Generate all possible tours of the cities and choose the shortest one."
return shortest(alltours(cities))
def shortest(tours):
"Return the tour with the minimum total distance."
return min(tours, key=total_distance)
alltours = itertools.permutations # The permutation function is already defined in the itertools module
cities = {1, 2, 3}
list(alltours(cities))
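# (Added illustration, not from the original notebook.) The number of candidate tours
# grows factorially with the city count, which is why the exhaustive search only works
# for tiny instances; math was already imported above.
for n in [5, 10, 15, 20]: print(n, 'cities ->', math.factorial(n - 1), 'non-redundant tours')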
def total_distance(tour):
"The total distance between each pair of consecutive cities in the tour."
return sum(distance(tour[i], tour[i-1])
for i in range(len(tour)))
City = complex # Constructor for new cities, e.g. City(300, 400)
def distance(A, B):
"The Euclidean distance between two cities."
return abs(A - B)
A = City(300, 0)
B = City(0, 400)
distance(A, B)
def generate_cities(n):
"Make a set of n cities, each with random coordinates."
return set(City(random.randrange(10, 890),
random.randrange(10, 590))
for c in range(n))
cities8, cities10, cities100, cities1000 = generate_cities(8), generate_cities(10), generate_cities(100), generate_cities(1000)
cities8
def plot_tour(tour, alpha=1, color=None):
# Plot the tour as blue lines between blue circles, and the starting city as a red square.
plotline(list(tour) + [tour[0]], alpha=alpha, color=color)
plotline([tour[0]], 'rs', alpha=alpha)
# plt.show()
def plotline(points, style='bo-', alpha=1, color=None):
"Plot a list of points (complex numbers) in the 2-D plane."
X, Y = XY(points)
if color:
plt.plot(X, Y, style, alpha=alpha, color=color)
else:
plt.plot(X, Y, style, alpha=alpha)
def XY(points):
"Given a list of points, return two lists: X coordinates, and Y coordinates."
return [p.real for p in points], [p.imag for p in points]
tour = exact_TSP(cities8)
plot_tour(tour)
def all_non_redundant_tours(cities):
"Return a list of tours, each a permutation of cities, but each one starting with the same city."
start = first(cities)
return [[start] + list(tour)
for tour in itertools.permutations(cities - {start})]
def first(collection):
"Start iterating over collection, and return the first element."
for x in collection: return x
def exact_non_redundant_TSP(cities):
"Generate all possible tours of the cities and choose the shortest one."
return shortest(all_non_redundant_tours(cities))
all_non_redundant_tours({1, 2, 3})
%timeit exact_TSP(cities8)
%timeit exact_non_redundant_TSP(cities8)
%timeit exact_non_redundant_TSP(cities10)
def greedy_TSP(cities):
"At each step, visit the nearest neighbor that is still unvisited."
start = first(cities)
tour = [start]
unvisited = cities - {start}
while unvisited:
C = nearest_neighbor(tour[-1], unvisited)
tour.append(C)
unvisited.remove(C)
return tour
def nearest_neighbor(A, cities):
"Find the city in cities that is nearest to city A."
return min(cities, key=lambda x: distance(x, A))
cities = generate_cities(9)
%timeit exact_non_redundant_TSP(cities)
plot_tour(exact_non_redundant_TSP(cities))
%timeit greedy_TSP(cities)
plot_tour(greedy_TSP(cities))
%timeit greedy_TSP(cities100)
plot_tour(greedy_TSP(cities100))
%timeit greedy_TSP(cities1000)
plot_tour(greedy_TSP(cities1000))
from deap import algorithms, base, creator, tools
num_cities = 30
cities = generate_cities(num_cities)
toolbox = base.Toolbox()
creator.create("FitnessMin", base.Fitness, weights=(-1.0,))
creator.create("Individual", list, fitness=creator.FitnessMin)
toolbox.register("indices", numpy.random.permutation, len(cities))
toolbox.register("individual", tools.initIterate, creator.Individual,
toolbox.indices)
toolbox.register("population", tools.initRepeat, list,
toolbox.individual)
toolbox.register("mate", tools.cxOrdered)
toolbox.register("mutate", tools.mutShuffleIndexes, indpb=0.05)
def create_tour(individual):
return [list(cities)[e] for e in individual]
def evaluation(individual):
'''Evaluates an individual by converting it into
a list of cities and passing that list to total_distance'''
return (total_distance(create_tour(individual)),)
toolbox.register("evaluate", evaluation)
toolbox.register("select", tools.selTournament, tournsize=3)
pop = toolbox.population(n=100)
%%time
result, log = algorithms.eaSimple(pop, toolbox,
cxpb=0.8, mutpb=0.2,
ngen=400, verbose=False)
best_individual = tools.selBest(result, k=1)[0]
print('Fitness of the best individual: ', evaluation(best_individual)[0])
plot_tour(create_tour(best_individual))
fit_stats = tools.Statistics(key=operator.attrgetter("fitness.values"))
fit_stats.register('mean', numpy.mean)
fit_stats.register('min', numpy.min)
result, log = algorithms.eaSimple(toolbox.population(n=100), toolbox,
cxpb=0.5, mutpb=0.2,
ngen=400, verbose=False,
stats=fit_stats)
plt.figure(1, figsize=(11, 4), dpi=500)
plots = plt.plot(log.select('min'),'c-', log.select('mean'), 'b-', antialiased=True)
plt.legend(plots, ('Minimum fitness', 'Mean fitness'))
plt.ylabel('Fitness')
plt.xlabel('Iterations')
pop_stats = tools.Statistics(key=numpy.copy)
pop_stats.register('pop', numpy.copy) # -- copies the populations themselves
pop_stats.register('fitness', # -- computes and stores the fitnesses
lambda x : [evaluation(a) for a in x])
result, log = algorithms.eaSimple(toolbox.population(n=100), toolbox,
cxpb=0.5, mutpb=0.2,
ngen=400, verbose=False,
stats=pop_stats)
def plot_population(record, min_fitness, max_fitness):
'''
Plots all individuals in a population.
Darker individuals have a better fitness.
'''
pop = record['pop']
fits = record['fitness']
index = sorted(range(len(fits)), key=lambda k: fits[k])
norm=colors.Normalize(vmin=min_fitness,
vmax=max_fitness)
sm = cmx.ScalarMappable(norm=norm,
cmap=plt.get_cmap('PuBu'))
for i in range(len(index)):
color = sm.to_rgba(max_fitness - fits[index[i]][0])
plot_tour(create_tour(pop[index[i]]), alpha=0.5, color=color)
min_fitness = numpy.min(log.select('fitness'))
max_fitness = numpy.max(log.select('fitness'))
plt.figure(1, figsize=(11,11), dpi=500)
for i in range(0, 12):
plt.subplot(4,3,i+1)
it = int(math.ceil((len(log)-1.)/15))
plt.title('t='+str(it*i))
plot_population(log[it*i], min_fitness, max_fitness)
%timeit total_distance(greedy_TSP(cities))
print('greedy_TSP() distance: ', total_distance(greedy_TSP(cities)))
print('Genetic algorithm best distance: ', evaluation(best_individual)[0])
from JSAnimation import IPython_display
from matplotlib import animation
def update_plot_tour(plot, points, alpha=1, color='blue'):
'A function for updating a plot with an individual'
X, Y = XY(list(points) + [points[0]])
plot.set_data(X, Y)
plot.set_color(color)
return plot
def init():
'Initialization of all plots to empty data'
for p in list(tour_plots):
p.set_data([], [])
return tour_plots
def animate(i):
'Updates all plots to match frame _i_ of the animation'
pop = log[i]['pop']
fits = log[i]['fitness']
index = sorted(range(len(fits)), key=lambda k: fits[k])
norm=colors.Normalize(vmin=min_fitness,
vmax=max_fitness)
sm = cmx.ScalarMappable(norm=norm,
cmap=plt.get_cmap('PuBu'))
for j in range(len(tour_plots)):
color = sm.to_rgba(max_fitness - fits[index[j]][0])
update_plot_tour(tour_plots[j],
create_tour(pop[index[j]]),
alpha=0.5, color=color)
return tour_plots
fig = plt.figure()
ax = plt.axes(xlim=(0, 900), ylim=(0, 600))
tour_plots = [ax.plot([], [], 'bo-', alpha=0.1) for i in range(len(log[0]['pop']))]
tour_plots = [p[0] for p in tour_plots]
animation.FuncAnimation(fig, animate, init_func=init,
frames=200, interval=60, blit=True)
anim = animation.FuncAnimation(fig, animate, init_func=init,
frames=200, interval=60, blit=True)
anim.save('tsp-populations.gif', writer='imagemagick')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Gaussian Process Regression in TensorFlow Probability
Step3: Example
Step5: We'll put priors on the kernel hyperparameters, and write the joint distribution of the hyperparameters and observed data using tfd.JointDistributionNamed.
Step6: We can sanity-check our implementation by verifying that we can sample from the prior, and compute the log-density of a sample.
Step7: Now let's optimize to find the parameter values with highest posterior probability. We'll define a variable for each parameter, and constrain their values to be positive.
Step8: To condition the model on our observed data, we'll define a target_log_prob function, which takes the (still to be inferred) kernel hyperparameters.
Step9: Note
Step10: Let's sanity-check the sampler by examining the hyperparameter traces.
Step11: Now instead of constructing a single GP with the optimized hyperparameters, we construct the posterior predictive distribution as a mixture of GPs, each defined by a sample from the posterior distribution over hyperparameters. This approximately integrates over the posterior parameters via Monte Carlo sampling to compute the marginal predictive distribution at unobserved locations.
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import time
import numpy as np
import matplotlib.pyplot as plt
import tensorflow.compat.v2 as tf
import tensorflow_probability as tfp
tfb = tfp.bijectors
tfd = tfp.distributions
tfk = tfp.math.psd_kernels
tf.enable_v2_behavior()
from mpl_toolkits.mplot3d import Axes3D
%pylab inline
# Configure plot defaults
plt.rcParams['axes.facecolor'] = 'white'
plt.rcParams['grid.color'] = '#666666'
%config InlineBackend.figure_format = 'png'
def sinusoid(x):
return np.sin(3 * np.pi * x[..., 0])
def generate_1d_data(num_training_points, observation_noise_variance):
"""Generate noisy sinusoidal observations at a random set of points.
Returns:
observation_index_points, observations
"""
index_points_ = np.random.uniform(-1., 1., (num_training_points, 1))
index_points_ = index_points_.astype(np.float64)
# y = f(x) + noise
observations_ = (sinusoid(index_points_) +
np.random.normal(loc=0,
scale=np.sqrt(observation_noise_variance),
size=(num_training_points)))
return index_points_, observations_
# Generate training data with a known noise level (we'll later try to recover
# this value from the data).
NUM_TRAINING_POINTS = 100
observation_index_points_, observations_ = generate_1d_data(
num_training_points=NUM_TRAINING_POINTS,
observation_noise_variance=.1)
def build_gp(amplitude, length_scale, observation_noise_variance):
"""Defines the conditional dist. of GP outputs, given kernel parameters."""
# Create the covariance kernel, which will be shared between the prior (which we
# use for maximum likelihood training) and the posterior (which we use for
# posterior predictive sampling)
kernel = tfk.ExponentiatedQuadratic(amplitude, length_scale)
# Create the GP prior distribution, which we will use to train the model
# parameters.
return tfd.GaussianProcess(
kernel=kernel,
index_points=observation_index_points_,
observation_noise_variance=observation_noise_variance)
gp_joint_model = tfd.JointDistributionNamed({
'amplitude': tfd.LogNormal(loc=0., scale=np.float64(1.)),
'length_scale': tfd.LogNormal(loc=0., scale=np.float64(1.)),
'observation_noise_variance': tfd.LogNormal(loc=0., scale=np.float64(1.)),
'observations': build_gp,
})
x = gp_joint_model.sample()
lp = gp_joint_model.log_prob(x)
print("sampled {}".format(x))
print("log_prob of sample: {}".format(lp))
# Create the trainable model parameters, which we'll subsequently optimize.
# Note that we constrain them to be strictly positive.
constrain_positive = tfb.Shift(np.finfo(np.float64).tiny)(tfb.Exp())
amplitude_var = tfp.util.TransformedVariable(
initial_value=1.,
bijector=constrain_positive,
name='amplitude',
dtype=np.float64)
length_scale_var = tfp.util.TransformedVariable(
initial_value=1.,
bijector=constrain_positive,
name='length_scale',
dtype=np.float64)
observation_noise_variance_var = tfp.util.TransformedVariable(
initial_value=1.,
bijector=constrain_positive,
name='observation_noise_variance_var',
dtype=np.float64)
trainable_variables = [v.trainable_variables[0] for v in
[amplitude_var,
length_scale_var,
observation_noise_variance_var]]
def target_log_prob(amplitude, length_scale, observation_noise_variance):
return gp_joint_model.log_prob({
'amplitude': amplitude,
'length_scale': length_scale,
'observation_noise_variance': observation_noise_variance,
'observations': observations_
})
# Now we optimize the model parameters.
num_iters = 1000
optimizer = tf.optimizers.Adam(learning_rate=.01)
# Use `tf.function` to trace the loss for more efficient evaluation.
@tf.function(autograph=False, jit_compile=False)
def train_model():
with tf.GradientTape() as tape:
loss = -target_log_prob(amplitude_var, length_scale_var,
observation_noise_variance_var)
grads = tape.gradient(loss, trainable_variables)
optimizer.apply_gradients(zip(grads, trainable_variables))
return loss
# Store the likelihood values during training, so we can plot the progress
lls_ = np.zeros(num_iters, np.float64)
for i in range(num_iters):
loss = train_model()
lls_[i] = loss
print('Trained parameters:')
print('amplitude: {}'.format(amplitude_var._value().numpy()))
print('length_scale: {}'.format(length_scale_var._value().numpy()))
print('observation_noise_variance: {}'.format(observation_noise_variance_var._value().numpy()))
# Plot the loss evolution
plt.figure(figsize=(12, 4))
plt.plot(lls_)
plt.xlabel("Training iteration")
plt.ylabel("Log marginal likelihood")
plt.show()
# Having trained the model, we'd like to sample from the posterior conditioned
# on observations. We'd like the samples to be at points other than the training
# inputs.
predictive_index_points_ = np.linspace(-1.2, 1.2, 200, dtype=np.float64)
# Reshape to [200, 1] -- 1 is the dimensionality of the feature space.
predictive_index_points_ = predictive_index_points_[..., np.newaxis]
optimized_kernel = tfk.ExponentiatedQuadratic(amplitude_var, length_scale_var)
gprm = tfd.GaussianProcessRegressionModel(
kernel=optimized_kernel,
index_points=predictive_index_points_,
observation_index_points=observation_index_points_,
observations=observations_,
observation_noise_variance=observation_noise_variance_var,
predictive_noise_variance=0.)
# Create op to draw 50 independent samples, each of which is a *joint* draw
# from the posterior at the predictive_index_points_. Since we have 200 input
# locations as defined above, this posterior distribution over corresponding
# function values is a 200-dimensional multivariate Gaussian distribution!
num_samples = 50
samples = gprm.sample(num_samples)
# Plot the true function, observations, and posterior samples.
plt.figure(figsize=(12, 4))
plt.plot(predictive_index_points_, sinusoid(predictive_index_points_),
label='True fn')
plt.scatter(observation_index_points_[:, 0], observations_,
label='Observations')
for i in range(num_samples):
plt.plot(predictive_index_points_, samples[i, :], c='r', alpha=.1,
label='Posterior Sample' if i == 0 else None)
leg = plt.legend(loc='upper right')
for lh in leg.legendHandles:
lh.set_alpha(1)
plt.xlabel(r"Index points ($\mathbb{R}^1$)")
plt.ylabel("Observation space")
plt.show()
num_results = 100
num_burnin_steps = 50
sampler = tfp.mcmc.TransformedTransitionKernel(
tfp.mcmc.NoUTurnSampler(
target_log_prob_fn=target_log_prob,
step_size=tf.cast(0.1, tf.float64)),
bijector=[constrain_positive, constrain_positive, constrain_positive])
adaptive_sampler = tfp.mcmc.DualAveragingStepSizeAdaptation(
inner_kernel=sampler,
num_adaptation_steps=int(0.8 * num_burnin_steps),
target_accept_prob=tf.cast(0.75, tf.float64))
initial_state = [tf.cast(x, tf.float64) for x in [1., 1., 1.]]
# Speed up sampling by tracing with `tf.function`.
@tf.function(autograph=False, jit_compile=False)
def do_sampling():
return tfp.mcmc.sample_chain(
kernel=adaptive_sampler,
current_state=initial_state,
num_results=num_results,
num_burnin_steps=num_burnin_steps,
trace_fn=lambda current_state, kernel_results: kernel_results)
t0 = time.time()
samples, kernel_results = do_sampling()
t1 = time.time()
print("Inference ran in {:.2f}s.".format(t1-t0))
(amplitude_samples,
length_scale_samples,
observation_noise_variance_samples) = samples
f = plt.figure(figsize=[15, 3])
for i, s in enumerate(samples):
ax = f.add_subplot(1, len(samples) + 1, i + 1)
ax.plot(s)
# The sampled hyperparams have a leading batch dimension, `[num_results, ...]`,
# so they construct a *batch* of kernels.
batch_of_posterior_kernels = tfk.ExponentiatedQuadratic(
amplitude_samples, length_scale_samples)
# The batch of kernels creates a batch of GP predictive models, one for each
# posterior sample.
batch_gprm = tfd.GaussianProcessRegressionModel(
kernel=batch_of_posterior_kernels,
index_points=predictive_index_points_,
observation_index_points=observation_index_points_,
observations=observations_,
observation_noise_variance=observation_noise_variance_samples,
predictive_noise_variance=0.)
# To construct the marginal predictive distribution, we average with uniform
# weight over the posterior samples.
predictive_gprm = tfd.MixtureSameFamily(
mixture_distribution=tfd.Categorical(logits=tf.zeros([num_results])),
components_distribution=batch_gprm)
num_samples = 50
samples = predictive_gprm.sample(num_samples)
# Plot the true function, observations, and posterior samples.
plt.figure(figsize=(12, 4))
plt.plot(predictive_index_points_, sinusoid(predictive_index_points_),
label='True fn')
plt.scatter(observation_index_points_[:, 0], observations_,
label='Observations')
for i in range(num_samples):
plt.plot(predictive_index_points_, samples[i, :], c='r', alpha=.1,
label='Posterior Sample' if i == 0 else None)
leg = plt.legend(loc='upper right')
for lh in leg.legendHandles:
lh.set_alpha(1)
plt.xlabel(r"Index points ($\mathbb{R}^1$)")
plt.ylabel("Observation space")
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Shaolin Dashboard Introduction
Step2: Dashboard containing a single widget.
Step3: Dashboard containing three components in a row
Step4: A column with two rows of components
Step5: <a id='interactivity'></a>
Step6: <a id='interactivity_default'></a>
Step7: <a id='styling'></a>
Step8: In this example we are going to modify the following
|
<ASSISTANT_TASK:>
Python Code:
%%HTML
<style>
.container { width:100% !important; }
.input{ width:60% !important;
align: center;
}
.text_cell{ width:70% !important;
font-size: 16px;}
.title {align:center !important;}
</style>
from IPython.display import Image #this is for displaying the widgets in the web version of the notebook
from shaolin.core.dashboard import Dashboard
dash = Dashboard(['fs$D=fs'])
dash.widget
Image(filename='dashboards_data/img_1.png')
dash = Dashboard(['r$N=row',['##Row$N=text','fs$D=fs','tog$d=Toggle']])
#dash.widget
Image(filename='dashboards_data/img_2.png')
#Column that gives a name to the Dashboard.
test_dash = ['c$N=array_scaler',
#first row only has one widget. This element can be just a string
['@(0,100,1, (0., 100.))$N=scale_slider&d=range',
#Two widgets in the second row. As this element has childrens it has to be a list.
['r$N=main_row',['@dd$d=Apply&N=dd_sel&val=one&o=["one","two","three"]','@True$N=scale_chk&d=Scale']]
]
]
dboard = Dashboard(test_dash)
#dboard.widget
Image(filename='dashboards_data/img_3.png')
dboard.mode_dict
A = Dashboard(['@dd$d=Dropdown&N=dd_sel&val=one&o=["one","two","three"]'])
B = Dashboard(['dd$d=Dropdown&N=dd_sel&val=one&o=["one","two","three"]'],mode='interactive')
print("Interactive dropdown",A.mode_dict,"\n Active dropdown",B.mode_dict)
A.mode, B.mode
#dboard.state_manager
Image(filename='dashboards_data/img_4.png')
A= Dashboard(test_dash,state='array_scaler.pkl',name='A') #instantiating with the path of a saved layout.
custom_layout = dict(A.state)
B = Dashboard(test_dash,state=custom_layout) #instantiating with a dict
C = Dashboard(test_dash)#Init with default layout.
C.state = custom_layout #dinamically changing the layout
#Dashboard(['r$N=row',[A,B,C]]).widget #This is the new widgets.Hbox(children=[A.widget, B.widget, C.widget])
Image(filename='dashboards_data/img_5.png')
C.state
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Command mode vs Edit mode
Step2: Access to documentation and Code completion
Step3: Local shell commands execution
Step4: We can also use variables as parameters by passing them wrapped in "{}"
Step5: Output of a local shell command can also be captured, for example to be post-processed in python
|
<ASSISTANT_TASK:>
Python Code:
a = 1
b = 2
def my_simple_sum(a, b):
"""Simple addition
:param a: first number
:param b: second number
"""
print "Sum is:", a+b
my_simple_sum(a,b)
# Further down in the code we do some changes
a = 100
# then we can go back and re-execute just the previous cell
# Use TAB to complete the function name
# Use SHIFT+Tab after the '(' to access
my_simple_sum(2,3)
!pwd
!date
folder = "../"
!ls -la {folder} | wc -l
output = !find ../../ipynb/ -name "*.ipynb"
print "Available notebooks:"
for line in output:
print line.replace('../../ipynb/', ' ')
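# (Added aside.) The object returned when capturing "!" output is an IPython SList,
# which provides helpers such as .grep() for quick filtering of the captured lines.
print type(output)
print output.grep('ipynb')[:3]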
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data Acquisition
Step2: Parsing
Step3: Visualization
Step4: Using PyBEL Functions
|
<ASSISTANT_TASK:>
Python Code:
import json
import requests
import os
import time
import networkx as nx
import pybel
from pybel.constants import *
import pybel_tools
from pybel_tools.visualization import to_jupyter
pybel.__version__
pybel_tools.__version__
time.asctime()
res = requests.get("http://causalbionet.com/Networks/GetJSONGraphFile?networkId=hox_2.0_hs").json()
graph = pybel.BELGraph()
parser = pybel.parser.BelParser(graph)
def get_citation(evidence):
return {
CITATION_NAME: evidence['citation']['name'],
CITATION_TYPE: evidence['citation']['type'],
CITATION_REFERENCE: evidence['citation']['id']
}
annotation_map = {
'tissue': 'Tissue',
'disease': 'Disease',
'species_common_name': 'Species'
}
species_map = {
'human': '9606',
'rat': '10116',
'mouse': '10090'
}
annotation_value_map = {
'Species': species_map
}
for edge in res['graph']['edges']:
for evidence in edge['metadata']['evidences']:
if 'citation' not in evidence or not evidence['citation']:
continue
parser.control_parser.clear()
parser.control_parser.citation = get_citation(evidence)
parser.control_parser.evidence = evidence['summary_text']
d = {}
if 'biological_context' in evidence:
annotations = evidence['biological_context']
if annotations['tissue']:
d['Tissue'] = annotations['tissue']
if annotations['disease']:
d['Disease'] = annotations['disease']
if annotations['species_common_name']:
d['Species'] = species_map[annotations['species_common_name'].lower()]
parser.control_parser.annotations.update(d)
bel = '{source} {relation} {target}'.format_map(edge)
try:
parser.parseString(bel)
except Exception as e:
print(e, bel)
to_jupyter(graph)
pybel.to_database(graph)
pybel.get_version()  # assuming the truncated call was meant to be get_version()
with open(os.path.join(os.environ['BMS_BASE'], 'cbn', 'Human-2.0', 'Hox-2.0-Hs.jgf')) as f:
graph_jgif_dict = json.load(f)
%%time
graph = pybel.from_cbn_jgif(graph_jgif_dict)
bel_lines = pybel.to_bel_lines(graph)
graph_reloaded = pybel.from_lines(bel_lines)
to_jupyter(graph_reloaded)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Getting the data and structuring it
Step2: The Genre and Subgenre categories have multiple comma separated values. I'm going to keep just the first value and drop the others for the category.
Step3: Top 10 artists having the most number of albums in the list
Step4: Year wise count of number of albums
Step5: Top genres and subgenres
Step6: Songs in each genre by year
Step7: Rock subgenres over the years
Step8: Getting a summary of the top 10 Albums
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
from bokeh.charts import Donut, HeatMap, Histogram, Line, Scatter, show, output_notebook, output_file
from bokeh.plotting import figure
output_notebook()
album_list = pd.read_excel('albumlist.xls')
album_list.dtypes
album_list.head()
# more efficient method to do the same thing as the commented-out code below
for lab, row in album_list.iterrows():
album_list.loc[lab, "Genre_Refined"] = row["Genre"].split(',')[0]
album_list.loc[lab, "Subgenre_Refined"] = row["Subgenre"].split(',')[0]
#add Genre_Refined column by selecting just the first value
#album_list['Genre_Refined'] = album_list['Genre']
#for i in range(len(album_list)):
#album_list['Genre_Refined'][i] = album_list['Genre'][i].split(',')[0]
#add Subgenre_Refined column by selecting just the first value
#album_list['Subgenre_Refined'] = album_list['Subgenre']
#for i in range(len(album_list)):
#album_list['Subgenre_Refined'][i] = album_list['Subgenre'][i].split(',')[0]
album_list.head()
album_list.dtypes
#get a count of how many times each artist made it into the list
artists_count = album_list.groupby(['Artist'], as_index=False).count()
#find the top 10 artists
top_artists = artists_count.sort_values(by='Number', ascending=False).head(10)
top_artists = top_artists.reset_index().drop(['index', 'Album','Year','Genre','Subgenre', 'Genre_Refined', 'Subgenre_Refined'], axis=1)
top_artists.head(10)
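# (Added cross-check.) The same top-10 ranking can be obtained directly with value_counts().
print(album_list['Artist'].value_counts().head(10))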
#get the artists and corresponding counts into two lists for plotting
top_artists_list = top_artists.Artist.values.tolist()
top_artists_count = top_artists.Number.astype(float).values.tolist()
#visualize the data using bokeh
#output_file("top_artists.html", title="top artists")
p = figure(x_range=top_artists_list, plot_height = 500, plot_width = 500)
#set x-axis properties
p.xgrid.visible = False
p.xaxis.major_label_orientation = 3.14/4
p.xaxis.axis_label = 'Artist Name'
#set y-axis properties
p.ygrid.visible = False
p.yaxis.axis_label = 'Album Count'
#draw circles
p.circle(y=top_artists_count, x=top_artists_list, size=15, fill_color="black")
show(p)
#get count of albums in each year
yearwise_albums = album_list.groupby(['Year'], as_index=False).count()
yearwise_albums = yearwise_albums.sort_values(by='Year').reset_index().drop(['index', 'Album', 'Artist','Genre','Subgenre','Subgenre_Refined','Genre_Refined'], axis=1)
yearwise_albums.head()
#visulaizing the data using bokeh line graphs
#output_file("yearwise_albums.html", title="yearwise_albums")
line = Line(data=yearwise_albums, x='Year', y='Number')
line.yaxis.axis_label = 'Number of Albums'
show(line)
#pivot the data and get a subset of the pivoted data where each subgenre has a count of more than 5
pivoted = pd.pivot_table(album_list, index=['Genre_Refined', 'Subgenre_Refined'], values=['Number'], aggfunc='count')
pivoted_subset = pivoted[pivoted['Number'] > 5]
pivoted_subset = pivoted_subset.reset_index()
pivoted_subset
#visualizing the data using the bokeh donut chart
#output_file("donut.html", title="donut")
from bokeh.palettes import Purples9 as palette1
palette1 = palette1[::-1]
d = Donut(pivoted_subset, label=['Genre_Refined', 'Subgenre_Refined'], values='Number',
text_font_size='15pt', plot_height=1000, plot_width=1000, palette=palette1)
show(d)
#getting yearwise data for each genre
yearwise_data = album_list.groupby(['Year', 'Genre_Refined'], as_index=False).count()
yearwise_data = yearwise_data.sort_values(by='Year').reset_index().drop(['index', 'Album', 'Artist','Genre','Subgenre','Subgenre_Refined'], axis=1)
yearwise_data.head(25)
#visualizing the data using a bokeh heatmap
#output_file("yearwise_genre.html", title="yearwise_subgenre")
from bokeh.palettes import Reds9 as palette2
palette2 = palette2[::-1]
hm_year = HeatMap(yearwise_data, x='Year', y='Genre_Refined', values='Number', stat=None,
width=750, plot_height=500, palette=palette2)
#y-axis properties
hm_year.yaxis.axis_label = 'Genre'
hm_year.yaxis.major_label_orientation = 'horizontal'
show(hm_year)
#count subgenres yearwise and subset it for rock music
yearwise_subgenres = album_list.groupby(['Year', 'Genre_Refined', 'Subgenre_Refined'], as_index=False).count()
rock_subgenres_yearwise = yearwise_subgenres[yearwise_subgenres['Genre_Refined'] == 'Rock'].reset_index().drop(['index', 'Album', 'Artist','Genre','Subgenre'], axis=1)
rock_subgenres_yearwise.head()
#visualizing the data using bokeh scatterplot
#output_file("rock_subgenres_yearwise.html", title="rock_subgenres_yearwise")
hm_rock_subgenres = Scatter(rock_subgenres_yearwise, x='Year', y='Subgenre_Refined', width=800, plot_height=800)
#x-axis properties
hm_rock_subgenres.xgrid.visible = False
#y-axis properties
hm_rock_subgenres.yaxis.major_label_orientation = 'horizontal'
hm_rock_subgenres.yaxis.axis_label = 'Subgenres of Rock'
hm_rock_subgenres.ygrid.visible = False
show(hm_rock_subgenres)
#top 10 albums
top_albums = album_list.head(10)
#Get artists and albums into a new data frame
top_albums_a = top_albums['Artist']
top_albums_b = top_albums['Album']
top_albums_final = pd.concat([top_albums_a, top_albums_b], axis=1)
#groupby and summarize
top_albums_chart = top_albums_final.groupby(['Artist', 'Album']).count()
top_albums_chart
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Problem statement
Step2: Q1
Step3: Q2
Step4: Q3
Step5: The underlying question is
Step6: Q2
Step7: Q3
Step8: Q4
Step9: Q5
Step10: Q6
|
<ASSISTANT_TASK:>
Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
from sklearn.datasets import load_iris as load_data
from pandas import DataFrame
data = load_data()
df = DataFrame(data.data, columns=data.feature_names)
df['fleur'] = [data.target_names[t] for t in data.target]
df.tail()
from io import StringIO
buffer = StringIO()
df.to_csv(buffer, index=False)
text = buffer.getvalue()
text[:300]
r = df.to_json(orient='records')
r[:400]
locations = {'virginica': ['Florida', 'Georgia'],
'setosa': ['Maine', 'Alaska', 'Quebec'],
'versicolor': ['Quebec', 'Georgia', 'Ireland', 'Main']}
from io import StringIO
buffer = StringIO()
df.to_csv(buffer, index=False)
text = buffer.getvalue()
text[:300]
df.to_csv("fleurs.csv", index=False)
import os
os.listdir(".")
import pandas
df2 = pandas.read_csv("fleurs.csv")
df2.head()
virtuel = StringIO(text)
df3 = pandas.read_csv(virtuel)
df3.head()
json_text = df.to_json(orient='records')
json_text[:400]
import json
res = json.loads(json_text)
for i, r in enumerate(res):
print(i, type(r), r)
if i >= 5:
break
res[3]['sepal width (cm)']
virtuel = StringIO(json_text)
res2 = json.load(virtuel)
res2[:3]
html_text = df.to_html(index=False)
print(html_text[:500])
df_html = pandas.read_html(html_text)
df_html[0].tail()
df_html = pandas.read_html(html_text + html_text)
len(df_html)
df.head()
locations = {'virginica': ['Florida', 'Georgia'],
'setosa': ['Maine', 'Alaska', 'Quebec'],
'versicolor': ['Quebec', 'Georgia', 'Ireland', 'Main']}
obs = []
for fleur, loc in locations.items():
for l in loc:
obs.append({"fleur": fleur, "location": l})
obs
df_locations = pandas.DataFrame(obs)
df_locations
merged = df.merge(df_locations, left_on="fleur", right_on="fleur")
merged.head(10)
merged.shape
locations
obs2 = []
for fleur, loc in locations.items():
obs2.append({"fleur": fleur, "location": loc})
obs2
df_locations2 = pandas.DataFrame(obs2)
df_locations2
merged = df.merge(df_locations2, left_on="fleur", right_on="fleur")
merged.head(10)
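# (Added aside.) A recent pandas can also expand the list-valued column directly with
# explode(), an alternative to building the observation list by hand as done above.
df_locations2.explode('location')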
json_text = merged.to_json(orient='records')
json_text[:200]
df.to_excel("data.xlsx", index=False)
dfe = pandas.read_excel("data.xlsx", engine='openpyxl')
dfe.tail()
from zipfile import ZipFile
with ZipFile('data.zip', 'w') as myzip:
myzip.write('data.xlsx')
myzip.write("2020_json_xml.ipynb")
import glob
glob.glob("*.zip")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's start by generating a random network using the Delaunay class. This will represent an imported network
Step2: This network generator adds nicely defined boundary pores around the edges/faces of the network. Let's remove these for the sake of this example
Step3: This network does not have any geometrical properties on it when generated. To mimic the situation of an imported network, let's manually enter some values for 'pore.diameter'. We'll just assign random numbers to illustrate the point
Step4: Now when we print the network we'll see all the topological data ('pore.coords' and 'throat.conns'), all the labels that were added by the generator (e.g. 'pore.left'), as well as the new geometry info we just added ('pore.diameter')
Step5: OpenPNM was designed to work by assigning geometrical information to Geometry objects. The presence of 'pore.diameter' on the network can be a problem in some cases. For instance, let's add some boundary pores to the left edge
Step6: Visualizing this network shows the newly added pores where we intended
Step7: Now we have internal pores (red) and boundary pores (green). We would like to assign geometrical information to the boundary pores that we just created. This is typically done by creating a Geometry object, then either assigning numerical values or attaching a pore-scale model that calculates the values. The problem is that OpenPNM prevents you from having 'pore.diameter' on the network AND a geometry object at the same time.
Step8: Now when we try to assign 'pore.diameter', we'll get the following exception (The "try-except" structure is used for the purpose of this notebook example, but is not needed in an actual script)
Step9: The solution is to remove the geometrical information from the network before adding the boundary pores, and place them on their own geometry. In this example it is easy to transfer the 'pore.diameter' array, but in the case of a real extracted network there could be quite a few arrays to move. OpenPNM has a facility for doing this
Step10: Here we pass the network to the Imported geometry class. This class literally removes all numerical data from the network to itself. Everything is moved except topological info ('pore.coords' and 'throat.conns') and labels ('pore.left').
Step11: Printing geo reveals that the 'pore.diameter' array has been transferred from the network automatically
Step12: Now that the geometrical information is properly assigned to a geometry object, we can now use OpenPNM as intended. Let's extend this network by adding a single new pore.
Step13: The new pore can clearly be seen outside the top-right corner of the domain.
Step14: We can now create a geometry just for this single pore and we will be free to add any properties we wish
Step15: Note that the network has the ability to fetch the 'pore.diameter' array from the geometry sub-domain object and create a single full array containing the values from all the locations. In the printout below we can see the value of 2.0 in the very last element, which is where new pores are added to the list.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import openpnm as op
import matplotlib.pyplot as plt
ws = op.Workspace()
ws.settings['loglevel'] = 50 # Supress warnings, but see error messages
np.random.seed(0)
pn = op.network.Delaunay(shape=[1, 1, 0], points=100)
op.topotools.trim(network=pn, pores=pn.pores('boundary'))
fig, ax = plt.subplots(1, 1, figsize=[5, 5])
op.topotools.plot_coordinates(network=pn, c='r', ax=ax)
op.topotools.plot_connections(network=pn, ax=ax)
pn['pore.diameter'] = np.random.rand(pn.Np)
print(pn)
Ps = pn['pore.surface']*(pn['pore.coords'][:, 0] < 0.1)
Ps = pn.toindices(Ps)
op.topotools.add_boundary_pores(network=pn, pores=Ps,
move_to=[0, None, None],
apply_label='left')
fig, ax = plt.subplots(figsize=[7, 7])
ax = op.topotools.plot_coordinates(network=pn, pores=pn.pores('left', mode='not'), c='r', ax=ax)
ax = op.topotools.plot_coordinates(network=pn, pores=pn.pores('left'), c='g', ax=ax)
ax = op.topotools.plot_connections(network=pn, ax=ax)
Ps = pn.pores('left')
Ts = pn.find_neighbor_throats(pores=Ps)
geo_bndry = op.geometry.GenericGeometry(network=pn, pores=Ps, throats=Ts)
try:
geo_bndry['pore.diameter'] = 0
except Exception as e:
print(e)
pn = op.network.Delaunay(shape=[1, 1, 0], points=100)
pn['pore.diameter'] = np.random.rand(pn.Np)
geo = op.geometry.Imported(network=pn)
print(geo)
op.topotools.extend(network=pn, pore_coords = [[1.2, 1.2, 0]], labels='new')
fig, ax = plt.subplots(figsize=[7, 7])
fig = op.topotools.plot_coordinates(network=pn, pores=pn.pores('left', mode='not'), c='r', ax=ax)
fig = op.topotools.plot_coordinates(network=pn, pores=pn.pores('left'), c='g', ax=ax)
fig = op.topotools.plot_connections(network=pn, ax=ax)
geo2 = op.geometry.GenericGeometry(network=pn, pores=pn.pores('new'))
geo2['pore.diameter'] = 2.0
print(geo2)
print(pn['pore.diameter'])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create Dotstar object
Step2: Class Methods
|
<ASSISTANT_TASK:>
Python Code:
from pyDrivers import dotstar
ds = dotstar.Dotstar(led_count=72*3,init_brightness=0)
while True:
for current_led in range (4, ds.led_count-4):
ds.set(current_led-4, 0, 0, 0, 0)
ds.set(current_led-2, 10, 100, 0, 0)
ds.set(current_led-1, 50, 200, 0, 0)
ds.set(current_led, 50, 250, 0, 0)
ds.set(current_led+1, 50, 200, 0, 0)
ds.set(current_led+2, 50, 150, 0, 0)
ds.set(current_led+4, 0, 0, 0, 0)
ds.draw()
for current_led in range(ds.led_count-5, 4, -1):
ds.set(current_led-3,10,100,0,0)
ds.set(current_led-2,10,150,0,0)
ds.set(current_led-1,50,200,0,0)
ds.set(current_led,50,250,0,0)
ds.set(current_led+1,50,200,0,0)
ds.set(current_led+2,50,150,0,0)
ds.set(current_led+4,0,0,0,0)
ds.draw()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We're going to download a csv file. What should we name it?
Step2: Now we need a URL to a CSV file out on the Internet.
Step3: The output shows we successfully downloaded the file and saved it
Step4: We will use the writer method to write data to a file by passing in the name of the new file as the first argument and delimiter as the second.
|
<ASSISTANT_TASK:>
Python Code:
from urllib.request import urlretrieve
import csv
downloaded_file = "banklist.csv"
urlretrieve("https://s3.amazonaws.com/datanicar/banklist.csv", downloaded_file)
filtered_file = open('california_banks.csv', 'w', newline='')
# create our output
output = csv.writer(filtered_file, delimiter=',')
# open our downloaded file
with open(downloaded_file, 'r') as file:
# use python's csv reader to access the contents
# and create an object that represents the data
csv_data = csv.reader(file)
# write our header row to the output csv
header_row = next(csv_data)
print(header_row)
output.writerow(header_row)
# loop through each row of the csv
for row in csv_data:
# now we're going to use an IF statement
# to find items where the state field
# is equal to California
if row[2] == 'CA':
# write the row to the new csv file
output.writerow(row)
# and print the row to the terminal
print(row)
# print the data type to the terminal
print(type(row))
# print the length of the row to the terminal
print(len(row))
# otherwise continue on
else:
continue
# close the output file
filtered_file.close()
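# (Added aside.) csv.DictReader yields each row as a dictionary keyed by the header,
# which avoids the positional row[2] lookup used above. The 'ST' key is an assumption
# based on this file's header (the state sits in the third column).
with open(downloaded_file, 'r') as file:
    ca_rows = [row for row in csv.DictReader(file) if row['ST'] == 'CA']
print(len(ca_rows), 'California banks found')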
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 3.1 Facebook
Step2: 3.1.1 Scrape with HTTP requests
Step3: 3.1.1.2 Get posts
Step4: 3.1.2 Scrape with Facebook SDK
Step5: 3.2 Twitter
Step6: The code is much simpler for Twitter than Facebook because Tweepy handles much of the dirty work, like paging.
Step7: 4 Prepare and save data
Step8: Now that we collected everything, let's save it in two SQLite databases.
Step9: 5 Data analysis
Step10: 5.1 Number of posts
Step11: 5.2 Most liked
Step12: 5.3 Engagement
Step13: 5.4 Text length
Step14: 5.5 Posting time
Step15: Let's look at whether the time of posting influences the number of likes. Do you see a peak at 5am? Do you really think we should post at 5am? What's going on here?
|
<ASSISTANT_TASK:>
Python Code:
# Number of posts / tweets to retrieve.
# Small value for development, then increase to collect final data.
n = 4000 # 20
import configparser
# Read the confidential token.
credentials = configparser.ConfigParser()
credentials.read('credentials.ini')
token = credentials.get('facebook', 'token')
# Or token = 'YOUR-FB-ACCESS-TOKEN'
import requests # pip install requests
import facebook # pip install facebook-sdk
import pandas as pd
page = 'EPFL.ch'
# 1. Form URL.
url = 'https://graph.facebook.com/{}?fields=likes&access_token={}'.format(page, token)
#print(url)
# 2. Get data.
data = requests.get(url).json()
print('data:', data)
# Optionally, check for errors. Most probably the session has expired.
if 'error' in data.keys():
raise Exception(data)
# 3. Extract data.
print('{} has {} likes'.format(page, data['likes']))
# 1. Form URL. You can click that url and see the returned JSON in your browser.
fields = 'id,created_time,message,likes.limit(0).summary(1),comments.limit(0).summary(1)'
url = 'https://graph.facebook.com/{}/posts?fields={}&access_token={}'.format(page, fields, token)
#print(url)
# Create the pandas DataFrame, a table which columns are post id, message, created time, #likes and #comments.
fb = pd.DataFrame(columns=['id', 'text', 'time', 'likes', 'comments'])
# The outer loop is to query FB multiple times, as FB sends at most 100 posts at a time.
while len(fb) < n:
# 2. Get the data from FB. At most 100 posts.
posts = requests.get(url).json()
# 3. Here we extract information for each of the received post.
for post in posts['data']:
# The information is stored in a dictionary.
serie = dict(id=post['id'], time=post['created_time'])
try:
serie['text'] = post['message']
except KeyError:
# Let's say we are not interested in posts without text.
continue
serie['likes'] = post['likes']['summary']['total_count']
serie['comments'] = post['comments']['summary']['total_count']
# Add the dictionary as a new line to our pandas DataFrame.
fb = fb.append(serie, ignore_index=True)
try:
# That URL is returned by FB to access the next 'page', i.e. the next 100 posts.
url = posts['paging']['next']
except KeyError:
# No more posts.
break
fb[:5]
g = facebook.GraphAPI(token, version='2.7')
# We limit to 10 because it's slow.
posts = g.get_connections(page, 'posts', limit=10)
if 'error' in posts.keys():
# Most probably the session has expired.
raise Exception(data)
for post in posts['data']:
pid = post['id']
try:
text = post['message']
except KeyError:
continue
time = post['created_time']
likes = g.get_connections(pid, 'likes', summary=True, limit=0)
nlikes = likes['summary']['total_count']
comments = g.get_connections(pid, 'comments', summary=True, limit=0)
ncomments = comments['summary']['total_count']
print('{:6d} {:6d} {} {}'.format(nlikes, ncomments, time, text[:50]))
import tweepy # pip install tweepy
auth = tweepy.OAuthHandler(credentials.get('twitter', 'consumer_key'), credentials.get('twitter', 'consumer_secret'))
auth.set_access_token(credentials.get('twitter', 'access_token'), credentials.get('twitter', 'access_secret'))
api = tweepy.API(auth)
user = 'EPFL_en'
followers = api.get_user(user).followers_count
print('{} has {} followers'.format(user, followers))
tw = pd.DataFrame(columns=['id', 'text', 'time', 'likes', 'shares'])
for tweet in tweepy.Cursor(api.user_timeline, screen_name=user).items(n):
serie = dict(id=tweet.id, text=tweet.text, time=tweet.created_at)
serie.update(dict(likes=tweet.favorite_count, shares=tweet.retweet_count))
tw = tw.append(serie, ignore_index=True)
#fb.id = fb.id.astype(int)
fb.likes = fb.likes.astype(int)
fb.comments = fb.comments.astype(int)
tw.id = tw.id.astype(int)
tw.likes = tw.likes.astype(int)
tw.shares = tw.shares.astype(int)
from datetime import datetime
def convert_time(row):
return datetime.strptime(row['time'], '%Y-%m-%dT%H:%M:%S+0000')
fb['time'] = fb.apply(convert_time, axis=1)
from IPython.display import display
display(fb[:5])
display(tw[:5])
import os
folder = os.path.join('..', 'data', 'social_media')
try:
os.makedirs(folder)
except FileExistsError:
pass
filename = os.path.join(folder, 'facebook.sqlite')
fb.to_sql('facebook', 'sqlite:///' + filename, if_exists='replace')
filename = os.path.join(folder, 'twitter.sqlite')
tw.to_sql('twitter', 'sqlite:///' + filename, if_exists='replace')
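# (Added check.) Read one table back with sqlite3 to verify the round trip to disk.
import sqlite3
with sqlite3.connect(os.path.join(folder, 'facebook.sqlite')) as connection:
    print(pd.read_sql('SELECT * FROM facebook', connection).shape)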
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('ggplot')
%matplotlib inline
date = datetime(2016, 9, 4)
datestr = date.strftime('%Y-%m-%d')
print('Number of posts after {}: {}'.format(datestr, sum(fb.time > date)))
print('Number of tweets after {}: {}'.format(datestr, sum(tw.time > date)))
display(fb.sort_values(by='likes', ascending=False)[:5])
display(tw.sort_values(by='likes', ascending=False)[:5])
pd.concat([fb.describe(), tw.loc[:,'likes':'shares'].describe()], axis=1)
fig, axs = plt.subplots(1, 4, figsize=(15, 5))
fb.likes.plot(kind='box', ax=axs[0]);
fb.comments.plot(kind='box', ax=axs[1]);
tw.likes.plot(kind='box', ax=axs[2]);
tw.shares.plot(kind='box', ax=axs[3]);
fb.hist(bins=20, log=True, figsize=(15, 5));
fig, axs = plt.subplots(1, 2, figsize=(15, 5))
tw.loc[:,'likes'].hist(bins=20, log=True, ax=axs[0]);
tw.loc[tw.shares < 200, 'shares'].hist(bins=20, log=True, ax=axs[1]);
def text_length(texts):
lengths = np.empty(len(texts), dtype=int)
for i, text in enumerate(texts):
lengths[i] = len(text)
plt.figure(figsize=(15, 5))
prop = lengths.min(), '{:.2f}'.format(lengths.mean()), lengths.max()
plt.title('min = {}, mean={}, max = {}'.format(*prop))
plt.hist(lengths, bins=20)
text_length(tw.text)
text_length(fb.text)
fb.id.groupby(fb.time.dt.hour).count().plot(kind='bar', alpha=0.4, color='y', figsize=(15,5));
tw.id.groupby(tw.time.dt.hour).count().plot(kind='bar', alpha=0.4, color='g', figsize=(15,5));
fb.likes.groupby(fb.time.dt.hour).mean().plot(kind='bar', figsize=(15,5));
plt.figure()
tw.likes.groupby(tw.time.dt.hour).mean().plot(kind='bar', figsize=(15,5));
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's define a function as an example; if we define it and use it, we get the expected result
Step2: However this is quite boring. What if we want to probe the function for interesting values?
Step3: Note that the slider starts at the value we gave it, which sits at the midpoint of the range. Also, if we only give it integer values, the program will assume we are not interested in floating-point values.
Step4: Exercise
Step5: Interactive plots
Step6: In this case I will use a rotation about $z$ and a translation along the $x$ axis, so I define these functions
Step7: And I will write a robot function, which takes my variables (that is, the degrees of freedom) as arguments and uses these values to compute the transformations needed to obtain the positions of each joint and of the end effector. Once I have all the positions, I only have to plot them to be able to manipulate this plot
Step8: In this case I use the value range (0, τ) since it covers the full turn.
|
<ASSISTANT_TASK:>
Python Code:
# Import IPython widgets to interact with the function
from ipywidgets import interact, fixed
# If the previous line does not work, you can uncomment the following line instead
#from IPython.html.widgets import interact, fixed
# Define the constant τ, which represents the number of radians
# in a full turn
from numpy import pi
τ = 2*pi
f = lambda x: x**2 + 5
f(1)
i1 = interact(f, x=5)
i2 = interact(f, x=(0, τ))
# YOUR CODE HERE
raise NotImplementedError()
i3 = interact(g, x=(0.0, 2.0))
from nose.tools import assert_almost_equal
assert_almost_equal(i3.widget.result, 0, 0)
%matplotlib inline
from matplotlib.pyplot import figure, plot, style
from mpl_toolkits.mplot3d import Axes3D
style.use("ggplot")
def r_z(θ):
# Import the required functions from the numpy library
from numpy import matrix, sin, cos
# Compute the transformation matrix to be returned
A = matrix([[cos(θ), -sin(θ), 0, 0],
[sin(θ), cos(θ), 0, 0],
[0, 0, 1, 0],
[0, 0, 0, 1]])
return A
def t_x(x):
# Import the required functions from the numpy library
from numpy import matrix, sin, cos
# Compute the transformation matrix to be returned
A = matrix([[1, 0, 0, x],
[0, 1, 0, 0],
[0, 0, 1, 0],
[0, 0, 0, 1]])
return A
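# (Added sanity check.) A quarter turn about z should send the point (1, 0, 0) to (0, 1, 0),
# up to floating-point error.
from numpy import matrix
print(r_z(τ/4)*matrix([[1], [0], [0], [1]]))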
def robot(q1=0, q2=0):
'''
This function computes and plots the forward kinematics of a two-degree-of-freedom
planar rotational manipulator (a double pendulum).
It requires matplotlib to be initialized, at least by importing the figure function.
'''
# Import the required functions from the numpy library
from numpy import matrix, sin, cos
# Define constants
l1 = 1
l2 = 1
# Define the origin point
o0 = matrix([[0], [0], [0], [1]])
# Compute the transformation matrices H1 and H2
H1 = r_z(q1)*t_x(l1)
H2 = r_z(q2)*t_x(l2)
# Compute the forward kinematics
o1 = H1*o0
o2 = H1*H2*o0
# Build arrays with the x, y and z coordinates of each point
xs = [o0.item(0), o1.item(0), o2.item(0)]
ys = [o0.item(1), o1.item(1), o2.item(1)]
zs = [o0.item(2), o1.item(2), o2.item(2)]
# Define the overall figure in which the plot is drawn
f1 = figure(figsize=(8, 8))
# Add the plotting area to our figure and define it as a three-dimensional space
a1 = f1.add_subplot(111, projection='3d')
# Use the data in xs, ys and zs to plot a line with a marker at each joint
a1.plot(xs, ys, zs, "-o")
# Define the plot limits on each axis
a1.set_xlim(-2.1, 2.1)
a1.set_ylim(-2.1, 2.1)
a1.set_zlim(-0.1, 1.1);
interact(robot, q1=(0, τ), q2=(0, τ))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Overriding the __repr__ method
Step2: IPython expands on this idea and allows objects to declare other, rich representations including
Step3: A few points
Step4: Images
Step5: Returning an Image object from an expression will automatically display it
Step6: An image can also be displayed from raw data or a URL.
Step8: HTML
Step9: You can also use the %%html cell magic to accomplish the same thing.
Step10: You can remove the abvove styling by using "Cell"$\rightarrow$"Current Output"$\rightarrow$"Clear" with that cell selected.
Step11: Pass a string of JavaScript source code to the JavaScript object and then display it.
Step12: The same thing can be accomplished using the %%javascript cell magic
Step14: Here is a more complicated example that loads d3.js from a CDN, uses the %%html magic to load CSS styles onto the page and then runs ones of the d3.js examples.
Step15: Audio
Step16: A NumPy array can be converted to audio. The Audio class normalizes and encodes the data and embeds the resulting audio in the Notebook.
Step17: Video
Step18: External sites
Step19: Links to local files
Step20: Alternatively, to generate links to all of the files in a directory, use the FileLinks object, passing '.' to indicate that we want links generated for the current working directory. Note that if there were other directories under the current directory, FileLinks would work in a recursive manner creating links to files in all sub-directories as well.
|
<ASSISTANT_TASK:>
Python Code:
class Ball(object):
pass
b = Ball()
b.__repr__()
print(b)
class Ball(object):
def __repr__(self):
return 'TEST'
b = Ball()
print(b)
from IPython.display import display
from IPython.display import (
display_pretty, display_html, display_jpeg,
display_png, display_json, display_latex, display_svg
)
from IPython.display import Image
i = Image(filename='./ipython-image.png')
display(i)
i
Image(url='http://python.org/images/python-logo.gif')
from IPython.display import HTML
s = """<table>
<tr>
<th>Header 1</th>
<th>Header 2</th>
</tr>
<tr>
<td>row 1, cell 1</td>
<td>row 1, cell 2</td>
</tr>
<tr>
<td>row 2, cell 1</td>
<td>row 2, cell 2</td>
</tr>
</table>"""
h = HTML(s)
display(h)
%%html
<table>
<tr>
<th>Header 1</th>
<th>Header 2</th>
</tr>
<tr>
<td>row 1, cell 1</td>
<td>row 1, cell 2</td>
</tr>
<tr>
<td>row 2, cell 1</td>
<td>row 2, cell 2</td>
</tr>
</table>
%%html
<style>
#notebook {
background-color: skyblue;
font-family: times new roman;
}
</style>
from IPython.display import Javascript
js = Javascript('alert("hi")');
display(js)
%%javascript
alert("hi");
Javascript(
$.getScript('https://cdnjs.cloudflare.com/ajax/libs/d3/3.2.2/d3.v3.min.js')
)
%%html
<style type="text/css">
circle {
fill: rgb(31, 119, 180);
fill-opacity: .25;
stroke: rgb(31, 119, 180);
stroke-width: 1px;
}
.leaf circle {
fill: #ff7f0e;
fill-opacity: 1;
}
text {
font: 10px sans-serif;
}
</style>
%%javascript
// element is the jQuery element we will append to
var e = element.get(0);
var diameter = 600,
format = d3.format(",d");
var pack = d3.layout.pack()
.size([diameter - 4, diameter - 4])
.value(function(d) { return d.size; });
var svg = d3.select(e).append("svg")
.attr("width", diameter)
.attr("height", diameter)
.append("g")
.attr("transform", "translate(2,2)");
d3.json("./flare.json", function(error, root) {
var node = svg.datum(root).selectAll(".node")
.data(pack.nodes)
.enter().append("g")
.attr("class", function(d) { return d.children ? "node" : "leaf node"; })
.attr("transform", function(d) { return "translate(" + d.x + "," + d.y + ")"; });
node.append("title")
.text(function(d) { return d.name + (d.children ? "" : ": " + format(d.size)); });
node.append("circle")
.attr("r", function(d) { return d.r; });
node.filter(function(d) { return !d.children; }).append("text")
.attr("dy", ".3em")
.style("text-anchor", "middle")
.text(function(d) { return d.name.substring(0, d.r / 3); });
});
d3.select(self.frameElement).style("height", diameter + "px");
from IPython.display import Audio
Audio("./scrubjay.mp3")
import numpy as np
max_time = 3
f1 = 120.0
f2 = 124.0
rate = 8000.0
L = 3
times = np.linspace(0,L,rate*L)
signal = np.sin(2*np.pi*f1*times) + np.sin(2*np.pi*f2*times)
Audio(data=signal, rate=rate)
from IPython.display import YouTubeVideo
YouTubeVideo('sjfsUzECqK0')
from IPython.display import IFrame
IFrame('https://ipython.org', width='100%', height=350)
from IPython.display import FileLink, FileLinks
FileLink('../Visualization/Matplotlib.ipynb')
FileLinks('./')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Uniform random variables are super important because they are the basis from which we generate other random variables, such as binomial, normal, exponential etc.
Step2: Notice that x_i is in the global space not the runif() space.
Step3: Let's try it out
Step4: Exercise
Step5: Exercise
Step6: Random variable density function estimate
|
<ASSISTANT_TASK:>
Python Code:
import random
print(random.random())
print(random.random())
print(random.random())
a = 16807
m = pow(2,31)-1
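# a = 16807 with m = 2**31 - 1 are the classic Park-Miller "minimal standard" LCG constants.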
DFLT_SEED = 666
x_i = DFLT_SEED # this is our x_i that changes each runif01() call
def runif01():
"Return a random value in U(0,1)"
global x_i
x_i = a * x_i % m
# display(callsviz(varnames=['a','m','x_i']))
return x_i / float(m)
from lolviz import callsviz
runif01()
[runif01() for i in range(4)]
def runif(a,b):
"Return a random value in U(a,b)"
if b<a: # swap
t = a
a = b
b = t
return runif01()*(b-a) + a
print([runif(0,10) for i in range(3)])
print([runif(5,6) for i in range(3)])
def setseed(s):
"Update the seed global variable but ensure seed > 0"
global x_i
if s <= 0: s = 666
x_i = s
setseed(501)
print([runif01() for i in range(3)])
print([runif(5,6) for i in range(3)])
import matplotlib.pyplot as plt
# jupyter notebook command (ignore)
%matplotlib inline
sample = [runif01() for i in range(5000)] # Get 5000 random variables
plt.figure(figsize=(4, 1.5))
plt.hist(sample, bins=10, density=True, alpha=0.3)
plt.xlabel('Random value from U(0,1)')
plt.ylabel('Probability')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the data from disk and set up the dataframes
Step3: Use fully_merged_messages_df and address_book_df for analysis, they contain all messages with columns for the sender and all contacts, respectively
Step4: Table and graph of who you text the most
Step5: Steamgraph
Step6: Draw the graph!
Step7: Wordcloud
Step8: Texts you've sent
Step9: Texts to/from a specific contact
Step10: Diving deeper into the actual text
Step11: Preprocessing and data munging for TFIDF
Step12: Create TFIDF matrix for all contacts
Step13: Helper methods to leverage the TFIDF matrix
Step14: Words that identify a specific contact
Step15: Words that identify the difference between two contacts
Step19: Looking at language progression over the years
Step20: My top words over the years
Step23: Top words over the years from/to a specific contact
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
from __future__ import division
import copy
import json
import re
import string
import matplotlib
import matplotlib.pyplot as plt
import pandas as pd
import seaborn # To improve the chart styling.
import wordtree
from IPython.display import display
from IPython.display import HTML
from IPython.display import Javascript
from wordcloud import STOPWORDS
import ipywidgets as widgets
from wordcloud import WordCloud
import iphone_connector
%matplotlib inline
matplotlib.style.use('ggplot')
pd.set_option('display.max_colwidth', 1000)
iphone_connector.initialize()
fully_merged_messages_df, address_book_df = iphone_connector.get_cleaned_fully_merged_messages()
full_names = set(address_book_df.full_name) # Handy set to check for misspellings later on.
fully_merged_messages_df.full_name.replace('nan nan nan', 'Unknown', inplace=True)
WORDS_PER_PAGE = 450 # Based upon http://wordstopages.com/
print('\nTotal pages if all texts were printed: {0:,d} (Arial size 12, single spaced)\n'.format(
sum(fully_merged_messages_df.text.apply(lambda x: len(x.split())))//WORDS_PER_PAGE))
fully_merged_messages_df = fully_merged_messages_df.reset_index(drop=True)
fully_merged_messages_df
address_book_df
def plot_year_month_heatmap(df, trim_incomplete=True, search_term=None, figsize=(18, 10)):
"""Plots a heatmap of the dataframe grouped by year and month.
Args:
df: The dataframe, must contain a column named `date`.
trim_incomplete: If true, don't plot rows that lack 12 full months of data. Default True.
search_term: A case insensitive term to require in all rows of the dataframe's `text`
column. Default None.
figsize: The size of the plot as a tuple. Default (18, 10).
"""
if search_term:
df = df[df['text'].str.contains(search_term, case=False)]
month_year_messages = pd.DataFrame(df['date'])
month_year_messages['year'] = month_year_messages.apply(lambda row: row.date.year, axis=1)
month_year_messages['month'] = month_year_messages.apply(lambda row: row.date.month, axis=1)
month_year_messages = month_year_messages.drop('date', axis=1)
month_year_messages_pivot = month_year_messages.pivot_table(index='year',
columns='month',
aggfunc=len, dropna=True)
if trim_incomplete:
month_year_messages_pivot = month_year_messages_pivot[month_year_messages_pivot.count(axis=1) == 12]
if month_year_messages_pivot.shape[0] == 0:
print('After trimming rows that didn\'t have 12 months, no rows remained, bailing out.')
return
f, ax = plt.subplots(figsize=figsize)
seaborn.heatmap(month_year_messages_pivot, annot=True, fmt=".0f", square=True, cmap="YlGnBu", ax=ax)
# Plot all text messages exchanges over the years.
plot_year_month_heatmap(fully_merged_messages_df, search_term='')
# Helper method to better support py2 and py3.
def convert_unicode_to_str_if_needed(unicode_or_str):
if type(unicode_or_str).__name__ == 'unicode':
return unicode_or_str.encode('utf-8')
return unicode_or_str
# Note "Unknown" means the number was not found in your address book.
def get_message_counts(dataframe):
return pd.Series({'Texts sent': dataframe[dataframe.is_from_me == 1].shape[0],
'Texts received': dataframe[dataframe.is_from_me == 0].shape[0],
'Texts exchanged': dataframe.shape[0]})
messages_grouped = fully_merged_messages_df.groupby('full_name').apply(get_message_counts)
messages_grouped = messages_grouped.sort_values(by='Texts exchanged', ascending=False)
widgets.interact(messages_grouped.head,
n=widgets.IntSlider(min=5, max=50, step=1, value=5, continuous_update=False,
description='Number of people to show:'))
# Helper method so we can wrap it with interact().
def _plot_most_common_text(top_n=10):
messages_grouped.head(top_n).plot(figsize=(20,10), kind='bar')
widgets.interact(_plot_most_common_text,
top_n=widgets.IntSlider(min=5, max=100, step=1, value=5, continuous_update=False,
description='Number of people to show:'))
# Restrict to the top N people you text the most so the steamgraph is legible.
TOP_N = 10 # Freely change this value.
sliced_df = fully_merged_messages_df[fully_merged_messages_df.full_name.isin(messages_grouped.head(TOP_N).index)]
grouped_by_month = sliced_df.groupby([
sliced_df.apply(lambda x: x.date.strftime('%Y/%m'), axis=1),
'full_name']
)['text'].count().to_frame()
grouped_by_month = grouped_by_month.sort_index()
# We create a dense dataframe for every year/month combination so even if a person didn't text in a specific
# year/month, we have a 0 so the steamgraph can propertly graph the value.
grouped_by_month_dense = grouped_by_month.unstack().fillna(0).stack()
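# unstack() -> fillna(0) -> stack() materialises an explicit 0 for every (month, person) pair,
# so the streamgraph gets a continuous series even for months in which a person sent no texts.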
# Dump the dataframe to a global JS variable so we can access it in our JS code.
# TODO(mdezube): Dump out as JSON instead.
formatted_for_steamgraph = grouped_by_month_dense.reset_index(level=1)
formatted_for_steamgraph.index.name = 'date'
formatted_for_steamgraph.columns = ['key', 'value']
Javascript("window.csvAsString='{}'".format(formatted_for_steamgraph.to_csv(index_label='date').replace('\n', '\\n')))
%%javascript
// Draw the streamgraph using d3.
element.append('<div class="chart" style="height:600px; width:100%"></div>')
element.append('<style>.axis path, .axis line' +
'{fill: none; stroke: #000;stroke-width: 2px; shape-rendering: crispEdges;}' +
'</style>')
element.append("<script src='d3.min.js'></script>")
element.append("<script src='colorbrewer.min.js'></script>")
element.append("<script src='steamgraph.js'></script>")
// Choose your favorite from https://bl.ocks.org/mbostock/5577023
var colorBrewerPalette = "Spectral";
// Set a timeout to let the JS scripts actually load into memory, this is a bit of a hack but works reliably.
setTimeout(function(){createSteamgraph(csvAsString, colorBrewerPalette)}, 200);
def generate_cloud(texts, max_words=30):
# Add more words here if you want to ignore them:
my_stopwords = STOPWORDS.copy()
my_stopwords.update(['go', 'ya', 'come', 'back', 'good', 'sound'])
words = ' '.join(texts).lower()
wordcloud = WordCloud(font_path='CabinSketch-Bold.ttf',
stopwords=my_stopwords,
background_color='black',
width=800,
height=600,
relative_scaling=1,
max_words=max_words
).generate_from_text(words)
print('Based on {0:,} texts'.format(len(texts)))
fig, ax = plt.subplots(figsize=(15,10))
ax.imshow(wordcloud)
ax.axis('off')
plt.show()
# Word cloud of the top 25 words I use based on the most recent 30,000 messages.
texts_from_me = fully_merged_messages_df[fully_merged_messages_df.is_from_me == 1].text[-30000:]
widgets.interact(
generate_cloud,
texts=widgets.fixed(texts_from_me),
max_words=widgets.IntSlider(min=5,max=50,step=1,value=10, continuous_update=False,
description='Max words to show:'))
def _word_cloud_specific_contact(max_words, from_me, contact):
contact = convert_unicode_to_str_if_needed(contact)
if contact not in full_names:
print('{} not found'.format(contact))
return
sliced_df = fully_merged_messages_df[(fully_merged_messages_df.full_name == contact) &
(fully_merged_messages_df.is_from_me == from_me)].text
generate_cloud(sliced_df, max_words)
widgets.interact(
_word_cloud_specific_contact,
max_words=widgets.IntSlider(min=5, max=50, step=1, value=10,
continuous_update=False, description='Max words to show:'),
from_me=widgets.RadioButtons(
options={'Show messages FROM me': True, 'Show messages TO me': False}, description=' '),
contact=widgets.Text(value='Mom', description='Contact name:')
)
# Note this requires an internet connection to load Google's JS library.
def get_json_for_word_tree(contact):
df = fully_merged_messages_df[(fully_merged_messages_df.full_name == contact)]
print('Exchanged {0:,} texts with {1}'.format(df.shape[0], contact))
array_for_json = [[text[1]] for text in df.text.iteritems()]
array_for_json.insert(0, [['Phrases']])
return json.dumps(array_for_json)
CONTACT_NAME = 'Mom'
ROOT_WORD = 'feel'
HTML(wordtree.get_word_tree_html(get_json_for_word_tree(CONTACT_NAME),
ROOT_WORD.lower(),
lowercase=True,
tree_type='double'))
punctuation = copy.copy(string.punctuation)
punctuation += u'“”‘’\ufffc\uff0c' # Include some UTF-8 punctuation that occurred.
punct_regex = re.compile(u'[{0}]'.format(punctuation))
spaces_regex = re.compile(r'\s{2,}')
numbers_regex = re.compile(r'\d+')
def clean_text(input_str):
processed = input_str.lower()
processed = punct_regex.sub('', processed)
# Also try: processed = numbers_regex.sub('_NUMBER_', processed)
processed = numbers_regex.sub('', processed)
processed = spaces_regex.sub(' ', processed)
return processed
# The normal stopwords list contains words like "i'll" which is unprocessed.
processed_stopwords = [clean_text(word) for word in STOPWORDS]
# Group the texts by person and collapse them into a single string per person.
grouped_by_name = fully_merged_messages_df[fully_merged_messages_df.is_from_me == 0].groupby(
'full_name')['text'].apply(lambda x: ' '.join(x)).to_frame()
grouped_by_name.info(memory_usage='deep')
grouped_by_name.head(1)
from sklearn.feature_extraction.text import TfidfVectorizer
from nltk import tokenize
import numpy as np
vectorizer = TfidfVectorizer(preprocessor=clean_text,
tokenizer=tokenize.WordPunctTokenizer().tokenize,
stop_words=processed_stopwords,
ngram_range=(1, 2), max_df=.9, max_features=50000)
tfidf_transformed_dataset = vectorizer.fit_transform(grouped_by_name.text)
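# Note: scikit-learn 1.0+ deprecates get_feature_names() in favour of get_feature_names_out().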
word_list = pd.Series(vectorizer.get_feature_names())
print('TFIDF sparse matrix is {0}MB'.format(tfidf_transformed_dataset.data.nbytes / 1024 / 1024))
print('TFIDF matrix has shape: {0}'.format(tfidf_transformed_dataset.shape))
def get_word_summary_for_contact(contact, top_n=25):
contact = convert_unicode_to_str_if_needed(contact)
tfidf_record = _get_tfidf_record_for_contact(contact)
if tfidf_record is None:
print('"{0}" was not found.'.format(contact))
return
sorted_indices = tfidf_record.argsort()[::-1]
return pd.DataFrame({'Word': word_list.iloc[sorted_indices[:top_n]]}).reset_index(drop=True)
def get_word_summary_for_diffs(contact, other_contact, top_n=25):
contact = convert_unicode_to_str_if_needed(contact)
other_contact = convert_unicode_to_str_if_needed(other_contact)
tfidf_record_contact = _get_tfidf_record_for_contact(contact)
tfidf_record_other_contact = _get_tfidf_record_for_contact(other_contact)
if tfidf_record_contact is None or tfidf_record_other_contact is None:
# Print out the first contact not found.
contact_not_found = contact if tfidf_record_contact is None else other_contact
print('"{0}" was not found.'.format(contact_not_found))
return
sorted_indices = (tfidf_record_contact - tfidf_record_other_contact).argsort()[::-1]
return pd.DataFrame({'Word': word_list.iloc[sorted_indices[:top_n]]}).reset_index(drop=True)
# Returns the row in the TFIDF matrix for a given contact by name.
def _get_tfidf_record_for_contact(contact):
if contact not in grouped_by_name.index:
return None
row = np.argmax(grouped_by_name.index == contact)
return tfidf_transformed_dataset.getrow(row).toarray().squeeze()
widgets.interact(
get_word_summary_for_contact,
contact=widgets.Text(value='Mom', description='Contact name:', placeholder='Enter name'),
top_n=widgets.IntSlider(min=10, max=100, step=1, value=5, description='Max words to show:')
)
widgets.interact(
get_word_summary_for_diffs,
contact=widgets.Text(description='1st Contact:', placeholder='Enter 1st name'),
other_contact=widgets.Text(description='2nd Contact:', placeholder='Enter 2nd name'),
top_n=widgets.IntSlider(description='Max words to show:', min=10, max=100, step=1, value=5)
)
def top_words_by_year_from_tfidf(tfidf_by_year, years_as_list, top_n=15):
"""Returns a dataframe of the top words for each year by their TFIDF score.
To determine the "top", we look at one year's TFIDF - avg(other years' TFIDFs).
Args:
tfidf_by_year: TFIDF matrix with as many rows as entries in years_as_list
years_as_list: Years that are represented in the TFIDF matrix
top_n: Number of top words per year to include in the result
"""
# Densify the tfidf matrix so we can operate on it.
tfidf_by_year_dense = tfidf_by_year.toarray()
df_by_year = []
for i in range(tfidf_by_year_dense.shape[0]):
this_year = years_as_list[i]
tfidf_this_year = tfidf_by_year_dense[i]
tfidf_other_years = np.delete(tfidf_by_year_dense, i, axis=0).mean(axis=0)
sorted_indices = (tfidf_this_year - tfidf_other_years).argsort()[::-1]
df = pd.DataFrame({this_year: word_list.iloc[sorted_indices[:top_n]]})
df = df.reset_index(drop=True)
df_by_year.append(df)
return pd.concat(df_by_year, axis=1)
def top_words_by_year_from_df(slice_of_texts_df, top_n=15, min_texts_required=100):
"""Returns a dataframe of the top words for each year by their TFIDF score.
Top is determined by the `top_words_by_year_from_tfidf` method.
Args:
slice_of_texts_df: A dataframe with the text messages to process
top_n: Number of top words per year to include in the result
min_texts_required: Number of texts to require in each year to not drop the record
"""
grouped_by_year_tfidf, years = _tfidf_by_year(slice_of_texts_df, min_texts_required)
return top_words_by_year_from_tfidf(grouped_by_year_tfidf, years, top_n)
def _tfidf_by_year(slice_of_texts_df, min_texts_required=100):
"""Returns a TFIDF matrix of the texts grouped by year.
Years with less than `min_texts_required` texts will be dropped.
"""
grouper = slice_of_texts_df.date.apply(lambda x: x.year)
grouped_by_year = slice_of_texts_df.groupby(grouper).apply(
lambda row: pd.Series({'count': len(row.date), 'text': ' '.join(row.text)})
)
# Drops years with less than min_texts_required texts since they won't be very meaningful.
years_to_drop = grouped_by_year[grouped_by_year['count'] < min_texts_required].index
print('Dropping year(s): {0}, each had fewer than {1} texts.'.format(
', '.join(str(year) for year in years_to_drop), min_texts_required))
grouped_by_year = grouped_by_year[grouped_by_year['count'] >= min_texts_required]
grouped_by_year.index.name = 'year'
if grouped_by_year.shape[0] == 0:
print('Bailing out, no years found with at least {0} texts.'.format(min_texts_required))
return None
grouped_by_year_tfidf = vectorizer.transform(grouped_by_year['text'])
print('Found {0} years with more than {1} texts each.'.format(grouped_by_year_tfidf.shape[0],
min_texts_required))
return grouped_by_year_tfidf, grouped_by_year.index
top_words_by_year_from_df(fully_merged_messages_df[fully_merged_messages_df.is_from_me == 1],
top_n=15)
# Wrapper method so we can use interact().
def _top_words_by_year_for_contact(contact, from_me, top_n):
contact = convert_unicode_to_str_if_needed(contact)
if contact not in full_names:
print('"{0}" not found'.format(contact))
return
# Slice to texts from/to the contact.
df = fully_merged_messages_df[(fully_merged_messages_df.is_from_me == from_me) &
(fully_merged_messages_df.full_name == contact)]
return top_words_by_year_from_df(df, top_n)
widgets.interact(
_top_words_by_year_for_contact,
contact=widgets.Text(value='Mom', description='Contact name:', placeholder='Enter name'),
from_me=widgets.RadioButtons(
options={'Show messages FROM me': True, 'Show messages TO me': False}, description=' '),
top_n=widgets.IntSlider(min=15, max=100, step=1, value=5, description='Max words to show:')
)
from sklearn.cluster import KMeans
from sklearn.decomposition import TruncatedSVD
def _top_words_by_cluster_from_tfidf(
cluster_id,
tfidf_per_sender,
cluster_for_tfidf_index,
top_n=15,
):
"""Returns the indices (into the vectorizer vocabulary) of the top words for one cluster by TFIDF score.
To determine the "top", we look at one cluster's TFIDF - avg(other clusters' TFIDFs).
Args:
cluster_id: The cluster we want to find the top words for (referred to as "given cluster")
tfidf_per_sender: TFIDF matrix with as many rows as entries in cluster_for_tfidf_index
cluster_for_tfidf_index: Cluster assignment for each entry in tfidf_per_sender
top_n: Number of top words per cluster to include in the result
"""
# First, we separate the given cluster we want to consider from all other entries.
this_cluster_records = tfidf_per_sender[cluster_for_tfidf_index == cluster_id]
other_cluster_records = tfidf_per_sender[cluster_for_tfidf_index != cluster_id]
# Next, we calculate the mean for each: the given cluster and the rest of the corpus
mean_this_cluster = np.asarray(this_cluster_records.mean(axis=0)).squeeze()
mean_other_cluster = np.asarray(other_cluster_records.mean(axis=0)).squeeze()
# Finally, we identify the words for which the given cluster shows the biggest difference.
difference = mean_this_cluster - mean_other_cluster
most_different_indicies = difference.argsort()
# Only display top_n
return most_different_indicies[::-1][:top_n]
def _tfidf_by_sender(messages_df, min_texts_required=100):
"""Returns a TFIDF matrix of the texts grouped by sender, plus the matching sender index.
Message exchanges with less than `min_texts_required` texts will be dropped.
"""
# First we group messages by name, then we merge each conversation into one string.
grouped_by_name = messages_df.groupby("full_name").apply(
lambda row: pd.Series({'count': len(row.full_name), 'text': ' '.join(row.text)})
)
# Drop all conversations that don't meet the requirements for minimum number of messages.
grouped_by_name = grouped_by_name[grouped_by_name['count'] >= min_texts_required]
grouped_by_name.index.name = 'full_name'
# Bail if we have no data
if grouped_by_name.shape[0] == 0:
print('Bailing out, no conversations found with at least {0} texts.'.format(min_texts_required))
return None
grouped_by_name_tfidf = vectorizer.transform(grouped_by_name['text'])
print('Found {0} conversations with at least {1} texts each.'.format(grouped_by_name_tfidf.shape[0],
min_texts_required))
return grouped_by_name_tfidf, grouped_by_name.index
# Get the TFIDF vector for each data point and the list of receivers.
tfidf_per_sender, names_sender = _tfidf_by_sender(fully_merged_messages_df[fully_merged_messages_df.is_from_me == 0])
# First, we reduce the dimensionality of the dataset.
# This reduces the difference between the clusters found by KMeans and the 2D graphic of the clusters.
tfidf_sender_reduced_dim = TruncatedSVD(n_components=7).fit_transform(tfidf_per_sender)
# Let's run KMeans clustering on the data.
NUMBER_OF_CLUSTERS = 7
kmeans_tfidf_sender = KMeans(n_clusters=NUMBER_OF_CLUSTERS)
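# fit_transform() returns each point's distance to every centroid; argmin over axis 1 picks the
# nearest centroid, i.e. the cluster assignment (equivalent to using kmeans_tfidf_sender.labels_).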
tfidf_per_sender_cluster_assignment = kmeans_tfidf_sender.fit_transform(tfidf_sender_reduced_dim).argmin(axis=1)
# We further reduce the dimensionality of the data, so that we can graph it.
tfidf_per_sender_2d = TruncatedSVD(n_components=2).fit_transform(tfidf_sender_reduced_dim)
clustered_tfidf_by_sender_df = pd.DataFrame({
"x": tfidf_per_sender_2d[:,0],
"y": tfidf_per_sender_2d[:,1],
"name": names_sender,
"group": ["Cluster: " + str(e) for e in tfidf_per_sender_cluster_assignment],
})
clustered_tfidf_by_sender_df.head()
import plotly.offline as py
import plotly.figure_factory as ff
import plotly.graph_objs as go
py.init_notebook_mode(connected=True)
clusters = clustered_tfidf_by_sender_df.group.unique()
def plot_data(cluster_selection):
traces = []
top_words = None
if cluster_selection == "All":
clusters_to_plot = clusters
else:
clusters_to_plot = [cluster_selection]
top_words_indexes = _top_words_by_cluster_from_tfidf(
int(cluster_selection[-1]),
tfidf_per_sender,
tfidf_per_sender_cluster_assignment
)[0:10]
top_words = word_list.iloc[top_words_indexes].to_frame()
top_words.columns = ['Top Words In Cluster']
top_words = top_words.reset_index(drop=True)
for cluster in clusters_to_plot:
cluster_data = clustered_tfidf_by_sender_df[clustered_tfidf_by_sender_df.group == cluster]
scatter = go.Scatter(
x=cluster_data["x"],
y=cluster_data["y"],
text=cluster_data["name"],
mode = 'markers',
name=cluster
)
traces.append(scatter)
py.iplot(traces)
return top_words
cluster_selection = widgets.Dropdown(
options=["All"] + list(clusters),
value="All",
description="Cluster: "
)
print('We\'ve clustered your contacts by their word usage, hover over the dots to see which '
'cluster each person is in. Adjust the dropdown to restrict to a cluster.\nDots closer '
'to each other indicate the people talk similarly.')
widgets.interact(
plot_data,
cluster_selection=cluster_selection,
)
display(cluster_selection)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Note that a similar transformation can be applied with compute_ems
|
<ASSISTANT_TASK:>
Python Code:
# Author: Denis Engemann <denis.engemann@gmail.com>
# Jean-Remi King <jeanremi.king@gmail.com>
#
# License: BSD-3-Clause
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne import io, EvokedArray
from mne.datasets import sample
from mne.decoding import EMS, compute_ems
from sklearn.model_selection import StratifiedKFold
print(__doc__)
data_path = sample.data_path()
# Preprocess the data
meg_path = data_path / 'MEG' / 'sample'
raw_fname = meg_path / 'sample_audvis_filt-0-40_raw.fif'
event_fname = meg_path / 'sample_audvis_filt-0-40_raw-eve.fif'
event_ids = {'AudL': 1, 'VisL': 3}
# Read data and create epochs
raw = io.read_raw_fif(raw_fname, preload=True)
raw.filter(0.5, 45, fir_design='firwin')
events = mne.read_events(event_fname)
picks = mne.pick_types(raw.info, meg='grad', eeg=False, stim=False, eog=True,
exclude='bads')
epochs = mne.Epochs(raw, events, event_ids, tmin=-0.2, tmax=0.5, picks=picks,
baseline=None, reject=dict(grad=4000e-13, eog=150e-6),
preload=True)
epochs.drop_bad()
epochs.pick_types(meg='grad')
# Setup the data to use it a scikit-learn way:
X = epochs.get_data() # The MEG data
y = epochs.events[:, 2] # The conditions indices
n_epochs, n_channels, n_times = X.shape
# Initialize EMS transformer
ems = EMS()
# Initialize the variables of interest
X_transform = np.zeros((n_epochs, n_times)) # Data after EMS transformation
filters = list() # Spatial filters at each time point
# In the original paper, the cross-validation is a leave-one-out. However,
# we recommend using a Stratified KFold, because leave-one-out tends
# to overfit and cannot be used to estimate the variance of the
# prediction within a given fold.
for train, test in StratifiedKFold(n_splits=5).split(X, y):
# In the original paper, the z-scoring is applied outside the CV.
# However, we recommend to apply this preprocessing inside the CV.
# Note that such scaling should be done separately for each channels if the
# data contains multiple channel types.
X_scaled = X / np.std(X[train])
# Fit and store the spatial filters
ems.fit(X_scaled[train], y[train])
# Store filters for future plotting
filters.append(ems.filters_)
# Generate the transformed data
X_transform[test] = ems.transform(X_scaled[test])
# Average the spatial filters across folds
filters = np.mean(filters, axis=0)
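# Averaging the spatial filters across folds leaves one filter per time point,
# which is visualized further below as an Evoked topomap.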
# Plot individual trials
plt.figure()
plt.title('single trial surrogates')
plt.imshow(X_transform[y.argsort()], origin='lower', aspect='auto',
extent=[epochs.times[0], epochs.times[-1], 1, len(X_transform)],
cmap='RdBu_r')
plt.xlabel('Time (ms)')
plt.ylabel('Trials (reordered by condition)')
# Plot average response
plt.figure()
plt.title('Average EMS signal')
mappings = [(key, value) for key, value in event_ids.items()]
for key, value in mappings:
ems_ave = X_transform[y == value]
plt.plot(epochs.times, ems_ave.mean(0), label=key)
plt.xlabel('Time (ms)')
plt.ylabel('a.u.')
plt.legend(loc='best')
plt.show()
# Visualize spatial filters across time
evoked = EvokedArray(filters, epochs.info, tmin=epochs.tmin)
evoked.plot_topomap(time_unit='s', scalings=1)
epochs.equalize_event_counts(event_ids)
X_transform, filters, classes = compute_ems(epochs)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Mineria de Texto
Step2: Visit https
Step3: Conceptos Fundamentales de Mineria de Texto
Step4: Conceptos Fundamentales de Mineria de Texto
|
<ASSISTANT_TASK:>
Python Code:
import re
import nltk  # nltk.word_tokenize below needs the 'punkt' tokenizer models: nltk.download('punkt')
import helpers  # local project module used further down (its source is not shown here)
# Function to strip any text inside parentheses, replace anything that is not a letter with a space,
# and collapse runs of whitespace into a single space
def cleanup_str(raw):
rs = re.sub("\\(.*?\\)|[^a-zA-Z\\s]"," ",raw)
rs = re.sub("\\s+"," ",rs).strip().lower()
return rs
my_str = """
Some people, when confronted with a problem, think
“I know, I'll use regular expressions.” Now they have two problems.
-- Jamie Zawinski (Usenet) 1997, or was it 1999??
"""
print(cleanup_str(my_str))
nltk.word_tokenize("conceptos fundamentales de mineria de texto")
helpers.get_bigrams(nltk.word_tokenize("conceptos fundamentales de mineria de texto"))
helpers.remove_stopwords("This is not the stopword")
helpers.stem("natural language processing and text mining")
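# The `helpers` module used above is local to the original project and its source is not shown here.
# A minimal, hypothetical sketch of what it presumably wraps, using plain NLTK
# (requires nltk.download('stopwords')):
from nltk import bigrams
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer

def get_bigrams(tokens):
    # Pair up consecutive tokens, e.g. [('conceptos', 'fundamentales'), ...]
    return list(bigrams(tokens))

def remove_stopwords(text):
    stop = set(stopwords.words('english'))
    return ' '.join(w for w in text.split() if w.lower() not in stop)

def stem(text):
    stemmer = PorterStemmer()
    return ' '.join(stemmer.stem(w) for w in text.split())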
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The ODEsys class and convenience functions from the previous notebook (35) have been put in two modules for easy importing. Recapping what we did last
Step2: so that is the benchmark to beat.
Step3: Just to see that everything looks alright
|
<ASSISTANT_TASK:>
Python Code:
import json
import numpy as np
import sympy as sym
from scipy2017codegen.odesys import ODEsys
from scipy2017codegen.chem import mk_rsys
watrad_data = json.load(open('../scipy2017codegen/data/radiolysis_300_Gy_s.json'))
watrad = mk_rsys(ODEsys, **watrad_data)
tout = np.logspace(-6, 3, 200) # close to one hour of operation
c0 = {'H2O': 55.4e3, 'H+': 1e-4, 'OH-': 1e-4}
y0 = [c0.get(symb.name, 0) for symb in watrad.y]
%timeit yout, info = watrad.integrate_odeint(tout, y0)
from numba import njit
watrad_numba = mk_rsys(ODEsys, **watrad_data, lambdify=lambda *args: njit(sym.lambdify(*args, modules="numpy")))
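# The first (untimed) call below triggers numba's JIT compilation of the lambdified right-hand side,
# so the %timeit that follows measures steady-state performance only.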
watrad_numba.integrate_odeint(tout, y0)
%timeit watrad_numba.integrate_odeint(tout, y0)
import matplotlib.pyplot as plt
%matplotlib inline
fig, ax = plt.subplots(1, 1, figsize=(14, 6))
watrad_numba.plot_result(tout, *watrad_numba.integrate_odeint(tout, y0), ax=ax)
ax.set_xscale('log')
ax.set_yscale('log')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 2. Key Properties --> Software Properties
Step12: 2.2. Code Version
Step13: 2.3. Code Languages
Step14: 3. Key Properties --> Timestep Framework
Step15: 3.2. Split Operator Advection Timestep
Step16: 3.3. Split Operator Physical Timestep
Step17: 3.4. Integrated Timestep
Step18: 3.5. Integrated Scheme Type
Step19: 4. Key Properties --> Meteorological Forcings
Step20: 4.2. Variables 2D
Step21: 4.3. Frequency
Step22: 5. Key Properties --> Resolution
Step23: 5.2. Canonical Horizontal Resolution
Step24: 5.3. Number Of Horizontal Gridpoints
Step25: 5.4. Number Of Vertical Levels
Step26: 5.5. Is Adaptive Grid
Step27: 6. Key Properties --> Tuning Applied
Step28: 6.2. Global Mean Metrics Used
Step29: 6.3. Regional Metrics Used
Step30: 6.4. Trend Metrics Used
Step31: 7. Transport
Step32: 7.2. Scheme
Step33: 7.3. Mass Conservation Scheme
Step34: 7.4. Convention
Step35: 8. Emissions
Step36: 8.2. Method
Step37: 8.3. Sources
Step38: 8.4. Prescribed Climatology
Step39: 8.5. Prescribed Climatology Emitted Species
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Step41: 8.7. Interactive Emitted Species
Step42: 8.8. Other Emitted Species
Step43: 8.9. Other Method Characteristics
Step44: 9. Concentrations
Step45: 9.2. Prescribed Lower Boundary
Step46: 9.3. Prescribed Upper Boundary
Step47: 9.4. Prescribed Fields Mmr
Step48: 9.5. Prescribed Fields Mmr
Step49: 10. Optical Radiative Properties
Step50: 11. Optical Radiative Properties --> Absorption
Step51: 11.2. Dust
Step52: 11.3. Organics
Step53: 12. Optical Radiative Properties --> Mixtures
Step54: 12.2. Internal
Step55: 12.3. Mixing Rule
Step56: 13. Optical Radiative Properties --> Impact Of H2o
Step57: 13.2. Internal Mixture
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Step59: 14.2. Shortwave Bands
Step60: 14.3. Longwave Bands
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Step62: 15.2. Twomey
Step63: 15.3. Twomey Minimum Ccn
Step64: 15.4. Drizzle
Step65: 15.5. Cloud Lifetime
Step66: 15.6. Longwave Bands
Step67: 16. Model
Step68: 16.2. Processes
Step69: 16.3. Coupling
Step70: 16.4. Gas Phase Precursors
Step71: 16.5. Scheme Type
Step72: 16.6. Bulk Scheme Species
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cnrm-cerfacs', 'cnrm-esm2-1-hr', 'aerosol')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k and the testset 19000 labelled examples. Given these sizes, it should be possible to train models quickly on any machine.
Step4: Extract the dataset from the compressed .tar.gz file. This should give you a set of directories, labelled A through J.
Step5: Visualizing dataset
Step7: Now let's load the data in a more manageable format. Since, depending on your computer setup you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk and curate them independently. Later we'll merge them into a single dataset of manageable size. We'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road. A few images might not be readable, we'll just skip them.
Step8: Verifying data transformation
Step9: Verifying data to be balanced across classes
Step10: Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune train_size as needed. The labels will be stored into a separate array of integers 0 through 9. Also create a validation dataset for hyperparameter tuning.
Step11: Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match.
Step12: Convince yourself that the data is still good after shuffling
Step13: Finally, let's save the data for later reuse
Step14: Measuring overlap
Step15: Off-the-shelf classifiers in action
|
<ASSISTANT_TASK:>
Python Code:
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import matplotlib.pyplot as plt
import numpy as np
import os
import sys
import tarfile
from IPython.display import display, Image
from scipy import ndimage
from sklearn.linear_model import LogisticRegression
from six.moves.urllib.request import urlretrieve
from six.moves import cPickle as pickle
# Config the matplotlib backend as plotting inline in IPython
%matplotlib inline
url = 'http://commondatastorage.googleapis.com/books1000/'
last_percent_reported = None
def download_progress_hook(count, blockSize, totalSize):
"""A hook to report the progress of a download. This is mostly intended for users with
slow internet connections. Reports every 1% change in download progress.
"""
global last_percent_reported
percent = int(count * blockSize * 100 / totalSize)
if last_percent_reported != percent:
if percent % 5 == 0:
sys.stdout.write("%s%%" % percent)
sys.stdout.flush()
else:
sys.stdout.write(".")
sys.stdout.flush()
last_percent_reported = percent
def maybe_download(filename, expected_bytes, force=False):
"""Download a file if not present, and make sure it's the right size."""
if force or not os.path.exists(filename):
print('Attempting to download:', filename)
filename, _ = urlretrieve(url + filename, filename, reporthook=download_progress_hook)
print('\nDownload Complete!')
statinfo = os.stat(filename)
if statinfo.st_size == expected_bytes:
print('Found and verified', filename)
else:
raise Exception(
'Failed to verify ' + filename + '. Can you get to it with a browser?')
return filename
train_filename = maybe_download('notMNIST_large.tar.gz', 247336696)
test_filename = maybe_download('notMNIST_small.tar.gz', 8458043)
num_classes = 10
np.random.seed(133)
def maybe_extract(filename, force=False):
root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz
if os.path.isdir(root) and not force:
# You may override by setting force=True.
print('%s already present - Skipping extraction of %s.' % (root, filename))
else:
print('Extracting data for %s. This may take a while. Please wait.' % root)
tar = tarfile.open(filename)
sys.stdout.flush()
tar.extractall()
tar.close()
data_folders = [
os.path.join(root, d) for d in sorted(os.listdir(root))
if os.path.isdir(os.path.join(root, d))]
if len(data_folders) != num_classes:
raise Exception(
'Expected %d folders, one per class. Found %d instead.' % (
num_classes, len(data_folders)))
print(data_folders)
return data_folders
train_folders = maybe_extract(train_filename)
test_folders = maybe_extract(test_filename)
from IPython.display import Image
Image(filename='notMNIST_large/A/VXBkaWtlLnR0Zg==.png')
Image(filename='notMNIST_large/A/Q29zbW9zLU1lZGl1bS5vdGY=.png')
Image(filename='notMNIST_small/A/RGF5dHJpcHBlciBQbGFpbi50dGY=.png')
Image(filename='notMNIST_small/A/SHVtYW5pc3QgOTcwIEJvbGQucGZi.png')
image_size = 28 # Pixel width and height.
pixel_depth = 255.0 # Number of levels per pixel.
def load_letter(folder, min_num_images):
"""Load the data for a single letter label."""
image_files = os.listdir(folder)
dataset = np.ndarray(shape=(len(image_files), image_size, image_size),
dtype=np.float32)
print(folder)
num_images = 0
for image in image_files:
image_file = os.path.join(folder, image)
try:
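# Note: scipy.ndimage.imread was deprecated in SciPy 1.0 and later removed;
# on recent SciPy versions, imageio.imread is a near drop-in replacement.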
image_data = (ndimage.imread(image_file).astype(float) -
pixel_depth / 2) / pixel_depth
if image_data.shape != (image_size, image_size):
raise Exception('Unexpected image shape: %s' % str(image_data.shape))
dataset[num_images, :, :] = image_data
num_images = num_images + 1
except IOError as e:
print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.')
dataset = dataset[0:num_images, :, :]
if num_images < min_num_images:
raise Exception('Many fewer images than expected: %d < %d' %
(num_images, min_num_images))
print('Full dataset tensor:', dataset.shape)
print('Mean:', np.mean(dataset))
print('Standard deviation:', np.std(dataset))
return dataset
def maybe_pickle(data_folders, min_num_images_per_class, force=False):
dataset_names = []
for folder in data_folders:
set_filename = folder + '.pickle'
dataset_names.append(set_filename)
if os.path.exists(set_filename) and not force:
# You may override by setting force=True.
print('%s already present - Skipping pickling.' % set_filename)
else:
print('Pickling %s.' % set_filename)
dataset = load_letter(folder, min_num_images_per_class)
try:
with open(set_filename, 'wb') as f:
pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', set_filename, ':', e)
return dataset_names
train_datasets = maybe_pickle(train_folders, 45000)
test_datasets = maybe_pickle(test_folders, 1800)
train_datasets[:]
type(train_datasets)
test_datasets[:]
ex = pickle.load( open( "notMNIST_small/A.pickle", "rb" ) )
ex.shape
plt.imshow(ex[1,:,:])
plt.imshow(ex[2,:,:])
plt.imshow(ex[3,:,:])
train_freq = np.zeros(10)
test_freq = np.zeros(10)
prefs = ['A' , 'B' , 'C' , 'D' , 'E' , 'F' , 'G', 'H', 'I', 'J']
i = 0
for pref in prefs:
tr = pickle.load( open( "notMNIST_large/"+pref+".pickle", "rb" ) )
ts = pickle.load( open( "notMNIST_small/"+pref+".pickle", "rb" ) )
train_freq[i] = tr.shape[0]
test_freq[i] = ts.shape[0]
i = i + 1
print("***train_freq****")
print(train_freq)
print(train_freq/np.sum(train_freq))
print("\n***test_freq****")
print(test_freq)
print(test_freq/np.sum(test_freq))
def make_arrays(nb_rows, img_size):
if nb_rows:
dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32)
labels = np.ndarray(nb_rows, dtype=np.int32)
else:
dataset, labels = None, None
return dataset, labels
def merge_datasets(pickle_files, train_size, valid_size=0):
num_classes = len(pickle_files)
valid_dataset, valid_labels = make_arrays(valid_size, image_size)
train_dataset, train_labels = make_arrays(train_size, image_size)
vsize_per_class = valid_size // num_classes
tsize_per_class = train_size // num_classes
start_v, start_t = 0, 0
end_v, end_t = vsize_per_class, tsize_per_class
end_l = vsize_per_class+tsize_per_class
for label, pickle_file in enumerate(pickle_files):
try:
with open(pickle_file, 'rb') as f:
letter_set = pickle.load(f)
# let's shuffle the letters to have random validation and training set
np.random.shuffle(letter_set)
if valid_dataset is not None:
valid_letter = letter_set[:vsize_per_class, :, :]
valid_dataset[start_v:end_v, :, :] = valid_letter
valid_labels[start_v:end_v] = label
start_v += vsize_per_class
end_v += vsize_per_class
train_letter = letter_set[vsize_per_class:end_l, :, :]
train_dataset[start_t:end_t, :, :] = train_letter
train_labels[start_t:end_t] = label
start_t += tsize_per_class
end_t += tsize_per_class
except Exception as e:
print('Unable to process data from', pickle_file, ':', e)
raise
return valid_dataset, valid_labels, train_dataset, train_labels
train_size = 200000
valid_size = 10000
test_size = 10000
valid_dataset, valid_labels, train_dataset, train_labels = merge_datasets(
train_datasets, train_size, valid_size)
_, _, test_dataset, test_labels = merge_datasets(test_datasets, test_size)
print('Training:', train_dataset.shape, train_labels.shape)
print('Validation:', valid_dataset.shape, valid_labels.shape)
print('Testing:', test_dataset.shape, test_labels.shape)
def randomize(dataset, labels):
permutation = np.random.permutation(labels.shape[0])
shuffled_dataset = dataset[permutation,:,:]
shuffled_labels = labels[permutation]
return shuffled_dataset, shuffled_labels
train_dataset, train_labels = randomize(train_dataset, train_labels)
test_dataset, test_labels = randomize(test_dataset, test_labels)
valid_dataset, valid_labels = randomize(valid_dataset, valid_labels)
train_dataset.shape
plt.imshow(train_dataset[3,:,:])
plt.imshow(test_dataset[3,:,:])
plt.imshow(valid_dataset[3,:,:])
pickle_file = 'notMNIST.pickle'
try:
f = open(pickle_file, 'wb')
save = {
'train_dataset': train_dataset,
'train_labels': train_labels,
'valid_dataset': valid_dataset,
'valid_labels': valid_labels,
'test_dataset': test_dataset,
'test_labels': test_labels,
}
pickle.dump(save, f, pickle.HIGHEST_PROTOCOL)
f.close()
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
statinfo = os.stat(pickle_file)
print('Compressed pickle size:', statinfo.st_size)
oTrVal = np.zeros(200) ## sample
oTrTest = np.zeros(200) ## sample
for i in range(0,oTrVal.shape[0]):
if (i % 100 == 0):
sys.stdout.write("..%s" % i)
for j in range(0,train_dataset.shape[0]):
#if np.array_equal(train_dataset[j,:,:],valid_dataset[i,:,:]):
if np.sum(np.subtract(train_dataset[j,:,:],valid_dataset[i,:,:]))==0:
oTrVal[i] = 1
break
print("\n***Xval**")
print(np.sum(oTrVal)/oTrVal.shape[0])
for i in range(0,oTrTest.shape[0]):
if (i % 100 == 0):
sys.stdout.write("..%s" % i)
for j in range(0,train_dataset.shape[0]):
#if np.array_equal(train_dataset[j,:,:],valid_dataset[i,:,:]):
if np.sum(np.subtract(train_dataset[j,:,:],test_dataset[i,:,:]))==0:
oTrTest[i] = 1
break
print("\n***XTest**")
print(np.sum(oTrTest)/oTrTest.shape[0])
from sklearn.grid_search import GridSearchCV
from sklearn.linear_model import LogisticRegression
train_dataset_100 = train_dataset[0:100,:,:]
train_dataset_100 = np.reshape(train_dataset_100,(100,784))
train_labels_100 = train_labels[0:100]
clf = GridSearchCV(LogisticRegression(penalty='l2'),
scoring ='accuracy',
param_grid={'C': [0.001, 0.01, 0.1, 1, 10, 100, 1000]})
clf = clf.fit( train_dataset_100, train_labels_100)
print(">>> Best accuracy:"+str(clf.best_score_))
print(">>> Best Params:"+str(clf.best_params_))
train_dataset_1000 = train_dataset[0:1000,:,:]
train_dataset_1000 = np.reshape(train_dataset_1000,(1000,784))
train_labels_1000 = train_labels[0:1000]
clf = GridSearchCV(LogisticRegression(penalty='l2'),
scoring ='accuracy',
param_grid={'C': [0.001, 0.01, 0.1, 1, 10, 100, 1000]})
clf = clf.fit( train_dataset_1000, train_labels_1000)
print(">>> Best accuracy:"+str(clf.best_score_))
print(">>> Best Params:"+str(clf.best_params_))
train_dataset_5000 = train_dataset[0:5000,:,:]
train_dataset_5000 = np.reshape(train_dataset_5000,(5000,784))
train_labels_5000 = train_labels[0:5000]
clf = GridSearchCV(LogisticRegression(penalty='l2'),
scoring ='accuracy',
param_grid={'C': [0.001, 0.01, 0.1, 1, 10, 100, 1000]})
clf = clf.fit( train_dataset_5000, train_labels_5000)
print(">>> Best accuracy:"+str(clf.best_score_))
print(">>> Best Params:"+str(clf.best_params_))
from sklearn.ensemble import RandomForestClassifier
train_dataset_5000 = train_dataset[0:5000,:,:]
train_dataset_5000 = np.reshape(train_dataset_5000,(5000,784))
train_labels_5000 = train_labels[0:5000]
clf = GridSearchCV(RandomForestClassifier( n_estimators = 1000 ),
scoring ='accuracy',param_grid={})
clf = clf.fit( train_dataset_5000, train_labels_5000)
print(">>> Best accuracy:"+str(clf.best_score_))
print(">>> Best Params:"+str(clf.best_params_))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step7: Just import the code
Step8: Here we randomly select 10 images from mnist.test as input
Step9: Call the function to get result
Step10: Let's try to feed these adversarial images to different models
Step11: The first one is SVM (using SVC in scikit-learn); as the training process is slow, we only use the first 10000 training images here
Step12: These images cannot fool SVM, so let's try RandomForest
Step13: Even though the noise does confuse the classifier, the prediction label is not what we want
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
import seaborn as sns
import numpy as np
import matplotlib.pyplot as plt
def build_graph():
    """build the same graph as previous dumped model
Args:
None
Returns:
sess : tf.InteractiveSession()
x : tf.placeholder()
y_ : tf.placeholder()
y_pred, : tf.Variable()
keep_prob, : tf.placeholder()
cross_entropy : tf.Variable()
Example:
    >>> build_graph()
    """
x = tf.placeholder(tf.float32, shape=[None, 784])
y_ = tf.placeholder(tf.float32, shape=[None, 10])
def weight_variable(shape):
        """Create a weight variable with appropriate initialization."""
initial = tf.truncated_normal(shape, stddev=0.1)
return tf.Variable(initial)
def bias_variable(shape):
        """Create a bias variable with appropriate initialization."""
initial = tf.constant(0.1, shape=shape)
return tf.Variable(initial)
def conv2d(x, W):
        """simple conv2d layer"""
return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')
def max_pool_2x2(x):
        """a simple 2x2 max pool layer"""
return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1], padding='SAME')
# First conv layer with a pool layer
W_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])
x_image = tf.reshape(x, [-1,28,28,1])
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)
# Second conv layer with a pool layer
W_conv2 = weight_variable([5, 5, 32, 64])
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
h_pool2 = max_pool_2x2(h_conv2)
# First Full-connect layer
W_fc1 = weight_variable([7 * 7 * 64, 1024])
b_fc1 = bias_variable([1024])
h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
keep_prob = tf.placeholder(tf.float32)
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
# Second Full-connect layer
W_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])
# output layer
y_conv = tf.matmul(h_fc1_drop, W_fc2) + b_fc2
y_pred = tf.nn.softmax(y_conv)
cross_entropy = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y_conv))
sess = tf.InteractiveSession()
return sess, x, y_, y_pred, keep_prob, cross_entropy
def generate_adversarial(model_path, img_list, target_class, eta=0.001,
threshold=0.99, save_path=None, file_name='adversarial', verbose=0):
    """generate adversarial images, note that gradient and some parts of
graph are needed during iterations, hence I decide not to pack some codes
into helper function
Args:
tensor_in: `Tensor`, input tensor.
other_tensor_in: `Tensor`, same shape as `tensor_in`, other input tensor.
my_param: `float`, coefficient for `tensor_in`.
other_param: `float`, coefficient for `other_tensor_in`.
output_collections: `tuple` of `string`s, name of the collection to
collect result of this op.
name: `string`, name of the operation.
model_path: `string`, the path to previous model
img_list: `string`, the img list that need to generate adversarial images
target_class: `int`, the wanted label
eta: `float`, learning rate (or step size), default: 0.001
threshold: `float`, the confidence we want to fool, default: 0.99 (99%)
save_path: `string`, the path to img/ folder
file_name: `string`, the name for saving file, default:'adversarial'
verbose: `int`, verbose=0, omit the training graphs, default: 0
Returns:
`np.array`: the final adversarial image for each img in img_list
Example:
>>> generate_adversarial(model_path='../model/MNIST.ckpt',
img_list=img_list, target_class=6, eta=0.01, threshold=0.99,
save_path='../img/', file_name='adversarial', verbose=1)
        np.ndarray(...)
    """
sess, x, y_, y_pred, keep_prob, cross_entropy = build_graph()
sess.run(tf.global_variables_initializer())
tf.train.Saver().restore(sess, model_path)
print('load model from', model_path)
prediction=tf.argmax(y_pred,1)
probabilities=y_pred
img_gradient = tf.gradients(cross_entropy, x)[0]
adversarial_img_list = list()
# generate versus figure
sns.set_style('white')
versus_fig = plt.figure(figsize=(9, 40))
for img_index in range(0, img_list.shape[0]):
adversarial_img = img_list[img_index: img_index+1].copy()
adversarial_label = np.zeros((1, 10))
adversarial_label[:, target_class] = 1
confidence = 0
iter_num = 0
prob_history = list()
while confidence < threshold:
probabilities_val = probabilities.eval(feed_dict=
{x: adversarial_img, keep_prob: 1.0}, session=sess)
confidence = probabilities_val[:, 6]
prob_history.append(probabilities_val[0])
gradient = img_gradient.eval(
{x: adversarial_img, y_: adversarial_label, keep_prob: 1.0})
adversarial_img -= eta * gradient
iter_num += 1
print('generate adversarial image after', iter_num, 'iterations')
# generate versus figure
ax1 = versus_fig.add_subplot(10, 3, 3*img_index+1)
ax1.axis('off')
ax1.imshow(img_list[img_index].reshape([28, 28]),
interpolation=None, cmap=plt.cm.gray)
ax1.title.set_text(
'Confidence for 2: ' + '{:.4f}'.format(prob_history[0][2])
+ '\nConfidence for 6: ' + '{:.4f}'.format(prob_history[0][6]))
ax2 = versus_fig.add_subplot(10, 3, 3*img_index+2)
ax2.axis('off')
ax2.imshow((adversarial_img - img_list[img_index]).reshape([28, 28]),
interpolation=None, cmap=plt.cm.gray)
ax2.title.set_text('Delta')
ax3 = versus_fig.add_subplot(10, 3, 3*img_index+3)
ax3.axis('off')
ax3.imshow((adversarial_img).reshape([28, 28]),
interpolation=None, cmap=plt.cm.gray)
ax3.title.set_text(
'Confidence for 2: ' + '{:.4f}'.format(prob_history[-1][2])
+ '\nConfidence for 6: ' + '{:.4f}'.format(prob_history[-1][6]))
print("Difference Measure:",
np.sum((adversarial_img - img_list[img_index]) ** 2))
adversarial_img_list.append(adversarial_img)
if verbose != 0:
sns.set_style('whitegrid')
colors_list = sns.color_palette("Paired", 10)
# generate Iteration figure
prob_history = np.array(prob_history)
fig = plt.figure(figsize=(10, 6))
ax = fig.add_subplot(111)
for i, record in enumerate(prob_history.T):
plt.plot(record, color=colors_list[i])
ax.legend([str(x) for x in range(0, 10)],
loc='center left', bbox_to_anchor=(1.01, 0.5), fontsize=14)
ax.set_xlabel('Iteration')
ax.set_ylabel('Prediction Confidence')
fig.savefig(save_path + file_name + str(img_index) + '_iter.png')
versus_fig.tight_layout()
versus_fig.savefig(save_path + file_name + '_versus.png')
return np.array(adversarial_img_list)
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('/tmp/tensorflow/mnist/input_data', one_hot=True)
%matplotlib inline
index_mask = np.where(mnist.test.labels[:, 2])[0]
subset_mask = np.random.choice(index_mask, 10)
origin_images = mnist.test.images[subset_mask]
origin_labels = mnist.test.labels[subset_mask]
ad_img = generate_adversarial(model_path='../model/MNIST.ckpt', img_list=origin_images, target_class=6, eta=0.01, threshold=0.99,
save_path='../img/', file_name='adversarial', verbose=0)
from sklearn import svm, metrics
train_images = mnist.train.images[:]
train_labels = mnist.train.labels[:]
test_images = mnist.test.images[:]
test_labels = mnist.test.labels[:]
train_labels = np.apply_along_axis(lambda x: np.where(x)[0][0], 1, train_labels)
test_labels = np.apply_along_axis(lambda x: np.where(x)[0][0], 1, test_labels)
classifier = svm.SVC(probability=True, verbose=True)
classifier.fit(train_images[0: 10000], train_labels[0: 10000])
pred_labels = classifier.predict(test_images)
print("Confusion matrix:\n%s" % metrics.confusion_matrix(test_labels, pred_labels))
print("Classification report for classifier %s:\n%s\n"
% (classifier, metrics.classification_report(test_labels, pred_labels)))
pred_labels = classifier.predict(np.squeeze(ad_img))
pred_labels
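# Quick tally (a sketch added for illustration, not in the original notebook): count how many
# of the 10 adversarial images the SVM still assigns to the true class (2) versus the
# adversarial target class (6).
print("predicted as 2:", np.sum(pred_labels == 2), " predicted as 6:", np.sum(pred_labels == 6))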
from sklearn.ensemble import RandomForestClassifier
classifier = RandomForestClassifier(n_estimators=200)
classifier.fit(train_images, train_labels)
pred_labels = classifier.predict(test_images)
print("Confusion matrix:\n%s" % metrics.confusion_matrix(test_labels, pred_labels))
print("Classification report for classifier %s:\n%s\n"
% (classifier, metrics.classification_report(test_labels, pred_labels)))
pred_labels = classifier.predict(np.squeeze(ad_img))
pred_labels
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Convolution2D, MaxPooling2D
from keras.utils import np_utils
from keras import backend as K
# input image dimensions
img_rows, img_cols = 28, 28
# number of convolutional filters to use
nb_filters = 32
# size of pooling area for max pooling
pool_size = (2, 2)
# convolution kernel size
kernel_size = (3, 3)
input_shape = (img_rows, img_cols, 1)
batch_size = 128
nb_classes = 10
nb_epoch = 50
train_images = mnist.train.images.reshape((55000, 28, 28, 1))
train_labels = mnist.train.labels
test_images = mnist.test.images.reshape((10000, 28, 28, 1))
test_labels = mnist.test.labels
valid_images = mnist.validation.images.reshape((5000, 28, 28, 1))
valid_labels = mnist.validation.labels
model = Sequential()
model.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1],
border_mode='valid',
input_shape=input_shape))
model.add(Activation('relu'))
model.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1]))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=pool_size))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy',
optimizer='adadelta',
metrics=['accuracy'])
model.fit(train_images, train_labels, batch_size=batch_size, nb_epoch=nb_epoch,
verbose=0, validation_data=(valid_images, valid_labels))
score = model.evaluate(test_images, test_labels, verbose=0)
print('Test score:', score[0])
print('Test accuracy:', score[1])
model.predict(ad_img.reshape((10, 28, 28, 1)))
model.predict_classes(ad_img.reshape((10, 28, 28, 1)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Step 1
Step2: Step 2
Step3: Step 3
Step4: Step 4
Step5: Step 5
|
<ASSISTANT_TASK:>
Python Code:
# Set up code checking
from learntools.core import binder
binder.bind(globals())
from learntools.ml_intermediate.ex7 import *
print("Setup Complete")
# Check your answer (Run this code cell to receive credit!)
q_1.check()
# Check your answer (Run this code cell to receive credit!)
q_2.check()
# Check your answer (Run this code cell to receive credit!)
q_3.check()
# Check your answer (Run this code cell to receive credit!)
q_4.check()
# Fill in the line below with one of 1, 2, 3 or 4.
potential_leakage_feature = ____
# Check your answer
q_5.check()
#%%RM_IF(PROD)%%
potential_leakage_feature = 1
q_5.assert_check_failed()
#%%RM_IF(PROD)%%
potential_leakage_feature = 2
q_5.assert_check_passed()
#_COMMENT_IF(PROD)_
q_5.hint()
#_COMMENT_IF(PROD)_
q_5.solution()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Fitting a quadratic curve
Step2: First, generate a dataset from this model using these parameters and the following characteristics
Step3: Now fit the model to the dataset to recover estimates for the model's parameters
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import scipy.optimize as opt
from IPython.html.widgets import interact
a_true = 0.5
b_true = 2.0
c_true = -4.0
# YOUR CODE HERE
xdata=np.linspace(-5,5,30)
N=30
dy=2.0
def ymodel(a, b, c):
    return a*xdata**2 + b*xdata + c
ydata = a_true*xdata**2 + b_true*xdata + c_true + np.random.normal(0.0, dy, size=N)
plt.errorbar(xdata, ydata, dy,
fmt='.k', ecolor='lightgray')
plt.xlabel('x')
plt.ylabel('y');
assert True # leave this cell for grading the raw data generation and plot
# YOUR CODE HERE
def chi2(theta, x, y, dy):
    # theta = [a, b, c] for the quadratic model y = a*x**2 + b*x + c
    return np.sum(((y - theta[0] * x**2 - theta[1] * x - theta[2]) / dy) ** 2)
def manual_fit(a, b, c):
modely = a*xdata**2 + b*xdata +c
plt.plot(xdata, modely)
plt.errorbar(xdata, ydata, dy,
fmt='.k', ecolor='lightgray')
plt.xlabel('x')
plt.ylabel('y')
plt.text(1, 15, 'a={0:.2f}'.format(a))
plt.text(1, 12.5, 'b={0:.2f}'.format(b))
plt.text(1, 10, 'c={0:.2f}'.format(c))
plt.text(1, 8.0, '$\chi^2$={0:.2f}'.format(chi2([a,b,c],xdata,ydata, dy)))
interact(manual_fit, a=(-3.0,3.0,0.01), b=(0.0,4.0,0.01),c=(-5,5,0.1));
def deviations(theta, x, y, dy):
    # theta = [a, b, c]; weighted residuals of the quadratic model
    return (y - theta[0] * x**2 - theta[1] * x - theta[2]) / dy
theta_guess = [1.0, 1.0, 0.0]
result = opt.leastsq(deviations, theta_guess, args=(xdata, ydata, dy), full_output=True)
theta_best = result[0]
theta_cov = result[1]
theta_mov = result[2]
print('a = {0:.3f} +/- {1:.3f}'.format(theta_best[0], np.sqrt(theta_cov[0,0])))
print('b = {0:.3f} +/- {1:.3f}'.format(theta_best[1], np.sqrt(theta_cov[1,1])))
print('c = {0:.3f} +/- {1:.3f}'.format(theta_best[2], np.sqrt(theta_cov[2,2])))
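# Sketch of the requested plot (assumes the quadratic parameterization theta = [a, b, c] used above):
# overlay the best-fit curve on the noisy data.
xfit = np.linspace(-5, 5, 200)
plt.errorbar(xdata, ydata, dy, fmt='.k', ecolor='lightgray')
plt.plot(xfit, theta_best[0]*xfit**2 + theta_best[1]*xfit + theta_best[2], 'r-')
plt.xlabel('x')
plt.ylabel('y');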
assert True # leave this cell for grading the fit; should include a plot and printout of the parameters+errors
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Find Peaks
Step2: Find ridge lines
Step3: For now use scipy.signal.find_peaks_cwt(), compare with my own implementation
Step6: Estimate Peak widths
Step7: Search for local minima in the row corresponding to the peak's scale, within 3x peak scale or peak index
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from __future__ import division
import matplotlib.pyplot as plt
import numpy as np
import os
import sys
from scipy import signal
data1 = np.genfromtxt(os.path.join('..', 'tests', 'data', 'raman-785nm.txt'))
x = data1[:, 0]
y = data1[:, 1]
plt.plot(x, y)
widths = np.arange(1,71)
cwtmat = signal.cwt(y, signal.ricker, widths)
plt.imshow(cwtmat, aspect='auto', cmap='PRGn')
# Find local maxima
# make a binary array containing local maximum of transform, with same shape
lmax = np.zeros(cwtmat.shape)
for i in range(cwtmat.shape[0]):
lmax[i, signal.argrelextrema(cwtmat[i, :], np.greater)] = 1
fig, ax = plt.subplots(figsize=(15, 4))
ax.imshow(lmax, aspect='auto', cmap='gray_r')
# allocate memory
# intial location assigned to peak from the first row
peak_pos_start = np.where(lmax[0,:]==1)[0]
# current position of the ridge
peak_ridge = np.copy(peak_pos_start) # full copy
n_peaks = peak_pos_start.size
# length of the ridge
peak_len = np.ones(n_peaks)
# use the max of the ridge line to find the width of the peaks
peak_pos = np.zeros(n_peaks, dtype='int')
peak_width = np.ones(n_peaks)
peak_width_max = np.zeros(n_peaks)
# Link local maxima (find ridges)
w = 3
# for each row starting at the second
for i in range(1, lmax.shape[0]):
# for each peak
for j in range(n_peaks):
# assume it doesn't extend, and then check
extends = False
p = peak_ridge[j]
if lmax[i, p] == 1:
# if there is one below, it is part of the same ridge
extends = True
else:
# if not search around peak
for k in range(1, w):
if lmax[i, p-k] == 1:
extends = True
peak_ridge[j] -= k
break
elif lmax[i, p+k] == 1:
extends = True
peak_ridge[j] += k
break
# if it extends
if extends:
# it it longer
peak_len[j] += 1
# find width by comparing max vs. previous
if cwtmat[i, p] > peak_width_max[j]:
peak_width_max[j] = cwtmat[i, p]
peak_width[j] = i
peak_pos[j] = p
print peak_pos[:20]
print peak_width[:20]
# generate a simulated spectrum of sorts, with peak positions and the length of the ridge lines
ypeaks = np.zeros(y.shape)
ypeaks[peak_pos] = peak_len*peak_width
fig, ax = plt.subplots(figsize=(15, 4))
ax.plot(x, ypeaks)
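# Optional summary table (a sketch, not part of the original notebook): each detected peak's
# position in x units together with its ridge length and the scale of maximum ridge response.
import pandas as pd
peak_summary = pd.DataFrame({'x': x[peak_pos], 'ridge_length': peak_len, 'best_scale': peak_width})
print(peak_summary.head(10))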
# find peaks using the first ridge position, last ridge position as well using find_peaks
peaks = signal.find_peaks_cwt(y, wavelet=signal.ricker, widths=widths)
peaks_2 = peak_pos[np.all(((peak_width > 0), (peak_len > 5)), axis=0)]
print peaks, peaks_2
fig, ax = plt.subplots(24, figsize=(10,10))
for w in range(3):
for l in range(2, 10):
a = ax[w*8 + (l-2)]
peaks = peak_pos[np.all(((peak_width > w), (peak_len > l)), axis=0)]
a.plot(x,y)
a.plot(x[peaks], y[peaks], 'rx', label='w%i, l%i' % (w,l))
#a.legend()
# find peaks using the first ridge position, last ridge position as well using find_peaks
peaks = signal.find_peaks_cwt(y, wavelet=signal.ricker, widths=widths)
peaks_2 = peak_pos[np.all(((peak_width > 1), (peak_len > 5)), axis=0)]
fig, ax = plt.subplots(figsize=(15,5))
ax.semilogy(x,y)
ax.semilogy(x[peaks], y[peaks], 'kv', alpha=0.8)
ax.semilogy(x[peaks_2], y[peaks_2], 'rd', alpha=0.8, label='filterd width')
#ax.plot(x[peaks_3], y[peaks_3], 'bx', label='filterd length')
ax.set_ylim(200000,600000)
ax.legend()
# find peaks using the first ridge position, last ridge position as well using find_peaks
peaks = signal.find_peaks_cwt(y, wavelet=signal.ricker, widths=widths)
peaks_2 = peak_pos[np.all(((peak_width > 5), (peak_len > 20)), axis=0)]
fig, ax = plt.subplots(figsize=(15,5))
ax.plot(x,y)
ax.plot(x[peaks], y[peaks], 'kv', alpha=0.8, label='scipy')
ax.plot(x[peaks_2], y[peaks_2], 'rd', alpha=0.8, label='filterd length and width')
#ax.plot(x[peaks_3], y[peaks_3], 'bx', label='filterd length')
ax.set_ylim(200000,520000)
ax.legend()
# analyze the ricker wavelet to help build the ricker wavelet
points = 100
for a in range(2, 11, 2):
wave = signal.ricker(points, a)
plt.plot(wave)
# note, all integrate to 0
# make a haar mother wavelet
def haar2(points, a):
    """Returns a haar wavelet mother wavelet
1 if 0 <= t < 1/2
h(t) = -1 if 1/2 <= t < 1
0 otherwise`
    Numpy version, not accurate right now
    """
x = np.arange(0, points) - (points - 1.0) / 2
wave = np.zeros(x.shape)
amp = 2/a
wave[np.where(np.logical_and(0 <= x, x < 0.5*a))[0]] = 1
wave[np.where(np.logical_and(-0.5*a <= x, x < 1))[0]] = -1
return wave*amp
# make a haar mother wavelet
def haar(points, a):
    """Returns a haar wavelet mother wavelet
1 if 0 <= t < 1/2
h(t) = -1 if 1/2 <= t < 1
    0 otherwise
    """
vec = np.arange(0, points) - (points - 1.0) / 2
wave = np.zeros(vec.shape)
amp = 2/a
for i, x in enumerate(vec):
if 0 <= x < 0.5*a:
wave[i] = 1
elif -0.5*a <= x < 1:
wave[i] = -1
return wave*amp
points = 100
for a in range(2, 11, 2):
wave = haar(points, a)
plt.step(np.arange(points), wave)
hw = signal.cwt(y, haar, widths=widths)
plt.imshow(hw, aspect='auto', cmap='PRGn')
ahw = np.abs(hw)
plt.imshow(ahw, aspect='auto', cmap='PRGn')
for p in peak_pos:
print p
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To run a cell
Step2: Writing code is what you will do most during this course!
Step3: Help
Step4: <div class="alert alert-success">
Step5: edit mode to command mode
Step6: %%timeit
Step7: %whos
Step8: %lsmagic
Step9: Let's get started!
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import Image
Image(url='http://python.org/images/python-logo.gif')
# Code cell, then we are using python
print('Hello DS')
DS = 10
print(DS + 5) # Yes, we advise to use Python 3 (!)
import os
os.mkdir
my_very_long_variable_name = 3
round(3.2)
import os
os.mkdir
# An alternative is to put a question mark behind the command
os.mkdir?
import glob
glob.glob??
%psearch os.*dir
%%timeit
mylist = range(1000)
for i in mylist:
i = i**2
import numpy as np
%%timeit
np.arange(1000)**2
%whos
%lsmagic
from IPython.display import FileLink, FileLinks
FileLinks('.', recursive=False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: For example, consider the plot below. The scatter points are random, but for this example, let's imagine we are analyzing the prices of homes. In the chart below the x axis can represent the square footage of a house and the y axis is the price of the house. It looks like there is a pretty clear correlation between square footage and price. Let's create a function that, given a square footage, can predict the price of a house
Step2: The goal of a regression function is to find the line that best fits the points. For the scatter plot above, we can approximate a function that will minimize the error between the function value $y$ at $x$ and the actual value of $y$ at $x$. The error can be measured by the total squared error between the line and the points, $SE_{line} = \sum_{i=1}^{n} (y_i - (m x_i + b))^2$
Step3: Minimize $SE_{line}$
Step4: We can see intuitively from the surface above, which shows the error across values of $m$ and $b$, that if we find the point where the error is minimum we obtain the $m$ and $b$ that are optimal for our regression line.
Step5: So our regression line is
Step6: Now, let's test it with a 10 square foot house (a value outside of our original dataset)
Step7: Scikit learn Implementation
Step8: Conclusion
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
%pylab inline
N=100
x = np.random.rand(N) *6
y = x + np.random.rand(N)*1
plt.scatter(x,y)
plt.plot([0,6],[0.5,6.2])
def se_line(n, m, b, y_2_hat, x_y_hat, y_hat, x_2_hat, x_hat):
    # expansion of SE_line = sum((y - (m*x + b))**2) written in terms of sample means
    val = n*y_2_hat - 2*m*(n*x_y_hat) - 2*b*(n*y_hat) + m**2*(n*x_2_hat) + 2*m*b*(n*x_hat) + n*b**2
    return val
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
import math
fig = plt.figure()
ax = fig.gca(projection='3d')
m=np.array(range(-N,N))
b=np.array(range(-N,N))
y_2_hat = (y**2).mean()
x_y_hat = (x * y).mean()
y_hat = y.mean()
x_2_hat = (x**2).mean()
x_hat = x.mean()
n=x.shape[0]
X, Y = np.meshgrid(m, b)
# evaluate the squared error over the whole (m, b) grid so Z has the same shape as X and Y
err = se_line(n, X, Y, y_2_hat, x_y_hat, y_hat, x_2_hat, x_hat)
Z = err
surf = ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap=cm.coolwarm,
linewidth=0, antialiased=False)
fig.colorbar(surf, shrink=0.5, aspect=5)
plt.show()
x_y_hat = (x * y).mean()
y_hat = y.mean()
x_2_hat = (x**2).mean()
x_hat = x.mean()
m = (x_hat * y_hat - x_y_hat) / ((x_hat)**2 - x_2_hat)
b = y_hat - m * x_hat
house_regression = lambda x: m*x+b
(m,b)
print "y=%fx+%f"%(m,b)
custom_prediction = house_regression(10)
custom_prediction
from sklearn import linear_model
# Create linear regression object
regr = linear_model.LinearRegression()
# Train the model using the training sets
# inputs into vector format
_x = x.reshape(len(x),1)
_y = y.reshape(len(y),1)
regr.fit(_x,_y)
# The coefficients
print('Coefficients: \n', regr.coef_)
scikit_prediction = regr.predict(10)
print custom_prediction
print scikit_prediction[0][0]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Getting the data ready for work
Step2: The nscore transformation table function
Step3: Note that the input can be data or a reference distribution function
Step4: Normal score transformation table without declustering weight
Step5: Comparing results
|
<ASSISTANT_TASK:>
Python Code:
#general imports
import matplotlib.pyplot as plt
import pygslib
from matplotlib.patches import Ellipse
import numpy as np
import pandas as pd
#make the plots inline
%matplotlib inline
#get the data in gslib format into a pandas Dataframe
mydata= pygslib.gslib.read_gslib_file('../datasets/cluster.dat')
# This is a 2D file, in this GSLIB version we require 3D data and drillhole name or domain code
# so, we are adding constant elevation = 0 and a dummy BHID = 1
mydata['Zlocation']=0
mydata['bhid']=1
# printing to verify results
print (' \n **** 5 first rows in my datafile \n\n ', mydata.head(n=5))
#view data in a 2D projection
plt.scatter(mydata['Xlocation'],mydata['Ylocation'], c=mydata['Primary'])
plt.colorbar()
plt.grid(True)
plt.show()
print (pygslib.gslib.__dist_transf.ns_ttable.__doc__)
dtransin,dtransout, error = pygslib.gslib.__dist_transf.ns_ttable(mydata['Primary'],mydata['Declustering Weight'])
dttable= pd.DataFrame({'z': dtransin,'y': dtransout})
print (dttable.head(3))
print (dttable.tail(3) )
print ('there was any error?: ', error!=0)
dttable.hist(bins=30)
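# Illustrative sketch (not from the original notebook): the (z, y) pairs of the table define
# the normal-score transform, so new data values can be mapped to gaussian units by
# interpolating the declustered table built above.
z_new = np.array([0.1, 1.0, 10.0])
print ('normal scores for', z_new, ':', np.interp(z_new, dttable['z'], dttable['y']))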
transin,transout, error = pygslib.gslib.__dist_transf.ns_ttable(mydata['Primary'],np.ones(len(mydata['Primary'])))
ttable= pd.DataFrame({'z': transin,'y': transout})
print (ttable.head(3))
print (ttable.tail(3))
ttable.hist(bins=30)
parameters_probplt = {
'iwt' : 0, #int, 1 use declustering weight
'va' : ttable.y, # array('d') with bounds (nd)
        'wt' : np.ones(len(ttable.y))} # array('d') with bounds (nd), weight variable (obtained with declust?)
parameters_probpltl = {
'iwt' : 0, #int, 1 use declustering weight
'va' : dttable.y, # array('d') with bounds (nd)
        'wt' : np.ones(len(dttable.y))} # array('d') with bounds (nd), weight variable (obtained with declust?)
binval,cl,xpt025,xlqt,xmed,xuqt,xpt975,xmin,xmax, \
xcvr,xmen,xvar,error = pygslib.gslib.__plot.probplt(**parameters_probplt)
binvall,cll,xpt025l,xlqtl,xmedl,xuqtl,xpt975l,xminl, \
xmaxl,xcvrl,xmenl,xvarl,errorl = pygslib.gslib.__plot.probplt(**parameters_probpltl)
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
plt.plot (cl, binval, label = 'gaussian non-declustered')
plt.plot (cll, binvall, label = 'gaussian declustered')
plt.legend(loc=4)
plt.grid(True)
fig.show
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To create a transmission line of 100um
Step2: More detailed examples illustrating how to create various kinds of Media
Step3: For the purpose of microwave network analysis, the defining properties of a (single moded) transmission line are its characteristic impedance and propagation constant. These properties return complex numpy.ndarray's. A port impedance is also needed when different networks are connected.
Step4: The port impedance is given by z0 (lower z), which we set to 1 just to illustrate how this works. The port impedance is used to compute impedance mismatches if circuits of different port impedance are connected.
Step5: The propagation constant is given by gamma
Step6: Let's take a look at some other Media's
Step7: Simulate a 1cm slab of Si in half-space,
Step8: Rectangular Waveguide
Step9: The z0 argument in the Rectangular Waveguide constructor is used
Step10: Because the wave quantities are dynamic they change when the attributes
Step11: Network Synthesis
Step12: Or to create a $90^{\circ}$ section of cpw line,
Step13: Building Circuits
Step14: When Networks with more than 2 ports need to be connected together, use
Step15: Adding networks in shunt is pretty common, so there is a Media.shunt() function to do this,
Step16: If a specific circuit is created frequently, it may make sense to
Step17: A more useful example may be to create a function for a shunt-stub tuner,
Step18: This approach lends itself to design optimization.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import skrf as rf
rf.stylely()
from skrf import Frequency
from skrf.media import CPW
freq = Frequency(75,110,101,'ghz')
cpw = CPW(freq, w=10e-6, s=5e-6, ep_r=10.6)
cpw
cpw.line(100*1e-6, name = '100um line')
freq = Frequency(75,110,101,'ghz')
cpw = CPW(freq, w=10e-6, s=5e-6, ep_r=10.6, z0 =1)
cpw
cpw.Z0[:3]
cpw.z0[:3]
cpw.gamma[:3]
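# Derived quantity (a sketch for illustration): the phase constant beta = Im(gamma) gives the
# guided wavelength on the line, lambda_g = 2*pi/beta, shown here at the first frequency point.
import numpy as np
beta = cpw.gamma.imag
print(2 * np.pi / beta[0], 'm')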
from skrf.media import Freespace
freq = Frequency(10,20,101,'ghz')
air = Freespace(freq)
air
air.z0[:2] # 377ohm baby!
# plane wave in Si
si = Freespace(freq, ep_r = 11.2)
si.z0[:3] # ~110ohm
slab = air.thru() ** si.line(1, 'cm') ** air.thru()
slab.plot_s_db(n=0)
from skrf.media import RectangularWaveguide
freq = Frequency(75,110,101,'ghz')
wg = RectangularWaveguide(freq, a=100*rf.mil, z0=50) # see note below about z0
wg
air = Freespace(freq)
from matplotlib import pyplot as plt
air.plot(air.gamma.imag, label='Freespace')
wg.plot(wg.gamma.imag, label='WR10')
plt.ylabel('Propagation Constant (rad/m)')
plt.legend()
for ep_r in [9,10,11]:
cpw.ep_r = ep_r
cpw.frequency.plot(cpw.beta, label='er=%.1f'%ep_r)
plt.xlabel('Frequency [GHz]')
plt.ylabel('Propagation Constant [rad/m]')
plt.legend()
wg.short(name = 'short')
cpw.line(d=90,unit='deg', name='line')
delay_short = wg.line(d=90,unit='deg') ** wg.short()
delay_short.name = 'delay short'
delay_short
tee = cpw.tee()
delay_open = cpw.delay_open(40,'deg')
shunt_open = rf.connect(tee,1,delay_open,0)
cpw.shunt(delay_open)
delay_short = lambda d: wg.line(d,'deg')**wg.short()
delay_short(90)
def shunt_stub(med, d0, d1):
return med.line(d0,'deg')**med.shunt_delay_open(d1,'deg')
shunt_stub(cpw,10,90)
from scipy.optimize import fmin
# the load we are trying to match
load = cpw.load(.2+.2j)
# single stub circuit generator function
def shunt_stub(med, d0, d1):
return med.line(d0,'deg')**med.shunt_delay_open(d1,'deg')
# define the cost function we want to minimize (this uses sloppy namespace)
def cost(d):
# prevent negative length lines, returning high cost
if d[0] <0 or d[1] <0:
return 1e3
return (shunt_stub(cpw,d[0],d[1]) ** load)[100].s_mag.squeeze()
# initial guess of optimal delay lengths in degrees
d0 = 120,40 # initial guess
#determine the optimal delays
d_opt = fmin(cost,(120,40))
d_opt
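# Verification of the optimized tuner (a sketch): rebuild the stub network with the optimal
# delays, cascade it with the load, and report the reflection magnitude at the same frequency
# index used inside the cost function.
matched = shunt_stub(cpw, d_opt[0], d_opt[1]) ** load
print(matched[100].s_mag.squeeze())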
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Mie scattering describes the special case of the interaction of light passing through a non-absorbing medium with a single embedded spherical object. The sphere itself can be non-absorbing, moderately absorbing, or perfectly absorbing.
Step2: Scattered Wave
Step3: Normalization of the scattered light
Step4: A similar calculation but using ez_intensities()
Step5: Rayleigh Scattering
Step6: Verifying normalization numerically
Step8: Comparison to Wiscombe's Mie Program
Step10: Wiscombe's Test Case 10
Step12: Wiscombe's Test Case 7
Step14: Comparison to Bohren & Huffmans's Mie Program
Step15: Bohren & Huffman, water droplets
Step16: van de Hulst Comparison
Step17: Comparisons with Kerker, Angular Gain
|
<ASSISTANT_TASK:>
Python Code:
#!pip install --user miepython
import numpy as np
import matplotlib.pyplot as plt
try:
import miepython
except ModuleNotFoundError:
print('miepython not installed. To install, uncomment and run the cell above.')
print('Once installation is successful, rerun this cell again.')
t = np.linspace(0,2*np.pi,100)
xx = np.cos(t)
yy = np.sin(t)
fig,ax=plt.subplots(figsize=(10,8))
plt.axes().set_aspect('equal')
plt.plot(xx,yy)
plt.plot([-5,7],[0,0],'--k')
plt.annotate('incoming irradiance', xy=(-4.5,-2.3),ha='left',color='blue',fontsize=14)
for i in range(6):
y0 = i -2.5
plt.annotate('',xy=(-1.5,y0),xytext=(-5,y0),arrowprops=dict(arrowstyle="->",color='blue'))
plt.annotate('unscattered irradiance', xy=(3,-2.3),ha='left',color='blue',fontsize=14)
for i in range(6):
y0 = i -2.5
plt.annotate('',xy=(7,y0),xytext=(3,y0),arrowprops=dict(arrowstyle="->",color='blue',ls=':'))
plt.annotate('scattered\nspherical\nwave', xy=(0,1.5),ha='left',color='red',fontsize=16)
plt.annotate('',xy=(2.5,2.5),xytext=(0,0),arrowprops=dict(arrowstyle="->",color='red'))
plt.annotate(r'$\theta$',xy=(2,0.7),color='red',fontsize=14)
plt.annotate('',xy=(2,2),xytext=(2.7,0),arrowprops=dict(connectionstyle="arc3,rad=0.2", arrowstyle="<->",color='red'))
plt.xlim(-5,7)
plt.ylim(-3,3)
plt.axis('off')
plt.show()
fig,ax=plt.subplots(figsize=(10,8))
plt.axes().set_aspect('equal')
plt.scatter([0],[0],s=30)
m = 1.5
x = np.pi/3
theta = np.linspace(-180,180,180)
theta_r = np.radians(theta)
mu = np.cos(theta_r)
scat = 15 * miepython.i_unpolarized(m,x,mu)
plt.plot(scat*np.cos(theta/180*np.pi),scat*np.sin(theta/180*np.pi))
for i in range(12):
ii = i*15
xx = scat[ii]*np.cos(theta_r[ii])
yy = scat[ii]*np.sin(theta_r[ii])
# print(xx,yy)
plt.annotate('',xy=(xx,yy),xytext=(0,0),arrowprops=dict(arrowstyle="->",color='red'))
plt.annotate('incident irradiance', xy=(-4.5,-2.3),ha='left',color='blue',fontsize=14)
for i in range(6):
y0 = i -2.5
plt.annotate('',xy=(-1.5,y0),xytext=(-5,y0),arrowprops=dict(arrowstyle="->",color='blue'))
plt.annotate('unscattered irradiance', xy=(3,-2.3),ha='left',color='blue',fontsize=14)
for i in range(6):
y0 = i -2.5
plt.annotate('',xy=(7,y0),xytext=(3,y0),arrowprops=dict(arrowstyle="->",color='blue',ls=':'))
plt.annotate('scattered\nspherical wave', xy=(0,1.5),ha='left',color='red',fontsize=16)
plt.xlim(-5,7)
plt.ylim(-3,3)
#plt.axis('off')
plt.show()
m = 1.5
x = np.pi/3
theta = np.linspace(-180,180,180)
mu = np.cos(theta/180*np.pi)
scat = miepython.i_unpolarized(m,x,mu)
fig,ax = plt.subplots(1,2,figsize=(12,5))
ax=plt.subplot(121, projection='polar')
ax.plot(theta/180*np.pi,scat)
ax.set_rticks([0.05, 0.1,0.15])
ax.set_title("m=1.5, Sphere Diameter = $\lambda$/3")
plt.subplot(122)
plt.plot(theta,scat)
plt.xlabel('Exit Angle [degrees]')
plt.ylabel('Unpolarized Scattered light [1/sr]')
plt.title('m=1.5, Sphere Diameter = $\lambda$/3')
plt.ylim(0.00,0.2)
plt.show()
m = 1.33
lambda0 = 632.8 # nm
d = 200 # nm
theta = np.linspace(-180,180,180)
mu = np.cos(theta/180*np.pi)
Ipar, Iper = miepython.ez_intensities(m, d, lambda0, mu)
fig,ax = plt.subplots(1,2,figsize=(12,5))
ax=plt.subplot(121, projection='polar')
ax.plot(theta/180*np.pi,Ipar)
ax.plot(theta/180*np.pi,Iper)
ax.set_rticks([0.05, 0.1, 0.15, 0.20])
plt.title("m=%.2f, Sphere Diameter = %.0f nm, $\lambda$=%.1f nm" % (m, d, lambda0))
plt.subplot(122)
plt.plot(theta,Ipar)
plt.plot(theta,Iper)
plt.xlabel('Exit Angle [degrees]')
plt.ylabel('Unpolarized Scattered light [1/sr]')
plt.title("m=%.2f, Sphere Diameter = %.0f nm, $\lambda$=%.1f nm" % (m, d, lambda0))
plt.ylim(0.00,0.2)
plt.show()
m = 1.3
x = 0.01
theta = np.linspace(-180,180,180)
mu = np.cos(theta/180*np.pi)
ipar = miepython.i_par(m,x,mu)/2
iper = miepython.i_per(m,x,mu)/2
iun = miepython.i_unpolarized(m,x,mu)
fig,ax = plt.subplots(1,2,figsize=(12,5))
ax=plt.subplot(121, projection='polar')
ax.plot(theta/180*np.pi,iper,'r--')
ax.plot(theta/180*np.pi,ipar,'b:')
ax.plot(theta/180*np.pi,iun,'k')
ax.set_rticks([0.05, 0.1,0.15])
plt.title('m=%.2f, Sphere Parameter = %.2f' %(m,x))
plt.subplot(122)
plt.plot(theta,iper,'r--')
plt.plot(theta,ipar,'b:')
plt.plot(theta,iun,'k')
plt.xlabel('Exit Angle [degrees]')
plt.ylabel('Normalized Scattered light [1/sr]')
plt.title('m=%.2f, Sphere Parameter = %.2f' %(m,x))
plt.ylim(0.00,0.125)
plt.text(130,0.02,r"$0.5I_{per}$",color="blue", fontsize=16)
plt.text(120,0.062,r"$0.5I_{par}$",color="red", fontsize=16)
plt.text(30,0.11,r"$I_{unpolarized}$",color="black", fontsize=16)
plt.show()
m = 1.5 - 1.5j
x = 1
mu = np.linspace(-1,1,501)
intensity = miepython.i_unpolarized(m,x,mu)
qext, qsca, qback, g = miepython.mie(m,x)
a = qsca/qext
#integrate over all angles
dmu = mu[1] - mu[0]
total = 2 * np.pi * dmu * np.sum(intensity)
plt.plot(mu,intensity)
plt.xlabel(r'$\cos(\theta)$')
plt.ylabel('Unpolarized Scattering Intensity [1/sr]')
plt.title('m=%.3f%+.3fj, x=%.2f, a=%.3f, total=%.3f'%(m.real,m.imag,x,a, total))
plt.show()
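# Explicit comparison (a sketch): with this normalization the numerical integral over all
# solid angles should approach the single-scattering albedo a = qsca/qext computed above.
print("integral over 4*pi sr =", total, "  albedo a =", a)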
"""
MIEV0 Test Case 14: Refractive index: real 1.500 imag -1.000E+00, Mie size parameter = 1.000
Angle Cosine S-sub-1 S-sub-2 Intensity Deg of Polzn
0.00 1.000000 5.84080E-01 1.90515E-01 5.84080E-01 1.90515E-01 3.77446E-01 0.0000
30.00 0.866025 5.65702E-01 1.87200E-01 5.00161E-01 1.45611E-01 3.13213E-01 -0.1336
60.00 0.500000 5.17525E-01 1.78443E-01 2.87964E-01 4.10540E-02 1.92141E-01 -0.5597
90.00 0.000000 4.56340E-01 1.67167E-01 3.62285E-02 -6.18265E-02 1.20663E-01 -0.9574
"""
x=1.0
m=1.5-1.0j
mu=np.cos(np.linspace(0,90,4) * np.pi/180)
qext, qsca, qback, g = miepython.mie(m,x)
albedo = qsca/qext
unpolar = miepython.i_unpolarized(m,x,mu) # normalized to a
unpolar /= albedo # normalized to 1
unpolar_miev = np.array([3.77446E-01,3.13213E-01,1.92141E-01,1.20663E-01])
unpolar_miev /= np.pi * qsca * x**2 # normalized to 1
ratio = unpolar_miev/unpolar
print("MIEV0 Test Case 14: m=1.500-1.000j, Mie size parameter = 1.000")
print()
print(" %9.1f°%9.1f°%9.1f°%9.1f°"%(0,30,60,90))
print("MIEV0 %9.5f %9.5f %9.5f %9.5f"%(unpolar_miev[0],unpolar_miev[1],unpolar_miev[2],unpolar_miev[3]))
print("miepython %9.5f %9.5f %9.5f %9.5f"%(unpolar[0],unpolar[1],unpolar[2],unpolar[3]))
print("ratio %9.5f %9.5f %9.5f %9.5f"%(ratio[0],ratio[1],ratio[2],ratio[3]))
"""
MIEV0 Test Case 10: Refractive index: real 1.330 imag -1.000E-05, Mie size parameter = 100.000
Angle Cosine S-sub-1 S-sub-2 Intensity Deg of Polzn
0.00 1.000000 5.25330E+03 -1.24319E+02 5.25330E+03 -1.24319E+02 2.76126E+07 0.0000
30.00 0.866025 -5.53457E+01 -2.97188E+01 -8.46720E+01 -1.99947E+01 5.75775E+03 0.3146
60.00 0.500000 1.71049E+01 -1.52010E+01 3.31076E+01 -2.70979E+00 8.13553E+02 0.3563
90.00 0.000000 -3.65576E+00 8.76986E+00 -6.55051E+00 -4.67537E+00 7.75217E+01 -0.1645
"""
x=100.0
m=1.33-1e-5j
mu=np.cos(np.linspace(0,90,4) * np.pi/180)
qext, qsca, qback, g = miepython.mie(m,x)
albedo = qsca/qext
unpolar = miepython.i_unpolarized(m,x,mu) # normalized to a
unpolar /= albedo # normalized to 1
unpolar_miev = np.array([2.76126E+07,5.75775E+03,8.13553E+02,7.75217E+01])
unpolar_miev /= np.pi * qsca * x**2 # normalized to 1
ratio = unpolar_miev/unpolar
print("MIEV0 Test Case 10: m=1.330-0.00001j, Mie size parameter = 100.000")
print()
print(" %9.1f°%9.1f°%9.1f°%9.1f°"%(0,30,60,90))
print("MIEV0 %9.5f %9.5f %9.5f %9.5f"%(unpolar_miev[0],unpolar_miev[1],unpolar_miev[2],unpolar_miev[3]))
print("miepython %9.5f %9.5f %9.5f %9.5f"%(unpolar[0],unpolar[1],unpolar[2],unpolar[3]))
print("ratio %9.5f %9.5f %9.5f %9.5f"%(ratio[0],ratio[1],ratio[2],ratio[3]))
"""
MIEV0 Test Case 7: Refractive index: real 0.750 imag 0.000E+00, Mie size parameter = 10.000
Angle Cosine S-sub-1 S-sub-2 Intensity Deg of Polzn
0.00 1.000000 5.58066E+01 -9.75810E+00 5.58066E+01 -9.75810E+00 3.20960E+03 0.0000
30.00 0.866025 -7.67288E+00 1.08732E+01 -1.09292E+01 9.62967E+00 1.94639E+02 0.0901
60.00 0.500000 3.58789E+00 -1.75618E+00 3.42741E+00 8.08269E-02 1.38554E+01 -0.1517
90.00 0.000000 -1.78590E+00 -5.23283E-02 -5.14875E-01 -7.02729E-01 1.97556E+00 -0.6158
"""
x=10.0
m=0.75
mu=np.cos(np.linspace(0,90,4) * np.pi/180)
qext, qsca, qback, g = miepython.mie(m,x)
albedo = qsca/qext
unpolar = miepython.i_unpolarized(m,x,mu) # normalized to a
unpolar /= albedo # normalized to 1
unpolar_miev = np.array([3.20960E+03,1.94639E+02,1.38554E+01,1.97556E+00])
unpolar_miev /= np.pi * qsca * x**2 # normalized to 1
ratio = unpolar_miev/unpolar
print("MIEV0 Test Case 7: m=0.75, Mie size parameter = 10.000")
print()
print(" %9.1f°%9.1f°%9.1f°%9.1f°"%(0,30,60,90))
print("MIEV0 %9.5f %9.5f %9.5f %9.5f"%(unpolar_miev[0],unpolar_miev[1],unpolar_miev[2],unpolar_miev[3]))
print("miepython %9.5f %9.5f %9.5f %9.5f"%(unpolar[0],unpolar[1],unpolar[2],unpolar[3]))
print("ratio %9.5f %9.5f %9.5f %9.5f"%(ratio[0],ratio[1],ratio[2],ratio[3]))
"""
BHMie Test Case 14, Refractive index = 1.5000-1.0000j, Size parameter = 1.0000
Angle Cosine S1 S2
0.00 1.0000 -8.38663e-01 -8.64763e-01 -8.38663e-01 -8.64763e-01
0.52 0.8660 -8.19225e-01 -8.61719e-01 -7.21779e-01 -7.27856e-01
1.05 0.5000 -7.68157e-01 -8.53697e-01 -4.19454e-01 -3.72965e-01
1.57 0.0000 -7.03034e-01 -8.43425e-01 -4.44461e-02 6.94424e-02
"""
x=1.0
m=1.5-1j
mu=np.cos(np.linspace(0,90,4) * np.pi/180)
qext, qsca, qback, g = miepython.mie(m,x)
albedo = qsca/qext
unpolar = miepython.i_unpolarized(m,x,mu) # normalized to a
unpolar /= albedo # normalized to 1
s1_bh = np.empty(4,dtype=complex)
s1_bh[0] = -8.38663e-01 - 8.64763e-01*1j
s1_bh[1] = -8.19225e-01 - 8.61719e-01*1j
s1_bh[2] = -7.68157e-01 - 8.53697e-01*1j
s1_bh[3] = -7.03034e-01 - 8.43425e-01*1j
s2_bh = np.empty(4,dtype=complex)
s2_bh[0] = -8.38663e-01 - 8.64763e-01*1j
s2_bh[1] = -7.21779e-01 - 7.27856e-01*1j
s2_bh[2] = -4.19454e-01 - 3.72965e-01*1j
s2_bh[3] = -4.44461e-02 + 6.94424e-02*1j
# BHMie seems to normalize their intensities to 4 * pi * x**2 * Qsca
unpolar_bh = (abs(s1_bh)**2+abs(s2_bh)**2)/2
unpolar_bh /= np.pi * qsca * 4 * x**2 # normalized to 1
ratio = unpolar_bh/unpolar
print("BHMie Test Case 14: m=1.5000-1.0000j, Size parameter = 1.0000")
print()
print(" %9.1f°%9.1f°%9.1f°%9.1f°"%(0,30,60,90))
print("BHMIE %9.5f %9.5f %9.5f %9.5f"%(unpolar_bh[0],unpolar_bh[1],unpolar_bh[2],unpolar_bh[3]))
print("miepython %9.5f %9.5f %9.5f %9.5f"%(unpolar[0],unpolar[1],unpolar[2],unpolar[3]))
print("ratio %9.5f %9.5f %9.5f %9.5f"%(ratio[0],ratio[1],ratio[2],ratio[3]))
print()
print("Note that this test is identical to MIEV0 Test Case 14 above.")
print()
print("Wiscombe's code is much more robust than Bohren's so I attribute errors all to Bohren")
x=3
m=1.33-1e-8j
theta = np.linspace(0,180,181)
mu = np.cos(theta*np.pi/180)
scaling_factor = 16*np.pi
iper = scaling_factor*miepython.i_per(m,x,mu)
ipar = scaling_factor*miepython.i_par(m,x,mu)
P = (iper-ipar)/(iper+ipar)
plt.subplots(2,1,figsize=(8,8))
plt.subplot(2,1,1)
plt.semilogy(theta,ipar,label='$i_{par}$')
plt.semilogy(theta,iper,label='$i_{per}$')
plt.xlim(0,180)
plt.xticks(range(0,181,30))
plt.ylabel('i$_{par}$ and i$_{per}$')
plt.legend()
plt.title('Figure 4.9 from Bohren & Huffman')
plt.subplot(2,1,2)
plt.plot(theta,P)
plt.ylim(-1,1)
plt.xticks(range(0,181,30))
plt.xlim(0,180)
plt.ylabel('Polarization')
plt.plot([0,180],[0,0],':k')
plt.xlabel('Angle (Degrees)')
plt.show()
x=5
m=10000
theta = np.linspace(0,180,361)
mu = np.cos(theta*np.pi/180)
fig, ax = plt.subplots(figsize=(8,8))
x=10
s1,s2 = miepython.mie_S1_S2(m,x,mu)
sone = 2.5*abs(s1)
stwo = 2.5*abs(s2)
plt.plot(theta,sone,'b')
plt.plot(theta,stwo,'--r')
plt.annotate('x=%.1f '%x,xy=(theta[-1],sone[-1]),ha='right',va='bottom')
x=5
s1,s2 = miepython.mie_S1_S2(m,x,mu)
sone = 2.5*abs(s1) + 1
stwo = 2.5*abs(s2) + 1
plt.plot(theta,sone,'b')
plt.plot(theta,stwo,'--r')
plt.annotate('x=%.1f '%x,xy=(theta[-1],sone[-1]),ha='right',va='bottom')
x=3
s1,s2 = miepython.mie_S1_S2(m,x,mu)
sone = 2.5*abs(s1) + 2
stwo = 2.5*abs(s2) + 2
plt.plot(theta,sone,'b')
plt.plot(theta,stwo,'--r')
plt.annotate('x=%.1f '%x,xy=(theta[-1],sone[-1]),ha='right',va='bottom')
x=1
s1,s2 = miepython.mie_S1_S2(m,x,mu)
sone = 2.5*abs(s1) + 3
stwo = 2.5*abs(s2) + 3
plt.plot(theta,sone,'b')
plt.plot(theta,stwo,'--r')
plt.annotate('x=%.1f '%x,xy=(theta[-1],sone[-1]),ha='right',va='bottom')
x=0.5
s1,s2 = miepython.mie_S1_S2(m,x,mu)
sone = 2.5*abs(s1) + 4
stwo = 2.5*abs(s2) + 4
plt.plot(theta,sone,'b')
plt.plot(theta,stwo,'--r')
plt.annotate('x=%.1f '%x,xy=(theta[-1],sone[-1]),ha='right',va='bottom')
plt.xlim(0,180)
plt.ylim(0,5.5)
plt.xticks(range(0,181,30))
plt.yticks(np.arange(0,5.51,0.5))
plt.title('Figure 29 from van de Hulst, Non-Absorbing Spheres')
plt.xlabel('Angle (Degrees)')
ax.set_yticklabels(['0','1/2','0','1/2','0','1/2','0','1/2','0','1/2','5',' '])
plt.grid(True)
plt.show()
## Kerker, Angular Gain
x=1
m=10000
theta = np.linspace(0,180,361)
mu = np.cos(theta*np.pi/180)
fig, ax = plt.subplots(figsize=(8,8))
s1,s2 = miepython.mie_S1_S2(m,x,mu)
G1 = 4*abs(s1)**2/x**2
G2 = 4*abs(s2)**2/x**2
plt.plot(theta,G1,'b')
plt.plot(theta,G2,'--r')
plt.annotate('$G_1$',xy=(50,0.36),color='blue',fontsize=14)
plt.annotate('$G_2$',xy=(135,0.46),color='red',fontsize=14)
plt.xlim(0,180)
plt.xticks(range(0,181,30))
plt.title('Figure 4.51 from Kerker, Non-Absorbing Spheres, x=1')
plt.xlabel('Angle (Degrees)')
plt.ylabel('Angular Gain')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load and prepare the data
Step2: Checking out the data
Step3: Dummy variables
Step4: Scaling target variables
Step5: Splitting the data into training, testing, and validation sets
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step7: Time to build the network
Step8: Training the network
Step9: Check out your predictions
Step10: Thinking about your results
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
rides[:24*10].plot(x='dteday', y='cnt')
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
# Save the last 21 days
test_data = data[-21*24:]
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
# Hold out the last 60 days of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
class NeuralNetwork(object):
@staticmethod
def sigmoid(x):
return 1 / (1 + np.exp(-x))
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.input_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5,
(self.output_nodes, self.hidden_nodes))
self.lr = learning_rate
self.activation_function = NeuralNetwork.sigmoid
def train(self, inputs_list, targets_list):
# Convert inputs list to 2d array
inputs = np.array(inputs_list, ndmin=2).T
targets = np.array(targets_list, ndmin=2).T
### Forward pass ###
hidden_inputs = np.dot(self.weights_input_to_hidden, inputs)
hidden_outputs = self.activation_function(hidden_inputs)
final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)
final_outputs = final_inputs
### Backward pass ###
output_errors = targets - final_outputs
hidden_errors = np.dot(self.weights_hidden_to_output.T, output_errors)
hidden_grad = hidden_outputs * (1 - hidden_outputs)
self.weights_hidden_to_output += self.lr * (output_errors * hidden_outputs).T
self.weights_input_to_hidden += self.lr * np.dot((hidden_errors * hidden_grad), inputs.T)
def run(self, inputs_list):
inputs = np.array(inputs_list, ndmin=2).T
hidden_inputs = np.dot(self.weights_input_to_hidden, inputs)
hidden_outputs = self.activation_function(hidden_inputs)
final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)
final_outputs = final_inputs
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
import sys
### Set the hyperparameters here ###
epochs = 1000
learning_rate = 0.1
hidden_nodes = 10
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for e in range(epochs):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
for record, target in zip(train_features.ix[batch].values,
train_targets.ix[batch]['cnt']):
network.train(record, target)
# Printing out the training progress
train_loss = MSE(network.run(train_features), train_targets['cnt'].values)
val_loss = MSE(network.run(val_features), val_targets['cnt'].values)
sys.stdout.write("\rProgress: " + str(100 * e/float(epochs))[:4] \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
plt.ylim(ymax=0.5)
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features)*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.ix[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
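# Quantitative check (a sketch to accompany the plot above): test-set MSE in scaled units,
# computed with the same MSE helper used during training.
test_loss = MSE(network.run(test_features), test_targets['cnt'].values)
print("Test MSE (scaled units): {:.3f}".format(test_loss))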
import unittest
inputs = [0.5, -0.2, 0.1]
targets = [0.4]
test_w_i_h = np.array([[0.1, 0.4, -0.3],
[-0.2, 0.5, 0.2]])
test_w_h_o = np.array([[0.3, -0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328, -0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, 0.39775194, -0.29887597],
[-0.20185996, 0.50074398, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Support Vector Machines (SVM) are based on learning a vector $w$ and an intercept $b$ such that the hyperplane $w^T x - b = 0$ separates the data i.e. $a$ belongs to one class if $w^T a - b > 0$ and the other elsewhere.
Step3: The following illustration can be found in the Python Data Science Handbook by Jake VanderPlas.
Step4: We see clearly that the linear SVM seeks to maximize the margin between the hyperplane and the two well-defined classes in the data.
Step5: Let us use the same linear SVM classifier. Obviously, there are misclassified points, the model is thus learnt not by maximizing the margin (which does not exist anymore) but by minimizing a penalty over misclassified data. This penalty takes the form of an allowance margin controlled by a parameter $C$. The smaller $C$ the more inclusive the margin. Finding a good value for $C$ is up to the data scientist.
Step6: To find out which value of $C$ to use or globally the performance of the classifier, one can use Scikit Learn's classification metrics, for instance the confusion matrix.
Step7: It can also be plotted in a fancier way with seaborn.
Step8: Kernels
Step9: Let us compare the linear and rbf training error using the zero one loss (the proportion of misclassified examples).
Step10: Multiple classes
Step11: Other classifiers
Step12: One immediate problem here is that the features are not numeric (not floats). Thankfully, Scikit Learn provides encoders to convert categorical (aka nominal, discrete) features to numerical ones.
Step13: Even numerical values were encoded; since we are going to normalize, this is not really important.
Step14: Regression and Feature selection with the Lasso
Step15: We can observe the regressor $w$ provided by the model, notice the sparsity.
Step16: We can observe which coefficients are put to $0$ and which ones are positively/negatively correlated.
Step17: Let us take a look at our predictions.
Step18: Regularization path
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs
%matplotlib inline
# we create 40 separable points in R^2 around 2 centers (random_state=6 is a seed so that the set is separable)
X, y = make_blobs(n_samples=40, n_features=2, centers=2 , random_state=6)
print(X[:5,:],y[:5]) # print the first 5 points and labels
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Paired)
from sklearn.svm import SVC # Support vector classifier i.e. Classifier by SVM
modelSVMLinear = SVC(kernel="linear")
modelSVMLinear.fit(X,y)
def plot_svc_decision_function(model, ax=None, plot_support=True):
Plot the decision function for a 2D SVC
if ax is None:
ax = plt.gca()
xlim = ax.get_xlim()
ylim = ax.get_ylim()
# create grid to evaluate model
x = np.linspace(xlim[0], xlim[1], 30)
y = np.linspace(ylim[0], ylim[1], 30)
Y, X = np.meshgrid(y, x)
xy = np.vstack([X.ravel(), Y.ravel()]).T
P = model.decision_function(xy).reshape(X.shape)
# plot decision boundary and margins
ax.contour(X, Y, P, colors='k',
levels=[-1, 0, 1], alpha=0.5,
linestyles=['--', '-', '--'])
# plot support vectors
if plot_support:
ax.scatter(model.support_vectors_[:, 0],
model.support_vectors_[:, 1],
s=300, linewidth=1, facecolors='none');
ax.set_xlim(xlim)
ax.set_ylim(ylim)
plt.scatter(X[:, 0], X[:, 1], c=y , cmap=plt.cm.Paired)
plot_svc_decision_function(modelSVMLinear)
# we create points in R^2 around 2 centers (random_state=48443 is a seed so that the set is *not* separable)
X, y = make_blobs(n_samples=100, n_features=2, centers=2 , random_state=48443)
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Paired)
from sklearn.model_selection import train_test_split # sklearn > ...
XTrain, XTest, yTrain, yTest = train_test_split(X,y,test_size = 0.5) # split data in two
model1 = SVC(kernel="linear",C=0.01)
model1.fit(XTrain,yTrain)
model2 = SVC(kernel="linear",C=100)
model2.fit(XTrain,yTrain)
plt.scatter(XTrain[:, 0], XTrain[:, 1], c=yTrain , cmap=plt.cm.Paired)
plot_svc_decision_function(model1)
plt.title("C = 0.01")
plt.scatter(XTrain[:, 0], XTrain[:, 1], c=yTrain , cmap=plt.cm.Paired)
plot_svc_decision_function(model2)
plt.title("C = 100")
from sklearn.metrics import confusion_matrix
yFit1 = model1.predict(XTest)
yFit2 = model2.predict(XTest)
mat1 = confusion_matrix(yTest, yFit1)
mat2 = confusion_matrix(yTest, yFit2)
print('Model with C = 0.01')
print(mat1)
print("Model with C = 100")
print(mat2)
import seaborn as sns
sns.heatmap(mat1, square=True, annot=True ,cbar=False)
plt.ylabel('true label')
plt.xlabel('predicted label')
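# --- Illustrative extra (not in the original notebook): choosing C by cross-validation ---
# Assumption: a 5-fold grid search over a small logarithmic range of C values is sufficient here.
from sklearn.model_selection import GridSearchCV
grid = GridSearchCV(SVC(kernel="linear"), {"C": [0.01, 0.1, 1, 10, 100]}, cv=5)
grid.fit(XTrain, yTrain)
print("Best C found by cross-validation:", grid.best_params_["C"])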
from sklearn.datasets import make_moons
X,y = make_moons(noise=0.1)
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Paired)
modelLinear = SVC(kernel="linear")
modelLinear.fit(X,y)
modelRbf = SVC(kernel="rbf")
modelRbf.fit(X,y)
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Paired)
plot_svc_decision_function(modelLinear)
plot_svc_decision_function(modelRbf)
plt.title("The two models superposed")
from sklearn.metrics import zero_one_loss
yFitLinear = modelLinear.predict(X)
yFitRbf = modelRbf.predict(X)
print("0/1 loss -- Linear: {:.3f} Rbf: {:.3f}".format(zero_one_loss(y, yFitLinear),zero_one_loss(y, yFitRbf)))
import pandas as pd
import numpy as np
iris = pd.read_csv('data/iris.csv')
classes = pd.DataFrame(iris["species"])
features = iris.drop(["species","sepal_length","sepal_width"],axis=1)
classes.sample(6)
features.sample(6)
XTrain, XTest, yTrain, yTest = train_test_split(features,classes,test_size = 0.5)
from sklearn.multiclass import OneVsRestClassifier
yPred = OneVsRestClassifier(SVC()).fit(XTrain, yTrain).predict(XTest)
print(yPred) # Note the classes are strings, not numbers, but everything went as expected
class_labels= ['virginica' , 'setosa' , 'versicolor']
sns.heatmap(confusion_matrix(yTest, yPred), square=True, annot=True ,cbar=False, xticklabels= class_labels, yticklabels=class_labels)
plt.ylabel('true label')
plt.xlabel('predicted label')
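# --- Illustrative extra (not in the original notebook): per-class precision/recall ---
# classification_report summarises the same confusion matrix as text.
from sklearn.metrics import classification_report
print(classification_report(yTest, yPred))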
import pandas as pd
import numpy as np
student = pd.read_csv('data/student-mat.csv')
student.head()
target = pd.DataFrame(student["G3"])
features = student.drop(["G3"],axis=1)
from sklearn.preprocessing import LabelEncoder
lenc = LabelEncoder()
num_features = features.apply(lenc.fit_transform)
num_features.head()
from sklearn.preprocessing import StandardScaler, add_dummy_feature
scaler = StandardScaler()
normFeatures = add_dummy_feature(scaler.fit_transform(num_features))
preproData = pd.DataFrame(normFeatures , columns=[ "intercept" ] + list(num_features.columns) )
preproData.describe().T
from sklearn.model_selection import train_test_split # sklearn > ...
from sklearn.linear_model import Lasso
XTrain, XTest, yTrain, yTest = train_test_split(preproData,target,test_size = 0.25)
model = Lasso(alpha=0.1)
model.fit(XTrain,yTrain)
model.coef_
print("Value Feature")
for idx,val in enumerate(model.coef_):
print("{:6.3f} {}".format(val,preproData.columns[idx]))
targetPred = model.predict(XTest)
print("Predicted True")
for idx,val in enumerate(targetPred):
print("{:4.1f} {:.0f}".format(val,float(yTest.iloc[idx])))
n_test = 15
alpha_tab = np.logspace(-10,1,base=2,num = n_test)
print(alpha_tab)
trainError = np.zeros(n_test)
testError = np.zeros(n_test)
featureNum = np.zeros(n_test)
for idx,alpha in enumerate(alpha_tab):
model = Lasso(alpha=alpha)
model.fit(XTrain,yTrain)
yPredTrain = model.predict(XTrain)
yPredTest = model.predict(XTest)
trainError[idx] = np.linalg.norm(yPredTrain-yTrain["G3"].values)/yTrain.count()
testError[idx] = np.linalg.norm(yPredTest-yTest["G3"].values)/yTest.count()
featureNum[idx] = sum(model.coef_!=0)
alpha_opt = alpha_tab[np.argmin(testError)]
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
%matplotlib inline
plt.subplot(311)
plt.xscale("log")
plt.plot(alpha_tab, trainError,label="train error")
plt.xlim([min(alpha_tab),max(alpha_tab)])
plt.legend()
plt.xticks([])
plt.axvline(x=alpha_opt)
plt.ylabel("error")
plt.subplot(312)
plt.xscale("log")
plt.plot(alpha_tab, testError,'r',label="test error")
plt.xlim([min(alpha_tab),max(alpha_tab)])
#plt.ylim([0.19, 0.21])
plt.legend()
plt.axvline(x=alpha_opt)
plt.xticks([])
plt.ylabel("error")
plt.subplot(313)
plt.xscale("log")
plt.scatter(alpha_tab, featureNum)
plt.xlim([min(alpha_tab),max(alpha_tab)])
plt.ylim([0,28])
plt.axvline(x=alpha_opt)
plt.ylabel("nb. of features")
plt.xlabel("alpha")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2 - What is the male/female proportion among the people involved in the accidents? Represent the result graphically.
Step2: 2 - What proportion of the accidents took place during the day, at night, or at dawn/dusk? Represent the result graphically.
Step3: 3 - Geographic position
|
<ASSISTANT_TASK:>
Python Code:
dfc = pd.read_csv('./DATA/caracteristiques_2016.csv')
dfu = pd.read_csv('./DATA/usagers_2016.csv')
dfl = pd.read_csv('./DATA/lieux_2016.csv')
df = pd.concat([dfu, dfc, dfl], axis=1)
dfc.tail()
dfu.head()
dfl.tail()
df.head()
df = pd.concat([df, dfl], axis=1)
df.head()
# not a clean method (manual counting)
(h,c)=df[df.sexe==1].shape
(f,c)=df[df.sexe==2].shape
(t,c)=df.shape
print('h/t=', h/t)
print('f/t=', f/t)
# pandas method
df["sexe"].value_counts(normalize=True)
fig = plt.figure()
df[df.grav==2].sexe.value_counts(normalize=True).plot.pie(labels=['Homme', 'Femme'], colors= ['r', 'g'], autopct='%.2f')
dlum = df["lum"].value_counts(normalize=True)
dlum = dlum.sort_index()
dlum
dlum[3] = dlum[3:5].sum()
fig = plt.figure()
dlum[1:3].plot.pie(labels=['Jour','Aube/crépuscule', 'Nuit'], colors= ['y', 'g' , 'b'], autopct='%.2f')
df.lat=df.lat/100000
df.long=df.long/100000
dfp = df[df.gps=='M']
dfp = dfp[['lat','long']]
dfp = dfp[(dfp.long!=0.0) & (dfp.lat!=0.0)]
dfp.head()
#fig = plt.figure()
dfp.plot.scatter(x='long', y='lat',s=1);
df[(df.long!=0.0) & (df.lat!=0.0) & (df.gps=='M')].plot.scatter(x='long', y='lat',s=.5);
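# --- Illustrative extra (not part of the original exercise): severity breakdown ---
# Assumption: 'grav' follows the standard French BAAC coding
# (1 = unharmed, 2 = killed, 3 = hospitalised, 4 = lightly injured).
df["grav"].value_counts(normalize=True).sort_index().plot.bar();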
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Library Imports
Step2: Setup
Step3: Generate the Synthea data
Step4: Generate the data
Step5: Export the data to BigQuery
Step6: Run the following commands to install Java 8, clone the BigQuery data importer, and upload the generated CSV files to the staging bucket
Step7: Run the data importer pipeline. This step takes ~11 minutes; you can monitor the progress of the job via the Cloud Dataflow dashboard (https
|
<ASSISTANT_TASK:>
Python Code:
from google.colab import auth
auth.authenticate_user()
credentials = auth._check_adc()
print(credentials)
from google.cloud import bigquery
from google.cloud import storage
project = "" #@param {type:"string"}
if not project:
raise Exception("Project is empty.")
!gcloud config set project $project
dataset = "SYNMASS_2k" #@param {type:"string"}
staging_bucket_name = "" #@param {type:"string"}
if not staging_bucket_name:
raise Exception("Staging bucket name is empty.")
if staging_bucket_name.startswith("gs://"):
staging_bucket_path = staging_bucket_name
staging_bucket_name = staging_bucket_path[5:]
else:
staging_bucket_path = "gs://" + staging_bucket_name
# Create the staging bucket if it doesn't exist.
storage_client = storage.Client(project)
if storage_client.lookup_bucket(staging_bucket_name) is None:
bucket = storage_client.create_bucket(staging_bucket_name)
# Clone the Synthea code
!git clone https://github.com/synthetichealth/synthea.git
# Compile the code. This will take ~2 minutes.
%cd ./synthea
!git checkout 56032e01bd2afb154dd94f62ae836459ee7821c9
!./gradlew build -x test
%%bash
time ./run_synthea Massachusetts -p 2000 -s 123 --exporter.csv.export=true > data_generation.log 2> error.log
echo "done"
%%bash -s "$project" "$dataset"
# This step is only needed if the dataset does not exist.
bq mk --dataset $1:$2
%%bash -s "$staging_bucket_path"
apt-get install openjdk-8-jdk-headless -qq > /dev/null
update-java-alternatives -s java-1.8.0-openjdk-amd64
git clone https://github.com/GoogleCloudPlatform/bigquery-data-importer.git
tar --create --gzip --file synmass.tar.gz output/csv
gsutil cp synmass.tar.gz "$1"
%cd bigquery-data-importer
%%bash -s "$project" "$dataset" "$staging_bucket_path" "$staging_bucket_name"
./gradlew run --stacktrace -PappArgs="[\
'--gcp_project_id', '${1}',\
'--gcs_uri', '${3}/synmass.tar.gz',\
'--bq_dataset', '${2}',\
'--temp_bucket', '${4}',\
'--verbose', 'true'
]"
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: What we have just done is create a dog who can only feel what's in his location (since he's blind), and can eat or drink. Let's see if he's alive...
Step2: This is our dog. How cool is he? Well, he's hungry and needs to go search for food. For him to do this, we need to give him a program. But before that, let's create a park for our dog to play in.
Step3: Wumpus Environment
Step4: PROGRAM
Step5: That's how easy it is to implement an agent, its program, and environment. But that was a very simple case. What if our environment was 2-dimensional instead of 1? And what if we had multiple agents?
|
<ASSISTANT_TASK:>
Python Code:
from agents import *
class BlindDog(Agent):
def eat(self, thing):
print("Dog: Ate food at {}.".format(self.location))
def drink(self, thing):
print("Dog: Drank water at {}.".format( self.location))
dog = BlindDog()
print(dog.alive)
class Food(Thing):
pass
class Water(Thing):
pass
class Park(Environment):
def percept(self, agent):
'''prints & return a list of things that are in our agent's location'''
things = self.list_things_at(agent.location)
print(things)
return things
def execute_action(self, agent, action):
'''changes the state of the environment based on what the agent does.'''
if action == "move down":
agent.movedown()
elif action == "eat":
items = self.list_things_at(agent.location, tclass=Food)
if len(items) != 0:
if agent.eat(items[0]): #Have the dog pick eat the first item
self.delete_thing(items[0]) #Delete it from the Park after.
elif action == "drink":
items = self.list_things_at(agent.location, tclass=Water)
if len(items) != 0:
if agent.drink(items[0]): #Have the dog drink the first item
self.delete_thing(items[0]) #Delete it from the Park after.
def is_done(self):
'''By default, we're done when we can't find a live agent,
but to prevent killing our cute dog, we will or it with when there is no more food or water'''
no_edibles = not any(isinstance(thing, Food) or isinstance(thing, Water) for thing in self.things)
dead_agents = not any(agent.is_alive() for agent in self.agents)
return dead_agents or no_edibles
from ipythonblocks import BlockGrid
from agents import *
color = {"Breeze": (225, 225, 225),
"Pit": (0,0,0),
"Gold": (253, 208, 23),
"Glitter": (253, 208, 23),
"Wumpus": (43, 27, 23),
"Stench": (128, 128, 128),
"Explorer": (0, 0, 255),
"Wall": (44, 53, 57)
}
def program(percepts):
'''Returns an action based on it's percepts'''
print(percepts)
return input()
w = WumpusEnvironment(program, 7, 7)
grid = BlockGrid(w.width, w.height, fill=(123, 234, 123))
def draw_grid(world):
global grid
grid[:] = (123, 234, 123)
for x in range(0, len(world)):
for y in range(0, len(world[x])):
if len(world[x][y]):
grid[y, x] = color[world[x][y][-1].__class__.__name__]
def step():
global grid, w
draw_grid(w.get_world())
grid.show()
w.step()
step()
class BlindDog(Agent):
location = 1
def movedown(self):
self.location += 1
def eat(self, thing):
'''returns True upon success or False otherwise'''
if isinstance(thing, Food):
print("Dog: Ate food at {}.".format(self.location))
return True
return False
def drink(self, thing):
''' returns True upon success or False otherwise'''
if isinstance(thing, Water):
print("Dog: Drank water at {}.".format(self.location))
return True
return False
def program(percepts):
'''Returns an action based on it's percepts'''
for p in percepts:
if isinstance(p, Food):
return 'eat'
elif isinstance(p, Water):
return 'drink'
return 'move down'
park = Park()
dog = BlindDog(program)
dogfood = Food()
water = Water()
park.add_thing(dog, 0)
park.add_thing(dogfood, 5)
park.add_thing(water, 7)
park.run(10)
class Park(XYEnvironment):
def percept(self, agent):
'''prints & return a list of things that are in our agent's location'''
things = self.list_things_at(agent.location)
print(things)
return things
def execute_action(self, agent, action):
'''changes the state of the environment based on what the agent does.'''
if action == "move down":
agent.movedown()
elif action == "eat":
items = self.list_things_at(agent.location, tclass=Food)
if len(items) != 0:
if agent.eat(items[0]): #Have the dog pick eat the first item
self.delete_thing(items[0]) #Delete it from the Park after.
elif action == "drink":
items = self.list_things_at(agent.location, tclass=Water)
if len(items) != 0:
if agent.drink(items[0]): #Have the dog drink the first item
self.delete_thing(items[0]) #Delete it from the Park after.
def is_done(self):
'''By default, we're done when we can't find a live agent,
but to prevent killing our cute dog, we will or it with when there is no more food or water'''
no_edibles = not any(isinstance(thing, Food) or isinstance(thing, Water) for thing in self.things)
dead_agents = not any(agent.is_alive() for agent in self.agents)
return dead_agents or no_edibles
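# --- Illustrative sketch (not in the original notebook): trying the two-dimensional park ---
# Assumptions: XYEnvironment locations are (x, y) pairs, so the dog needs a 2D location and a
# movedown() that changes its y coordinate; the constructor is assumed to take (width, height).
class BlindDog2D(BlindDog):
    location = [0, 1]
    def movedown(self):
        self.location[1] += 1
park2d = Park(5, 20)
dog2d = BlindDog2D(program)
park2d.add_thing(dog2d, [0, 1])
park2d.add_thing(Food(), [0, 5])
park2d.add_thing(Water(), [0, 7])
park2d.run(10)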
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Selection and Indexing
Step2: DataFrame Columns are just Series
Step3: Creating a new column
Step4: Removing Columns
Step5: Can also drop rows this way
Step6: Selecting Rows
Step7: Or select based on position instead of label
Step8: Selecting subset of rows and columns
Step9: Conditional Selection
Step10: For two conditions you can use | and & with parentheses
Step11: More Index Details
Step12: Multi-Index and Index Hierarchy
Step13: Now let's show how to index this! For index hierarchy we use df.loc[]; if this were on the columns axis, you would just use normal bracket notation df[]. Calling one level of the index returns the sub-dataframe.
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
from numpy.random import randn
np.random.seed(101)
df = pd.DataFrame(randn(5,4),index='A B C D E'.split(),columns='W X Y Z'.split())
df
df['W']
# Pass a list of column names
df[['W','Z']]
# Attribute-style access (NOT RECOMMENDED!)
df.W
type(df['W'])
df['new'] = df['W'] + df['Y']
df
df.drop('new',axis=1)
# Not inplace unless specified!
df
df.drop('new',axis=1,inplace=True)
df
df.drop('E',axis=0)
df.loc['A']
df.iloc[2]
df.loc['B','Y']
df.loc[['A','B'],['W','Y']]
df
df>0
df[df>0]
df[df['W']>0]
df[df['W']>0]['Y']
df[df['W']>0][['Y','X']]
df[(df['W']>0) & (df['Y'] > 1)]
df
# Reset to default 0,1...n index
df.reset_index()
newind = 'CA NY WY OR CO'.split()
df['States'] = newind
df
df.set_index('States')
df
df.set_index('States',inplace=True)
df
# Index Levels
outside = ['G1','G1','G1','G2','G2','G2']
inside = [1,2,3,1,2,3]
hier_index = list(zip(outside,inside))
hier_index = pd.MultiIndex.from_tuples(hier_index)
hier_index
df = pd.DataFrame(np.random.randn(6,2),index=hier_index,columns=['A','B'])
df
df.loc['G1']
df.loc['G1'].loc[1]
df.index.names
df.index.names = ['Group','Num']
df
df.xs('G1')
df.xs(['G1',1])
df.xs(1,level='Num')
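# --- Illustrative extra (not in the original lesson): tuple-based .loc on a MultiIndex ---
# Equivalent to chaining df.loc['G1'].loc[1] above.
df.loc[('G1', 1)]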
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the component using KFP SDK
Step2: Sample
Step3: Set sample parameters
Step4: Example pipeline that uses the component
Step5: Compile the pipeline
Step6: Submit the pipeline for execution
Step7: Inspect the output
|
<ASSISTANT_TASK:>
Python Code:
%%capture --no-stderr
!pip3 install kfp --upgrade
import kfp.components as comp
dataflow_python_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.7.0-rc.3/components/gcp/dataflow/launch_python/component.yaml')
help(dataflow_python_op)
!gsutil cat gs://ml-pipeline-playground/samples/dataflow/wc/wc.py
# Required Parameters
PROJECT_ID = '<Please put your project ID here>'
REGION = '<Please put a GCP region here>'
GCS_STAGING_DIR = 'gs://<Please put your GCS path here>' # No ending slash
# Optional Parameters
EXPERIMENT_NAME = 'Dataflow - Launch Python'
OUTPUT_FILE = '{}/wc/wordcount.out'.format(GCS_STAGING_DIR)
import kfp.dsl as dsl
import json
@dsl.pipeline(
name='Dataflow launch python pipeline',
description='Dataflow launch python pipeline'
)
def pipeline(
python_file_path = 'gs://ml-pipeline-playground/samples/dataflow/wc/wc.py',
project_id = PROJECT_ID,
region = REGION,
staging_dir = GCS_STAGING_DIR,
requirements_file_path = 'gs://ml-pipeline-playground/samples/dataflow/wc/requirements.txt',
args = json.dumps([
'--output', OUTPUT_FILE
]),
wait_interval = 30
):
dataflow_python_op(
python_file_path = python_file_path,
project_id = project_id,
region = region,
staging_dir = staging_dir,
requirements_file_path = requirements_file_path,
args = args,
wait_interval = wait_interval)
pipeline_func = pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
!gsutil cat $OUTPUT_FILE
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Variables
Step7: 3. Key Properties --> Seawater Properties
Step8: 3.2. Ocean Freezing Point Value
Step9: 4. Key Properties --> Resolution
Step10: 4.2. Canonical Horizontal Resolution
Step11: 4.3. Number Of Horizontal Gridpoints
Step12: 5. Key Properties --> Tuning Applied
Step13: 5.2. Target
Step14: 5.3. Simulations
Step15: 5.4. Metrics Used
Step16: 5.5. Variables
Step17: 6. Key Properties --> Key Parameter Values
Step18: 6.2. Additional Parameters
Step19: 7. Key Properties --> Assumptions
Step20: 7.2. On Diagnostic Variables
Step21: 7.3. Missing Processes
Step22: 8. Key Properties --> Conservation
Step23: 8.2. Properties
Step24: 8.3. Budget
Step25: 8.4. Was Flux Correction Used
Step26: 8.5. Corrected Conserved Prognostic Variables
Step27: 9. Grid --> Discretisation --> Horizontal
Step28: 9.2. Grid Type
Step29: 9.3. Scheme
Step30: 9.4. Thermodynamics Time Step
Step31: 9.5. Dynamics Time Step
Step32: 9.6. Additional Details
Step33: 10. Grid --> Discretisation --> Vertical
Step34: 10.2. Number Of Layers
Step35: 10.3. Additional Details
Step36: 11. Grid --> Seaice Categories
Step37: 11.2. Number Of Categories
Step38: 11.3. Category Limits
Step39: 11.4. Ice Thickness Distribution Scheme
Step40: 11.5. Other
Step41: 12. Grid --> Snow On Seaice
Step42: 12.2. Number Of Snow Levels
Step43: 12.3. Snow Fraction
Step44: 12.4. Additional Details
Step45: 13. Dynamics
Step46: 13.2. Transport In Thickness Space
Step47: 13.3. Ice Strength Formulation
Step48: 13.4. Redistribution
Step49: 13.5. Rheology
Step50: 14. Thermodynamics --> Energy
Step51: 14.2. Thermal Conductivity
Step52: 14.3. Heat Diffusion
Step53: 14.4. Basal Heat Flux
Step54: 14.5. Fixed Salinity Value
Step55: 14.6. Heat Content Of Precipitation
Step56: 14.7. Precipitation Effects On Salinity
Step57: 15. Thermodynamics --> Mass
Step58: 15.2. Ice Vertical Growth And Melt
Step59: 15.3. Ice Lateral Melting
Step60: 15.4. Ice Surface Sublimation
Step61: 15.5. Frazil Ice
Step62: 16. Thermodynamics --> Salt
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Step65: 17.2. Constant Salinity Value
Step66: 17.3. Additional Details
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Step68: 18.2. Constant Salinity Value
Step69: 18.3. Additional Details
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Step72: 20.2. Additional Details
Step73: 21. Thermodynamics --> Melt Ponds
Step74: 21.2. Formulation
Step75: 21.3. Impacts
Step76: 22. Thermodynamics --> Snow Processes
Step77: 22.2. Snow Aging Scheme
Step78: 22.3. Has Snow Ice Formation
Step79: 22.4. Snow Ice Formation Scheme
Step80: 22.5. Redistribution
Step81: 22.6. Heat Diffusion
Step82: 23. Radiative Processes
Step83: 23.2. Ice Radiation Transmission
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ncar', 'sandbox-3', 'seaice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Artistic Style Transfer with TensorFlow Lite
Step2: Download the content and style images, and the pre-trained TensorFlow Lite models.
Step3: Pre-process the inputs
Step4: Visualize the inputs
Step5: Run style transfer with TensorFlow Lite
Step6: Style transform
Step7: Style blending
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
import tensorflow.compat.v2 as tf
except Exception:
pass
tf.enable_v2_behavior()
import IPython.display as display
import matplotlib.pyplot as plt
import matplotlib as mpl
mpl.rcParams['figure.figsize'] = (12,12)
mpl.rcParams['axes.grid'] = False
import numpy as np
import time
import functools
content_path = tf.keras.utils.get_file('belfry.jpg','https://storage.googleapis.com/khanhlvg-public.appspot.com/arbitrary-style-transfer/belfry-2611573_1280.jpg')
style_path = tf.keras.utils.get_file('style23.jpg','https://storage.googleapis.com/khanhlvg-public.appspot.com/arbitrary-style-transfer/style23.jpg')
style_predict_path = tf.keras.utils.get_file('style_predict.tflite', 'https://storage.googleapis.com/download.tensorflow.org/models/tflite/arbitrary_style_transfer/style_predict_quantized_256.tflite')
style_transform_path = tf.keras.utils.get_file('style_transform.tflite', 'https://storage.googleapis.com/download.tensorflow.org/models/tflite/arbitrary_style_transfer/style_transfer_quantized_dynamic.tflite')
# Function to load an image from a file, and add a batch dimension.
def load_img(path_to_img):
img = tf.io.read_file(path_to_img)
img = tf.image.decode_image(img, channels=3)
img = tf.image.convert_image_dtype(img, tf.float32)
img = img[tf.newaxis, :]
return img
# Function to pre-process style image input.
def preprocess_style_image(style_image):
# Resize the image so that the shorter dimension becomes 256px.
target_dim = 256
shape = tf.cast(tf.shape(style_image)[1:-1], tf.float32)
short_dim = min(shape)
scale = target_dim / short_dim
new_shape = tf.cast(shape * scale, tf.int32)
style_image = tf.image.resize(style_image, new_shape)
# Central crop the image.
style_image = tf.image.resize_with_crop_or_pad(style_image, target_dim, target_dim)
return style_image
# Function to pre-process content image input.
def preprocess_content_image(content_image):
# Central crop the image.
shape = tf.shape(content_image)[1:-1]
short_dim = min(shape)
content_image = tf.image.resize_with_crop_or_pad(content_image, short_dim, short_dim)
return content_image
# Load the input images.
content_image = load_img(content_path)
style_image = load_img(style_path)
# Preprocess the input images.
preprocessed_content_image = preprocess_content_image(content_image)
preprocessed_style_image = preprocess_style_image(style_image)
print('Style Image Shape:', preprocessed_style_image.shape)
print('Content Image Shape:', preprocessed_content_image.shape)
def imshow(image, title=None):
if len(image.shape) > 3:
image = tf.squeeze(image, axis=0)
plt.imshow(image)
if title:
plt.title(title)
plt.subplot(1, 2, 1)
imshow(preprocessed_content_image, 'Content Image')
plt.subplot(1, 2, 2)
imshow(preprocessed_style_image, 'Style Image')
# Function to run style prediction on preprocessed style image.
def run_style_predict(preprocessed_style_image):
# Load the model.
interpreter = tf.lite.Interpreter(model_path=style_predict_path)
# Set model input.
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
interpreter.set_tensor(input_details[0]["index"], preprocessed_style_image)
# Calculate style bottleneck.
interpreter.invoke()
style_bottleneck = interpreter.tensor(
interpreter.get_output_details()[0]["index"]
)()
return style_bottleneck
# Calculate style bottleneck for the preprocessed style image.
style_bottleneck = run_style_predict(preprocessed_style_image)
print('Style Bottleneck Shape:', style_bottleneck.shape)
# Run style transform on preprocessed style image
def run_style_transform(style_bottleneck, preprocessed_content_image):
# Load the model.
interpreter = tf.lite.Interpreter(model_path=style_transform_path)
# Set model input.
input_details = interpreter.get_input_details()
interpreter.resize_tensor_input(input_details[0]["index"],
preprocessed_content_image.shape)
interpreter.allocate_tensors()
# Set model inputs.
interpreter.set_tensor(input_details[0]["index"], preprocessed_content_image)
interpreter.set_tensor(input_details[1]["index"], style_bottleneck)
interpreter.invoke()
# Transform content image.
stylized_image = interpreter.tensor(
interpreter.get_output_details()[0]["index"]
)()
return stylized_image
# Stylize the content image using the style bottleneck.
stylized_image = run_style_transform(style_bottleneck, preprocessed_content_image)
# Visualize the output.
imshow(stylized_image, 'Stylized Image')
# Calculate style bottleneck of the content image.
style_bottleneck_content = run_style_predict(
preprocess_style_image(content_image)
)
# Define content blending ratio between [0..1].
# 0.0: 0% style extracts from content image.
# 1.0: 100% style extracted from content image.
content_blending_ratio = 0.5 #@param {type:"slider", min:0, max:1, step:0.01}
# Blend the style bottleneck of style image and content image
style_bottleneck_blended = content_blending_ratio * style_bottleneck_content \
+ (1 - content_blending_ratio) * style_bottleneck
# Stylize the content image using the style bottleneck.
stylized_image_blended = run_style_transform(style_bottleneck_blended,
preprocessed_content_image)
# Visualize the output.
imshow(stylized_image_blended, 'Blended Stylized Image')
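# --- Illustrative extra (not part of the original tutorial): save the blended result to disk ---
# Assumption: the output filename is arbitrary; values are clipped to [0, 1] before PNG encoding.
output_image = tf.squeeze(tf.clip_by_value(stylized_image_blended, 0.0, 1.0), axis=0)
output_image = tf.cast(output_image * 255.0, tf.uint8)
tf.io.write_file('stylized_blended.png', tf.io.encode_png(output_image))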
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Simple linear model in an interactive session
Step2: Load and save models
Step3: Save model as pb file
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
import tensorflow as tf
#Basic interactive session
# Enter an interactive TensorFlow Session.
sess = tf.InteractiveSession()
# Define a var and a constant
x = tf.Variable([1.0, 2.0])
a = tf.constant([3.0, 3.0])
# Initialize the var 'x' using the run() method
x.initializer.run()
# Add an op to subtract 'a' from 'x'. Run it and print the result
sub = tf.sub(x, a)
print(sub.eval())
# ==> [-2. -1.]
# Close the Session when we're done.
sess.close()
# Get some data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('/home/ubuntu/data/training/image/mnist', one_hot=True)
# Interactive session for train a model
import tensorflow as tf
import numpy as np
# Start interactive session
sess = tf.InteractiveSession()
# Declare input variables
x = tf.placeholder(tf.float32, shape=[None, 784])
y = tf.placeholder(tf.float32, shape=[None, 10])
#Trainable variables
W = tf.Variable(tf.zeros([784,10]))
b = tf.Variable(tf.zeros([10]))
#Model
y_pred = tf.nn.softmax(tf.matmul(x,W) + b)
# Loss
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y * tf.log(y_pred), reduction_indices=[1]))
# Trainer
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
#Loop to train the model. 30 batches of 100 cases
sess.run(tf.initialize_all_variables())
for i in range(30):
batch = mnist.train.next_batch(500)
train_step.run(feed_dict={x: batch[0], y: batch[1]})
print(i, ' - ',cross_entropy.eval(feed_dict={x: batch[0], y: batch[1]}))
#Evaluate variables
# Evaluata trainable variables
print(b.eval())
print(np.max(W.eval()))
# Evaluate results variables
print(y.eval(feed_dict={x: batch[0], y: batch[1]}))
# Close the Session when we're done.
sess.close()
#Basic usage in batch mode
# Define a graph
graph = tf.Graph()
with graph.as_default():
# graph definition
# Execute a graph to train a network
with tf.Session(graph=graph) as session:
print('Initializing')
tf.initialize_all_variables().run()
for epoch in range(nEpochs):
for batch in batch_list:
feedDict = {} # dictionary of batch data to run the graph
_, param1_out, param2_out = session.run([optimizer, param1_in, param2_in], feed_dict=feedDict)
# Execute a graph to score data
#SELECT DEVICE
with tf.device('/cpu:0'):
# Include here the graph operations for the CPU.
# Creates a session with log_device_placement set to True.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
# LIMIT THE MEMORY OF THE GPU
# Assume that you have 12GB of GPU memory and want to allocate ~4GB:
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
# List of variables saved in a model file
path_model = '/home/jorge/data/tesis/handwriting/p05_ctc/IAM_corleone_first_model/'
reader = tf.train.NewCheckpointReader(path_model + "modelCTC_original_images_01_epoch_95.ckpt")
print(reader.debug_string().decode("utf-8"))
# Create and save model
import tensorflow as tf
#Load data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('/tmp/MNIST_data', one_hot=True)
sess = tf.InteractiveSession()
# Define graph
x = tf.placeholder(tf.float32, shape=[None, 784], name='x')
y = tf.placeholder(tf.float32, shape=[None, 10], name='y')
W = tf.Variable(tf.zeros([784,10]))
b = tf.Variable(tf.zeros([10]))
#Prediction
y_pred = tf.nn.softmax(tf.matmul(x,W) + b, name='y_pred')
#Loss
cross_entropy = -tf.reduce_sum(y*tf.log(y_pred), name='cross_entropy')
# Train graph
train_step = tf.train.GradientDescentOptimizer(0.01, name='train_step').minimize(cross_entropy)
# Inicialize graph vars
sess.run(tf.initialize_all_variables())
for i in range(100):
batch = mnist.train.next_batch(50)
train_step.run(feed_dict={x: batch[0], y: batch[1]})
# Predict and evaluate
correct_prediction = tf.equal(tf.argmax(y_pred,1), tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32), name='Accuracy')
print('Accuracy test', accuracy.eval(feed_dict={x: mnist.test.images, y: mnist.test.labels}))
# Add to the collection the vars that we need in the future
# (later cells retrieve them with tf.get_collection, so they must be registered before saving)
# - For train: all the placeholders and the train_step
tf.add_to_collection('x', x)
tf.add_to_collection('y', y)
tf.add_to_collection('train_step', train_step)
# - For score: the x placeholder and y_pred
tf.add_to_collection('y_pred', y_pred)
# - For validation: all placeholders plus loss & accuracy
tf.add_to_collection('cross_entropy', cross_entropy)
tf.add_to_collection('accuracy', accuracy)
# Create a saver and save weights.
saver = tf.train.Saver(max_to_keep=0)
saver.save(sess, '/tmp/my-model',)
#Close session
sess.close()
# Continue training a model
import tensorflow as tf
#Load data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('/tmp/MNIST_data', one_hot=True)
sess = tf.InteractiveSession()
#Load model
new_saver = tf.train.import_meta_graph('/tmp/my-model.meta')
new_saver.restore(sess, '/tmp/my-model')
#Load vars
x = tf.get_collection('x')[0]
y = tf.get_collection('y')[0]
#Continue training
train_step = tf.get_collection('train_step')[0]
for i in range(900):
batch = mnist.train.next_batch(50)
train_step.run(feed_dict={x: batch[0], y: batch[1]})
accuracy = tf.get_collection('accuracy')[0]
print('Accuracy test', accuracy.eval(feed_dict={x: mnist.test.images, y: mnist.test.labels}))
sess.close()
# Score new data
import tensorflow as tf
#Load data
data_path = '/home/jorge/data/training/tensorflow/'
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets(data_path + 'MNIST_data', one_hot=True)
sess = tf.InteractiveSession()
#Load model
new_saver = tf.train.import_meta_graph('/tmp/my-model.meta')
new_saver.restore(sess, '/tmp/my-model')
#Load vars
x = tf.get_collection('x')[0]
y_pred = tf.get_collection('y_pred')[0]
print('Prediction test', y_pred.eval(feed_dict={x: mnist.test.images[0:2]}))
sess.close()
# Evaluate model
import tensorflow as tf
#Load data
data_path = '/home/jorge/data/training/tensorflow/'
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets(data_path + 'MNIST_data', one_hot=True)
sess = tf.InteractiveSession()
#Load model
new_saver = tf.train.import_meta_graph('/tmp/my-model.meta')
new_saver.restore(sess, '/tmp/my-model')
#Load vars
x = tf.get_collection('x')[0]
y = tf.get_collection('y')[0]
accuracy = tf.get_collection('accuracy')[0]
cross_entropy = tf.get_collection('cross_entropy')[0]
print('cross_entropy test', cross_entropy.eval(feed_dict={x: mnist.test.images, y: mnist.test.labels}))
print('Accuracy test', accuracy.eval(feed_dict={x: mnist.test.images, y: mnist.test.labels}))
sess.close()
sess = tf.InteractiveSession()
### create some graph here ###
##############################
from tensorflow.python.framework import graph_util
graph_def = sess.graph.as_graph_def()
output_node_names = "output0,output1" # put the names of the output nodes here
output_graph_file = "/tmp/frozen_graph.pb" # path for the frozen graph (choose any)
# freeze all parameters and save
output_graph_def = graph_util.convert_variables_to_constants(
sess, graph_def, output_node_names.split(","))
with tf.gfile.GFile(output_graph_file, "wb") as f:
f.write(output_graph_def.SerializeToString())
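# --- Illustrative sketch (not in the original notes): load the frozen graph back ---
# Assumption: output_graph_file is the .pb file written above.
with tf.gfile.GFile(output_graph_file, "rb") as f:
    restored_graph_def = tf.GraphDef()
    restored_graph_def.ParseFromString(f.read())
with tf.Graph().as_default():
    tf.import_graph_def(restored_graph_def, name="")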
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Encoder
Step4: Note that we use a couple features of TF-Slim here
Step6: Loss
Step8: Visualization
Step9: Define the graph and train
Step10: <sub>[1] The primary purpose of TensorFlow is to construct a computation graph connecting Tensors and operations. Each of these nodes must be assigned a unique name; if the user does not specify one, a unique name is automatically generated, like 'Placeholder_2', with the number at the end incrementing each time you create a new node of that type. Attempting to create a node with a name already found in the graph raises an error.</sub>
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
slim = tf.contrib.slim
# Import data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
def encoder(x):
Network q(z|x)
with slim.arg_scope([slim.fully_connected],
activation_fn=tf.nn.relu,
weights_initializer=tf.truncated_normal_initializer(0.0, 0.1)):
mu_logvar = slim.fully_connected(x, 128, scope='fc1')
mu_logvar = slim.fully_connected(mu_logvar, 128, activation_fn=None, scope='fc2')
return mu_logvar
def decoder(mu_logvar):
Network p(x|z)
# Interpret z as concatenation of mean and log variance
mu, logvar = tf.split(mu_logvar, num_or_size_splits=2, axis=1)
# Standard deviation must be positive
stddev = tf.sqrt(tf.exp(logvar))
# Draw a z from the distribution
epsilon = tf.random_normal(tf.shape(stddev))
z = mu + tf.multiply(stddev, epsilon)
# Decoding arm
with slim.arg_scope([slim.fully_connected],
activation_fn=tf.nn.relu,
weights_initializer=tf.truncated_normal_initializer(0.0, 0.1)):
x_logits = slim.fully_connected(z, 128, scope='fc1')
x_logits = slim.fully_connected(x_logits, 784, activation_fn=None, scope='fc2')
# x_hat to be generated from a Bernoulli distribution
x_dist = tf.contrib.distributions.Bernoulli(logits=x_logits, dtype=tf.float32)
return x_logits, x_dist
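# Note (added for clarity): the `z = mu + tf.multiply(stddev, epsilon)` line above is the
# reparameterization trick -- sampling z ~ N(mu, stddev^2) through a fixed N(0, I) draw so
# that gradients can flow back into mu and stddev during training.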
def optimizer(x_logits, x, mu_logvar):
"""Define loss functions (reconstruction, KL divergence) and optimizer"""
with tf.variable_scope('optimizer') as scope:
# Reconstruction loss
reconstruction = tf.reduce_sum(tf.nn.sigmoid_cross_entropy_with_logits(labels=x, logits=x_logits), reduction_indices=[1])
# KL divergence
mu, logvar = tf.split(mu_logvar, num_or_size_splits=2, axis=1)
kl_d = -0.5 * tf.reduce_sum(1.0 + logvar - tf.square(mu) - tf.exp(logvar), reduction_indices=[1])
# Total loss
loss = tf.reduce_mean(reconstruction + kl_d)
# ADAM optimizer
train_step = tf.train.AdamOptimizer().minimize(loss)
return train_step
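# Note (added for clarity): the kl_d expression above is the closed form of
# KL( N(mu, sigma^2) || N(0, 1) ) = -0.5 * sum(1 + log(sigma^2) - mu^2 - sigma^2)
# over the latent dimensions, with logvar = log(sigma^2).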
def visualize_row(image, reconstruction, img_width=28, cmap='gray'):
"""Takes in a tensor of images of given width, and displays them in a column
in a plot, using `cmap` to map from numbers to colors."""
fig, ax = plt.subplots(1, 2)
image = np.reshape(image, [-1, img_width])
reconstruction = np.reshape(reconstruction, [-1, img_width])
plt.figure()
ax[0].imshow(np.clip(image, 0, 1), cmap=cmap)
ax[1].imshow(np.clip(reconstruction, 0, 1), cmap=cmap)
plt.show()
# Reset the graph
tf.reset_default_graph()
# Define input placeholder
x = tf.placeholder(tf.float32,[None, 784], name='x')
# Define VAE graph
with tf.variable_scope('encoder'):
mu_logvar = encoder(x)
with tf.variable_scope('decoder'):
x_logits, x_dist = decoder(mu_logvar)
x_hat = x_dist.sample()
# Optimization
with tf.variable_scope('unlabeled') as scope:
train_step_unlabeled = optimizer(x_logits, x, mu_logvar)
with tf.Session() as sess:
# Initialize all variables
sess.run(tf.global_variables_initializer())
# Train VAE model
for i in range(20000):
# Get a training minibatch
batch = mnist.train.next_batch(100)
# Binarize the data
x_binarized = (batch[0] > 0.5).astype(np.float32)
# Train on minibatch
sess.run(train_step_unlabeled, feed_dict={x: x_binarized}) # No labels
# Visualize reconstructions every 1000 iterations
if i % 1000 == 0:
batch = mnist.validation.next_batch(5)
x_binarized = (batch[0] > 0.5).astype(np.float32)
reconstructions = sess.run(x_hat, feed_dict={x: x_binarized})
print("Iteration {0}:".format(i))
visualize_row(batch[0], reconstructions)
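# Added sketch (assumption): once training has finished, unconditional samples can be
# drawn inside this same session by overriding the encoder output with zeros, which
# makes z ~ N(0, I):
#     prior_codes = np.zeros((5, 128), dtype=np.float32)  # mu = 0, log-variance = 0
#     prior_samples = sess.run(x_hat, feed_dict={mu_logvar: prior_codes})
#     for s in prior_samples:
#         plt.imshow(np.clip(s.reshape(28, 28), 0, 1), cmap='gray'); plt.show()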
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Click below to see all the imports we need for this example.
Step3: Let's start by defining an optimization problem.
Step4: Next we define a search space. The critical assumption is that the optimal
Step5: Now we define the search algorithm from DragonflySearch with optimizer and
Step6: The number of samples is the number of hyperparameter combinations that will be
Step7: Finally, we run the experiment to minimize the mean_loss of the objective by
Step8: Below are the recommended relative proportions of water and each salt found to
|
<ASSISTANT_TASK:>
Python Code:
# !pip install ray[tune]
!pip install dragonfly-opt==0.1.6
import numpy as np
import time
import ray
from ray import tune
from ray.tune.suggest import ConcurrencyLimiter
from ray.tune.suggest.dragonfly import DragonflySearch
def objective(config):
"""Simplistic model of electrical conductivity with added Gaussian
noise to simulate experimental noise."""
for i in range(config["iterations"]):
vol1 = config["LiNO3_vol"] # LiNO3
vol2 = config["Li2SO4_vol"] # Li2SO4
vol3 = config["NaClO4_vol"] # NaClO4
vol4 = 10 - (vol1 + vol2 + vol3) # Water
conductivity = vol1 + 0.1 * (vol2 + vol3) ** 2 + 2.3 * vol4 * (vol1 ** 1.5)
conductivity += np.random.normal() * 0.01
tune.report(timesteps_total=i, objective=conductivity)
time.sleep(0.02)
search_space = {
"iterations": 100,
"LiNO3_vol": tune.uniform(0, 7),
"Li2SO4_vol": tune.uniform(0, 7),
"NaClO4_vol": tune.uniform(0, 7)
}
ray.init(configure_logging=False)
algo = DragonflySearch(
optimizer="bandit",
domain="euclidean",
)
algo = ConcurrencyLimiter(algo, max_concurrent=4)
num_samples = 100
# Reducing samples for smoke tests
num_samples = 10
analysis = tune.run(
objective,
metric="objective",
mode="max",
name="dragonfly_search",
search_alg=algo,
num_samples=num_samples,
config=search_space
)
print("Best hyperparameters found: ", analysis.best_config)
ray.shutdown()
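# Added sketch (assumption: the installed Ray version exposes these helpers): the full
# trial history can be inspected as a pandas DataFrame and the best observed conductivity
# read back from the analysis object, e.g.
#     print(analysis.results_df.head())
#     print("Best observed conductivity:", analysis.best_result["objective"])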
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 2. Key Properties --> Resolution
Step9: 2.2. Canonical Horizontal Resolution
Step10: 2.3. Range Horizontal Resolution
Step11: 2.4. Number Of Vertical Levels
Step12: 2.5. High Top
Step13: 3. Key Properties --> Timestepping
Step14: 3.2. Timestep Shortwave Radiative Transfer
Step15: 3.3. Timestep Longwave Radiative Transfer
Step16: 4. Key Properties --> Orography
Step17: 4.2. Changes
Step18: 5. Grid --> Discretisation
Step19: 6. Grid --> Discretisation --> Horizontal
Step20: 6.2. Scheme Method
Step21: 6.3. Scheme Order
Step22: 6.4. Horizontal Pole
Step23: 6.5. Grid Type
Step24: 7. Grid --> Discretisation --> Vertical
Step25: 8. Dynamical Core
Step26: 8.2. Name
Step27: 8.3. Timestepping Type
Step28: 8.4. Prognostic Variables
Step29: 9. Dynamical Core --> Top Boundary
Step30: 9.2. Top Heat
Step31: 9.3. Top Wind
Step32: 10. Dynamical Core --> Lateral Boundary
Step33: 11. Dynamical Core --> Diffusion Horizontal
Step34: 11.2. Scheme Method
Step35: 12. Dynamical Core --> Advection Tracers
Step36: 12.2. Scheme Characteristics
Step37: 12.3. Conserved Quantities
Step38: 12.4. Conservation Method
Step39: 13. Dynamical Core --> Advection Momentum
Step40: 13.2. Scheme Characteristics
Step41: 13.3. Scheme Staggering Type
Step42: 13.4. Conserved Quantities
Step43: 13.5. Conservation Method
Step44: 14. Radiation
Step45: 15. Radiation --> Shortwave Radiation
Step46: 15.2. Name
Step47: 15.3. Spectral Integration
Step48: 15.4. Transport Calculation
Step49: 15.5. Spectral Intervals
Step50: 16. Radiation --> Shortwave GHG
Step51: 16.2. ODS
Step52: 16.3. Other Fluorinated Gases
Step53: 17. Radiation --> Shortwave Cloud Ice
Step54: 17.2. Physical Representation
Step55: 17.3. Optical Methods
Step56: 18. Radiation --> Shortwave Cloud Liquid
Step57: 18.2. Physical Representation
Step58: 18.3. Optical Methods
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Step60: 20. Radiation --> Shortwave Aerosols
Step61: 20.2. Physical Representation
Step62: 20.3. Optical Methods
Step63: 21. Radiation --> Shortwave Gases
Step64: 22. Radiation --> Longwave Radiation
Step65: 22.2. Name
Step66: 22.3. Spectral Integration
Step67: 22.4. Transport Calculation
Step68: 22.5. Spectral Intervals
Step69: 23. Radiation --> Longwave GHG
Step70: 23.2. ODS
Step71: 23.3. Other Fluorinated Gases
Step72: 24. Radiation --> Longwave Cloud Ice
Step73: 24.2. Physical Representation
Step74: 24.3. Optical Methods
Step75: 25. Radiation --> Longwave Cloud Liquid
Step76: 25.2. Physical Representation
Step77: 25.3. Optical Methods
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Step79: 27. Radiation --> Longwave Aerosols
Step80: 27.2. Physical Representation
Step81: 27.3. Optical Methods
Step82: 28. Radiation --> Longwave Gases
Step83: 29. Turbulence Convection
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Step85: 30.2. Scheme Type
Step86: 30.3. Closure Order
Step87: 30.4. Counter Gradient
Step88: 31. Turbulence Convection --> Deep Convection
Step89: 31.2. Scheme Type
Step90: 31.3. Scheme Method
Step91: 31.4. Processes
Step92: 31.5. Microphysics
Step93: 32. Turbulence Convection --> Shallow Convection
Step94: 32.2. Scheme Type
Step95: 32.3. Scheme Method
Step96: 32.4. Processes
Step97: 32.5. Microphysics
Step98: 33. Microphysics Precipitation
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Step100: 34.2. Hydrometeors
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Step102: 35.2. Processes
Step103: 36. Cloud Scheme
Step104: 36.2. Name
Step105: 36.3. Atmos Coupling
Step106: 36.4. Uses Separate Treatment
Step107: 36.5. Processes
Step108: 36.6. Prognostic Scheme
Step109: 36.7. Diagnostic Scheme
Step110: 36.8. Prognostic Variables
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Step112: 37.2. Cloud Inhomogeneity
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Step114: 38.2. Function Name
Step115: 38.3. Function Order
Step116: 38.4. Convection Coupling
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Step118: 39.2. Function Name
Step119: 39.3. Function Order
Step120: 39.4. Convection Coupling
Step121: 40. Observation Simulation
Step122: 41. Observation Simulation --> Isscp Attributes
Step123: 41.2. Top Height Direction
Step124: 42. Observation Simulation --> Cosp Attributes
Step125: 42.2. Number Of Grid Points
Step126: 42.3. Number Of Sub Columns
Step127: 42.4. Number Of Levels
Step128: 43. Observation Simulation --> Radar Inputs
Step129: 43.2. Type
Step130: 43.3. Gas Absorption
Step131: 43.4. Effective Radius
Step132: 44. Observation Simulation --> Lidar Inputs
Step133: 44.2. Overlap
Step134: 45. Gravity Waves
Step135: 45.2. Sponge Layer
Step136: 45.3. Background
Step137: 45.4. Subgrid Scale Orography
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Step139: 46.2. Source Mechanisms
Step140: 46.3. Calculation Method
Step141: 46.4. Propagation Scheme
Step142: 46.5. Dissipation Scheme
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Step144: 47.2. Source Mechanisms
Step145: 47.3. Calculation Method
Step146: 47.4. Propagation Scheme
Step147: 47.5. Dissipation Scheme
Step148: 48. Solar
Step149: 49. Solar --> Solar Pathways
Step150: 50. Solar --> Solar Constant
Step151: 50.2. Fixed Value
Step152: 50.3. Transient Characteristics
Step153: 51. Solar --> Orbital Parameters
Step154: 51.2. Fixed Reference Date
Step155: 51.3. Transient Method
Step156: 51.4. Computation Method
Step157: 52. Solar --> Insolation Ozone
Step158: 53. Volcanos
Step159: 54. Volcanos --> Volcanoes Treatment
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mohc', 'sandbox-1', 'atmos')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We will begin by creating three materials for the fuel, water, and cladding of the fuel pins.
Step2: With our three materials, we can now create a Materials object that can be exported to an actual XML file.
Step3: Now let's move on to the geometry. This problem will be a square array of fuel pins and control rod guide tubes for which we can use OpenMC's lattice/universe feature. The basic universe will have three regions for the fuel, the clad, and the surrounding coolant. The first step is to create the bounding surfaces for fuel and clad, as well as the outer bounding surfaces of the problem.
Step4: With the surfaces defined, we can now construct a fuel pin cell from cells that are defined by intersections of half-spaces created by the surfaces.
Step5: Likewise, we can construct a control rod guide tube with the same surfaces.
Step6: Using the pin cell universe, we can construct a 17x17 rectangular lattice with a 1.26 cm pitch.
Step7: Next, we create a NumPy array of fuel pin and guide tube universes for the lattice.
Step8: OpenMC requires that there is a "root" universe. Let us create a root cell that is filled by the pin cell universe and then assign it to the root universe.
Step9: Before proceeding, let's check the geometry.
Step10: Looks good!
Step11: With the geometry and materials finished, we now just need to define simulation parameters.
Step12: Create an MGXS Library
Step13: Next, we will instantiate an openmc.mgxs.Library for the energy groups with our fuel assembly geometry.
Step14: Now, we must specify to the Library which types of cross sections to compute. OpenMC's multi-group mode can accept isotropic flux-weighted cross sections or angle-dependent cross sections, and it supports anisotropic scattering represented by Legendre polynomial, histogram, or tabular angular distributions. We will create the following multi-group cross sections needed to run an OpenMC simulation to verify the accuracy of our cross sections
Step15: Now we must specify the type of domain over which we would like the Library to compute multi-group cross sections. The domain type corresponds to the type of tally filter to be used in the tallies created to compute multi-group cross sections. At the present time, the Library supports "material", "cell", "universe", and "mesh" domain types. In this simple example, we wish to compute multi-group cross sections only for each material and therefore will use a "material" domain type.
Step16: We will instruct the library to not compute cross sections on a nuclide-by-nuclide basis, and instead to focus on generating material-specific macroscopic cross sections.
Step17: Now we will set the scattering order that we wish to use. For this problem we will use P3 scattering. A warning is expected telling us that the default behavior (a P0 correction on the scattering data) is over-ridden by our choice of using a Legendre expansion to treat anisotropic scattering.
Step18: Now that the Library has been setup let's verify that it contains the types of cross sections which meet the needs of OpenMC's multi-group solver. Note that this step is done automatically when writing the Multi-Group Library file later in the process (as part of mgxs_lib.write_mg_library()), but it is a good practice to also run this before spending all the time running OpenMC to generate the cross sections.
Step19: Great, now we can use the Library to construct the tallies needed to compute all of the requested multi-group cross sections in each domain.
Step20: The tallies can now be exported to a "tallies.xml" input file for OpenMC.
Step21: In addition, we instantiate a fission rate mesh tally that we will eventually use to compare with the corresponding multi-group results.
Step22: Time to run the calculation and get our results!
Step23: To make sure the results we need are available after running the multi-group calculation, we will now rename the statepoint and summary files.
Step24: Tally Data Processing
Step25: The statepoint is now ready to be analyzed by the Library. We simply have to load the tallies from the statepoint into the Library and our MGXS objects will compute the cross sections for us under-the-hood.
Step26: The next step will be to prepare the input for OpenMC to use our newly created multi-group data.
Step27: OpenMC's multi-group mode uses the same input files as does the continuous-energy mode (materials, geometry, settings, plots, and tallies file). Differences would include the use of a flag to tell the code to use multi-group transport, a location of the multi-group library file, and any changes needed in the materials.xml and geometry.xml files to re-define materials as necessary. The materials and geometry file changes could be necessary if materials or their nuclide/element/macroscopic constituents need to be renamed.
Step28: No geometry file needs to be written as the continuous-energy file is correctly defined for the multi-group case as well.
Step29: Let's clear the tallies file so it doesn't include tallies for re-generating a multi-group library, but then put back in a tally for the fission mesh.
Step30: Before running the calculation let's visually compare a subset of the newly-generated multi-group cross section data to the continuous-energy data. We will do this using the cross section plotting functionality built-in to the OpenMC Python API.
Step31: At this point, the problem is set up and we can run the multi-group calculation.
Step32: Results Comparison
Step33: Next, we can load the continuous-energy eigenvalue for comparison.
Step34: Let's compare the two eigenvalues, including their bias
Step35: This shows a small but nontrivial pcm bias between the two methods. Some degree of mismatch is expected simply due to the very few histories being used in these example problems. An additional mismatch is always inherent in the practical application of multi-group theory due to the high degree of approximations inherent in that method.
Step36: We can now do the same for the Continuous-Energy results.
Step37: Now we can easily use Matplotlib to visualize the two fission rates side-by-side.
Step38: These figures really indicate that more histories are probably necessary when trying to achieve a fully converged solution, but hey, this is good enough for our example!
Step39: Now we can re-run OpenMC to obtain our results
Step40: And then get the eigenvalue differences from the Continuous-Energy and P3 MG solution
Step41: Mixed Scattering Representations
Step42: We can also use whatever scattering format that we want for the materials in the library. As an example, we will take this P0 data and convert zircaloy to a histogram anisotropic scattering format and the fuel to a tabular anisotropic scattering format
Step43: Finally we will re-set our max_order parameter of our openmc.Settings object to our maximum order so that OpenMC will use whatever scattering data is available in the library.
Step44: For a final step we can again obtain the eigenvalue differences from this case and compare with the same from the P3 MG solution
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import numpy as np
import os
import openmc
%matplotlib inline
# 1.6% enriched fuel
fuel = openmc.Material(name='1.6% Fuel')
fuel.set_density('g/cm3', 10.31341)
fuel.add_element('U', 1., enrichment=1.6)
fuel.add_element('O', 2.)
# zircaloy
zircaloy = openmc.Material(name='Zircaloy')
zircaloy.set_density('g/cm3', 6.55)
zircaloy.add_element('Zr', 1.)
# borated water
water = openmc.Material(name='Borated Water')
water.set_density('g/cm3', 0.740582)
water.add_element('H', 4.9457e-2)
water.add_element('O', 2.4732e-2)
water.add_element('B', 8.0042e-6)
# Instantiate a Materials object
materials_file = openmc.Materials((fuel, zircaloy, water))
# Export to "materials.xml"
materials_file.export_to_xml()
# Create cylinders for the fuel and clad
# The x0 and y0 parameters (0. and 0.) are the default values for an
# openmc.ZCylinder object. We could therefore leave them out to no effect
fuel_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, R=0.39218)
clad_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, R=0.45720)
# Create boundary planes to surround the geometry
min_x = openmc.XPlane(x0=-10.71, boundary_type='reflective')
max_x = openmc.XPlane(x0=+10.71, boundary_type='reflective')
min_y = openmc.YPlane(y0=-10.71, boundary_type='reflective')
max_y = openmc.YPlane(y0=+10.71, boundary_type='reflective')
min_z = openmc.ZPlane(z0=-10., boundary_type='reflective')
max_z = openmc.ZPlane(z0=+10., boundary_type='reflective')
# Create a Universe to encapsulate a fuel pin
fuel_pin_universe = openmc.Universe(name='1.6% Fuel Pin')
# Create fuel Cell
fuel_cell = openmc.Cell(name='1.6% Fuel')
fuel_cell.fill = fuel
fuel_cell.region = -fuel_outer_radius
fuel_pin_universe.add_cell(fuel_cell)
# Create a clad Cell
clad_cell = openmc.Cell(name='1.6% Clad')
clad_cell.fill = zircaloy
clad_cell.region = +fuel_outer_radius & -clad_outer_radius
fuel_pin_universe.add_cell(clad_cell)
# Create a moderator Cell
moderator_cell = openmc.Cell(name='1.6% Moderator')
moderator_cell.fill = water
moderator_cell.region = +clad_outer_radius
fuel_pin_universe.add_cell(moderator_cell)
# Create a Universe to encapsulate a control rod guide tube
guide_tube_universe = openmc.Universe(name='Guide Tube')
# Create guide tube Cell
guide_tube_cell = openmc.Cell(name='Guide Tube Water')
guide_tube_cell.fill = water
guide_tube_cell.region = -fuel_outer_radius
guide_tube_universe.add_cell(guide_tube_cell)
# Create a clad Cell
clad_cell = openmc.Cell(name='Guide Clad')
clad_cell.fill = zircaloy
clad_cell.region = +fuel_outer_radius & -clad_outer_radius
guide_tube_universe.add_cell(clad_cell)
# Create a moderator Cell
moderator_cell = openmc.Cell(name='Guide Tube Moderator')
moderator_cell.fill = water
moderator_cell.region = +clad_outer_radius
guide_tube_universe.add_cell(moderator_cell)
# Create fuel assembly Lattice
assembly = openmc.RectLattice(name='1.6% Fuel Assembly')
assembly.pitch = (1.26, 1.26)
assembly.lower_left = [-1.26 * 17. / 2.0] * 2
# Create array indices for guide tube locations in lattice
template_x = np.array([5, 8, 11, 3, 13, 2, 5, 8, 11, 14, 2, 5, 8,
11, 14, 2, 5, 8, 11, 14, 3, 13, 5, 8, 11])
template_y = np.array([2, 2, 2, 3, 3, 5, 5, 5, 5, 5, 8, 8, 8, 8,
8, 11, 11, 11, 11, 11, 13, 13, 14, 14, 14])
# Initialize an empty 17x17 array of the lattice universes
universes = np.empty((17, 17), dtype=openmc.Universe)
# Fill the array with the fuel pin and guide tube universes
universes[:, :] = fuel_pin_universe
universes[template_x, template_y] = guide_tube_universe
# Store the array of universes in the lattice
assembly.universes = universes
# Create root Cell
root_cell = openmc.Cell(name='root cell')
root_cell.fill = assembly
# Add boundary planes
root_cell.region = +min_x & -max_x & +min_y & -max_y & +min_z & -max_z
# Create root Universe
root_universe = openmc.Universe(name='root universe', universe_id=0)
root_universe.add_cell(root_cell)
root_universe.plot(origin=(0., 0., 0.), width=(21.42, 21.42), pixels=(500, 500), color_by='material')
# Create Geometry and set root universe
geometry = openmc.Geometry(root_universe)
# Export to "geometry.xml"
geometry.export_to_xml()
# OpenMC simulation parameters
batches = 600
inactive = 50
particles = 3000
# Instantiate a Settings object
settings_file = openmc.Settings()
settings_file.batches = batches
settings_file.inactive = inactive
settings_file.particles = particles
settings_file.output = {'tallies': False}
settings_file.run_mode = 'eigenvalue'
settings_file.verbosity = 4
# Create an initial uniform spatial source distribution over fissionable zones
bounds = [-10.71, -10.71, -10, 10.71, 10.71, 10.]
uniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True)
settings_file.source = openmc.Source(space=uniform_dist)
# Export to "settings.xml"
settings_file.export_to_xml()
# Instantiate a 2-group EnergyGroups object
groups = openmc.mgxs.EnergyGroups([0., 0.625, 20.0e6])
# Initialize a 2-group MGXS Library for OpenMC
mgxs_lib = openmc.mgxs.Library(geometry)
mgxs_lib.energy_groups = groups
# Specify multi-group cross section types to compute
mgxs_lib.mgxs_types = ['total', 'absorption', 'nu-fission', 'fission',
'nu-scatter matrix', 'multiplicity matrix', 'chi']
# Specify a "cell" domain type for the cross section tally filters
mgxs_lib.domain_type = "material"
# Specify the cell domains over which to compute multi-group cross sections
mgxs_lib.domains = geometry.get_all_materials().values()
# Do not compute cross sections on a nuclide-by-nuclide basis
mgxs_lib.by_nuclide = False
# Set the Legendre order to 3 for P3 scattering
mgxs_lib.legendre_order = 3
# Check the library - if no errors are raised, then the library is satisfactory.
mgxs_lib.check_library_for_openmc_mgxs()
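# Optional sanity print (an illustrative addition, not in the original notebook):
# confirm the group edges and the tally domains before building the tallies.
print(mgxs_lib.energy_groups.group_edges)
print([domain.name for domain in mgxs_lib.domains])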
# Construct all tallies needed for the multi-group cross section library
mgxs_lib.build_library()
# Create a "tallies.xml" file for the MGXS Library
tallies_file = openmc.Tallies()
mgxs_lib.add_to_tallies_file(tallies_file, merge=True)
# Instantiate a tally Mesh
mesh = openmc.RegularMesh()
mesh.dimension = [17, 17]
mesh.lower_left = [-10.71, -10.71]
mesh.upper_right = [+10.71, +10.71]
# Instantiate tally Filter
mesh_filter = openmc.MeshFilter(mesh)
# Instantiate the Tally
tally = openmc.Tally(name='mesh tally')
tally.filters = [mesh_filter]
tally.scores = ['fission']
# Add tally to collection
tallies_file.append(tally, merge=True)
# Export all tallies to a "tallies.xml" file
tallies_file.export_to_xml()
# Run OpenMC
openmc.run()
# Move the statepoint File
ce_spfile = './statepoint_ce.h5'
os.rename('statepoint.' + str(batches) + '.h5', ce_spfile)
# Move the Summary file
ce_sumfile = './summary_ce.h5'
os.rename('summary.h5', ce_sumfile)
# Load the statepoint file
sp = openmc.StatePoint(ce_spfile, autolink=False)
# Load the summary file in its new location
su = openmc.Summary(ce_sumfile)
sp.link_with_summary(su)
# Initialize MGXS Library with OpenMC statepoint data
mgxs_lib.load_from_statepoint(sp)
# Create a MGXS File which can then be written to disk
mgxs_file = mgxs_lib.create_mg_library(xs_type='macro', xsdata_names=['fuel', 'zircaloy', 'water'])
# Write the file to disk using the default filename of "mgxs.h5"
mgxs_file.export_to_hdf5()
# Re-define our materials to use the multi-group macroscopic data
# instead of the continuous-energy data.
# 1.6% enriched fuel UO2
fuel_mg = openmc.Material(name='UO2', material_id=1)
fuel_mg.add_macroscopic('fuel')
# cladding
zircaloy_mg = openmc.Material(name='Clad', material_id=2)
zircaloy_mg.add_macroscopic('zircaloy')
# moderator
water_mg = openmc.Material(name='Water', material_id=3)
water_mg.add_macroscopic('water')
# Finally, instantiate our Materials object
materials_file = openmc.Materials((fuel_mg, zircaloy_mg, water_mg))
# Set the location of the cross sections file
materials_file.cross_sections = 'mgxs.h5'
# Export to "materials.xml"
materials_file.export_to_xml()
# Set the energy mode
settings_file.energy_mode = 'multi-group'
# Export to "settings.xml"
settings_file.export_to_xml()
# Create a "tallies.xml" file for the MGXS Library
tallies_file = openmc.Tallies()
# Add fission and flux mesh to tally for plotting using the same mesh we've already defined
mesh_tally = openmc.Tally(name='mesh tally')
mesh_tally.filters = [openmc.MeshFilter(mesh)]
mesh_tally.scores = ['fission']
tallies_file.append(mesh_tally)
# Export to "tallies.xml"
tallies_file.export_to_xml()
# First lets plot the fuel data
# We will first add the continuous-energy data
fig = openmc.plot_xs(fuel, ['total'])
# We will now add in the corresponding multi-group data and show the result
openmc.plot_xs(fuel_mg, ['total'], plot_CE=False, mg_cross_sections='mgxs.h5', axis=fig.axes[0])
fig.axes[0].legend().set_visible(False)
plt.show()
plt.close()
# Then repeat for the zircaloy data
fig = openmc.plot_xs(zircaloy, ['total'])
openmc.plot_xs(zircaloy_mg, ['total'], plot_CE=False, mg_cross_sections='mgxs.h5', axis=fig.axes[0])
fig.axes[0].legend().set_visible(False)
plt.show()
plt.close()
# And finally repeat for the water data
fig = openmc.plot_xs(water, ['total'])
openmc.plot_xs(water_mg, ['total'], plot_CE=False, mg_cross_sections='mgxs.h5', axis=fig.axes[0])
fig.axes[0].legend().set_visible(False)
plt.show()
plt.close()
# Run the Multi-Group OpenMC Simulation
openmc.run()
# Move the StatePoint File
mg_spfile = './statepoint_mg.h5'
os.rename('statepoint.' + str(batches) + '.h5', mg_spfile)
# Move the Summary file
mg_sumfile = './summary_mg.h5'
os.rename('summary.h5', mg_sumfile)
# Rename and then load the last statepoint file and keff value
mgsp = openmc.StatePoint(mg_spfile, autolink=False)
# Load the summary file in its new location
mgsu = openmc.Summary(mg_sumfile)
mgsp.link_with_summary(mgsu)
# Get keff
mg_keff = mgsp.k_combined
ce_keff = sp.k_combined
bias = 1.0E5 * (ce_keff - mg_keff)
print('Continuous-Energy keff = {0:1.6f}'.format(ce_keff))
print('Multi-Group keff = {0:1.6f}'.format(mg_keff))
print('bias [pcm]: {0:1.1f}'.format(bias.nominal_value))
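# Illustrative addition (assumes k_combined is an uncertainties ufloat, as the
# .nominal_value usage above implies): report each eigenvalue's std. deviation.
print('CE keff std. dev. [pcm]: {0:1.1f}'.format(1.0E5 * ce_keff.std_dev))
print('MG keff std. dev. [pcm]: {0:1.1f}'.format(1.0E5 * mg_keff.std_dev))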
# Get the OpenMC fission rate mesh tally data
mg_mesh_tally = mgsp.get_tally(name='mesh tally')
mg_fission_rates = mg_mesh_tally.get_values(scores=['fission'])
# Reshape array to 2D for plotting
mg_fission_rates.shape = (17,17)
# Normalize to the average pin power
mg_fission_rates /= np.mean(mg_fission_rates[mg_fission_rates > 0.])
# Get the OpenMC fission rate mesh tally data
ce_mesh_tally = sp.get_tally(name='mesh tally')
ce_fission_rates = ce_mesh_tally.get_values(scores=['fission'])
# Reshape array to 2D for plotting
ce_fission_rates.shape = (17,17)
# Normalize to the average pin power
ce_fission_rates /= np.mean(ce_fission_rates[ce_fission_rates > 0.])
# Force zeros to be NaNs so their values are not included when matplotlib calculates
# the color scale
ce_fission_rates[ce_fission_rates == 0.] = np.nan
mg_fission_rates[mg_fission_rates == 0.] = np.nan
# Plot the CE fission rates in the left subplot
fig = plt.subplot(121)
plt.imshow(ce_fission_rates, interpolation='none', cmap='jet')
plt.title('Continuous-Energy Fission Rates')
# Plot the MG fission rates in the right subplot
fig2 = plt.subplot(122)
plt.imshow(mg_fission_rates, interpolation='none', cmap='jet')
plt.title('Multi-Group Fission Rates')
# Set the maximum scattering order to 0 (i.e., isotropic scattering)
settings_file.max_order = 0
# Export to "settings.xml"
settings_file.export_to_xml()
# Run the Multi-Group OpenMC Simulation
openmc.run()
# Move the statepoint File
mgp0_spfile = './statepoint_mg_p0.h5'
os.rename('statepoint.' + str(batches) + '.h5', mgp0_spfile)
# Move the Summary file
mgp0_sumfile = './summary_mg_p0.h5'
os.rename('summary.h5', mgp0_sumfile)
# Load the last statepoint file and keff value
mgsp_p0 = openmc.StatePoint(mgp0_spfile, autolink=False)
# Get keff
mg_p0_keff = mgsp_p0.k_combined
bias_p0 = 1.0E5 * (ce_keff - mg_p0_keff)
print('P3 bias [pcm]: {0:1.1f}'.format(bias.nominal_value))
print('P0 bias [pcm]: {0:1.1f}'.format(bias_p0.nominal_value))
# Convert the zircaloy and fuel data to P0 scattering
for i, xsdata in enumerate(mgxs_file.xsdatas):
if xsdata.name != 'water':
mgxs_file.xsdatas[i] = xsdata.convert_scatter_format('legendre', 0)
# Convert the formats as discussed
for i, xsdata in enumerate(mgxs_file.xsdatas):
if xsdata.name == 'zircaloy':
mgxs_file.xsdatas[i] = xsdata.convert_scatter_format('histogram', 2)
elif xsdata.name == 'fuel':
mgxs_file.xsdatas[i] = xsdata.convert_scatter_format('tabular', 2)
mgxs_file.export_to_hdf5('mgxs.h5')
settings_file.max_order = None
# Export to "settings.xml"
settings_file.export_to_xml()
# Run the Multi-Group OpenMC Simulation
openmc.run()
# Load the last statepoint file and keff value
mgsp_mixed = openmc.StatePoint('./statepoint.' + str(batches) + '.h5')
mg_mixed_keff = mgsp_mixed.k_combined
bias_mixed = 1.0E5 * (ce_keff - mg_mixed_keff)
print('P3 bias [pcm]: {0:1.1f}'.format(bias.nominal_value))
print('Mixed Scattering bias [pcm]: {0:1.1f}'.format(bias_mixed.nominal_value))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data file
Step2: Class Label
Step3: Features
Step4: Features
Step5: Aggregated Features
Step6: Fraud Detection as a classification problem
Step7: Fraud Detection as a classification problem
Step8: Models performance
Step9: Models performance
Step10: Models performance
Step11: Models performance
Step12: Financial savings
Step13: Models Savings
Step14: Threshold Optimization
Step15: Threshold Optimization
Step16: Models Savings
Step17: BMR Results
Step18: BMR Results
Step19: Average cost of a False Positive
Step20: BMR Results
Step21: CSDT & CSRF Results
Step23: Lessons Learned (so far ...)
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
from costcla import datasets
from costcla.datasets.base import Bunch
def load_fraud(cost_mat_parameters=dict(Ca=10)):
# data_ = pd.read_pickle("trx_fraud_data.pk")
data_ = pd.read_pickle("/home/al/DriveAl/EasySol/Projects/DetectTA/Tests/trx_fraud_data_v3_agg.pk")
target = data_['fraud'].values
data = data_.drop('fraud', 1)
n_samples = data.shape[0]
cost_mat = np.zeros((n_samples, 4))
cost_mat[:, 0] = cost_mat_parameters['Ca']
cost_mat[:, 1] = data['amount']
cost_mat[:, 2] = cost_mat_parameters['Ca']
cost_mat[:, 3] = 0.0
return Bunch(data=data.values, target=target, cost_mat=cost_mat,
target_names=['Legitimate Trx', 'Fraudulent Trx'], DESCR='',
feature_names=data.columns.values, name='FraudDetection')
datasets.load_fraud = load_fraud
data = datasets.load_fraud()
print(data.keys())
print('Number of examples ', data.target.shape[0])
target = pd.DataFrame(pd.Series(data.target).value_counts(), columns=('Frequency',))
target['Percentage'] = (target['Frequency'] / target['Frequency'].sum()) * 100
target.index = ['Negative (Legitimate Trx)', 'Positive (Fraud Trx)']
target.loc['Total Trx'] = [data.target.shape[0], 1.]
print(target)
pd.DataFrame(data.feature_names[:4], columns=('Features',))
df = pd.DataFrame(data.data[:, :4], columns=data.feature_names[:4])
df.head(10)
df = pd.DataFrame(data.data[:, 4:], columns=data.feature_names[4:])
df.head(10)
from sklearn.cross_validation import train_test_split
X = data.data[:, [2, 3] + list(range(4, data.data.shape[1]))].astype(np.float)
X_train, X_test, y_train, y_test, cost_mat_train, cost_mat_test = \
train_test_split(X, data.target, data.cost_mat, test_size=0.33, random_state=10)
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
classifiers = {"RF": {"f": RandomForestClassifier()},
"DT": {"f": DecisionTreeClassifier()}}
ci_models = ['DT', 'RF']
# Fit the classifiers using the training dataset
for model in classifiers.keys():
classifiers[model]["f"].fit(X_train, y_train)
classifiers[model]["c"] = classifiers[model]["f"].predict(X_test)
classifiers[model]["p"] = classifiers[model]["f"].predict_proba(X_test)
classifiers[model]["p_train"] = classifiers[model]["f"].predict_proba(X_train)
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
import matplotlib.pyplot as plt
from IPython.core.pylabtools import figsize
import seaborn as sns
colors = sns.color_palette()
figsize(12, 8)
from sklearn.metrics import f1_score, precision_score, recall_score, accuracy_score
measures = {"F1Score": f1_score, "Precision": precision_score,
"Recall": recall_score, "Accuracy": accuracy_score}
results = pd.DataFrame(columns=measures.keys())
for model in ci_models:
results.loc[model] = [measures[measure](y_test, classifiers[model]["c"]) for measure in measures.keys()]
def fig_acc():
plt.bar(np.arange(results.shape[0])-0.3, results['Accuracy'], 0.6, label='Accuracy', color=colors[0])
plt.xticks(range(results.shape[0]), results.index)
plt.tick_params(labelsize=22); plt.title('Accuracy', size=30)
plt.show()
fig_acc()
def fig_f1():
plt.bar(np.arange(results.shape[0])-0.3, results['Precision'], 0.2, label='Precision', color=colors[0])
plt.bar(np.arange(results.shape[0])-0.3+0.2, results['Recall'], 0.2, label='Recall', color=colors[1])
plt.bar(np.arange(results.shape[0])-0.3+0.4, results['F1Score'], 0.2, label='F1Score', color=colors[2])
plt.xticks(range(results.shape[0]), results.index)
plt.tick_params(labelsize=22)
plt.ylim([0, 1])
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5),fontsize=22)
plt.show()
fig_f1()
# The cost matrix is already calculated for the dataset
# cost_mat[C_FP,C_FN,C_TP,C_TN]
print(data.cost_mat[[10, 17, 50]])
# Calculation of the cost and savings
from costcla.metrics import savings_score, cost_loss
# Evaluate the savings for each model
results["Savings"] = np.zeros(results.shape[0])
for model in ci_models:
results["Savings"].loc[model] = savings_score(y_test, classifiers[model]["c"], cost_mat_test)
# Plot the results
def fig_sav():
plt.bar(np.arange(results.shape[0])-0.4, results['Precision'], 0.2, label='Precision', color=colors[0])
plt.bar(np.arange(results.shape[0])-0.4+0.2, results['Recall'], 0.2, label='Recall', color=colors[1])
plt.bar(np.arange(results.shape[0])-0.4+0.4, results['F1Score'], 0.2, label='F1Score', color=colors[2])
plt.bar(np.arange(results.shape[0])-0.4+0.6, results['Savings'], 0.2, label='Savings', color=colors[3])
plt.xticks(range(results.shape[0]), results.index)
plt.tick_params(labelsize=22)
plt.ylim([0, 1])
plt.xlim([-0.5, results.shape[0] -1 + .5])
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5),fontsize=22)
plt.show()
fig_sav()
from costcla.models import ThresholdingOptimization
for model in ci_models:
classifiers[model+"-TO"] = {"f": ThresholdingOptimization()}
# Fit
classifiers[model+"-TO"]["f"].fit(classifiers[model]["p_train"], cost_mat_train, y_train)
# Predict
classifiers[model+"-TO"]["c"] = classifiers[model+"-TO"]["f"].predict(classifiers[model]["p"])
print('New thresholds')
for model in ci_models:
print(model + '-TO - ' + str(classifiers[model+'-TO']['f'].threshold_))
for model in ci_models:
# Evaluate
results.loc[model+"-TO"] = 0
results.loc[model+"-TO", measures.keys()] = \
[measures[measure](y_test, classifiers[model+"-TO"]["c"]) for measure in measures.keys()]
results["Savings"].loc[model+"-TO"] = savings_score(y_test, classifiers[model+"-TO"]["c"], cost_mat_test)
fig_sav()
from costcla.models import BayesMinimumRiskClassifier
for model in ci_models:
classifiers[model+"-BMR"] = {"f": BayesMinimumRiskClassifier()}
# Fit
classifiers[model+"-BMR"]["f"].fit(y_test, classifiers[model]["p"])
# Calibration must be made in a validation set
# Predict
classifiers[model+"-BMR"]["c"] = classifiers[model+"-BMR"]["f"].predict(classifiers[model]["p"], cost_mat_test)
for model in ci_models:
# Evaluate
results.loc[model+"-BMR"] = 0
results.loc[model+"-BMR", measures.keys()] = \
[measures[measure](y_test, classifiers[model+"-BMR"]["c"]) for measure in measures.keys()]
results["Savings"].loc[model+"-BMR"] = savings_score(y_test, classifiers[model+"-BMR"]["c"], cost_mat_test)
fig_sav()
print(data.data[data.target == 1, 2].mean())
print(data.cost_mat[:,0].mean())
from costcla.models import CostSensitiveDecisionTreeClassifier
from costcla.models import CostSensitiveRandomForestClassifier
classifiers = {"CSDT": {"f": CostSensitiveDecisionTreeClassifier()},
"CSRF": {"f": CostSensitiveRandomForestClassifier(combination='majority_bmr')}}
# Fit the classifiers using the training dataset
for model in classifiers.keys():
classifiers[model]["f"].fit(X_train, y_train, cost_mat_train)
if model == "CSRF":
classifiers[model]["c"] = classifiers[model]["f"].predict(X_test, cost_mat_test)
else:
classifiers[model]["c"] = classifiers[model]["f"].predict(X_test)
for model in ['CSDT', 'CSRF']:
# Evaluate
results.loc[model] = 0
results.loc[model, measures.keys()] = \
[measures[measure](y_test, classifiers[model]["c"]) for measure in measures.keys()]
results["Savings"].loc[model] = savings_score(y_test, classifiers[model]["c"], cost_mat_test)
fig_sav()
#Format from https://github.com/ellisonbg/talk-2013-scipy
from IPython.display import display, HTML
s = """
<style>
.rendered_html {
font-family: "proxima-nova", helvetica;
font-size: 100%;
line-height: 1.3;
}
.rendered_html h1 {
margin: 0.25em 0em 0.5em;
color: #015C9C;
text-align: center;
line-height: 1.2;
page-break-before: always;
}
.rendered_html h2 {
margin: 1.1em 0em 0.5em;
color: #26465D;
line-height: 1.2;
}
.rendered_html h3 {
margin: 1.1em 0em 0.5em;
color: #002845;
line-height: 1.2;
}
.rendered_html li {
line-height: 1.5;
}
.prompt {
font-size: 120%;
}
.CodeMirror-lines {
font-size: 120%;
}
.output_area {
font-size: 120%;
}
#notebook {
background-image: url('files/images/witewall_3.png');
}
h1.bigtitle {
margin: 4cm 1cm 4cm 1cm;
font-size: 300%;
}
h3.point {
font-size: 200%;
text-align: center;
margin: 2em 0em 2em 0em;
#26465D
}
.logo {
margin: 20px 0 20px 0;
}
a.anchor-link {
display: none;
}
h1.title {
font-size: 250%;
}
</style>
"""
display(HTML(s))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Unarchive
Step2: Tokenizing and Filtering a Vocabulary
Step3: Feature Vocabulary
Step4: TFIDF Weighting
Step5: K-Means
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import sys
sys.version
import tempfile
import zipfile
import os.path
zipFile = "./openSubtitles-5000.json.zip"
print( "Unarchiving ...")
temp_dir = tempfile.mkdtemp()
zip_ref = zipfile.ZipFile(zipFile, 'r')
zip_ref.extractall(temp_dir)
zip_ref.close()
openSubtitlesFile = os.path.join(temp_dir, "openSubtitles-5000.json")
print ("file unarchived to:" + openSubtitlesFile)
import json
from sklearn.feature_extraction.text import CountVectorizer
#from log_progress import log_progress
maxDocsToload = 50000
titles = []
def make_corpus(file):
with open(file) as f:
for i, line in enumerate(f):
doc = json.loads(line)
titles.append(doc.get('Title',''))
#if 'Sci-Fi' not in doc.get('Genre',''):
# continue
if i % 100 == 0:
print ("%d " % i, end='')
yield doc.get('Text','')
if i == maxDocsToload:
break
print ("Starting load ...")
textGenerator = make_corpus(openSubtitlesFile)
count_vectorizer = CountVectorizer(min_df=2, max_df=0.75, ngram_range=(1,2), max_features=50000,
stop_words='english', analyzer="word", token_pattern="[a-zA-Z]{3,}")
term_freq_matrix = count_vectorizer.fit_transform(textGenerator)
print ("Done.")
print ( "term_freq_matrix shape = %s" % (term_freq_matrix.shape,) )
print ("term_freq_matrix = \n%s" % term_freq_matrix)
print( "Vocabulary length = ", len(count_vectorizer.vocabulary_))
word = "data";
rainingIndex = count_vectorizer.vocabulary_[word];
print( "token index for \"%s\" = %d" % (word,rainingIndex))
feature_names = count_vectorizer.get_feature_names()
print( "feature_names[%d] = %s" % (rainingIndex, feature_names[rainingIndex]))
for i in range(0,1000):
print( "feature_names[%d] = %s" % (i, feature_names[i]))
from sklearn.feature_extraction.text import TfidfTransformer
tfidf = TfidfTransformer(norm="l2")
tfidf.fit(term_freq_matrix)
tf_idf_matrix = tfidf.transform(term_freq_matrix)
print( tf_idf_matrix)
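# Illustrative addition: peek at the ten highest-weighted TF-IDF terms of the
# first document (tf_idf_matrix is a sparse CSR matrix, so densify one row only).
row0 = tf_idf_matrix[0].toarray().ravel()
top_terms = row0.argsort()[::-1][:10]
print([(feature_names[i], round(row0[i], 3)) for i in top_terms])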
%%time
from sklearn.cluster import KMeans,MiniBatchKMeans
import numpy
num_clusters = 5
#km = KMeans(n_clusters=num_clusters, verbose=True, init='k-means++', n_init=3, n_jobs=-1)
km = MiniBatchKMeans(n_clusters=num_clusters, verbose=True, init='k-means++', n_init=25, batch_size=2000)
km.fit(tf_idf_matrix)
clusters = km.labels_.tolist()
print ("cluster id for each document = %s" % clusters)
print()
# sort cluster centers by proximity to centroid
order_centroids = km.cluster_centers_.argsort()[:, ::-1]
labels = pd.DataFrame(clusters, columns=['Cluster Labels'])
counts = pd.DataFrame(labels['Cluster Labels'].value_counts().sort_index())
counts.columns=['Document Count']
display(counts)
topNWords = 50
df = pd.DataFrame()
for i in range(num_clusters):
clusterWords = []
for topWordIndex,ind in enumerate(order_centroids[i, :topNWords]):
clusterWords.append( feature_names[ind] )
df['Cluster %d' % i] = pd.Series(clusterWords)
#dtype='object', data= [''] * topNWords)
#print(topWordIndex)
#print(ind)
#print(feature_names[ind])
df.style.set_properties(**{'text-align': 'right'})
df
titlesFrame = pd.DataFrame()
titlesFrame['Labels']=km.labels_
titlesFrame['Titles']=titles
sort = titlesFrame.sort_values(by=['Labels','Titles'])
for i in range(num_clusters):
display( sort.query('Labels == %d' % i) )
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We will use the class NNet in the file nnet.py to represent instances of our network. The network parameters are stored in the instance variable self.params where keys are string parameter names and values are numpy arrays.
Step2: Train a network
Step3: Debug the training
Step4: Tune your hyperparameters
Step5: Run on the test set
|
<ASSISTANT_TASK:>
Python Code:
# Run some setup code
import numpy as np
import matplotlib.pyplot as plt
# This is a bit of magic to make matplotlib figures appear inline in the notebook
# rather than in a new window.
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
# bool var. to let program show debug info.
debug = True
show_img = True
import cifar10
# Load the raw CIFAR-10 data
X, y, X_test, y_test = cifar10.load('../cifar-10-batches-py', debug = debug)
m = 49000
m_val = 1000
m_test = 1000
m_dev = 500
X, y, X_test, y_test, X_dev, y_dev, X_val, y_val = cifar10.split_vec(X, y, X_test, y_test, m, m_test, m_val, m_dev, debug = debug, show_img = show_img)
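# Quick shape check (illustrative addition): confirm the split sizes before training.
print('train:', X.shape, ' val:', X_val.shape, ' dev:', X_dev.shape, ' test:', X_test.shape)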
from nnet import NNet
n0 = X_dev.shape[1]
n1 = 20
n2 = 10
# Forward pass: compute scores
model = NNet(n0, n1, n2)
model.train_check(X_dev, y_dev, lamda = 3.3)
n0 = X_dev.shape[1]
n1 = 20
n2 = 10
alpha, lamda, T, B, rho = 1e-4, 0.5, 1000, 200, 0.95
hpara = (alpha, lamda, T, B, rho)
# Forward pass: compute scores
model = NNet(n0, n1, n2)
model.train(X, y, X_val, y_val, hpara, debug, show_img)
# Predict on the val. set
print 'val. acc.:', np.mean(model.predict(X_val) == y_val)
model.visualize_W()
best_model = None
best_acc = -1
# TODO: Tune hyperparameters using the validation set. Store your best trained
# model in best_net.
#
# To help debug your network, it may help to use visualizations similar to the
# ones we used above; these visualizations will have significant qualitative
# differences from the ones we saw above for the poorly tuned network.
#
# Tweaking hyperparameters by hand can be fun, but you might find it useful to
# write code to sweep through possible combinations of hyperparameters
# automatically like we did on the previous exercises.
n0 = X_dev.shape[1]
n1 = 200
n2 = 10
alpha, lamda, T, B, rho = 2e-3, 3e-2, 10000, 200, 0.95
for alpha in [2e-3]:
hpara = (alpha, lamda, T, B, rho)
print hpara
model = NNet(n0, n1, n2)
model.train(X, y, X_val, y_val, hpara, debug, show_img)
# Predict on the val. set
val_acc = np.mean(model.predict(X_val) == y_val)
print 'val. acc.:', val_acc
print '\n'
if val_acc > best_acc:
best_acc = val_acc
best_model = model
# Visualize the weights of the best model
best_model.visualize_W()
print 'Test accuracy: ', np.mean(best_model.predict(X_test) == y_test)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Here is a basic Bode plot using scipy.signal to generate the transfer function.
Step2: Here is a plot using bode_syss to plot multiple transfer functions on the same graph.
Step3: Quickly compare different filter parameters using bode_firs.
Step4: Plot analog and digital filters together on the same (analog) frequency axis, useful for evaluating digital approximations to analog filters.
Step5: Plot the poles and zeros of a given digital system.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from __future__ import division
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from scipy import signal
import sigutils
sigutils.bode_sys(signal.butter(4, [100*2*np.pi, 200*2*np.pi], analog=True, btype='bandpass'), xlim=(10, 1000), gain_point=-3)
butter = signal.butter(4, [100*2*np.pi, 200*2*np.pi], analog=True, btype='bandpass')
bessel = signal.bessel(4, [100*2*np.pi, 200*2*np.pi], analog=True, btype='bandpass')
ellip = signal.ellip(4, 1, 40, [100*2*np.pi, 200*2*np.pi], analog=True, btype='bandpass')
cheb2 = signal.cheby2(4, 40, [100*2*np.pi*0.75, 200*2*np.pi/0.75], analog=True, btype='bandpass')
fig, (ax1, ax2) = sigutils.bode_syss((butter, bessel, ellip, cheb2), xlim=(40, 480), mag_lim=(-55, 5, 5),)
ax1.legend(['Butter', 'Bessel', 'Ellipt.', 'Cheb.'], loc='lower center')
fig, (ax1, ax2) = sigutils.bode_firs((signal.firwin(51, 0.1, nyq=1),
signal.firwin(101, 0.1, nyq=1),
signal.remez(101, [0, 0.085, 0.125, 1], [1, 0], Hz=2),
signal.remez(51, [0, 0.085, 0.125, 1], [1, 0], Hz=2)),
xlim=(0, 0.25), mag_lim=(-60, 0, 10))
butt = signal.butter(2, 2*np.pi*50, analog=True, output='ba')
fs = 1000
ba = signal.cont2discrete(butt, 1/fs, method='bilinear')[:-1]
sigutils.bode_an_dig((butt,), (ba,), fs, xlog=True, xlim=(1, 500), mag_lim=(-70, 10, 10))
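# Alternative sketch (illustrative addition): scipy.signal.bilinear applies the same
# bilinear transform directly to the analog (b, a) coefficients; the result should
# agree with the cont2discrete() call above up to numerical round-off.
bz, az = signal.bilinear(*butt, fs=fs)
print('bilinear b:', bz)
print('bilinear a:', az)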
z, p, k = signal.butter(4, 0.5, output='zpk')
sigutils.pole_zero((z, p, k))
z, p, k = signal.ellip(4, 3, 40, 0.5, output='zpk')
sigutils.pole_zero((z, p, k))
freq, resp = signal.freqresp((z, p, k))
fig, ax = sigutils.plot.nyquist(freq, resp)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: Series 3, Online Convex Programming
Step5: Online Support Vector Machine
Step7: Online Logistic Regression
Step8: Analysis of algorithms
Step9: A grid search for optimal $\lambda$
Step10: A randomized search for optimal $\lambda$
|
<ASSISTANT_TASK:>
Python Code:
import math
import numpy as np
import pandas as pd
import scipy
from scipy.linalg import norm
from sklearn.base import BaseEstimator, ClassifierMixin
%matplotlib inline
import matplotlib.pyplot as plt
# Ensure consistency across runs.
np.random.seed(1337)
Xtrain = np.genfromtxt('data/Xtrain.csv', delimiter=',')
Ytrain = np.genfromtxt('data/Ytrain.csv', delimiter=',', dtype='int8')
Xtest = np.genfromtxt('data/Xtest.csv', delimiter=',')
Ytest = np.genfromtxt('data/Ytest.csv', delimiter=',', dtype='int8')
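# Illustrative sanity check; the exact shapes depend on the provided CSV files.
print('Xtrain:', Xtrain.shape, 'Ytrain:', Ytrain.shape,
      'Xtest:', Xtest.shape, 'Ytest:', Ytest.shape)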
def permute_data(x, y):
    """Shuffles both numpy arrays in unison."""
perm = np.random.permutation(x.shape[0])
return x[perm, :], y[perm]
Xtrain, Ytrain = permute_data(Xtrain, Ytrain)
Xtest, Ytest = permute_data(Xtest, Ytest)
from sklearn.utils.estimator_checks import check_estimator
class OnlineClassifier(BaseEstimator, ClassifierMixin):
def __init__(self, **params):
self.w = None
self.lbd = 1.0
self.set_params(**params)
def fit(self, X, y):
raise Exception("Not implemented in abstract class.")
def get_params(self, deep=True):
return {"lbd": self.lbd}
def set_params(self, **parameters):
for parameter, value in parameters.items():
setattr(self, parameter, value)
return self
def predict(self, X):
signs = np.sign(np.inner(self.w, X))
signs[signs == 0] = -1
return signs.astype('int8')
def project_L1(w, a):
    """Project to the L1-ball, as described by Duchi et al. [ICML '08]."""
    z = 1.0 / (a * a)
    if norm(w, 1) <= z:
        # Already inside the ball: no projection needed.
        return w
    # Sort the component magnitudes in decreasing order and locate the
    # largest rho for which the soft-threshold condition still holds.
    mu = -np.sort(-np.abs(w))
    cs = np.cumsum(mu)
    rho = -1
    for j in range(len(w)):
        if mu[j] - (1.0 / (j + 1)) * (cs[j] - z) > 0:
            rho = j
    theta = (1.0 / (rho + 1)) * (cs[rho] - z)
    # Soft-threshold each component towards zero by theta.
    return np.sign(w) * np.fmax(np.abs(w) - theta, 0)
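# Illustrative self-check (hypothetical vector, not part of the original code):
# after projection the L1 norm must not exceed the ball radius z = 1 / a**2.
_w_demo = np.array([0.8, -0.5, 0.3])
assert norm(project_L1(_w_demo, a=1.0), 1) <= 1.0 + 1e-9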
def project_L2(w, lbd):
    """l2 regularization: scale w onto an l2-ball of radius $1/\sqrt{\lambda}$ (Pegasos-style projection)."""
sqrt_lambda = np.sqrt(lbd)
w_norm = norm(w, 2)
regularizer = 1.0 / (sqrt_lambda * w_norm)
return w * min(1.0, regularizer)
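# Illustrative self-check (hypothetical values): with lbd = 100 the projection
# scales the vector down to the ball radius 1 / sqrt(lbd) = 0.1.
_v_demo = np.array([3.0, 4.0])  # l2 norm 5 before projection
assert abs(norm(project_L2(_v_demo, 100.0), 2) - 0.1) < 1e-12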
class OnlineSVMClassifier(OnlineClassifier):
    """Online SVM with L2 regularization."""
def fit(self, X, y):
assert X.shape[0] == y.shape[0]
# Initialize the model.
w = np.zeros(X.shape[1], dtype='float64')
# Iterate just once through our data (this works, and is one
# of the key advantages of online SVMs).
for t, (x, label) in enumerate(zip(X, y)):
# The adaptive learning rate.
eta = 1.0 / np.sqrt(t + 1)
# Compute the loss using the hinge loss formula.
hinge = label * np.inner(w, x)
# If the hinge loss is smaller than 0, then we classified the
# current data point completely wrong, and if it's between 0
# and 1, we were right but not confident enough (we want our
# decisions to be confident (hinge coef > 1) since we want to
# maximize the margin between our classes).
# In either of these cases we want to update our model and
# project it back to the specified l2 ball, in order to keep
# its complexity under control.
if hinge < 1:
w += eta * label * x
w = project_L2(w, self.lbd)
self.w = w
return self
# check_estimator(OnlineSVMClassifier)
from scipy.special import logsumexp

def sigmoid(exp):
    # Numerically stable 1 / (1 + e^exp), i.e. the logistic function of -exp,
    # evaluated via log-sum-exp to avoid overflow for large arguments.
    return np.exp(-logsumexp([0, exp]))
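# Illustrative check of the identity above: sigmoid(0) = 0.5, and sigmoid(x)
# tends to 0 for large x since the function is 1 / (1 + e^x).
assert abs(sigmoid(0.0) - 0.5) < 1e-12
assert sigmoid(50.0) < 1e-12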
class OnlineLogisticRegressionClassifier(OnlineClassifier):
    """Online logistic regression with L1 regularization."""
def fit(self, X, y):
# Implementation copied from sample solution.
# Despite what was mentioned in the assignment, there was nothing
# about online logistic regression in the lecture/tutorial slides,
# or in the textbook.
assert X.shape[0] == y.shape[0]
w = np.zeros(X.shape[1])
        for t, (x, label) in enumerate(zip(X, y)):
            # Decaying step size, as in the SVM above.
            eta = 1.0 / np.sqrt(t + 1)
            exp = label * np.inner(w, x)
            # sigmoid(exp) = 1 / (1 + e^(y * w.x)) is the magnitude of the
            # per-sample logistic-loss gradient.
            predicted = sigmoid(exp)
            w += eta * predicted * label * x
w = project_L1(w, self.lbd)
self.w = w
cls = OnlineSVMClassifier()
logistic_cls = OnlineLogisticRegressionClassifier()
from sklearn.grid_search import GridSearchCV, RandomizedSearchCV
parameters = {
'lbd': [0.001, 0.005, 0.01, 0.05, 0.1]
}
gs = GridSearchCV(cls, parameters)
gs_result = gs.fit(Xtrain, Ytrain)
print("Best score: %f" % gs_result.best_score_)
print("Best score params: %s" % gs_result.best_params_)
l_gs = GridSearchCV(logistic_cls, parameters)
l_gs_result = l_gs.fit(Xtrain, Ytrain)
print("Best score: %f" % l_gs_result.best_score_)
print("Best score params: %s" % l_gs_result.best_params_)
import scipy.stats as stats
rs_params = {
"lbd": stats.uniform(loc=0.001, scale=0.099)
}
rs_n_iter = 100
rs = RandomizedSearchCV(cls, rs_params, rs_n_iter, n_jobs=1)
rs_result = rs.fit(Xtrain, Ytrain)
print("Best score: %f" % rs_result.best_score_)
print("Best score params: %s" % rs_result.best_params_)
test_count = Xtrain.shape[0]
steps = 30
svm_cls = OnlineSVMClassifier(lbd=0.011)
log_cls = OnlineLogisticRegressionClassifier(lbd=0.001)
# TODO(andrei) Logistic regression with tonsa comments.
# TODO(andrei) Try to get a general idea of how they implemented the projection to the L1-ball (i.e. LASSO-like).
amounts = list(np.round((np.logspace(0, np.log10(test_count), steps))))
svm_scores = []
log_scores = []
for amount in amounts:
Xsubsample = Xtrain[:int(amount),:]
Ysubsample = Ytrain[:int(amount)]
svm_cls.fit(Xsubsample, Ysubsample)
svm_scores.append(svm_cls.score(Xtest, Ytest))
log_cls.fit(Xsubsample, Ysubsample)
log_scores.append(log_cls.score(Xtest, Ytest))
# plt.plot(amounts, svm_scores)
# ax = plt.gca()
# ax.plot(amounts, log_scores)
# _ =
df = pd.DataFrame(index=pd.Index(amounts), data={
"SVM + L2": svm_scores,
"Logistic + L1": log_scores
})
ax = df.plot()
ax.set_xlabel("Number of used training samples (linear scale)")
ax.set_ylabel("Test score")
ax = df.plot(logx=True)
ax.set_xlabel("Number of used training samples (log scale)")
ax.set_ylabel("Test score")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Examples
Step2: Numerical Example 1D
Step3: Numerical Example 2D
Step4: Numerical Example 3D
Step5: Example with Image 2D
|
<ASSISTANT_TASK:>
Python Code:
def pconv(f,h):
import numpy as np
h_ind=np.nonzero(h)
f_ind=np.nonzero(f)
if len(h_ind[0])>len(f_ind[0]):
h, f = f, h
h_ind,f_ind= f_ind,h_ind
gs = np.maximum(np.array(f.shape),np.array(h.shape))
if (f.dtype == 'complex') or (h.dtype == 'complex'):
g = np.zeros(gs,dtype='complex')
else:
g = np.zeros(gs)
f1 = g.copy()
f1[f_ind]=f[f_ind]
if f.ndim == 1:
(W,) = gs
col = np.arange(W)
for cc in h_ind[0]:
g[:] += f1[(col-cc)%W] * h[cc]
elif f.ndim == 2:
H,W = gs
row,col = np.indices(gs)
for rr,cc in np.transpose(h_ind):
g[:] += f1[(row-rr)%H, (col-cc)%W] * h[rr,cc]
else:
Z,H,W = gs
d,row,col = np.indices(gs)
for dd,rr,cc in np.transpose(h_ind):
g[:] += f1[(d-dd)%Z, (row-rr)%H, (col-cc)%W] * h[dd,rr,cc]
return g
testing = (__name__ == '__main__')
if testing:
! jupyter nbconvert --to python pconv.ipynb
import numpy as np
%matplotlib inline
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import sys,os
ia898path = os.path.abspath('../../')
if ia898path not in sys.path:
sys.path.append(ia898path)
import ia898.src as ia
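    # Quick self-check of the local pconv (illustrative addition): periodic
    # convolution with a unit impulse at the origin reproduces the kernel,
    # zero-padded to the larger length.
    print(pconv(np.array([1, 0, 0, 0]), np.array([1, 2, 3])))  # -> [ 1.  2.  3.  0.]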
if testing:
f = np.array([0,0,0,1,0,0,0,0,1])
print("f:",f)
h = np.array([1,2,3])
print("h:",h)
g1 = ia.pconv(f,h)
g2 = ia.pconv(h,f)
print("g1:",g1)
print("g2:",g2)
if testing:
f = np.array([[1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0],
[0,0,0,1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,1],
[0,0,0,0,0,0,0,0,0]])
print("Image (f):")
print(f)
h = np.array([[1,2,3],
[4,5,6]])
print("\n Image Kernel (h):")
print(h)
g1 = ia.pconv(f,h)
print("Image Output (g1=f*h):")
print(g1)
g2 = ia.pconv(h,f)
print("Image Output (g2=h*f):")
    print(g2)
if testing:
f = np.zeros((3,3,3))
#f[0,1,1] = 1
f[1,1,1] = 1
#f[2,1,1] = 1
print("\n Image Original (F): ")
print(f)
h = np.array([[[ 1, 2, 3 ],
[ 3, 4, 5 ],
[ 5, 6, 7 ]],
[[ 8, 9, 10],
[11, 12, 13],
[14, 15, 16]],
[[17, 18, 19],
[20, 21, 22],
[23, 24, 25]]])
print("\n Image Kernel (H): ")
print(h)
result = ia.pconv(f,h)
print("\n Image Output - (G): ")
print(result)
if testing:
f = mpimg.imread('../data/cameraman.tif')
ia.adshow(f, title = 'a) - Original Image')
h = np.array([[-1,-1,-1],
[ 0, 0, 0],
[ 1, 1, 1]])
g = ia.pconv(f,h)
print("\nPrewitt´s Mask")
print(h)
gn = ia.normalize(g, [0,255])
ia.adshow(gn, title = 'b) Prewitt´s Mask filtering')
ia.adshow(ia.normalize(abs(g)), title = 'c) absolute of Prewitt´s Mask filtering')
<END_TASK>
|