8,900
<ASSISTANT_TASK:>
Python Code:
# `values` and `labels` are assumed to be loaded earlier in the notebook
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score
from xgboost import XGBClassifier

important_values = values.merge(labels, on="building_id")
important_values.drop(columns=["building_id"], inplace=True)
important_values["geo_level_1_id"] = important_values["geo_level_1_id"].astype("category")
important_values
X_train, X_test, y_train, y_test = train_test_split(important_values.drop(columns = 'damage_grade'), important_values['damage_grade'], test_size = 0.2, random_state = 123)
#OneHotEncoding
def encode_and_bind(original_dataframe, feature_to_encode):
    dummies = pd.get_dummies(original_dataframe[[feature_to_encode]])
    res = pd.concat([original_dataframe, dummies], axis=1)
    res = res.drop([feature_to_encode], axis=1)
    return res
features_to_encode = ["geo_level_1_id", "land_surface_condition", "foundation_type", "roof_type","position", "ground_floor_type", "other_floor_type","plan_configuration", "legal_ownership_status"]
for feature in features_to_encode:
    X_train = encode_and_bind(X_train, feature)
    X_test = encode_and_bind(X_test, feature)
X_train
import time
def my_grid_search():
    print(time.gmtime())
    i = 1
    df = pd.DataFrame({'subsample': [], 'gamma': [], 'learning_rate': [], 'max_depth': [], 'score': []})
    for subsample in [0.75, 0.885, 0.95]:
        for gamma in [0.75, 1, 1.25]:
            for learning_rate in [0.4375, 0.45, 0.4625]:
                for max_depth in [5, 6, 7]:
                    model = XGBClassifier(n_estimators=350, booster='gbtree', subsample=subsample,
                                          gamma=gamma, max_depth=max_depth, learning_rate=learning_rate,
                                          use_label_encoder=False, verbosity=0)
                    model.fit(X_train, y_train)
                    y_preds = model.predict(X_test)
                    score = f1_score(y_test, y_preds, average='micro')
                    # DataFrame.append is deprecated; build a one-row frame and concat instead
                    row = pd.DataFrame({'subsample': [subsample], 'gamma': [gamma],
                                        'learning_rate': [learning_rate], 'max_depth': [max_depth],
                                        'score': [score]}, index=[i])
                    df = pd.concat([df, row])
                    print(i, time.gmtime())
                    i += 1
    return df.sort_values('score', ascending=False)
current_df = my_grid_search()
df = pd.read_csv('grid-search/res-feature-engineering.csv')
df = pd.concat([df, current_df])  # append/concat returns a new frame; the result must be assigned
df.to_csv('grid-search/res-feature-engineering.csv')
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
8,901
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import torch
mask, clean_input_spectrogram, output = load_data()  # load_data() is defined elsewhere in the original task
output[:, mask[0].to(torch.bool), :] = clean_input_spectrogram[:, mask[0].to(torch.bool), :]
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
8,902
<ASSISTANT_TASK:>
Python Code:
# assumes format_decimal() and the shared DATA dict are defined earlier in the notebook
import json
import numpy as np
from keras.layers import Input, AveragePooling3D
from keras.models import Model

data_in_shape = (4, 4, 4, 2)
L = AveragePooling3D(pool_size=(2, 2, 2), strides=None, padding='valid', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# generate random input data (seeded for reproducibility; pooling layers have no weights)
np.random.seed(290)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling3D.0'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
data_in_shape = (4, 4, 4, 2)
L = AveragePooling3D(pool_size=(2, 2, 2), strides=(1, 1, 1), padding='valid', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# generate random input data (seeded for reproducibility)
np.random.seed(291)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling3D.1'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
data_in_shape = (4, 5, 2, 3)
L = AveragePooling3D(pool_size=(2, 2, 2), strides=(2, 1, 1), padding='valid', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# generate random input data (seeded for reproducibility)
np.random.seed(282)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling3D.2'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
data_in_shape = (4, 4, 4, 2)
L = AveragePooling3D(pool_size=(3, 3, 3), strides=None, padding='valid', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# generate random input data (seeded for reproducibility)
np.random.seed(283)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling3D.3'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
data_in_shape = (4, 4, 4, 2)
L = AveragePooling3D(pool_size=(3, 3, 3), strides=(3, 3, 3), padding='valid', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# generate random input data (seeded for reproducibility)
np.random.seed(284)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling3D.4'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
data_in_shape = (4, 4, 4, 2)
L = AveragePooling3D(pool_size=(2, 2, 2), strides=None, padding='same', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# generate random input data (seeded for reproducibility)
np.random.seed(285)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling3D.5'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
data_in_shape = (4, 4, 4, 2)
L = AveragePooling3D(pool_size=(2, 2, 2), strides=(1, 1, 1), padding='same', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# generate random input data (seeded for reproducibility)
np.random.seed(286)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling3D.6'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
data_in_shape = (4, 5, 4, 2)
L = AveragePooling3D(pool_size=(2, 2, 2), strides=(1, 2, 1), padding='same', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# generate random input data (seeded for reproducibility)
np.random.seed(287)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling3D.7'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
data_in_shape = (4, 4, 4, 2)
L = AveragePooling3D(pool_size=(3, 3, 3), strides=None, padding='same', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# generate random input data (seeded for reproducibility)
np.random.seed(288)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling3D.8'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
data_in_shape = (4, 4, 4, 2)
L = AveragePooling3D(pool_size=(3, 3, 3), strides=(3, 3, 3), padding='same', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# generate random input data (seeded for reproducibility)
np.random.seed(289)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling3D.9'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
data_in_shape = (2, 3, 3, 4)
L = AveragePooling3D(pool_size=(3, 3, 3), strides=(2, 2, 2), padding='valid', data_format='channels_first')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# generate random input data (seeded for reproducibility)
np.random.seed(290)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling3D.10'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
data_in_shape = (2, 3, 3, 4)
L = AveragePooling3D(pool_size=(3, 3, 3), strides=(1, 1, 1), padding='same', data_format='channels_first')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# generate random input data (seeded for reproducibility)
np.random.seed(291)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling3D.11'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
data_in_shape = (3, 4, 4, 3)
L = AveragePooling3D(pool_size=(2, 2, 2), strides=None, padding='valid', data_format='channels_first')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# generate random input data (seeded for reproducibility)
np.random.seed(292)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling3D.12'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
print(json.dumps(DATA))
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: [pooling.AveragePooling3D.1] input 4x4x4x2, pool_size=(2, 2, 2), strides=(1, 1, 1), padding='valid', data_format='channels_last'
Step2: [pooling.AveragePooling3D.2] input 4x5x2x3, pool_size=(2, 2, 2), strides=(2, 1, 1), padding='valid', data_format='channels_last'
Step3: [pooling.AveragePooling3D.3] input 4x4x4x2, pool_size=(3, 3, 3), strides=None, padding='valid', data_format='channels_last'
Step4: [pooling.AveragePooling3D.4] input 4x4x4x2, pool_size=(3, 3, 3), strides=(3, 3, 3), padding='valid', data_format='channels_last'
Step5: [pooling.AveragePooling3D.5] input 4x4x4x2, pool_size=(2, 2, 2), strides=None, padding='same', data_format='channels_last'
Step6: [pooling.AveragePooling3D.6] input 4x4x4x2, pool_size=(2, 2, 2), strides=(1, 1, 1), padding='same', data_format='channels_last'
Step7: [pooling.AveragePooling3D.7] input 4x5x4x2, pool_size=(2, 2, 2), strides=(1, 2, 1), padding='same', data_format='channels_last'
Step8: [pooling.AveragePooling3D.8] input 4x4x4x2, pool_size=(3, 3, 3), strides=None, padding='same', data_format='channels_last'
Step9: [pooling.AveragePooling3D.9] input 4x4x4x2, pool_size=(3, 3, 3), strides=(3, 3, 3), padding='same', data_format='channels_last'
Step10: [pooling.AveragePooling3D.10] input 2x3x3x4, pool_size=(3, 3, 3), strides=(2, 2, 2), padding='valid', data_format='channels_first'
Step11: [pooling.AveragePooling3D.11] input 2x3x3x4, pool_size=(3, 3, 3), strides=(1, 1, 1), padding='same', data_format='channels_first'
Step12: [pooling.AveragePooling3D.12] input 3x4x4x3, pool_size=(2, 2, 2), strides=None, padding='valid', data_format='channels_first'
Step13: export for Keras.js tests
8,903
<ASSISTANT_TASK:>
Python Code:
### Notebook 6
### Data set 6 (Finches)
### Authors: DaCosta & Sorenson (2016)
### Data Location: SRP059199
%%bash
## make a new directory for this analysis
mkdir -p empirical_6/fastq/
## IPython code
import pandas as pd
import numpy as np
import urllib2
import os
## open the SRA run table from github url
url = "https://raw.githubusercontent.com/"+\
"dereneaton/RADmissing/master/empirical_6_SraRunTable.txt"
intable = urllib2.urlopen(url)
indata = pd.read_table(intable, sep="\t")
## print first few rows
print indata.head()
def wget_download(SRR, outdir, outname):
    """Python function to get sra data from ncbi and write to
    outdir with a new name using bash call wget"""
    ## get output name
    output = os.path.join(outdir, outname+".sra")
    ## create a call string
    call = "wget -q -r -nH --cut-dirs=9 -O "+output+" "+\
           "ftp://ftp-trace.ncbi.nlm.nih.gov/"+\
           "sra/sra-instant/reads/ByRun/sra/SRR/"+\
           "{}/{}/{}.sra;".format(SRR[:6], SRR, SRR)
    ## call bash script
    ! $call
for ID, SRR in zip(indata.Library_Name_s, indata.Run_s):
    wget_download(SRR, "empirical_6/fastq/", ID)
%%bash
## convert sra files to fastq using fastq-dump tool
## output as gzipped into the fastq directory
fastq-dump --gzip -O empirical_6/fastq/ empirical_6/fastq/*.sra
## remove .sra files
rm empirical_6/fastq/*.sra
%%bash
ls -l empirical_6/fastq/
%%bash
pyrad --version
%%bash
## remove old params file if it exists
rm params.txt
## create a new default params file
pyrad -n
%%bash
## substitute new parameters into file
sed -i '/## 1. /c\empirical_6/ ## 1. working directory ' params.txt
sed -i '/## 6. /c\CCTGCAGG,AATTC ## 6. cutters ' params.txt
sed -i '/## 7. /c\20 ## 7. N processors ' params.txt
sed -i '/## 9. /c\6 ## 9. NQual ' params.txt
sed -i '/## 10./c\.85 ## 10. clust threshold ' params.txt
sed -i '/## 12./c\4 ## 12. MinCov ' params.txt
sed -i '/## 13./c\10 ## 13. maxSH ' params.txt
sed -i '/## 14./c\empirical_6_m4 ## 14. output name ' params.txt
sed -i '/## 18./c\empirical_6/fastq/*.gz ## 18. data location ' params.txt
sed -i '/## 29./c\2,2 ## 29. trim overhang ' params.txt
sed -i '/## 30./c\p,n,s ## 30. output formats ' params.txt
cat params.txt
%%bash
pyrad -p params.txt -s 234567 >> log.txt 2>&1
%%bash
sed -i '/## 12./c\2 ## 12. MinCov ' params.txt
sed -i '/## 14./c\empirical_6_m2 ## 14. output name ' params.txt
%%bash
pyrad -p params.txt -s 7 >> log.txt 2>&1
import pandas as pd
## read in the data
s2dat = pd.read_table("empirical_6/stats/s2.rawedit.txt", header=0, nrows=25)
## print summary stats
print s2dat["passed.total"].describe()
## find which sample has the most raw data
maxraw = s2dat["passed.total"].max()
print "\nmost raw data in sample:"
print s2dat['sample '][s2dat['passed.total']==maxraw]
## read in the s3 results
s6dat = pd.read_table("empirical_6/stats/s3.clusters.txt", header=0, nrows=25)
## print summary stats
print "summary of means\n=================="
print s6dat['dpt.me'].describe()
## print summary stats
print "\nsummary of std\n=================="
print s6dat['dpt.sd'].describe()
## print summary stats
print "\nsummary of proportion lowdepth\n=================="
print pd.Series(1-s6dat['d>5.tot']/s6dat["total"]).describe()
## find which sample has the greatest depth of retained loci
max_hiprop = (s6dat["d>5.tot"]/s6dat["total"]).max()
print "\nhighest coverage in sample:"
print s6dat['taxa'][s6dat['d>5.tot']/s6dat["total"]==max_hiprop]
import numpy as np
## print mean and std of coverage for the highest coverage sample
with open("empirical_6/clust.85/A167.depths", 'rb') as indat:
    depths = np.array(indat.read().strip().split(","), dtype=int)
print depths.mean(), depths.std()
import toyplot
import toyplot.svg
import numpy as np
## read in the depth information for this sample
with open("empirical_6/clust.85/A167.depths", 'rb') as indat:
    depths = np.array(indat.read().strip().split(","), dtype=int)
## make a barplot in Toyplot
canvas = toyplot.Canvas(width=350, height=300)
axes = canvas.axes(xlabel="Depth of coverage (N reads)",
                   ylabel="N loci",
                   label="dataset6/sample=A167")
## select the loci with depth > 5 (kept)
keeps = depths[depths>5]
## plot kept and discarded loci
edat = np.histogram(depths, range(30)) # density=True)
kdat = np.histogram(keeps, range(30)) #, density=True)
axes.bars(edat)
axes.bars(kdat)
#toyplot.svg.render(canvas, "empirical_6_depthplot.svg")
cat empirical_6/stats/empirical_6_m4.stats
%%bash
head -n 20 empirical_6/stats/empirical_6_m2.stats
%%bash
## raxml argumement w/ ...
raxmlHPC-PTHREADS-AVX -f a -m GTRGAMMA -N 100 -x 12345 -p 12345 -T 20 \
-w /home/deren/Documents/RADmissing/empirical_6/ \
-n empirical_6_m4 -s empirical_6/outfiles/empirical_6_m4.phy
%%bash
## raxml argumement w/ ...
raxmlHPC-PTHREADS-AVX -f a -m GTRGAMMA -N 100 -x 12345 -p 12345 -T 20 \
-w /home/deren/Documents/RADmissing/empirical_6/ \
-n empirical_6_m2 -s empirical_6/outfiles/empirical_6_m2.phy
%%bash
head -n 20 empirical_6/RAxML_info.empirical_6_m4
%%bash
head -n 20 empirical_6/RAxML_info.empirical_6_m2
%load_ext rpy2.ipython
%%R -h 800 -w 800
library(ape)
tre <- read.tree("empirical_6/RAxML_bipartitions.empirical_6")
ltre <- ladderize(tre)
par(mfrow=c(1,2))
plot(ltre, use.edge.length=F)
nodelabels(ltre$node.label)
plot(ltre, type='u')
%%R
mean(cophenetic.phylo(ltre))
print pd.DataFrame([indata.Library_Name_s, indata.Organism_s]).T
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Download the sequence data
Step3: For each ERS (individuals) get all of the ERR (sequence file accessions).
Step4: Here we pass the SRR number and the sample name to the wget_download function so that the files are saved with their sample names.
Step5: Merge technical replicates
Step6: Note
Step7: Assemble in pyrad
Step8: Results
Step9: Look at distributions of coverage
Step10: Plot the coverage for the sample with highest mean coverage
Step11: Print final stats table
Step12: Infer ML phylogeny in raxml as an unrooted tree
Step13: Plot the tree in R using ape
Step14: Get phylo distances (GTRgamma dist)
Step15: Translation to taxon names
8,904
<ASSISTANT_TASK:>
Python Code:
# assumed imports (defined earlier in the original notebook)
import numpy as np
from astropy import constants as const
from astropy import units

def equilibrium_temperature(luminosity, distance, albedo=0):
    """Calculates the equilibrium temperature of a planet, assuming blackbody radiation
    and thermodynamic equilibrium.

    Parameters
    ----------
    luminosity : float
        luminosity of the host star [ergs s**-1]
    distance : float
        distance between the star and the planet [cm]
        (assumes circular orbits)
    albedo : float
        albedo of the planet (0 for perfect absorber, 1 for perfect reflector)

    Returns
    -------
    T_eq : float
        Equilibrium temperature of the planet
    """
    T_eq = (luminosity / distance**2)**.25 \
        * ((1-albedo) / (16 * np.pi * const.sigma_sb.cgs.value))**.25
    return T_eq
T_eq_earth = equilibrium_temperature(const.L_sun.cgs.value, units.AU.to(units.cm))
print("Earth equilibrium temperature: ", T_eq_earth, " K")
def luminosity_from_mass(mass):
    """Convert mass to luminosity using a standard mass-luminosity relation

    Parameters
    ----------
    mass : float
        stellar mass [g]

    Returns
    -------
    luminosity : float
        stellar luminosity [ergs s**-1]

    Notes
    -----
    not vectorized.
    `mass` cannot be an array

    Sources
    -------
    Maurizio & Cassisi (2005) Evolution of stars and stellar populations
    http://books.google.com/books?id=r1dNzr8viRYC&lpg=PA138&dq=Mass-Luminosity%20relation&lr=&client=firefox-a&pg=PA138#v=onepage&q=&f=false
    Nebojsa (2004) Advanced astrophysics
    http://books.google.com/books?id=-ljdYMmI0EIC&lpg=PA19&ots=VdMUIiCdP_&dq=Mass-luminosity%20relation&pg=PA19#v=onepage&q=&f=false
    """
    # convert to solar units
    mass /= const.M_sun.cgs.value
    if mass < .43:
        luminosity = .23 * mass**2.3
    elif mass < 2:
        luminosity = mass**4
    elif mass < 20:
        luminosity = 1.5 * mass**3.5
    else:
        luminosity = 3200
    # convert back into cgs units
    luminosity *= const.L_sun.cgs.value
    return luminosity
if np.isclose(luminosity_from_mass(const.M_sun.cgs.value) / const.L_sun.cgs.value, 1):
    print("OK")
else:
    print("Error: luminosity_from_mass gave wrong answer")
def equilibrium_temperature_from_mass(mass, distance, albedo=0):
    return equilibrium_temperature(luminosity_from_mass(mass),
                                   distance,
                                   albedo=albedo)

if np.isclose(equilibrium_temperature_from_mass(const.M_sun.cgs.value, units.AU.to(units.cm)),
              equilibrium_temperature(const.L_sun.cgs.value, units.AU.to(units.cm))):
    print("OK")
else:
    print("Error: equilibrium_temperature_from_mass gave wrong answer")
print("albedo = 0")
T_eq_earth = equilibrium_temperature(const.L_sun.cgs.value, units.AU.to(units.cm))
print("Earth equilibrium temperature: ", T_eq_earth, " K")
print()
Earth_albedo = .3
print("albedo = ", Earth_albedo)
T_eq_earth = equilibrium_temperature(const.L_sun.cgs.value,
units.AU.to(units.cm),
albedo=Earth_albedo)
print("Earth equilibrium temperature: ", T_eq_earth, " K")
mass = .96 * const.M_sun.cgs.value
luminosity = luminosity_from_mass(mass)
T_eq_earth = equilibrium_temperature(luminosity, units.AU.to(units.cm))
print("Earth equilibrium temperature from lower mass sun: ", T_eq_earth)
print("For a 10 K change in T_eq, the sun would need to be ",
round(100*(mass / const.M_sun.cgs.value)),
"% of its current mass")
# the better way to do this is with objects,
# but we're not going to teach object-oriented programming in this workshop
# tuples should be ("name", distance [in cm])
planets = [
("Mercury", 5.79e12),
("Venus", 1.08e13),
("Earth", 1.50e13),
("Mars", 2.28e13),
("Jupiter", 7.78e13),
("Saturn", 1.43e14),
("Uranus", 2.87e14),
("Neptune", 4.50e14),
("Pluto", 5.90e14)
]
solar_system_T_eqs = dict()
for planet in planets:
    solar_system_T_eqs[planet[0]] = equilibrium_temperature(const.L_sun.cgs.value,
                                                            planet[1])
solar_system_T_eqs
# if you wanted to keep the solar system in order,
# you could have used collections.OrderedDict instead of dict
import exoplanets
exoplanets.download_data()
data = exoplanets.parse_data()
print("column names of `data`: ")
data.dtype.names
luminosities = const.L_sun.cgs.value * 10**data["st_lum"]
distances = units.AU.to(units.cm) * data["pl_orbsmax"]
plt.scatter(distances / units.AU.to(units.cm),
luminosities / const.L_sun.cgs.value)
plt.xscale("log")
plt.yscale("log")
plt.ylim(10**-3, 10**3)
plt.xlabel("Star-Planet distance [AU]")
plt.ylabel("Star luminosity [L_sun]")
equilibrium_temperatures = equilibrium_temperature(luminosities, distances)
plt.hist(equilibrium_temperatures)
plt.xlabel("Equilibrium Temperature [K]")
plt.ylabel("Number of planets")
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Activity Description
Step2: Test equilibrium_temperature
Step4: Mass-luminosity relation
Step5: test luminosity_from_mass
Step6: Equilibrium temperature as a function of mass
Step7: test equilibrium_temperature_from_mass
Step8: Results for activities
Step9: Get equilibrium temperatures of all the planets
Step10: Calculate T_eq for all exoplanets
8,905
<ASSISTANT_TASK:>
Python Code:
from polara.recommender.data import RecommenderData
from polara.datasets.movielens import get_movielens_data
data = get_movielens_data() # will automatically download it, or you can specify a path to the local copy
data.head()
data.shape
data_model = RecommenderData(data, 'userid', 'movieid', 'rating', seed=0)
data_model.get_configuration()
data_model.random_holdout = True # allow not only top-rated items in evaluation, this reduces evaluation biases
data_model.warm_start = False # standard case
data_model.prepare()
data_model.test.holdout['userid'].isin(data_model.index.userid.training.new).all()
from polara.recommender.models import SVDModel
svd = SVDModel(data_model) # create model
svd.switch_positive = 4 # mark ratings below 4 as negative feedback and treat them accordingly in evaluation
svd.build() # fit model
svd.evaluate() # by default it calculates the total number of hits
# implicit library must be installed separately, follow instructions at https://github.com/benfred/implicit
from polara.recommender.external.implicit.ialswrapper import ImplicitALS
als = ImplicitALS(data_model) # create model
als.switch_positive = 4 # same as for PureSVD, affects only evaluation
als.build()
als.evaluate()
data_model.test.holdout.query('rating>=4').shape[0]
svd.evaluate('relevance')
als.evaluate('relevance')
data_model.warm_start = True # warm-start case
data_model.prepare()
data_model.index.userid.test.old.isin(data_model.index.userid.training.old).any()
svd.build()
svd.evaluate()
als.evaluate()
data_model.test.holdout.query('rating>=4').shape[0]
svd.evaluate('relevance')
als.evaluate('relevance')
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Standard scenario with known users
Step2: Let's check for demonstration purposes that all test users are present in the training set
Step3: PureSVD
Step4: iALS
Step5: The maximum possible number of correct recomendations is
Step6: Both models correctly retrieve around a quarter of all items. Let's look on the averaged relevance scores
Step7: Warm-start scenario
Step8: There's no intersection between test and training users
Step9: <div class="alert alert-block alert-info">Polara makes a certain level of efforts to preserve data sanity and consistency.</div>
Step10: Note, that you do not have to recreate the models as they operate on top of the data_model instance.
Step11: The maximum possible number of correct recomendations is
Step12: Check relevance scores
8,906
<ASSISTANT_TASK:>
Python Code:
from pylab import *
from ase.build import graphene_nanoribbon
from thermo.gpumd.data import load_dos, load_vac
from thermo.gpumd.io import ase_atoms_to_gpumd
gnr = graphene_nanoribbon(60, 36, type='armchair', sheet=True, vacuum=3.35/2, C_C=1.44)
gnr.euler_rotate(theta=90)
l = gnr.cell.lengths()
gnr.cell = gnr.cell.new((l[0], l[2], l[1]))
l = l[2]
gnr.center()
gnr.pbc = [True, True, False]
gnr
ase_atoms_to_gpumd(gnr, M=3, cutoff=2.1)
aw = 2
fs = 16
font = {'size' : fs}
matplotlib.rc('font', **font)
matplotlib.rc('axes' , linewidth=aw)
def set_fig_properties(ax_list):
    tl = 8
    tw = 2
    tlm = 4
    for ax in ax_list:
        ax.tick_params(which='major', length=tl, width=tw)
        ax.tick_params(which='minor', length=tlm, width=tw)
        ax.tick_params(which='both', axis='both', direction='in', right=True, top=True)
Nc = 200
dos = load_dos(num_dos_points=Nc)['run0']
vac = load_vac(Nc)['run0']
dos['DOSxyz'] = dos['DOSx']+dos['DOSy']+dos['DOSz']
vac['VACxyz'] = vac['VACx']+vac['VACy']+vac['VACz']
vac['VACxyz'] /= vac['VACxyz'].max()
print('DOS:', dos.keys())
print('VAC:', vac.keys())
figure(figsize=(12,10))
subplot(2,2,1)
set_fig_properties([gca()])
plot(vac['t'], vac['VACx']/vac['VACx'].max(), color='C3',linewidth=3)
plot(vac['t'], vac['VACy']/vac['VACy'].max(), color='C0', linestyle='--',linewidth=3)
plot(vac['t'], vac['VACz']/vac['VACz'].max(), color='C2', linestyle='-.',zorder=100,linewidth=3)
xlim([0,1])
gca().set_xticks([0,0.5,1])
ylim([-0.5, 1])
gca().set_yticks([-0.5,0,0.5,1])
ylabel('VAC (Normalized)')
xlabel('Correlation Time (ps)')
legend(['x','y', 'z'])
title('(a)')
subplot(2,2,2)
set_fig_properties([gca()])
plot(dos['nu'], dos['DOSx'], color='C3',linewidth=3)
plot(dos['nu'], dos['DOSy'], color='C0', linestyle='--',linewidth=3)
plot(dos['nu'], dos['DOSz'], color='C2', linestyle='-.',zorder=100, linewidth=3)
xlim([0, 60])
gca().set_xticks(range(0,61,20))
ylim([0, 1500])
gca().set_yticks(np.arange(0,1501,500))
ylabel('PDOS (1/THz)')
xlabel(r'$\nu$ (THz)')
legend(['x','y', 'z'])
title('(b)')
subplot(2,2,3)
set_fig_properties([gca()])
plot(vac['t'], vac['VACxyz'], color='k',linewidth=3)
xlim([0,1])
gca().set_xticks([0,0.5,1])
ylim([-0.5, 1])
gca().set_yticks([-0.5,0,0.5,1])
ylabel('VAC (Normalized)')
xlabel('Correlation Time (ps)')
title('(c)')
subplot(2,2,4)
set_fig_properties([gca()])
plot(dos['nu'], dos['DOSxyz'], color='k',linewidth=3)
xlim([0, 60])
gca().set_xticks(range(0,61,20))
ylim([0, 2500])
gca().set_yticks(np.arange(0,2501,500))
ylabel('PDOS (1/THz)')
xlabel(r'$\nu$ (THz)')
title('(d)')
tight_layout()
show()
Temp = np.arange(100,5001,100) # [K]
N = 8640 # Number of atoms
Cx, Cy, Cz = list(), list(), list() # [k_B/atom] Heat capacity per atom
hnu = 6.63e-34*dos['nu']*1.e12 # [J]
for T in Temp:
kBT = 1.38e-23*T # [J]
x = hnu/kBT
expr = np.square(x)*np.exp(x)/(np.square(np.expm1(x)))
Cx.append(np.trapz(dos['DOSx']*expr, dos['nu'])/N)
Cy.append(np.trapz(dos['DOSy']*expr, dos['nu'])/N)
Cz.append(np.trapz(dos['DOSz']*expr, dos['nu'])/N)
figure(figsize=(8,6))
set_fig_properties([gca()])
mew, ms, mfc, lw = 1, 8, 'none', 2.5
plot(Temp, Cx, lw=lw,marker='d',mfc=mfc,ms=ms,mew=mew)
plot(Temp, Cy, lw=lw,marker='s',mfc=mfc,ms=ms,mew=mew)
plot(Temp, Cz, lw=lw,marker='o',mfc=mfc,ms=ms,mew=mew)
xlim([0,5100])
gca().set_xticks(range(0,5001,1000))
ylim([0, 1.1])
gca().set_yticks(np.linspace(0,1,6))
ylabel(r'Heat Capacity (k$_B$/atom)')
xlabel('Temperature (K)')
legend(['x','y','z'])
tight_layout()
show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Preparing the Inputs
Step2: The first few lines of the xyz.in file are
Step3: Plot DOS and VAC
Step4: (a) Normalized VAC for individual directions. (b) DOS for individual directions. (c) Total Normalized VAC. (d) Total DOS.
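The VAC and PDOS panels are two views of the same information: the DOS is (up to normalisation) the Fourier transform of the VAC. A minimal sketch of that relation with a synthetic damped-cosine VAC (the frequency `nu0` and decay time are invented for illustration, not GPUMD output):

```python
import numpy as np

# Synthetic VAC: a single damped oscillation at nu0 THz stands in for the
# real velocity autocorrelation function (illustration only, not GPUMD data).
dt_c = 0.005                    # correlation-time step [ps]
t_c = np.arange(0, 2, dt_c)     # correlation time [ps]
nu0 = 10.0                      # oscillation frequency [THz]
vac_demo = np.cos(2 * np.pi * nu0 * t_c) * np.exp(-t_c / 0.5)

# One-sided Fourier transform of the VAC gives a DOS-like spectrum
dos_demo = 2 * dt_c * np.real(np.fft.rfft(vac_demo))
nu_demo = np.fft.rfftfreq(len(t_c), d=dt_c)   # frequency axis [THz]

nu_peak = nu_demo[np.argmax(dos_demo)]        # spectrum peaks near nu0
```

The peak of `dos_demo` lands at the oscillation frequency of the VAC, which is why the VAC panels (a)/(c) and the DOS panels (b)/(d) carry the same information in the time and frequency domains respectively.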
|
8,907
|
<ASSISTANT_TASK:>
Python Code:
import autograd.numpy as np
np.set_printoptions(precision=2)
import matplotlib.pyplot as plt
%matplotlib inline
# Number of data points
N = 1000
# Dimension of each data point
D = 2
# Number of clusters
K = 3
pi = [0.1, 0.6, 0.3]
mu = [np.array([-4, 1]), np.array([0, 0]), np.array([2, -1])]
Sigma = [np.array([[3, 0],[0, 1]]), np.array([[1, 1.], [1, 3]]), 0.5 * np.eye(2)]
components = np.random.choice(K, size=N, p=pi)
samples = np.zeros((N, D))
# For each component, generate all needed samples
for k in range(K):
# indices of current component in X
indices = k == components
# number of those occurrences
n_k = indices.sum()
if n_k > 0:
samples[indices, :] = np.random.multivariate_normal(mu[k], Sigma[k], n_k)
colors = ['r', 'g', 'b', 'c', 'm']
for k in range(K):
indices = k == components
plt.scatter(samples[indices, 0], samples[indices, 1], alpha=0.4, color=colors[k%K])
plt.axis('equal')
plt.show()
import sys
sys.path.insert(0,"..")
import autograd.numpy as np
from autograd.scipy.special import logsumexp
from pymanopt.manifolds import Product, Euclidean, SymmetricPositiveDefinite
from pymanopt import Problem
from pymanopt.solvers import SteepestDescent
# (1) Instantiate the manifold
manifold = Product([SymmetricPositiveDefinite(D+1, k=K), Euclidean(K-1)])
# (2) Define cost function
# The parameters must be contained in a list theta.
def cost(theta):
# Unpack parameters
nu = np.concatenate([theta[1], [0]], axis=0)
S = theta[0]
logdetS = np.expand_dims(np.linalg.slogdet(S)[1], 1)
y = np.concatenate([samples.T, np.ones((1, N))], axis=0)
# Calculate log_q
y = np.expand_dims(y, 0)
# 'Probability' of y belonging to each cluster
log_q = -0.5 * (np.sum(y * np.linalg.solve(S, y), axis=1) + logdetS)
alpha = np.exp(nu)
alpha = alpha / np.sum(alpha)
alpha = np.expand_dims(alpha, 1)
loglikvec = logsumexp(np.log(alpha) + log_q, axis=0)
return -np.sum(loglikvec)
problem = Problem(manifold=manifold, cost=cost, verbosity=1)
# (3) Instantiate a Pymanopt solver
solver = SteepestDescent()
# let Pymanopt do the rest
Xopt = solver.solve(problem)
mu1hat = Xopt[0][0][0:2,2:3]
Sigma1hat = Xopt[0][0][:2, :2] - mu1hat.dot(mu1hat.T)
mu2hat = Xopt[0][1][0:2,2:3]
Sigma2hat = Xopt[0][1][:2, :2] - mu2hat.dot(mu2hat.T)
mu3hat = Xopt[0][2][0:2,2:3]
Sigma3hat = Xopt[0][2][:2, :2] - mu3hat.dot(mu3hat.T)
pihat = np.exp(np.concatenate([Xopt[1], [0]], axis=0))
pihat = pihat / np.sum(pihat)
print(mu[0])
print(Sigma[0])
print(mu[1])
print(Sigma[1])
print(mu[2])
print(Sigma[2])
print(pi[0])
print(pi[1])
print(pi[2])
print(mu1hat)
print(Sigma1hat)
print(mu2hat)
print(Sigma2hat)
print(mu3hat)
print(Sigma3hat)
print(pihat[0])
print(pihat[1])
print(pihat[2])
class LineSearchMoG:
    """Back-tracking line-search that checks for close to singular matrices."""
def __init__(self, contraction_factor=.5, optimism=2,
suff_decr=1e-4, maxiter=25, initial_stepsize=1):
self.contraction_factor = contraction_factor
self.optimism = optimism
self.suff_decr = suff_decr
self.maxiter = maxiter
self.initial_stepsize = initial_stepsize
self._oldf0 = None
def search(self, objective, manifold, x, d, f0, df0):
        """Function to perform backtracking line-search.
        Arguments:
            - objective
                objective function to optimise
            - manifold
                manifold to optimise over
            - x
                starting point on the manifold
            - d
                tangent vector at x (descent direction)
            - df0
                directional derivative at x along d
        Returns:
            - stepsize
                norm of the vector retracted to reach newx from x
            - newx
                next iterate suggested by the line-search
        """
# Compute the norm of the search direction
norm_d = manifold.norm(x, d)
if self._oldf0 is not None:
# Pick initial step size based on where we were last time.
alpha = 2 * (f0 - self._oldf0) / df0
# Look a little further
alpha *= self.optimism
else:
alpha = self.initial_stepsize / norm_d
alpha = float(alpha)
# Make the chosen step and compute the cost there.
newx, newf, reset = self._newxnewf(x, alpha * d, objective, manifold)
step_count = 1
# Backtrack while the Armijo criterion is not satisfied
while (newf > f0 + self.suff_decr * alpha * df0 and
step_count <= self.maxiter and
not reset):
# Reduce the step size
alpha = self.contraction_factor * alpha
# and look closer down the line
newx, newf, reset = self._newxnewf(x, alpha * d, objective, manifold)
step_count = step_count + 1
# If we got here without obtaining a decrease, we reject the step.
if newf > f0 and not reset:
alpha = 0
newx = x
stepsize = alpha * norm_d
self._oldf0 = f0
return stepsize, newx
def _newxnewf(self, x, d, objective, manifold):
newx = manifold.retr(x, d)
try:
newf = objective(newx)
except np.linalg.LinAlgError:
replace = np.asarray([np.linalg.matrix_rank(newx[0][k, :, :]) != newx[0][0, :, :].shape[0]
for k in range(newx[0].shape[0])])
x[0][replace, :, :] = manifold.rand()[0][replace, :, :]
return x, objective(x), True
return newx, newf, False
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Given a data sample the de facto standard method to infer the parameters is the expectation maximisation (EM) algorithm that, in alternating so-called E and M steps, maximises the log-likelihood of the data.
Step2: Once Pymanopt has finished the optimisation we can obtain the inferred parameters as follows
Step3: And convince ourselves that the inferred parameters are close to the ground truth parameters.
Step4: And the inferred parameters $\hat{\mathbf{\mu}}_1, \hat{\mathbf{\Sigma}}_1, \hat{\mathbf{\mu}}_2, \hat{\mathbf{\Sigma}}_2, \hat{\mathbf{\mu}}_3, \hat{\mathbf{\Sigma}}_3, \hat{\pi}_1, \hat{\pi}_2, \hat{\pi}_3$
Step7: Et voilà – this was a brief demonstration of how to do inference for MoG models by performing Manifold optimisation using Pymanopt.
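For contrast with the manifold-optimisation route used here, a minimal EM sketch for a 1-D, two-component Gaussian mixture (the toy data and all names are local to this illustration):

```python
import numpy as np

# Toy data: two well-separated 1-D Gaussian components
rng = np.random.default_rng(0)
obs = np.concatenate([rng.normal(-3, 1, 500), rng.normal(3, 1, 500)])

pi_k = np.array([0.5, 0.5])    # mixing weights
mu_k = np.array([-1.0, 1.0])   # initial means
var_k = np.array([1.0, 1.0])   # initial variances

for _ in range(50):
    # E-step: responsibilities r[n, k] of component k for observation n
    dens = np.exp(-0.5 * (obs[:, None] - mu_k) ** 2 / var_k) / np.sqrt(2 * np.pi * var_k)
    r = pi_k * dens
    r /= r.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means and variances in closed form
    Nk = r.sum(axis=0)
    pi_k = Nk / len(obs)
    mu_k = (r * obs[:, None]).sum(axis=0) / Nk
    var_k = (r * (obs[:, None] - mu_k) ** 2).sum(axis=0) / Nk
```

Each E/M round cannot decrease the log-likelihood, and on this toy data the estimated means converge to roughly -3 and 3.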
|
8,908
|
<ASSISTANT_TASK:>
Python Code:
# import packages
from openpiv import windef # <---- see windef.py for details
from openpiv import tools, scaling, validation, filters, preprocess
import openpiv.pyprocess as process
from openpiv import pyprocess
import numpy as np
import os
from time import time
import warnings
import matplotlib.pyplot as plt
%matplotlib inline
settings = windef.Settings()
'Data related settings'
# Folder with the images to process
settings.filepath_images = '../../examples/test1/'
# Folder for the outputs
settings.save_path = '../../../examples/test1/'
# Root name of the output Folder for Result Files
settings.save_folder_suffix = 'Test_1'
# Format and Image Sequence
settings.frame_pattern_a = 'exp1_001_a.bmp'
settings.frame_pattern_b = 'exp1_001_b.bmp'
'Region of interest'
# (50,300,50,300) #Region of interest: (xmin,xmax,ymin,ymax) or 'full' for full image
settings.ROI = 'full'
'Image preprocessing'
# 'None' for no masking, 'edges' for edges masking, 'intensity' for intensity masking
# WARNING: This part is under development so better not to use MASKS
settings.dynamic_masking_method = 'None'
settings.dynamic_masking_threshold = 0.005
settings.dynamic_masking_filter_size = 7
settings.deformation_method = 'symmetric'
'Processing Parameters'
settings.correlation_method='circular' # 'circular' or 'linear'
settings.normalized_correlation=False
settings.num_iterations = 2 # select the number of PIV passes
# add the interrogation window size for each pass.
# For the moment, it should be a power of 2
settings.windowsizes = (64, 32, 16) # if longer than n iterations the rest is ignored
# The overlap of the interrogation window for each pass.
settings.overlap = (32, 16, 8) # This is 50% overlap
# Has to be a power of two. In general window size/2 is a good choice.
# method used for subpixel interpolation: 'gaussian','centroid','parabolic'
settings.subpixel_method = 'gaussian'
# order of the image interpolation for the window deformation
settings.interpolation_order = 3
settings.scaling_factor = 1 # scaling factor pixel/meter
settings.dt = 1 # time between to frames (in seconds)
'Signal to noise ratio options (only for the last pass)'
# It is possible to decide if the S/N should be computed (for the last pass) or not
# settings.extract_sig2noise = True # 'True' or 'False' (only for the last pass)
# method used to calculate the signal to noise ratio 'peak2peak' or 'peak2mean'
settings.sig2noise_method = 'peak2peak'
# select the width of the mask used to mask out pixels next to the main peak
settings.sig2noise_mask = 2
# If extract_sig2noise==False the values in the signal to noise ratio
# output column are set to NaN
'vector validation options'
# choose if you want to do validation of the first pass: True or False
settings.validation_first_pass = True
# only affecting the first pass of the interrogation; the following passes
# in the multipass will be validated
'Validation Parameters'
# The validation is done at each iteration based on three filters.
# The first filter is based on the min/max ranges. Observe that these values are defined in
# terms of minimum and maximum displacement in pixel/frames.
settings.MinMax_U_disp = (-30, 30)
settings.MinMax_V_disp = (-30, 30)
# The second filter is based on the global STD threshold
settings.std_threshold = 7 # threshold of the std validation
# The third filter is the median test (not normalized at the moment)
settings.median_threshold = 3 # threshold of the median validation
# On the last iteration, an additional validation can be done based on the S/N.
settings.median_size=1 #defines the size of the local median
'Validation based on the signal to noise ratio'
# Note: only available when extract_sig2noise==True and only for the last
# pass of the interrogation
# Enable the signal to noise ratio validation. Options: True or False
# settings.do_sig2noise_validation = False # This is time consuming
# minimum signal to noise ratio that is needed for a valid vector
settings.sig2noise_threshold = 1.2
'Outlier replacement or Smoothing options'
# Replacement options for vectors which are masked as invalid by the validation
settings.replace_vectors = True # Enable the replacement. Choose: True or False
settings.smoothn=True #Enables smoothing of the displacement field
settings.smoothn_p=0.5 # This is a smoothing parameter
# select a method to replace the outliers: 'localmean', 'disk', 'distance'
settings.filter_method = 'localmean'
# maximum iterations performed to replace the outliers
settings.max_filter_iteration = 4
settings.filter_kernel_size = 2 # kernel size for the localmean method
'Output options'
# Select if you want to save the plotted vectorfield: True or False
settings.save_plot = False
# Choose wether you want to see the vectorfield or not :True or False
settings.show_plot = True
settings.scale_plot = 200 # select a value to scale the quiver plot of the vectorfield
# run the script with the given settings
windef.piv(settings)
# we can run it from any folder
path = settings.filepath_images
frame_a = tools.imread( os.path.join(path,settings.frame_pattern_a))
frame_b = tools.imread( os.path.join(path,settings.frame_pattern_b))
frame_a = (frame_a).astype(np.int32)
frame_b = (frame_b).astype(np.int32)
u, v, sig2noise = process.extended_search_area_piv( frame_a, frame_b, \
window_size=32, overlap=16, dt=1, search_area_size=64, sig2noise_method='peak2peak' )
x, y = process.get_coordinates( image_size=frame_a.shape,
search_area_size=64, overlap=16 )
u, v, mask = validation.sig2noise_val( u, v, sig2noise, threshold = 1.3 )
u, v, mask = validation.global_val( u, v, (-1000, 2000), (-1000, 1000) )
u, v = filters.replace_outliers( u, v, method='localmean', max_iter=10, kernel_size=2)
x, y, u, v = scaling.uniform(x, y, u, v, scaling_factor = 1)
x, y, u, v = tools.transform_coordinates(x, y, u, v)
tools.save(x, y, u, v, mask, 'test1.vec' )
tools.display_vector_field('test1.vec', scale=75, width=0.0035);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set up all the settings
Step2: Run the windef.py function called piv with these settings
Step3: Run the extended search area PIV for comparison
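Both pipelines rest on the same core operation: the displacement of an interrogation window is read off from the peak of a cross-correlation. A self-contained sketch with a synthetic window and a known shift (illustration of the principle only, not OpenPIV's internals):

```python
import numpy as np

# Window B is window A circularly shifted by a known displacement
rng = np.random.default_rng(1)
window_a = rng.random((32, 32))
true_shift = (3, 5)                            # (rows, cols)
window_b = np.roll(window_a, true_shift, axis=(0, 1))

# Circular cross-correlation via FFT; its argmax is the displacement
corr = np.fft.ifft2(np.fft.fft2(window_a).conj() * np.fft.fft2(window_b)).real
peak = np.unravel_index(np.argmax(corr), corr.shape)
```

`peak` recovers `true_shift`; real PIV code then refines this integer estimate with the sub-pixel fit selected via `settings.subpixel_method`.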
|
8,909
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
from sklearn.linear_model import Ridge
from matplotlib.pylab import rcParams
rcParams['figure.figsize'] = 12, 10
# Define input array with angles from 60° to 300° in radians
x = np.array([i*np.pi/180 for i in range(60,300,4)])
np.random.seed(0) # setting rand seed for reproducibility
y = np.sin(x) + np.random.normal(0,0.15,len(x))
data = pd.DataFrame(np.column_stack([x,y]), columns=['x','y'])
plt.plot(data['x'], data['y'], '.');
for i in range(2,16): # power of 1 is already there
colname = f'x_{i}' # new var will be the x_power
data[colname] = data['x']**i
data.head()
# print(data.head())
# from: https://www.analyticsvidhya.com/blog/2016/01/complete-tutorial-ridge-lasso-regression-python/#three
def ridge_regression(data, predictors, alpha, models_to_plot={}):
# Fit the model
ridgereg = Ridge(alpha=alpha, normalize=True)
ridgereg.fit(data[predictors], data['y'])
y_pred = ridgereg.predict(data[predictors])
# Check if a plot is to be made for the entered alpha
if alpha in models_to_plot:
plt.subplot(models_to_plot[alpha])
plt.tight_layout()
plt.plot(data['x'], y_pred)
plt.plot(data['x'], data['y'], '.')
plt.title(f'Plot for alpha: {alpha:.3g}')
# Return result in pre-defined format
rss = sum((y_pred - data['y'])**2)
ret = [rss]
ret.extend([ridgereg.intercept_])
ret.extend(ridgereg.coef_)
return ret
# Initialize predictors to be set of 15 powers of x
predictors = ['x']
predictors.extend([f'x_{i}' for i in range(2,16)])
# Set different values of alpha to be tested
alpha_ridge = [1e-15, 1e-10, 1e-8, 1e-4, 1e-3,1e-2, 1, 5, 10, 20]
# Initialize dataframe for storing coefficients
col = ['rss','intercept'] + [f'coef_x_{i}' for i in range(1,16)]
ind = [f'alpha_{alpha_ridge[i]:.2g}' for i in range(0,10)]
coef_matrix_ridge = pd.DataFrame(index=ind, columns=col)
models_to_plot = {1e-15:231, 1e-10:232, 1e-4:233, 1e-3:234, 1e-2:235, 5:236}
for i in range(10):
coef_matrix_ridge.iloc[i,] = ridge_regression(data, predictors,
alpha_ridge[i], models_to_plot)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data simulating a sine-curve between 60°-300° with random noise
Step2: Adding a column for each power up to 15
Step3: Generic function for ridge regression, similar to that defined for simple linear regression
Step4: Analyze result of ridge (L2) regression for 10 values of α
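Behind `Ridge` sits a closed-form solution, beta = (X^T X + alpha I)^(-1) X^T y. A small sketch (no intercept or normalisation; synthetic data and names local to this illustration) showing the shrinkage effect of alpha:

```python
import numpy as np

# Synthetic regression problem with known coefficients
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_beta = np.array([2.0, -1.0, 0.5])
y_vec = X @ true_beta + rng.normal(scale=0.1, size=100)

def ridge_closed_form(X, y, alpha):
    # beta = (X^T X + alpha * I)^(-1) X^T y
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ y)

beta_small = ridge_closed_form(X, y_vec, 1e-6)   # ~ ordinary least squares
beta_large = ridge_closed_form(X, y_vec, 1e4)    # coefficients shrunk toward 0
```

With a tiny alpha the solution is essentially ordinary least squares; with a huge alpha every coefficient is shrunk toward zero, which is exactly the trade-off swept over in the `alpha_ridge` grid.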
|
8,910
|
<ASSISTANT_TASK:>
Python Code:
import math
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
%matplotlib inline
#x = np.array(np.random.normal(0,1,size=(1000,1))).reshape(-1, 1)
#y = np.array(np.random.normal(0,1,size=(1000,1))).reshape(-1, 1)
x = np.array([0, 0, 1, 1, 2, 4, 2, 1, 2, 0]).reshape(-1, 1)
y = np.array([0,0,0,0,0,0,0,1, 1, 1, 2, 2, 2, 2, 3, 2, 0]).reshape(-1, 1)
plt.plot(x,'b')
plt.plot(y,'r')
plt.show()
def dtw(x,y, d = lambda i,j: np.linalg.norm(i - j,ord=2)):
M = len(x) # Number of elements in sequence x
N = len(y) # Number of elements in sequence y
C = np.zeros((M,N)) # The local cost matrix
D = np.zeros((M,N)) # The accumulative cost matrix
# First, let's fill out D (time complexity O(M*N)):
for m in range(len(x)):
for n in range(len(y)):
if (m == 0 and n == 0):
D[m][n] = C[m][n] = d(x[m],y[n])
elif m == 0 and n > 0:
C[m][n] = d(x[m],y[n])
D[m][n] = C[m][n] + D[m][n-1]
elif m > 0 and n == 0:
C[m][n] = d(x[m],y[n])
D[m][n] = C[m][n] + D[m-1][n]
else:
C[m][n] = d(x[m],y[n])
D[m][n] = C[m][n] + np.min([D[m-1][n], D[m][n-1], D[m-1][n-1]])
# Then, using D we can easily find the optimal path, starting from the end
p = [(M-1, N-1)] # This will store the a list with the indexes of D for the optimal path
m,n = p[-1]
while (m != 0 and n !=0):
options = [[D[max(m-1,0)][n], D[m][max(n-1,0)], D[max(m-1,0)][max(n-1,0)]],
[(max(m-1,0),n),(m,max(n-1,0)),(max(m-1,0),max(n-1,0))]]
p.append(options[1][np.argmin(options[0])])
m,n = p[-1]
pstar = np.asarray(p[::-1])
optimal_cost = D[-1][-1]
return optimal_cost, pstar, C, D
optimal_cost, pstar, local_cost, accumulative_cost = dtw(x,y)
print("The DTW distance is: {}".format(optimal_cost))
print("The optimal path is: \n{}".format(pstar))
def plotWarping(D,C,pstar):
fig1 = plt.figure()
plt.imshow(D.T,origin='lower',cmap='gray',interpolation='nearest')
plt.colorbar()
plt.title('Accumulative Cost Matrix')
plt.plot(pstar[:,0], pstar[:,1],'w-')
plt.show()
fig2 = plt.figure()
plt.imshow(C.T,origin='lower',cmap='gray',interpolation='nearest')
plt.colorbar()
plt.title('Local Cost Matrix')
plt.show()
return fig1, fig2
plotWarping(accumulative_cost,local_cost,pstar)
import pickle
pkf = open('data/loadCurves.pkl','rb')
data,loadCurves = pickle.load(pkf)
pkf.close()
y = loadCurves.loc[1].values.reshape(-1,1)
x = loadCurves.loc[365].values.reshape(-1,1)
plt.plot(x,'r')
plt.plot(y,'b')
plt.show()
Dstar, Pstar, C, D = dtw(x,y)
plotWarping(D,C,Pstar)
print("The DTW distance between them is: {}".format(Dstar))
#loadCurves = loadCurves.replace(np.inf,np.nan).fillna(0)
dtwMatrix = np.zeros((365,365))
for i in range(1,31):
for j in range(1,365):
        x = loadCurves.loc[i].values.reshape(-1,1)
        y = loadCurves.loc[j].values.reshape(-1,1)
dtwMatrix[i][j],_,_,_ = dtw(x,y)
plt.imshow(dtwMatrix,origin='bottom',cmap='gray')
plt.colorbar()
dtwMatrix[10][30:33]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's see what the path looks like on top of the accumulative cost matrix (and, because we can, let's also plot the local cost matrix)
Step2: Now let's have a bit of fun with this new function.
Step3: First, let's try comparing the first and last day of the dataset.
Step4: But why don't we just calculate that distance across all possible pairs?
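Before trusting the pairwise sweep, the recurrence is easy to sanity-check with a compact, independently written DTW (same dynamic-programming recurrence as `dtw` above, with a scalar absolute-difference cost):

```python
import numpy as np

def dtw_distance(x, y):
    # Classic DTW DP with an inf-padded boundary row/column
    M, N = len(x), len(y)
    D = np.full((M + 1, N + 1), np.inf)
    D[0, 0] = 0.0
    for m in range(1, M + 1):
        for n in range(1, N + 1):
            cost = abs(x[m - 1] - y[n - 1])
            D[m, n] = cost + min(D[m - 1, n], D[m, n - 1], D[m - 1, n - 1])
    return D[M, N]

d_same = dtw_distance([1, 2, 3], [1, 2, 3])        # identical sequences
d_warp = dtw_distance([1, 2, 3], [1, 1, 2, 2, 3])  # time-warped copy
d_off = dtw_distance([0, 0, 0], [1, 1, 1])         # constant unit offset
```

A time-warped copy of a sequence has distance 0, while a constant unit offset accumulates one unit of cost per aligned step.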
|
8,911
|
<ASSISTANT_TASK:>
Python Code:
from sympy import *
init_printing()
from sympsi import *
from sympsi.boson import *
from sympsi.pauli import *
# CPW, qubit and NR energies
omega_r, omega_q, omega_nr = symbols("omega_r, omega_q, omega_{NR}")
# Coupling CPW-qubit, NR_qubit
g, L, chi, eps = symbols("g, lambda, chi, epsilon")
# Detuning
# Drives and detunnings
Delta_d, Delta_s, Delta_p = symbols(" Delta_d, Delta_s, Delta_p ")
A, B, C = symbols("A,B,C") # Electric field amplitude
omega_d, omega_s, omega_p = symbols("omega_d, omega_s, omega_p") # drive frequencies
# Detunning CPW-qubit, NR-qubit
Delta_CPW, Delta_NR = symbols("Delta_{CPW},Delta_{NR}")
# auxilary variables
y, x, t, Hsym = symbols("x, y, t, H ")
Delta_SP, Delta_SD = symbols("Delta_{SP},Delta_{SD}")
# omega_r, omega_q, g, Delta_d, Delta_s, t, x, chi, Hsym = symbols("omega_r, omega_q, g, Delta_d, Delta_s, t, x, chi, H")
# A, B, C = symbols("A,B,C") # Electric field amplitude
# omega_d, omega_s = symbols("omega_d, omega_s") #
# omega_nr, L = symbols("omega_{NR},lambda")
# Delta,Delta_t = symbols("Delta,Delta_t")
# y, omega_t = symbols("y, omega_t")
sx, sy, sz, sm, sp = SigmaX(), SigmaY(), SigmaZ(), SigmaMinus(), SigmaPlus()
a = BosonOp("a")
b = BosonOp("b")
H = omega_r * Dagger(a) * a + omega_q/2 * sz + omega_nr * Dagger(b) * b
H_int = sx * (g * (a + Dagger(a)) + L * (b + Dagger(b)))
H_drive_r = A * (exp(I*omega_d*t)*a + exp(-I*omega_d*t)*Dagger(a))
H_drive_q = B * (exp(I*omega_s*t)*sm + exp(-I*omega_s*t)*sp)
H_drive_NR = C * (exp(I*omega_p*t)*b + exp(-I*omega_p*t)*Dagger(b))
H_total = H+ H_int + H_drive_r + H_drive_q + H_drive_NR
Eq(Hsym,H_total)
U = exp(I * omega_r * t * Dagger(a)*a)
# U
H1 = hamiltonian_transformation(U, H_total.expand(), independent=True)
# H1
U = exp(I * omega_nr * t * Dagger(b)*b)
# U
H1a = hamiltonian_transformation(U, H1.expand(), independent=True)
# H1a
U = exp(I * omega_q * t * sp * sm)
# U
H2 = hamiltonian_transformation(U, H1a.expand())
H2 = H2.subs(sx, sm + sp).expand()
H2 = powsimp(H2)
# H2
# H2
# trick to simplify exponents
def simplify_exp(e):
if isinstance(e, exp):
return exp(simplify(e.exp.expand()))
if isinstance(e, (Add, Mul)):
return type(e)(*(simplify_exp(arg) for arg in e.args))
return e
H3a = simplify_exp(H2).subs(-omega_r + omega_q, Delta_CPW)
H3 = simplify_exp(H3a).subs(-omega_nr + omega_q, Delta_NR)
# H3
H4 = drop_terms_containing(H3, [exp( I * (omega_q + omega_r) * t),
exp(-I * (omega_q + omega_r) * t),
exp(I * (omega_q + omega_nr) * t),
exp(-I * (omega_q + omega_nr) * t)],
)
H4 = drop_c_number_terms(H4.expand())
Eq(Hsym, H4)
U = exp(-I * omega_r * t * Dagger(a) * a)
H5 = hamiltonian_transformation(U, H4.expand(), independent=True)
# H5
U = exp(-I * omega_nr * t * Dagger(b) * b)
H5a = hamiltonian_transformation(U, H5.expand(), independent=True)
# H5a
U = exp(-I * omega_q * t * sp * sm)
H6 = hamiltonian_transformation(U, H5a.expand())
# H6
H7a = simplify_exp(H6).subs(Delta_CPW, omega_q - omega_r)
H7 = simplify_exp(H7a).subs(Delta_NR, omega_q - omega_nr)
H7 = simplify_exp(powsimp(H7)).expand()
H7 = drop_c_number_terms(H7)
H = collect(H7, [A,B, C,g,L])
Eq(Hsym, H)
U = exp(I * Dagger(a) * a * omega_d * t)
# U
H1 = hamiltonian_transformation(U, H, independent=True)
# H1
H2 = drop_terms_containing(H1.expand(), [exp(-2*I*omega_d*t), exp(2*I*omega_d*t)])
H2 = H2.collect([A,B,Dagger(a)*a,g])
# H2
# Eq(Symbol("H_{rwa}"), H2)
H3 = H2.subs(-omega_d+omega_r, Delta_d).expand()
H3 = H3.collect([A,B,C,Dagger(a)*a,Dagger(b)*b,g,L])
# H3
H3
U = exp(I * Dagger(b) * b * omega_p * t)
H3a = hamiltonian_transformation(U, H3, independent=True)
H3b = drop_terms_containing(H3a.expand(), [exp(-2*I*omega_p*t), exp(2*I*omega_p*t)])
H3b = H3b.collect([A,B,C,L,Dagger(a)*a,Dagger(b)*b,g])
H3c = H3b.subs(-omega_p+omega_nr, Delta_p).expand()
H3c = H3c.collect([A,B,C,L,Dagger(a)*a,Dagger(b)*b,g])
H3c
H3 = H3c.subs(sz,2*sp*sm).expand()
# H3
H3 = H3.collect([A,B,C,L,Dagger(a)*a,Dagger(b)*b,g])
# H3
# H3
U = exp(I * sp*sm* omega_s * t)
# U
H4 = hamiltonian_transformation(U, H3, independent=True)
H4 =H4.expand()
H4
H4 = H4.subs(omega_q*sp*sm,omega_q*sz/2)
# H4
H4 = H4.subs(-omega_s*sp*sm,-omega_s*sz/2)
# H4 = H4.subs(-omega_s*sp*sm,-omega_s*sz/2)
H4= H4.collect([A,B,C,L,Dagger(a)*a,Dagger(b)*b,g,sz])
H4
H4 = H4.subs(omega_q/2 - omega_s/2,Delta_s/2).expand()
H4= H4.collect([A,B,C,L,Dagger(a)*a,Dagger(b)*b,g])
H4
H4 = simplify_exp(powsimp(H4))
H4 = H4.subs(omega_p - omega_s,-Delta_SP).expand()
H4 = simplify_exp(powsimp(H4))
H4 = H4.subs(omega_d - omega_s,-Delta_SD).expand()
H5 = H4.collect([A,B,C,L,Dagger(a)*a,Dagger(b)*b,g])
# H5
H5 = H5.expand()
# H5
H6 = simplify_exp(powsimp(H5))
# H6 = simplify_exp(H6).subs(omega_t - omega_s, Delta_t)
H = H6.collect([A,B,C,Dagger(a)*a,g,sz,L])
Eq(Hsym, H)
U = exp(x * (a * sp).expand())
# U
#H1 = unitary_transformation(U, H, allinone=True, expansion_search=False, N=3).expand()
#H1 = qsimplify(H1)
#H1
H1 = hamiltonian_transformation(U, H, expansion_search=False, N=3).expand()
H1a = qsimplify(H1)
# H1a
U = exp(y * (b * sp).expand())
# U
H1a = hamiltonian_transformation(U, H1a, expansion_search=False, N=3).expand()
H1 = qsimplify(H1a)
# H1
U = exp(-x * (Dagger(a) * sm).expand())
# U
H2a = hamiltonian_transformation(U, H1, expansion_search=False, N=3).expand()
H2a = qsimplify(H2a)
# H2a
U = exp(-y * (Dagger(b) * sm).expand())
# U
H2 = hamiltonian_transformation(U, H2a, expansion_search=False, N=3).expand()
H2 = qsimplify(H2)
# H2
# H3 = drop_terms_containing(H2.expand(), [x**2,x**3, x**4,x**5,x**6,x**7, y**2,y**3, y**4,y**5,y**6,y**7])
H3 = drop_terms_containing(H2.expand(), [x**3, x**4,x**5,x**6,x**7, y**3, y**4,y**5,y**6,y**7])
# H3
H4 = H3.subs(x, g/Delta_CPW)
H4a = H4.subs(y, L/Delta_NR)
# H4
H4b = qsimplify(H4a)
H5 = drop_c_number_terms(H4b)
# H6a = drop_terms_containing(H5.expand(), [exp(I*Delta_SP*t), exp(-I*Delta_SP*t)])
# H6 = drop_terms_containing(H6a.expand(), [exp(I*Delta_SD*t), exp(-I*Delta_SD*t)])
# H6
H6 = collect(H5, [A,B,C,Dagger(a)*a,Dagger(b)*b,sz,g,L])
# H6
U = exp(I * omega_r * t * Dagger(a) * a)
H7 = hamiltonian_transformation(U, H6.expand()).expand()
H7a = qsimplify(H7)
U = exp(I * omega_nr * t * Dagger(b) * b)
H7b = hamiltonian_transformation(U, H7a).expand()
H7b = qsimplify(H7b)
U = exp(I * omega_q * t * sp * sm)
H7c = hamiltonian_transformation(U, H7b, expansion_search=False, N=3).expand()
H7c = qsimplify(H7c)
H8 = drop_terms_containing(H7c, [exp(I * omega_r * t), exp(-I * omega_r * t),
exp(I * omega_nr * t), exp(-I * omega_nr * t),
exp(I * omega_q * t), exp(-I * omega_q * t)])
U = exp(- I * omega_r * t * Dagger(a) * a)
H9 = hamiltonian_transformation(U, H8.expand(), expansion_search=False, N=3).expand()
H9 = qsimplify(H9)
U = exp(-I * omega_nr * t * Dagger(b) * b)
H10 = hamiltonian_transformation(U, H9, expansion_search=False, N=3).expand()
H10 = qsimplify(H10)
U = exp(-I * omega_q * t * sp * sm)
H11 = hamiltonian_transformation(U, H10, expansion_search=False, N=3).expand()
H11 = qsimplify(H11)
# H8 = qsimplify(H7.expand())
# H9 = collect(H8, [A,B,C,Dagger(a)*a,g,sz,L])
# # H9
# H10 = qsimplify(H9)
H12 = collect(H11, [A,B,C,Dagger(a)*a,g,sz,L])
# H10
H12
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The Jaynes-Cummings model
Step2: Unitary transformation to interaction picture
Step3: Nanoresonator - b
Step4: Qubit - Sigma_z
Step5: Now, in the rotating-wave approximation we can drop the fast oscillating terms containing the factors
Step6: This is the interaction term of in the Jaynes-Cumming model in the interaction picture. If we transform back to the Schrödinger picture we have
Step7: Linearized interaction
Step8: We can now perform a rotating-wave approximation (RWA) by eliminating all terms that rotate with frequencies $2\omega_d$
Step9: Introduce the detuning $\Delta_d = \omega_r - \omega_d$
Step10: Second we apply the unitary transformation $U = e^{i \omega_p b^\dagger b t}$
Step11: Substitute $\sigma_z = 2\sigma_+\sigma_-$
Step12: Second we apply the unitary transformation $U = e^{i \omega_s \sigma_+ \sigma_- t}$
Step13: Substitute $ \sigma_+\sigma_-=\dfrac{\sigma_z}{2}$
Step14: Introduce the detuning $\Delta_{SP} = \omega_S - \omega_D$ and $\Delta_{SD} = \omega_S - \omega_D$
Step15: This is the Jaynes-Cummings model given above, and we have now seen that it is obtained from the dipole interaction Hamiltonian through the rotating wave approximation.
Step16: This is the Hamiltonian of the Jaynes-Cummings model in the dispersive regime. It can be interpreted as the resonator having a qubit-state-dependent frequency shift, or alternatively that the qubit feels a resonator-photon-number dependent Stark shift.
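The dispersive shift can also be checked numerically, independently of the symbolic derivation: in the single-excitation manifold of the plain Jaynes-Cummings model, the exact dressed level differs from the bare qubit level by approximately g^2/Delta when g is much smaller than Delta (the numbers below are invented for illustration):

```python
import numpy as np

# 2x2 block spanned by |e,0> and |g,1>, with |g,1> taken as the zero of energy
g_num = 0.05       # qubit-resonator coupling
Delta_num = 1.0    # qubit-resonator detuning
H_block = np.array([[Delta_num, g_num],
                    [g_num,     0.0]])

E_exact = np.linalg.eigvalsh(H_block).max()   # dressed |e,0>-like level
chi_disp = g_num**2 / Delta_num               # dispersive shift g^2/Delta
```

`E_exact - Delta_num` and `chi_disp` agree to order (g/Delta)^4, which is the content of the qubit-state-dependent frequency shift read off from the final Hamiltonian.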
|
8,912
|
<ASSISTANT_TASK:>
Python Code:
# Personality Embeddings: What are you like?
jay = [-0.4, 0.8, 0.5, -0.2, 0.3]
john = [-0.3, 0.2, 0.3, -0.4, 0.9]
mike = [-0.5, -0.4, -0.2, 0.7, -0.1]
from numpy import dot
from numpy.linalg import norm
def cos_sim(a, b):
return dot(a, b)/(norm(a)*norm(b))
cos_sim([1, 0, -1], [-1,-1, 0])
from sklearn.metrics.pairwise import cosine_similarity
cosine_similarity([[1, 0, -1]], [[-1,-1, 0]])
from scipy import spatial
# spatial.distance.cosine computes
# the Cosine distance between 1-D arrays.
1 - spatial.distance.cosine([1, 0, -1], [-1,-1, 0])
cos_sim(jay, john)
cos_sim(jay, mike)
import gensim
# Load Google's pre-trained Word2Vec model.
filepath = '/Users/datalab/bigdata/GoogleNews-vectors-negative300.bin'
model = gensim.models.KeyedVectors.load_word2vec_format(filepath, binary=True)
model['woman'][:10]
model.most_similar('woman')
model.similarity('woman', 'man')
cos_sim(model['woman'], model['man'])
model.most_similar(positive=['woman', 'king'], negative=['man'], topn=5)
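The same vector arithmetic can be seen with hand-made toy embeddings (purely for intuition; the 2-d coordinates below are invented, not word2vec output):

```python
import numpy as np

# Axis 0 ~ "gender", axis 1 ~ "royalty" (invented toy coordinates)
toy = {
    'man':   np.array([ 1.0, 0.0]),
    'woman': np.array([-1.0, 0.0]),
    'king':  np.array([ 1.0, 1.0]),
    'queen': np.array([-1.0, 1.0]),
    'apple': np.array([ 0.0, -1.0]),
}
query = toy['king'] - toy['man'] + toy['woman']   # = [-1, 1]

def toy_cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Nearest remaining word to the analogy query by cosine similarity
best = max((w for w in toy if w not in ('king', 'man', 'woman')),
           key=lambda w: toy_cos(toy[w], query))
```

`best` comes out as 'queen', mirroring `model.most_similar(positive=['woman', 'king'], negative=['man'])`.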
# see http://pytorch.org/tutorials/beginner/nlp/word_embeddings_tutorial.html
import torch
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import matplotlib.pyplot as plt  # needed for the loss plots below
torch.manual_seed(1)
text = """We are about to study the idea of a computational process.
Computational processes are abstract beings that inhabit computers.
As they evolve, processes manipulate other abstract things called data.
The evolution of a process is directed by a pattern of rules
called a program. People create programs to direct processes. In effect,
we conjure the spirits of the computer with our spells."""
text = text.replace(',', '').replace('.', '').lower().split()
# By deriving a set from `raw_text`, we deduplicate the array
vocab = set(text)
vocab_size = len(vocab)
print('vocab_size:', vocab_size)
w2i = {w: i for i, w in enumerate(vocab)}
i2w = {i: w for i, w in enumerate(vocab)}
# context window size is two
def create_cbow_dataset(text):
data = []
for i in range(2, len(text) - 2):
context = [text[i - 2], text[i - 1],
text[i + 1], text[i + 2]]
target = text[i]
data.append((context, target))
return data
cbow_train = create_cbow_dataset(text)
print('cbow sample', cbow_train[0])
def create_skipgram_dataset(text):
import random
data = []
for i in range(2, len(text) - 2):
data.append((text[i], text[i-2], 1))
data.append((text[i], text[i-1], 1))
data.append((text[i], text[i+1], 1))
data.append((text[i], text[i+2], 1))
# negative sampling
for _ in range(4):
if random.random() < 0.5 or i >= len(text) - 3:
rand_id = random.randint(0, i-1)
else:
rand_id = random.randint(i+3, len(text)-1)
data.append((text[i], text[rand_id], 0))
return data
skipgram_train = create_skipgram_dataset(text)
print('skipgram sample', skipgram_train[0])
class CBOW(nn.Module):
def __init__(self, vocab_size, embd_size, context_size, hidden_size):
super(CBOW, self).__init__()
self.embeddings = nn.Embedding(vocab_size, embd_size)
self.linear1 = nn.Linear(2*context_size*embd_size, hidden_size)
self.linear2 = nn.Linear(hidden_size, vocab_size)
def forward(self, inputs):
embedded = self.embeddings(inputs).view((1, -1))
hid = F.relu(self.linear1(embedded))
out = self.linear2(hid)
log_probs = F.log_softmax(out, dim = 1)
return log_probs
def extract(self, inputs):
embeds = self.embeddings(inputs)
return embeds
class SkipGram(nn.Module):
def __init__(self, vocab_size, embd_size):
super(SkipGram, self).__init__()
self.embeddings = nn.Embedding(vocab_size, embd_size)
def forward(self, focus, context):
embed_focus = self.embeddings(focus).view((1, -1)) # input
embed_ctx = self.embeddings(context).view((1, -1)) # output
score = torch.mm(embed_focus, torch.t(embed_ctx)) # input*output
log_probs = F.logsigmoid(score) # sigmoid
return log_probs
def extract(self, focus):
embed_focus = self.embeddings(focus)
return embed_focus
embd_size = 100
learning_rate = 0.001
n_epoch = 30
CONTEXT_SIZE = 2 # 2 words to the left, 2 to the right
def train_cbow():
hidden_size = 64
losses = []
loss_fn = nn.NLLLoss()
model = CBOW(vocab_size, embd_size, CONTEXT_SIZE, hidden_size)
print(model)
optimizer = optim.SGD(model.parameters(), lr=learning_rate)
for epoch in range(n_epoch):
total_loss = .0
for context, target in cbow_train:
ctx_idxs = [w2i[w] for w in context]
ctx_var = Variable(torch.LongTensor(ctx_idxs))
model.zero_grad()
log_probs = model(ctx_var)
loss = loss_fn(log_probs, Variable(torch.LongTensor([w2i[target]])))
loss.backward()
optimizer.step()
total_loss += loss.data.item()
losses.append(total_loss)
return model, losses
def train_skipgram():
losses = []
loss_fn = nn.MSELoss()
model = SkipGram(vocab_size, embd_size)
print(model)
optimizer = optim.SGD(model.parameters(), lr=learning_rate)
for epoch in range(n_epoch):
total_loss = .0
for in_w, out_w, target in skipgram_train:
in_w_var = Variable(torch.LongTensor([w2i[in_w]]))
out_w_var = Variable(torch.LongTensor([w2i[out_w]]))
model.zero_grad()
log_probs = model(in_w_var, out_w_var)
loss = loss_fn(log_probs[0], Variable(torch.Tensor([target])))
loss.backward()
optimizer.step()
total_loss += loss.data.item()
losses.append(total_loss)
return model, losses
cbow_model, cbow_losses = train_cbow()
sg_model, sg_losses = train_skipgram()
plt.figure(figsize= (10, 4))
plt.subplot(121)
plt.plot(range(n_epoch), cbow_losses, 'r-o', label = 'CBOW Losses')
plt.legend()
plt.subplot(122)
plt.plot(range(n_epoch), sg_losses, 'g-s', label = 'SkipGram Losses')
plt.legend()
plt.tight_layout()
cbow_vec = cbow_model.extract(Variable(torch.LongTensor([v for v in w2i.values()])))
cbow_vec = cbow_vec.data.numpy()
len(cbow_vec[0])
sg_vec = sg_model.extract(Variable(torch.LongTensor([v for v in w2i.values()])))
sg_vec = sg_vec.data.numpy()
len(sg_vec[0])
# Reduce dimensionality with PCA
from sklearn.decomposition import PCA
X_reduced = PCA(n_components=2).fit_transform(sg_vec)
# Plot the 2-D projection of all word vectors
import matplotlib.pyplot as plt
import matplotlib
fig = plt.figure(figsize = (20, 10))
ax = fig.gca()
ax.set_facecolor('black')
ax.plot(X_reduced[:, 0], X_reduced[:, 1], '.', markersize = 1, alpha = 0.4, color = 'white')
# Plot the vectors of a few selected words
words = list(w2i.keys())
# Set a Chinese font, otherwise Chinese characters cannot be displayed on the figure
for w in words:
if w in w2i:
ind = w2i[w]
xy = X_reduced[ind]
plt.plot(xy[0], xy[1], '.', alpha =1, color = 'red')
plt.text(xy[0], xy[1], w, alpha = 1, color = 'white', fontsize = 20)
with open("../data/3body.txt", 'r', encoding='utf-8') as f:
text = str(f.read())
import jieba, re
temp = jieba.lcut(text)
words = []
for i in temp:
    # Filter out all punctuation
i = re.sub("[\s+\.\!\/_,$%^*(+\"\'””《》]+|[+——!,。?、~@#¥%……&*():]+", "", i)
if len(i) > 0:
words.append(i)
print(len(words))
text[:100]
print(*words[:50])
trigrams = [([words[i], words[i + 1]], words[i + 2]) for i in range(len(words) - 2)]
# Print the first three elements to check
print(trigrams[:3])
# Build the vocabulary
vocab = set(words)
print(len(vocab))
word_to_idx = {i:[k, 0] for k, i in enumerate(vocab)} # word -> [index, count]
idx_to_word = {k:i for k, i in enumerate(vocab)} # index -> word
for w in words:
word_to_idx[w][1] +=1
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.autograd import Variable
import torch
class NGram(nn.Module):
def __init__(self, vocab_size, embedding_dim, context_size):
super(NGram, self).__init__()
        self.embeddings = nn.Embedding(vocab_size, embedding_dim) # embedding layer
        self.linear1 = nn.Linear(context_size * embedding_dim, 128) # linear layer
        self.linear2 = nn.Linear(128, vocab_size) # linear layer
def forward(self, inputs):
        # Embedding lookup: internally this maps the input word indices to one-hot vectors, then applies a linear layer to obtain the word vectors
embeds = self.embeddings(inputs).view(1, -1)
        # linear layer + ReLU
out = F.relu(self.linear1(embeds))
        # linear layer followed by log-softmax
out = self.linear2(out)
log_probs = F.log_softmax(out, dim = 1)
return log_probs
def extract(self, inputs):
embeds = self.embeddings(inputs)
return embeds
losses = [] # record the loss at each step
criterion = nn.NLLLoss() # negative log-likelihood loss (a common objective for multi-class classification)
model = NGram(len(vocab), 10, 2) # NGram model: embedding dimension 10, context window size N = 2
optimizer = optim.SGD(model.parameters(), lr=0.001) # stochastic gradient descent optimizer
# train for 100 epochs
for epoch in range(100):
total_loss = torch.Tensor([0])
for context, target in trigrams:
        # Prepare the model input: map the context words to their indices
context_idxs = [word_to_idx[w][0] for w in context]
        # Wrap as a PyTorch Variable
context_var = Variable(torch.LongTensor(context_idxs))
        # Zero the gradients: PyTorch accumulates gradients when backward() is called, so they must be cleared at every step
optimizer.zero_grad()
        # Forward pass: compute the log-probability of each candidate word
log_probs = model(context_var)
        # Compute the loss; the target word must also be mapped to its index and wrapped as a Variable
loss = criterion(log_probs, Variable(torch.LongTensor([word_to_idx[target][0]])))
        # Backpropagate the gradients
loss.backward()
        # Update the network parameters
optimizer.step()
        # Accumulate the loss
total_loss += loss.data
losses.append(total_loss)
    print('Epoch {}, loss: {:.2f}'.format(epoch, total_loss.numpy()[0]))
# Extract the vector of each word from the trained model
vec = model.extract(Variable(torch.LongTensor([v[0] for v in word_to_idx.values()])))
vec = vec.data.numpy()
# Reduce dimensionality with PCA
from sklearn.decomposition import PCA
X_reduced = PCA(n_components=2).fit_transform(vec)
# Plot the 2-D projection of all word vectors
import matplotlib.pyplot as plt
import matplotlib
fig = plt.figure(figsize = (20, 10))
ax = fig.gca()
ax.set_facecolor('black')
ax.plot(X_reduced[:, 0], X_reduced[:, 1], '.', markersize = 1, alpha = 0.4, color = 'white')
# Plot the vectors of a few selected words
words = ['智子', '地球', '三体', '质子', '科学', '世界', '文明', '太空', '加速器', '平面', '宇宙', '信息']
# Set a Chinese font, otherwise Chinese characters cannot be displayed on the figure
zhfont1 = matplotlib.font_manager.FontProperties(fname='/Library/Fonts/华文仿宋.ttf', size = 35)
for w in words:
if w in word_to_idx:
ind = word_to_idx[w][0]
xy = X_reduced[ind]
plt.plot(xy[0], xy[1], '.', alpha =1, color = 'red')
plt.text(xy[0], xy[1], w, fontproperties = zhfont1, alpha = 1, color = 'white')
# Define a function that computes cosine similarity
import numpy as np
def cos_similarity(vec1, vec2):
norm1 = np.linalg.norm(vec1)
norm2 = np.linalg.norm(vec2)
norm = norm1 * norm2
dot = np.dot(vec1, vec2)
result = dot / norm if norm > 0 else 0
return result
# Among all word vectors, find those closest to the target word, sorted by similarity
def find_most_similar(word, vectors, word_idx):
vector = vectors[word_to_idx[word][0]]
simi = [[cos_similarity(vector, vectors[num]), key] for num, key in enumerate(word_idx.keys())]
sort = sorted(simi)[::-1]
words = [i[1] for i in sort]
return words
# Words closest to '智子' (Sophon)
find_most_similar('智子', vec, word_to_idx)[:10]
import gensim as gensim
from gensim.models import Word2Vec
from gensim.models.keyedvectors import KeyedVectors
from gensim.models.word2vec import LineSentence
f = open("../data/三体.txt", 'r', encoding='utf-8')
lines = []
for line in f:
temp = jieba.lcut(line)
words = []
for i in temp:
        # Filter out all punctuation
i = re.sub("[\s+\.\!\/_,$%^*(+\"\'””《》]+|[+——!,。?、~@#¥%……&*():;‘]+", "", i)
if len(i) > 0:
words.append(i)
if len(words) > 0:
lines.append(words)
# Train with gensim's Word2Vec algorithm.
# Parameters: size: dimensionality of the embedded word vectors; window: width of the context; min_count: minimum word-frequency threshold for a word to be considered
model = Word2Vec(lines, size = 20, window = 2 , min_count = 0)
model.wv.most_similar('三体', topn = 10)
# Project the word vectors into 2-D space
rawWordVec = []
word2ind = {}
for i, w in enumerate(model.wv.vocab):
rawWordVec.append(model[w])
word2ind[w] = i
rawWordVec = np.array(rawWordVec)
X_reduced = PCA(n_components=2).fit_transform(rawWordVec)
# Draw the "starry sky" plot
# Plot the 2-D projection of all word vectors
fig = plt.figure(figsize = (15, 10))
ax = fig.gca()
ax.set_facecolor('black')
ax.plot(X_reduced[:, 0], X_reduced[:, 1], '.', markersize = 1, alpha = 0.3, color = 'white')
# Plot the vectors of a few selected words
words = ['智子', '地球', '三体', '质子', '科学', '世界', '文明', '太空', '加速器', '平面', '宇宙', '进展','的']
# Set a Chinese font, otherwise Chinese characters cannot be displayed on the figure
zhfont1 = matplotlib.font_manager.FontProperties(fname='/Library/Fonts/华文仿宋.ttf', size=26)
for w in words:
if w in word2ind:
ind = word2ind[w]
xy = X_reduced[ind]
plt.plot(xy[0], xy[1], '.', alpha =1, color = 'red')
plt.text(xy[0], xy[1], w, fontproperties = zhfont1, alpha = 1, color = 'yellow')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Cosine Similarity
Step2: $$CosineDistance = 1- CosineSimilarity$$
Step3: Cosine similarity works for any number of dimensions.
Step5: $$King - Queen = Man - Woman$$
Step6: torch.mm performs a matrix multiplication of the two input matrices
Step7: NGram word vector model
Step8: Build the NGram neural network model (a three-layer network)
Step9: 12m 24s!!!
Step10: Gensim Word2vec
|
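The cosine similarity and cosine distance referenced in Steps 1–2 can be sketched with plain NumPy. This is a minimal, self-contained illustration (the function names are my own, not from the notebook above):

```python
import numpy as np

def cosine_similarity(vec1, vec2):
    # dot product normalized by the product of the vector magnitudes
    norm = np.linalg.norm(vec1) * np.linalg.norm(vec2)
    return np.dot(vec1, vec2) / norm if norm > 0 else 0.0

def cosine_distance(vec1, vec2):
    # cosine distance is one minus cosine similarity
    return 1.0 - cosine_similarity(vec1, vec2)

a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])
print(cosine_similarity(a, a))  # parallel vectors -> 1.0
print(cosine_similarity(a, b))  # orthogonal vectors -> 0.0
print(cosine_distance(a, b))    # 1 - 0 = 1.0
```

As Step3 notes, the same formula works for vectors of any dimensionality.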
8,913
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'bnu', 'sandbox-1', 'ocnbgchem')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Type
Step7: 1.4. Elemental Stoichiometry
Step8: 1.5. Elemental Stoichiometry Details
Step9: 1.6. Prognostic Variables
Step10: 1.7. Diagnostic Variables
Step11: 1.8. Damping
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Step13: 2.2. Timestep If Not From Ocean
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Step15: 3.2. Timestep If Not From Ocean
Step16: 4. Key Properties --> Transport Scheme
Step17: 4.2. Scheme
Step18: 4.3. Use Different Scheme
Step19: 5. Key Properties --> Boundary Forcing
Step20: 5.2. River Input
Step21: 5.3. Sediments From Boundary Conditions
Step22: 5.4. Sediments From Explicit Model
Step23: 6. Key Properties --> Gas Exchange
Step24: 6.2. CO2 Exchange Type
Step25: 6.3. O2 Exchange Present
Step26: 6.4. O2 Exchange Type
Step27: 6.5. DMS Exchange Present
Step28: 6.6. DMS Exchange Type
Step29: 6.7. N2 Exchange Present
Step30: 6.8. N2 Exchange Type
Step31: 6.9. N2O Exchange Present
Step32: 6.10. N2O Exchange Type
Step33: 6.11. CFC11 Exchange Present
Step34: 6.12. CFC11 Exchange Type
Step35: 6.13. CFC12 Exchange Present
Step36: 6.14. CFC12 Exchange Type
Step37: 6.15. SF6 Exchange Present
Step38: 6.16. SF6 Exchange Type
Step39: 6.17. 13CO2 Exchange Present
Step40: 6.18. 13CO2 Exchange Type
Step41: 6.19. 14CO2 Exchange Present
Step42: 6.20. 14CO2 Exchange Type
Step43: 6.21. Other Gases
Step44: 7. Key Properties --> Carbon Chemistry
Step45: 7.2. PH Scale
Step46: 7.3. Constants If Not OMIP
Step47: 8. Tracers
Step48: 8.2. Sulfur Cycle Present
Step49: 8.3. Nutrients Present
Step50: 8.4. Nitrous Species If N
Step51: 8.5. Nitrous Processes If N
Step52: 9. Tracers --> Ecosystem
Step53: 9.2. Upper Trophic Levels Treatment
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Step55: 10.2. Pft
Step56: 10.3. Size Classes
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Step58: 11.2. Size Classes
Step59: 12. Tracers --> Disolved Organic Matter
Step60: 12.2. Lability
Step61: 13. Tracers --> Particules
Step62: 13.2. Types If Prognostic
Step63: 13.3. Size If Prognostic
Step64: 13.4. Size If Discrete
Step65: 13.5. Sinking Speed If Prognostic
Step66: 14. Tracers --> Dic Alkalinity
Step67: 14.2. Abiotic Carbon
Step68: 14.3. Alkalinity
|
8,914
|
<ASSISTANT_TASK:>
Python Code:
import tweepy
consumer_key = ''
consumer_secret = ''
access_token = ''
access_token_secret = ''
autorizar = tweepy.OAuthHandler(consumer_key, consumer_secret)
autorizar.set_access_token(access_token, access_token_secret)
api = tweepy.API(autorizar)
print(api)
api.update_status(status="Big Data Python FIA")
info_tweet = api.update_status(status="O que é big data?")
print(type(info_tweet))
print(dir(info_tweet))
info_tweet.text # Tweet message
info_tweet.id # Tweet id
info_tweet.created_at # Tweet creation date
info_tweet.source # Source of the tweet
info_tweet.lang # Language of the tweet
print(dir(info_tweet.user))
info_tweet.user.created_at # User account creation date
info_tweet.user.location # User location
info_tweet.user.friends_count # Number of friends
info_tweet.user.followers_count # Number of followers
info_tweet.user.name # User profile name.
info_tweet.user.screen_name # User screen name: @prof_dinomagri
info_tweet.user.id # User id.
info_tweet.user.statuses_count # Number of tweets posted.
api.destroy_status(id=info_tweet.id_str)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: With the keys and access tokens, we will create the authentication object and set the access token.
Step2: With the authorization created, we pass the access credentials to the Tweepy API. This gives us access to the methods available in the API.
Step3: Saving the return value of the post
Step4: Information about the user who posted the tweet.
Step5: Removing the tweet
|
8,915
|
<ASSISTANT_TASK:>
Python Code:
# code for loading the format for the notebook
import os
# path : store the current path to convert back to it later
path = os.getcwd()
os.chdir(os.path.join('..', '..', 'notebook_format'))
from formats import load_style
load_style(plot_style=False)
os.chdir(path)
# 1. magic for inline plot
# 2. magic to print version
# 3. magic so that the notebook will reload external python modules
# 4. magic to enable retina (high resolution) plots
# https://gist.github.com/minrk/3301035
%matplotlib inline
%load_ext watermark
%load_ext autoreload
%autoreload 2
%config InlineBackend.figure_format='retina'
import numpy as np
import pandas as pd
import seaborn as sns
import scipy.stats as stats
import matplotlib.pyplot as plt
from sklearn.metrics import roc_auc_score
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
%watermark -a 'Ethen' -d -t -v -p numpy,scipy,pandas,sklearn,matplotlib,seaborn
# we'll only be working with a subset of the variables in the raw dataset,
# feel free to experiment with more
AGE = 'age'
MEANBP1 = 'meanbp1'
CAT1 = 'cat1'
SEX = 'sex'
DEATH = 'death' # outcome variable in the our raw data
SWANG1 = 'swang1' # treatment variable in our raw data
TREATMENT = 'treatment'
num_cols = [AGE, MEANBP1]
cat_cols = [CAT1, SEX, DEATH, SWANG1]
input_path = 'data/rhc.csv'
dtype = {col: 'category' for col in cat_cols}
df = pd.read_csv(input_path, usecols=num_cols + cat_cols, dtype=dtype)
print(df.shape)
df.head()
# replace this column with treatment yes or no
df[SWANG1].value_counts()
# replace these values with shorter names
df[CAT1].value_counts()
cat1_col_mapping = {
'ARF': 'arf',
'MOSF w/Sepsis': 'mosf_sepsis',
'COPD': 'copd',
'CHF': 'chf',
'Coma': 'coma',
'MOSF w/Malignancy': 'mosf',
'Cirrhosis': 'cirrhosis',
'Lung Cancer': 'lung_cancer',
'Colon Cancer': 'colon_cancer'
}
df[CAT1] = df[CAT1].replace(cat1_col_mapping)
# convert each categorical feature's values to numerical codes, and
# store the mapping from numerical code back to the original value
col_mappings = {}
for col in (DEATH, SWANG1, SEX):
col_mapping = dict(enumerate(df[col].cat.categories))
col_mappings[col] = col_mapping
print(col_mappings)
for col in (DEATH, SWANG1, SEX):
df[col] = df[col].cat.codes
df = df.rename({SWANG1: TREATMENT}, axis=1)
df.head()
cat_cols = [CAT1]
df_one_hot = pd.get_dummies(df[cat_cols], drop_first=True)
df_cleaned = pd.concat([df[num_cols], df_one_hot, df[[SEX, TREATMENT, DEATH]]], axis=1)
df_cleaned.head()
features = df_cleaned.columns.tolist()
features.remove(TREATMENT)
features.remove(DEATH)
agg_operations = {TREATMENT: 'count'}
agg_operations.update({
feature: ['mean', 'std'] for feature in features
})
table_one = df_cleaned.groupby(TREATMENT).agg(agg_operations)
# merge MultiIndex columns together into 1 level
# table_one.columns = ['_'.join(col) for col in table_one.columns.values]
table_one.head()
def compute_table_one_smd(table_one: pd.DataFrame, round_digits: int=4) -> pd.DataFrame:
feature_smds = []
for feature in features:
feature_table_one = table_one[feature].values
neg_mean = feature_table_one[0, 0]
neg_std = feature_table_one[0, 1]
pos_mean = feature_table_one[1, 0]
pos_std = feature_table_one[1, 1]
smd = (pos_mean - neg_mean) / np.sqrt((pos_std ** 2 + neg_std ** 2) / 2)
smd = round(abs(smd), round_digits)
feature_smds.append(smd)
return pd.DataFrame({'features': features, 'smd': feature_smds})
table_one_smd = compute_table_one_smd(table_one)
table_one_smd
# treatment will be our label for estimating the propensity score,
# and death is the outcome that we care about, thus is also removed
# from the step that is estimating the propensity score
death = df_cleaned[DEATH]
treatment = df_cleaned[TREATMENT]
df_cleaned = df_cleaned.drop([DEATH, TREATMENT], axis=1)
column_transformer = ColumnTransformer(
[('numerical', StandardScaler(), num_cols)],
sparse_threshold=0,
remainder='passthrough'
)
data = column_transformer.fit_transform(df_cleaned)
data.shape
logistic = LogisticRegression(solver='liblinear')
logistic.fit(data, treatment)
pscore = logistic.predict_proba(data)[:, 1]
pscore
roc_auc_score(treatment, pscore)
mask = treatment == 1
pos_pscore = pscore[mask]
neg_pscore = pscore[~mask]
print('treatment count:', pos_pscore.shape)
print('control count:', neg_pscore.shape)
# change default style figure and font size
plt.rcParams['figure.figsize'] = 8, 6
plt.rcParams['font.size'] = 12
sns.distplot(neg_pscore, label='control')
sns.distplot(pos_pscore, label='treatment')
plt.xlim(0, 1)
plt.title('Propensity Score Distribution of Control vs Treatment')
plt.ylabel('Density')
plt.xlabel('Scores')
plt.legend()
plt.tight_layout()
plt.show()
def get_similar(pos_pscore: np.ndarray, neg_pscore: np.ndarray, topn: int=5, n_jobs: int=1):
from sklearn.neighbors import NearestNeighbors
knn = NearestNeighbors(n_neighbors=topn + 1, metric='euclidean', n_jobs=n_jobs)
knn.fit(neg_pscore.reshape(-1, 1))
distances, indices = knn.kneighbors(pos_pscore.reshape(-1, 1))
sim_distances = distances[:, 1:]
sim_indices = indices[:, 1:]
return sim_distances, sim_indices
sim_distances, sim_indices = get_similar(pos_pscore, neg_pscore, topn=1)
sim_indices
_, counts = np.unique(sim_indices[:, 0], return_counts=True)
np.bincount(counts)
df_cleaned[TREATMENT] = treatment
df_cleaned[DEATH] = death
df_pos = df_cleaned[mask]
df_neg = df_cleaned[~mask].iloc[sim_indices[:, 0]]
df_matched = pd.concat([df_pos, df_neg], axis=0)
df_matched.head()
table_one_matched = df_matched.groupby(TREATMENT).agg(agg_operations)
table_one_smd_matched = compute_table_one_smd(table_one_matched)
table_one_smd_matched
num_matched_pairs = df_neg.shape[0]
print('number of matched pairs: ', num_matched_pairs)
# pair t-test
stats.ttest_rel(df_pos[DEATH].values, df_neg[DEATH].values)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Causal Inference
Step2: Usually, our treatment group will be smaller than the control group.
Step3: Given all of these covariates and our column treatment that indicates whether the subject received the treatment or control, we wish to have a quantitative way of measuring whether our covariates are balanced between the two groups.
Step4: The next few code chunk will actually fit the propensity score.
Step5: We won't be spending too much time tweaking the model here, checking some evaluation metric of the model serves as a quick sanity check.
Step6: Looking at the plot below, we can see that our features, $X$, do in fact contain information about whether the subject received treatment. The distributional difference between the propensity scores for the two groups justifies the need for matching, since they are not directly comparable otherwise.
Step7: Keep in mind that not every plot will look like this: if there's a major lack of overlap in some part of the propensity score distribution, our positivity assumption is essentially violated. In other words, we can't really estimate a causal effect in those areas of the distribution, since the subjects there have close to zero chance of being in the control/treatment group. When encountering this scenario, we may wish to either check whether we're missing some covariates, or drop individuals with extreme propensity scores and focus on the areas with strong overlap.
Step8: We can still check the number of occurrences for the matched control record. As mentioned in the previous section, we can add these information as weights to our dataset, but we won't be doing that here.
Step9: After applying the matching procedure, it's important to check and validate that the matched dataset is indeed indistinguishable in terms of the covariates we used to balance the control and treatment groups.
Step10: Having completed propensity score matching and verified that our covariates are now fairly balanced using the standardized mean difference (smd), we can carry out an outcome analysis using a paired t-test. For the various knobs described when introducing the matching process, we can experiment with different options and see whether our conclusions change.
|
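Step 7 above mentions dropping subjects with extreme propensity scores when overlap is poor, but the notebook doesn't implement that. Here is a minimal, hedged sketch of common-support trimming; the data (`pscore_demo`, `treatment_demo`) is synthetic, not the RHC dataset, and the trimming rule shown is just one simple choice:

```python
import numpy as np

def trim_common_support(pscore, treatment):
    # common support: at or above the larger of the two group minima,
    # at or below the smaller of the two group maxima
    pos, neg = pscore[treatment == 1], pscore[treatment == 0]
    low, high = max(pos.min(), neg.min()), min(pos.max(), neg.max())
    return (pscore >= low) & (pscore <= high)

rng = np.random.default_rng(0)
pscore_demo = np.concatenate([rng.uniform(0.05, 0.7, 500),    # control-like scores
                              rng.uniform(0.3, 0.95, 100)])   # treatment-like scores
treatment_demo = np.repeat([0, 1], [500, 100])
keep = trim_common_support(pscore_demo, treatment_demo)
print(keep.sum(), 'of', keep.size, 'subjects inside the common support')
```

The boolean mask can then be used to subset the data before the matching step, restricting the analysis to the region where both groups are represented.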
8,916
|
<ASSISTANT_TASK:>
Python Code:
# Dollar volume factor
dollar_volume = AverageDollarVolume(window_length=30)
# High dollar volume filter
high_dollar_volume = (dollar_volume > 10000000)
# Average close price factors
mean_close_10 = SimpleMovingAverage(inputs=[USEquityPricing.close], window_length=10, mask=high_dollar_volume)
mean_close_30 = SimpleMovingAverage(inputs=[USEquityPricing.close], window_length=30, mask=high_dollar_volume)
# Relative difference factor
percent_difference = (mean_close_10 - mean_close_30) / mean_close_30
# Dollar volume factor
dollar_volume = AverageDollarVolume(window_length=30)
# High dollar volume filter
high_dollar_volume = dollar_volume.percentile_between(90,100)
# Top open price filter (high dollar volume securities)
top_open_price = USEquityPricing.open.latest.top(50, mask=high_dollar_volume)
# Top percentile close price filter (high dollar volume, top 50 open price)
high_close_price = USEquityPricing.close.latest.percentile_between(90, 100, mask=top_open_price)
def make_pipeline():
# Dollar volume factor
dollar_volume = AverageDollarVolume(window_length=30)
# High dollar volume filter
high_dollar_volume = dollar_volume.percentile_between(90,100)
# Top open securities filter (high dollar volume securities)
top_open_price = USEquityPricing.open.latest.top(50, mask=high_dollar_volume)
# Top percentile close price filter (high dollar volume, top 50 open price)
high_close_price = USEquityPricing.close.latest.percentile_between(90, 100, mask=top_open_price)
return Pipeline(
screen=high_close_price
)
result = run_pipeline(make_pipeline(), '2015-05-05', '2015-05-05')
print('Number of securities that passed the filter: %d' % len(result))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Applying the mask to SimpleMovingAverage restricts the average close price factors to a computation over the ~2000 securities passing the high_dollar_volume filter, as opposed to ~8000 without a mask. When we combine mean_close_10 and mean_close_30 to form percent_difference, the computation is performed on the same ~2000 securities.
Step2: Let's put this into make_pipeline and output an empty pipeline screened with our high_close_price filter.
Step3: Running this pipeline outputs 5 securities on May 5th, 2015.
|
8,917
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
import numpy as np
import pylab as pb
import GPy
np.random.seed(1)
# Domain Parameters
a = 0. # lower bound of the space
b = 20 # upper bound
# kernel parameters
per = 2*np.pi # period
#var = 1. # variance
#lenscl=10. # lengthscale
#N = 20 # max frequency in the decomposition (the number of basis functions is 2N)
# test function
def ftest(X):
return(np.sin(X) + X/20.)
# observations points and outputs
X = np.linspace(5.,15.,10)[:,None]
Y = ftest(X)
# grid for plots
Xgrid = np.linspace(a,b,100)[:,None]
Ygrid = ftest(Xgrid)
class AperiodicMatern52(GPy.kern.Kern):
"""Kernel of the aperiodic subspace (up to a given frequency) of a Matern 5/2 RKHS.
Only defined for input_dim=1."""
def __init__(self, input_dim=1, variance=1., lengthscale=1., period=2.*np.pi,
n_freq=10, lower=0., upper=4*np.pi,
active_dims=None, name='aperiodic_Matern52'):
self.per_kern = GPy.kern.PeriodicMatern52(input_dim, variance, lengthscale, period, n_freq, lower, upper, active_dims, name='dummy kernel')
self.whole_kern = GPy.kern.Matern52(input_dim, variance, lengthscale, name='dummy kernel')
GPy.kern.Kern.__init__(self, input_dim, active_dims, name)
self.variance = GPy.core.Param('variance', np.float64(variance), GPy.core.parameterization.transformations.Logexp())
self.lengthscale = GPy.core.Param('lengthscale', np.float64(lengthscale), GPy.core.parameterization.transformations.Logexp())
self.period = GPy.core.Param('period', np.float64(period), GPy.core.parameterization.transformations.Logexp())
self.link_parameters(self.variance, self.lengthscale, self.period)
def parameters_changed(self):
self.whole_kern.variance = self.variance * 1.
self.per_kern.variance = self.variance * 1.
self.whole_kern.lengthscale = self.lengthscale * 1.
self.per_kern.lengthscale = self.lengthscale * 1.
self.per_kern.period = self.period * 1.
def K(self, X, X2=None):
return self.whole_kern.K(X, X2) - self.per_kern.K(X, X2)
def Kdiag(self, X):
return np.diag(self.K(X))
def update_gradients_full(self, dL_dK, X, X2=None):
self.whole_kern.update_gradients_full(dL_dK, X, X2)
self.per_kern.update_gradients_full(-dL_dK, X, X2)
self.variance.gradient = self.whole_kern.variance.gradient + self.per_kern.variance.gradient
self.lengthscale.gradient = self.whole_kern.lengthscale.gradient + self.per_kern.lengthscale.gradient
self.period.gradient = self.per_kern.period.gradient
# kernel definitions
kp = GPy.kern.PeriodicMatern52(input_dim=1,lower=a,upper=b)
kp.period.fix()
ka = AperiodicMatern52(input_dim=1,lower=a,upper=b)
ka.period.fix()
k = kp + ka
# model definition
m = GPy.models.GPRegression(X,Y,kernel = k)
m.Gaussian_noise.variance.fix(1e-5)
# model optimization
m.randomize()
m.optimize(messages=True)
pb.figure(figsize=(4,4))
ax=pb.subplot(111)
pb.plot(Xgrid,Ygrid,'r-' , label='line 1', linewidth=1.5)
m.plot(plot_limits=[a,b],ax=ax)
pb.ylim([-1.5,2.5])
def predict_submodels(x,X,Y,kernelSignal,kernelNoise):
kxX = kernelSignal.K(x,X)
K_1 = np.linalg.inv(kernelSignal.K(X)+kernelNoise.K(X))
lamb = np.dot(kxX,K_1)
mean = np.dot(lamb,Y)
var = (kernelSignal.Kdiag(x) - np.sum(lamb.T * kxX.T,0))[:,None]
lower = mean - 2*np.sqrt(np.abs(var))
upper = 2*mean - lower
return((mean,var,lower,upper))
pred_p = predict_submodels(Xgrid,X,Y,kp,ka)
pb.figure(figsize=(4,4))
GPy.plotting.matplot_dep.base_plots.gpplot(Xgrid,pred_p[0],pred_p[2],pred_p[3])
pb.ylim([-1.5,2.5])
pred_a = predict_submodels(Xgrid,X,Y,ka,kp)
pb.figure(figsize=(4,4))
GPy.plotting.matplot_dep.base_plots.gpplot(Xgrid,pred_a[0],pred_a[2],pred_a[3])
pb.ylim([-1.5,2.5])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The boundary limits for the plots are set to $[0,20]$, and we consider a period of $2 \pi$. The test function for this example is $f_{test}=\sin(x)+\frac{1}{20} x$.
Step3: The kernel and model parameters are initialized with random values before the optimization.
Step4: Subfigure a
Step5: Subfigure b
Step6: Subfigure c
|
8,918
|
<ASSISTANT_TASK:>
Python Code:
import sys
import scipy.io as sio
import glob
import numpy as np
import matplotlib.pyplot as plt
from skimage.filters import threshold_otsu
sys.path.append('../code/functions')
import qaLib as qLib
sys.path.append('../../pipeline_1/code/functions')
import connectLib as cLib
from IPython.display import Image

def otsuVox(argVox):
probVox = np.nan_to_num(argVox)
bianVox = np.zeros_like(probVox)
for zIndex, curSlice in enumerate(probVox):
#if the array contains all the same values
if np.max(curSlice) == np.min(curSlice):
#otsu thresh will fail here, leave bianVox as all 0's
continue
thresh = threshold_otsu(curSlice)
bianVox[zIndex] = curSlice > thresh
return bianVox
procData = []
for mat in glob.glob('../../data/matlabData/collman15v2/*_p1.mat'):
name = mat[34:-7]
rawData = sio.loadmat(mat)
npData = np.rollaxis(rawData[name], 2, 0)
procData.append([name, npData])
goodData = procData[12][1]
plt.imshow(goodData[0], cmap='gray')
plt.title('Good Data Raw Plot At Slice 0')
plt.axis('off')
plt.show()
plt.hist(goodData[0])
plt.title("Histogram of Good Data")
plt.show()
simDiff = np.zeros((100, 100, 100))
for i in range(100):
for j in range(100):
for k in range(100):
simDiff[i][j][k] = j
plt.imshow(simDiff[5])
plt.axis('off')
plt.title('Challenging Data Raw Plot at z=5')
plt.show()
plt.hist(simDiff[0], bins=20)
plt.title("Histogram of Challenging Data")
plt.show()
simEasyGrid = np.zeros((100, 100, 100))
for i in range(100):
for j in range(100):
for k in range(100):
simEasyGrid[i][j][k] = 10
for i in range(4):
for j in range(4):
for k in range(4):
simEasyGrid[20*(2*j): 20*(2*j + 1), 20*(2*i): 20*(2*i + 1), 20*(2*k): 20*(2*k + 1)] = 1000
plt.imshow(simEasyGrid[5])
plt.axis('off')
plt.title('Easy Data Slice at z=5')
plt.show()
plt.hist(simEasyGrid[5])
plt.title('Histogram of Easy Data')
plt.show()
simDiffGrid = np.zeros((100, 100, 100))
for i in range(100):
for j in range(100):
for k in range(100):
simDiffGrid[i][j][k] = 10 * j
for i in range(4):
for j in range(4):
for k in range(4):
simDiffGrid[20*(2*j): 20*(2*j + 1), 20*(2*i): 20*(2*i + 1), 20*(2*k): 20*(2*k + 1)] = 1000
plt.imshow(simDiffGrid[5])
plt.axis('off')
plt.title('Difficult Data Slice at z=5')
plt.show()
plt.hist(simDiffGrid[5])
plt.title('Histogram of Difficult Data')
plt.show()
otsuOutEasy = otsuVox(simEasyGrid)
plt.imshow(otsuOutEasy[5])
plt.axis('off')
plt.title('Otsu Output for Easy Data Slice at z=5')
plt.show()
plt.hist(otsuOutEasy[5], bins = 100)
plt.title('Histogram of Easy Data Post Otsu')
plt.show()
otsuOutDiff = otsuVox(simDiffGrid)
plt.imshow(otsuOutDiff[5])
plt.axis('off')
plt.title('Otsu Output for Difficult Data Slice at z=5')
plt.show()
plt.hist(otsuOutDiff[5])
plt.title('Histogram of Difficult Data Post Otsu')
plt.show()
realData = procData[12][1]
plt.imshow(goodData[0], cmap='gray')
plt.title('Good Data Raw Plot At Slice 0')
plt.axis('off')
plt.show()
plt.hist(goodData[0])
plt.title("Histogram of Good Data")
plt.show()
otsuOutReal = otsuVox(realData)
plt.imshow(otsuOutReal[0], cmap='gray')
plt.title('Good Data otsuVox Output At Slice 0')
plt.axis('off')
plt.show()
plt.hist(otsuOutReal[0])
plt.title("Histogram of Post-Otsu Data")
plt.show()
labelClusters = cLib.clusterThresh(procData[0][1], 0, 10000000)
rawClusters = cLib.clusterThresh(procData[12][1], 0, 10000000)
precision, recall, F1 = qLib.precision_recall_f1(labelClusters, rawClusters)
print 'Precision: ' + str(precision)
print 'Recall: ' + str(recall)
print 'F1: ' + str(F1)
otsuClusters = cLib.clusterThresh(otsuOutReal, 0, 10000000)
precision, recall, F1 = qLib.precision_recall_f1(labelClusters, otsuClusters)
print 'Precision: ' + str(precision)
print 'Recall: ' + str(recall)
print 'F1: ' + str(F1)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Algorithm
Step2: Actual Code
Step3: Algorithm Conditions
Step4: Good Data
Step5: Prediction on Good Data
Step6: Prediction on Challenging Data
Step7: The easy data looks exactly as I expected. The histogram is clearly bimodal, which is the kind of data otsuVox performs well on.
Step8: The difficult data looks exactly as I expected. The histogram is clearly not bimodal, which is the kind of data otsuVox performs poorly on.
Step9: As expected, otsuVox separated the background of the image from the foreground.
Step10: As expected, otsuVox failed to separate the background from the foreground.
Step11: As we can see, the real data is clearly bimodal. This means that otsuVox should be able to extract the foreground.
Step12: Precision/Recall/F1 before otsuVox
Step13: Precision/Recall/F1 After otsuVox
|
8,919
|
<ASSISTANT_TASK:>
Python Code:
cd /notebooks/exercise-06/
!cat ssh_config
fmt=r'{{.NetworkSettings.IPAddress}}'
!docker -H tcp://172.17.0.1:2375 inspect ansible101_bastion_1 --format {fmt} # pass variables *before* commands ;)
# Use this cell to create the pin file and then encrypt the vault
# Use this cell to test/run the playbook. You can --limit the execution to the bastion host only.
!ssh -Fssh_config bastion hostname
fmt=r'{{.NetworkSettings.IPAddress}}'
!docker -H tcp://172.17.0.1:2375 inspect ansible101_web_1 --format {fmt} # pass variables *before* commands ;)
!ssh -F ssh_config root@172.17.0.4 ip -4 -o a # get host ip
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: ssh_config
Step2: If we don't use it, we can turn off GSSApiAuthentication which attempts may slow down the connection.
Step3: Exercise
Step4: ansible.cfg and ssh_config
|
8,920
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function, division
import thinkbayes2
import thinkplot
import numpy as np
from scipy import stats
%matplotlib inline
data = {
2008: ['Gardiner', 'McNatt', 'Terry'],
2009: ['McNatt', 'Ryan', 'Partridge', 'Turner', 'Demers'],
2010: ['Gardiner', 'Barrett', 'Partridge'],
2011: ['Barrett', 'Partridge'],
2012: ['Sagar'],
2013: ['Hammer', 'Wang', 'Hahn'],
2014: ['Partridge', 'Hughes', 'Smith'],
2015: ['Barrett', 'Sagar', 'Fernandez'],
}
def MakeBinomialPmf(n, p):
ks = range(n+1)
ps = stats.binom.pmf(ks, n, p)
pmf = thinkbayes2.Pmf(dict(zip(ks, ps)))
pmf.Normalize()
return pmf
class Bear1(thinkbayes2.Suite, thinkbayes2.Joint):
def Likelihood(self, data, hypo):
n, p = hypo
like = 1
for year, sobs in data.items():
k = len(sobs)
if k > n:
return 0
like *= stats.binom.pmf(k, n, p)
return like
def Predict(self):
metapmf = thinkbayes2.Pmf()
for (n, p), prob in bear.Items():
pmf = MakeBinomialPmf(n, p)
metapmf[pmf] = prob
mix = thinkbayes2.MakeMixture(metapmf)
return mix
hypos = [(n, p) for n in range(15, 70)
for p in np.linspace(0, 1, 101)]
bear = Bear1(hypos)
bear.Update(data)
pmf_n = bear.Marginal(0)
thinkplot.PrePlot(5)
thinkplot.Pdf(pmf_n, label='n')
thinkplot.Config(xlabel='Number of runners (n)',
ylabel='PMF', loc='upper right')
pmf_n.Mean()
pmf_p = bear.Marginal(1)
thinkplot.Pdf(pmf_p, label='p')
thinkplot.Config(xlabel='Probability of showing up (p)',
ylabel='PMF', loc='upper right')
pmf_p.CredibleInterval(95)
thinkplot.Contour(bear, pcolor=True, contour=False)
thinkplot.Config(xlabel='Number of runners (n)',
ylabel='Probability of showing up (p)',
ylim=[0, 0.4])
predict = bear.Predict()
thinkplot.Hist(predict, label='k')
thinkplot.Config(xlabel='# Runners who beat me (k)',
ylabel='PMF', xlim=[-0.5, 12])
predict[0]
ss = thinkbayes2.Beta(2, 1)
thinkplot.Pdf(ss.MakePmf(), label='S')
thinkplot.Config(xlabel='Probability of showing up (S)',
ylabel='PMF', loc='upper left')
os = thinkbayes2.Beta(3, 1)
thinkplot.Pdf(os.MakePmf(), label='O')
thinkplot.Config(xlabel='Probability of outrunning me (O)',
ylabel='PMF', loc='upper left')
bs = thinkbayes2.Beta(1, 1)
thinkplot.Pdf(bs.MakePmf(), label='B')
thinkplot.Config(xlabel='Probability of being in my age group (B)',
ylabel='PMF', loc='upper left')
n = 1000
sample = ss.Sample(n) * os.Sample(n) * bs.Sample(n)
cdf = thinkbayes2.Cdf(sample)
thinkplot.PrePlot(1)
prior = thinkbayes2.Beta(1, 3)
thinkplot.Cdf(prior.MakeCdf(), color='grey', label='Model')
thinkplot.Cdf(cdf, label='SOB sample')
thinkplot.Config(xlabel='Probability of displacing me',
ylabel='CDF', loc='lower right')
from itertools import chain
from collections import Counter
counter = Counter(chain(*data.values()))
len(counter), counter
def MakeBeta(count, num_races, precount=3):
beta = thinkbayes2.Beta(1, precount)
beta.Update((count, num_races-count))
return beta
num_races = len(data)
betas = [MakeBeta(count, num_races)
for count in counter.values()]
[beta.Mean() for beta in betas]
class Bear2(thinkbayes2.Suite, thinkbayes2.Joint):
def ComputePmfs(self, data):
num_races = len(data)
counter = Counter(chain(*data.values()))
betas = [MakeBeta(count, num_races)
for count in counter.values()]
self.pmfs = dict()
low = len(betas)
high = max(self.Values())
for n in range(low, high+1):
self.pmfs[n] = self.ComputePmf(betas, n, num_races)
def ComputePmf(self, betas, n, num_races, label=''):
no_show = MakeBeta(0, num_races)
all_betas = betas + [no_show] * (n - len(betas))
ks = []
for i in range(2000):
ps = [beta.Random() for beta in all_betas]
xs = np.random.random(len(ps))
k = sum(xs < ps)
ks.append(k)
return thinkbayes2.Pmf(ks, label=label)
def Likelihood(self, data, hypo):
n = hypo
k = data
return self.pmfs[n][k]
def Predict(self):
metapmf = thinkbayes2.Pmf()
for n, prob in self.Items():
pmf = bear2.pmfs[n]
metapmf[pmf] = prob
mix = thinkbayes2.MakeMixture(metapmf)
return mix
bear2 = Bear2()
thinkplot.PrePlot(3)
pmf = bear2.ComputePmf(betas, 18, num_races, label='n=18')
pmf2 = bear2.ComputePmf(betas, 22, num_races, label='n=22')
pmf3 = bear2.ComputePmf(betas, 26, num_races, label='n=26')
thinkplot.Pdfs([pmf, pmf2, pmf3])
thinkplot.Config(xlabel='# Runners who beat me (k)',
ylabel='PMF', loc='upper right')
low = 15
high = 35
bear2 = Bear2(range(low, high))
bear2.ComputePmfs(data)
for year, sobs in data.items():
k = len(sobs)
bear2.Update(k)
thinkplot.PrePlot(1)
thinkplot.Pdf(bear2, label='n')
thinkplot.Config(xlabel='Number of SOBs (n)',
ylabel='PMF', loc='upper right')
predict = bear2.Predict()
thinkplot.Hist(predict, label='k')
thinkplot.Config(xlabel='# Runners who beat me (k)', ylabel='PMF', xlim=[-0.5, 12])
predict[0]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Almost every year since 2008 I have participated in the Great Bear Run, a 5K road race in Needham MA. I usually finish in the top 20 or so, and in my age group I have come in 4th, 6th, 4th, 3rd, 2nd, 4th and 4th. In 2015 I didn't run because of a scheduling conflict, but based on the results I estimate that I would have come 4th again.
Step2: Having come close in 2012, I have to wonder what my chances of winning are.
Step3: The binomial model
Step4: The prior distribution for $n$ is uniform from 15 to 70 (15 is the number of unique runners who have beat me; 70 is an arbitrary upper bound).
Step5: Next we update bear with the data.
Step6: From the joint posterior distribution we can extract the marginal distributions of $n$ and $p$.
Step7: The posterior distribution for $p$ is better behaved. The credible interval is between 4% and 21%.
Step8: The following figure shows the joint distribution of $n$ and $p$. They are inversely related
Step9: Finally, we can generate a predictive distribution for the number of people who will finish ahead of me, $k$. For each pair of $n$ and $p$, the distribution of $k$ is binomial. So the predictive distribution is a weighted mixture of binomials (see Bear1.Predict above).
Step10: A better model
Step11: The prior distribution of $O$ is biased toward high values. Of the people who have the potential to beat me, many of them will beat me every time. I am only competitive with a few of them.
Step12: The probability that a runner is in my age group depends on the difference between his age and mine. Someone exactly my age will always be in my age group. Someone 4 years older will be in my age group only once every 5 years (the Great Bear run uses 5-year age groups).
Step13: I used Beta distributions for each of the three factors, so each $p_i$ is the product of three Beta-distributed variates. In general, the result is not a Beta distribution, but maybe we can find a Beta distribution that is a good approximation of the actual distribution.
Step14: Now let's look more carefully at the data. There are 16 people who have displaced me during at least one year, several more than once.
Step15: The following function makes a Beta distribution to represent the posterior distribution of $p_i$ for each runner. It starts with the prior, Beta(1, 3), and updates it with the number of times the runner displaces me, and the number of times he doesn't.
Step16: Now we can make a posterior distribution for each runner
Step17: Let's check the posterior means to see if they make sense. For Rich Partridge, who has displaced me 4 times out of 8, the posterior mean is 42%; for someone who has displaced me only once, it is 17%.
Step18: Now we're ready to do some inference. The model only has one parameter, the total number of runners who could displace me, $n$. For the 16 SOBS we have actually observed, we use previous results to estimate $p_i$. For additional hypothetical runners, we update the distribution with 0 displacements out of num_races.
Step19: Here's what some of the precomputed distributions look like, for several values of $n$.
Step20: For the prior distribution of $n$, I'll use a uniform distribution from 16 to 35 (this upper bound turns out to be sufficient).
Step21: And here's the update, using the number of runners who displaced me each year
Step22: Here's the posterior distribution of $n$. It's noisy because I used random sampling to estimate the conditional distributions of $k$. But that's ok because we don't really care about $n$; we care about the predictive distribution of $k$. And noise in the distribution of $n$ has very little effect on $k$.
Step23: The predictive distribution for $k$ is a weighted mixture of the conditional distributions we already computed
Step24: And here's what it looks like
|
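The weighted mixture of binomials that Predict builds with thinkbayes2's MakeMixture can also be computed directly with scipy. This standalone sketch mirrors the mixture computation; the hypothesis weights below are illustrative, not the actual posterior:

```python
import numpy as np
from scipy import stats

def binomial_mixture_pmf(hypotheses, k_max):
    """hypotheses: iterable of ((n, p), weight); returns pmf over k = 0..k_max."""
    ks = np.arange(k_max + 1)
    pmf = np.zeros(k_max + 1)
    for (n, p), weight in hypotheses:
        pmf += weight * stats.binom.pmf(ks, n, p)
    return pmf / pmf.sum()

hypos = [((20, 0.1), 0.5), ((30, 0.2), 0.5)]  # illustrative (n, p) hypotheses and weights
mix = binomial_mixture_pmf(hypos, k_max=30)
print(mix[:4])
```

Each hypothesis contributes its binomial pmf in proportion to its posterior probability, which is exactly what MakeMixture does under the hood.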
8,921
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from IPython.html.widgets import interact
def char_probs(s):
"""Find the probabilities of the unique characters in the string s.
Parameters
----------
s : str
A string of characters.
Returns
-------
probs : dict
A dictionary whose keys are the unique characters in s and whose values
are the probabilities of those characters.
"""
# # YOUR CODE HERE
# f = s.split()
# b = 0
# a = []
# while b < len(f):
# a.append(''.join([c for c in f[b]]))
# b+=1
# return a
# return s[1]
# char_probs('aaaa')
result_dict = dict([(i, s.count(i)) for i in s])
prob = dict([(l, result_dict[l]/len(s)) for l in s])
return prob
char_probs('aaaa')
test1 = char_probs('aaaa')
assert np.allclose(test1['a'], 1.0)
test2 = char_probs('aabb')
assert np.allclose(test2['a'], 0.5)
assert np.allclose(test2['b'], 0.5)
test3 = char_probs('abcd')
assert np.allclose(test3['a'], 0.25)
assert np.allclose(test3['b'], 0.25)
assert np.allclose(test3['c'], 0.25)
assert np.allclose(test3['d'], 0.25)
def entropy(d):
"""Compute the entropy of a dict d whose values are probabilities."""
#prob = np.array(d[1])
prob = sorted(d.items(), key=lambda d: d[1],reverse = True)
prob_1 = [x[1] for x in prob]
H = -sum(prob_1 * np.log2(prob_1))
return H
entropy({'a': 0.5, 'b': 0.5})
assert np.allclose(entropy({'a': 0.5, 'b': 0.5}), 1.0)
assert np.allclose(entropy({'a': 1.0}), 0.0)
w = interact(lambda s: entropy(char_probs(s)), s='Enter String Here')
# char_probs converts the entered string into a probability dict, which entropy then consumes
assert True # use this for grading the pi digits histogram
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Character counting and entropy
Step4: The entropy is a quantitative measure of the disorder of a probability distribution. It is used extensively in Physics, Statistics, Machine Learning, Computer Science and Information Science. Given a set of probabilities $P_i$, the entropy is defined as $H = -\sum_i P_i \log_2(P_i)$.
Step5: Use IPython's interact function to create a user interface that allows you to type a string into a text box and see the entropy of the character probabilities of the string.
|
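The entropy definition can be checked numerically: a fair coin carries 1 bit, and a uniform distribution over $n$ outcomes carries $\log_2(n)$ bits. A small self-contained sketch (independent of the dict-based helper above):

```python
import numpy as np

def entropy_bits(probs):
    p = np.asarray(probs, dtype=float)
    p = p[p > 0]  # 0 * log(0) is taken as 0 by convention
    return -np.sum(p * np.log2(p))

print(entropy_bits([0.5, 0.5]))   # fair coin: 1.0 bit
print(entropy_bits([0.25] * 4))   # uniform over 4 outcomes: 2.0 bits
print(entropy_bits([1.0]))        # no uncertainty: 0.0
```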
8,922
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
n = 100
# Random
# A = np.random.randn(n, n)
# A = A.T.dot(A)
# Clustered eigenvalues
A = np.diagflat([np.ones(n//4), 10 * np.ones(n//4), 100*np.ones(n//4), 1000* np.ones(n//4)])
U = np.random.rand(n, n)
Q, _ = np.linalg.qr(U)
A = Q.dot(A).dot(Q.T)
A = (A + A.T) * 0.5
print("A is normal matrix: ||AA* - A*A|| =", np.linalg.norm(A.dot(A.T) - A.T.dot(A)))
b = np.random.randn(n)
# Hilbert matrix
# A = np.array([[1.0 / (i+j - 1) for i in range(1, n+1)] for j in range(1, n+1)])
# b = np.ones(n)
f = lambda x: 0.5 * x.dot(A.dot(x)) - b.dot(x)
grad_f = lambda x: A.dot(x) - b
x0 = np.zeros(n)
USE_COLAB = False
%matplotlib inline
import matplotlib.pyplot as plt
if not USE_COLAB:
plt.rc("text", usetex=True)
plt.rc("font", family='serif')
if USE_COLAB:
!pip install git+https://github.com/amkatrutsa/liboptpy
import seaborn as sns
sns.set_context("talk")
eigs = np.linalg.eigvalsh(A)
plt.semilogy(np.unique(eigs))
plt.ylabel("Eigenvalues", fontsize=20)
plt.xticks(fontsize=18)
_ = plt.yticks(fontsize=18)
import scipy.optimize as scopt
def callback(x, array):
array.append(x)
scopt_cg_array = []
scopt_cg_callback = lambda x: callback(x, scopt_cg_array)
x = scopt.minimize(f, x0, method="CG", jac=grad_f, callback=scopt_cg_callback)
x = x.x
print("||f'(x*)|| =", np.linalg.norm(A.dot(x) - b))
print("f* =", f(x))
def ConjugateGradientQuadratic(x0, A, b, tol=1e-8, callback=None):
x = x0
r = A.dot(x0) - b
p = -r
while np.linalg.norm(r) > tol:
alpha = r.dot(r) / p.dot(A.dot(p))
x = x + alpha * p
if callback is not None:
callback(x)
r_next = r + alpha * A.dot(p)
beta = r_next.dot(r_next) / r.dot(r)
p = -r_next + beta * p
r = r_next
return x
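# A quick sanity check (not part of the original notebook): on a small SPD system,
# conjugate gradient should agree with a direct solve. The routine is repeated here
# with renamed variables so the sketch runs standalone and doesn't clobber A, b above.

```python
import numpy as np

def cg_quadratic_check(x0, A_mat, rhs, tol=1e-10):
    # minimize 0.5 x^T A x - b^T x, i.e. solve A x = b for SPD A
    x = x0.astype(float)
    r = A_mat.dot(x) - rhs
    p = -r
    while np.linalg.norm(r) > tol:
        alpha = r.dot(r) / p.dot(A_mat.dot(p))
        x = x + alpha * p
        r_next = r + alpha * A_mat.dot(p)
        beta = r_next.dot(r_next) / r.dot(r)
        p = -r_next + beta * p
        r = r_next
    return x

A_small = np.array([[4.0, 1.0], [1.0, 3.0]])
b_small = np.array([1.0, 2.0])
x_small = cg_quadratic_check(np.zeros(2), A_small, b_small)
print(np.allclose(x_small, np.linalg.solve(A_small, b_small)))  # True
```

# For this 2x2 system CG converges in two iterations, consistent with the theory
# that CG terminates in at most n steps in exact arithmetic.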
import liboptpy.unconstr_solvers as methods
import liboptpy.step_size as ss
print("\t CG quadratic")
cg_quad = methods.fo.ConjugateGradientQuad(A, b)
x_cg = cg_quad.solve(x0, tol=1e-7, disp=True)
print("\t Gradient Descent")
gd = methods.fo.GradientDescent(f, grad_f, ss.ExactLineSearch4Quad(A, b))
x_gd = gd.solve(x0, tol=1e-7, disp=True)
print("Condition number of A =", abs(max(eigs)) / abs(min(eigs)))
plt.figure(figsize=(8,6))
plt.semilogy([np.linalg.norm(grad_f(x)) for x in cg_quad.get_convergence()], label=r"$\|f'(x_k)\|^{CG}_2$", linewidth=2)
plt.semilogy([np.linalg.norm(grad_f(x)) for x in scopt_cg_array[:50]], label=r"$\|f'(x_k)\|^{CG_{PR}}_2$", linewidth=2)
plt.semilogy([np.linalg.norm(grad_f(x)) for x in gd.get_convergence()], label=r"$\|f'(x_k)\|^{G}_2$", linewidth=2)
plt.legend(loc="best", fontsize=20)
plt.xlabel(r"Iteration number, $k$", fontsize=20)
plt.ylabel("Convergence rate", fontsize=20)
plt.xticks(fontsize=18)
_ = plt.yticks(fontsize=18)
print([np.linalg.norm(grad_f(x)) for x in cg_quad.get_convergence()])
plt.figure(figsize=(8,6))
plt.plot([f(x) for x in cg_quad.get_convergence()], label=r"$f(x^{CG}_k)$", linewidth=2)
plt.plot([f(x) for x in scopt_cg_array], label=r"$f(x^{CG_{PR}}_k)$", linewidth=2)
plt.plot([f(x) for x in gd.get_convergence()], label=r"$f(x^{G}_k)$", linewidth=2)
plt.legend(loc="best", fontsize=20)
plt.xlabel(r"Iteration number, $k$", fontsize=20)
plt.ylabel("Function value", fontsize=20)
plt.xticks(fontsize=18)
_ = plt.yticks(fontsize=18)
import numpy as np
import sklearn.datasets as skldata
import scipy.special as scspec
n = 300
m = 1000
X, y = skldata.make_classification(n_classes=2, n_features=n, n_samples=m, n_informative=n//3)
C = 1
def f(w):
return np.linalg.norm(w)**2 / 2 + C * np.mean(np.logaddexp(np.zeros(X.shape[0]), -y * X.dot(w)))
def grad_f(w):
denom = scspec.expit(-y * X.dot(w))
return w - C * X.T.dot(y * denom) / X.shape[0]
# f = lambda x: -np.sum(np.log(1 - A.T.dot(x))) - np.sum(np.log(1 - x*x))
# grad_f = lambda x: np.sum(A.dot(np.diagflat(1 / (1 - A.T.dot(x)))), axis=1) + 2 * x / (1 - np.power(x, 2))
x0 = np.zeros(n)
print("Initial function value = {}".format(f(x0)))
print("Initial gradient norm = {}".format(np.linalg.norm(grad_f(x0))))
def ConjugateGradientFR(f, gradf, x0, num_iter=100, tol=1e-8, callback=None, restart=False):
x = x0
grad = gradf(x)
p = -grad
it = 0
while np.linalg.norm(gradf(x)) > tol and it < num_iter:
alpha = utils.backtracking(x, p, method="Wolfe", beta1=0.1, beta2=0.4, rho=0.5, f=f, grad_f=gradf)
if alpha < 1e-18:
break
x = x + alpha * p
if callback is not None:
callback(x)
grad_next = gradf(x)
beta = grad_next.dot(grad_next) / grad.dot(grad)
p = -grad_next + beta * p
grad = grad_next.copy()
it += 1
if restart and it % restart == 0:
grad = gradf(x)
p = -grad
return x
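The implementation above depends on a Wolfe backtracking helper (`utils.backtracking`). For a quadratic objective the same Fletcher-Reeves update can be demonstrated self-containedly, because the line search has a closed form; this is a sketch of the method, not the notebook's solver:

```python
# Fletcher-Reeves CG on f(x) = x^T A x / 2 - b^T x with exact line search.
import numpy as np

rng = np.random.default_rng(0)
n = 20
Q = rng.standard_normal((n, n))
A = Q.T @ Q + n * np.eye(n)   # symmetric positive definite
b = rng.standard_normal(n)

x = np.zeros(n)
g = A @ x - b
p = -g
for _ in range(n):
    alpha = g @ g / (p @ A @ p)          # exact step for a quadratic
    x = x + alpha * p
    g_next = A @ x - b
    beta = (g_next @ g_next) / (g @ g)   # Fletcher-Reeves coefficient
    p = -g_next + beta * p
    g = g_next
    if np.linalg.norm(g) < 1e-10:
        break
print(np.linalg.norm(A @ x - b))  # near machine precision within n steps
```

With exact line search on a quadratic this reduces to classical linear CG, which is why it terminates in at most n iterations.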
import scipy.optimize as scopt
import liboptpy.restarts as restarts
n_restart = 60
tol = 1e-5
max_iter = 600
scopt_cg_array = []
scopt_cg_callback = lambda x: callback(x, scopt_cg_array)
x = scopt.minimize(f, x0, tol=tol, method="CG", jac=grad_f, callback=scopt_cg_callback, options={"maxiter": max_iter})
x = x.x
print("\t CG by Polak-Ribiere")
print("Norm of gradient = {}".format(np.linalg.norm(grad_f(x))))
print("Function value = {}".format(f(x)))
print("\t CG by Fletcher-Reeves")
cg_fr = methods.fo.ConjugateGradientFR(f, grad_f, ss.Backtracking("Wolfe", rho=0.9, beta1=0.1, beta2=0.4, init_alpha=1.))
x = cg_fr.solve(x0, tol=tol, max_iter=max_iter, disp=True)
print("\t CG by Fletcher-Reeves with restart n")
cg_fr_rest = methods.fo.ConjugateGradientFR(f, grad_f, ss.Backtracking("Wolfe", rho=0.9, beta1=0.1, beta2=0.4,
init_alpha=1.), restarts.Restart(n // n_restart))
x = cg_fr_rest.solve(x0, tol=tol, max_iter=max_iter, disp=True)
print("\t Gradient Descent")
gd = methods.fo.GradientDescent(f, grad_f, ss.Backtracking("Wolfe", rho=0.9, beta1=0.1, beta2=0.4, init_alpha=1.))
x = gd.solve(x0, max_iter=max_iter, tol=tol, disp=True)
plt.figure(figsize=(8, 6))
plt.semilogy([np.linalg.norm(grad_f(x)) for x in cg_fr.get_convergence()], label=r"$\|f'(x_k)\|^{CG_{FR}}_2$ no restart", linewidth=2)
plt.semilogy([np.linalg.norm(grad_f(x)) for x in cg_fr_rest.get_convergence()], label=r"$\|f'(x_k)\|^{CG_{FR}}_2$ restart", linewidth=2)
plt.semilogy([np.linalg.norm(grad_f(x)) for x in scopt_cg_array], label=r"$\|f'(x_k)\|^{CG_{PR}}_2$", linewidth=2)
plt.semilogy([np.linalg.norm(grad_f(x)) for x in gd.get_convergence()], label=r"$\|f'(x_k)\|^{G}_2$", linewidth=2)
plt.legend(loc="best", fontsize=16)
plt.xlabel(r"Iteration number, $k$", fontsize=20)
plt.ylabel("Convergence rate", fontsize=20)
plt.xticks(fontsize=18)
_ = plt.yticks(fontsize=18)
%timeit scopt.minimize(f, x0, method="CG", tol=tol, jac=grad_f, options={"maxiter": max_iter})
%timeit cg_fr.solve(x0, tol=tol, max_iter=max_iter)
%timeit cg_fr_rest.solve(x0, tol=tol, max_iter=max_iter)
%timeit gd.solve(x0, tol=tol, max_iter=max_iter)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Distribution of eigenvalues
Step2: Correct answer
Step3: Implementation of the conjugate gradient method
Step4: Convergence plot
Step5: Non-quadratic function
Step6: Implementation of the Fletcher-Reeves method
Step7: Convergence plot
Step8: Execution time
|
8,923
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from trasferencia_calor import solve_explicit, pretty_plot
import time
a = time.time()
t_out, dic = solve_explicit(metodo='explicit_py')
print('Pure-Python explicit solver took', time.time() - a, 'seconds')
pretty_plot(t_out, dic)
a = time.time()
t_out, dic = solve_explicit(metodo='explicit_cython')
print('Cython explicit solver took', time.time() - a, 'seconds')
pretty_plot(t_out, dic)
%matplotlib inline
from trasferencia_calor import solve_explicit, pretty_plot
import time
a = time.time()
t_out, dic = solve_explicit(dz=40, metodo='explicit_cython3')
time.time() - a
a = time.time()
t_out, dic = solve_explicit(dz=40, metodo='explicit_cython2')
time.time() - a
a = time.time()
t_out, dic = solve_explicit(dz=40, metodo='explicit_numba')
time.time() - a
pretty_plot(t_out, dic)
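The cells above time each solver with a single `time.time()` pair, which is noisy. A small best-of-N timing helper built on `time.perf_counter` is more robust; the `workload` function below is a dummy stand-in, since the `trasferencia_calor` module is not available here:

```python
# Best-of-N wall-clock timing helper (sketch).
import time

def bench(fn, *args, repeat=3, **kwargs):
    # Return the best of `repeat` wall-time measurements of fn(*args, **kwargs).
    best = float("inf")
    for _ in range(repeat):
        t0 = time.perf_counter()
        fn(*args, **kwargs)
        best = min(best, time.perf_counter() - t0)
    return best

# Dummy workload standing in for solve_explicit(...)
def workload(n):
    return sum(i * i for i in range(n))

t = bench(workload, 100_000)
print(f"best of 3: {t:.4f} s")
```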
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We copy the pure-Python implementation into a separate file ending in .pyx and compile it using the setup.py script by running
Step2: Note that we only gained about 8 seconds.
|
8,924
|
<ASSISTANT_TASK:>
Python Code:
# Import matplotlib (plotting) and numpy (numerical arrays).
# This enables their use in the Notebook.
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
# Create an array of 30 values for x equally spaced from 0 to 5.
x = np.linspace(0, 5, 30)
y = x**2
# Plot y versus x
fig, ax = plt.subplots(nrows=1, ncols=1)
ax.plot(x, y, color='red')
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_title('A simple graph of $y=x^2$');
# Import matplotlib (plotting) and numpy (numerical arrays).
# This enables their use in the Notebook.
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
# Import IPython's interact function which is used below to
# build the interactive widgets
from ipywidgets import interact
def plot_sine(frequency=4.0, grid_points=12, plot_original=True):
    """Plot discrete samples of a sine wave on the interval ``[0, 1]``."""
x = np.linspace(0, 1, grid_points + 2)
y = np.sin(2 * frequency * np.pi * x)
xf = np.linspace(0, 1, 1000)
yf = np.sin(2 * frequency * np.pi * xf)
fig, ax = plt.subplots(figsize=(8, 6))
ax.set_xlabel('x')
ax.set_ylabel('signal')
ax.set_title('Aliasing in discretely sampled periodic signal')
if plot_original:
ax.plot(xf, yf, color='red', linestyle='solid', linewidth=2)
ax.plot(x, y, marker='o', linewidth=2)
# The interact function automatically builds a user interface for exploring the
# plot_sine function.
interact(plot_sine, frequency=(1.0, 22.0, 0.5), grid_points=(10, 16, 1), plot_original=True);
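The aliasing visible in the interactive plot can also be checked numerically: on a grid with `grid_points` interior points (spacing 1/(grid_points + 1)), a sine of frequency f and one of frequency f + grid_points + 1 produce identical samples. A numpy-only sketch:

```python
# Aliasing check: two frequencies that differ by (grid_points + 1)
# are indistinguishable on this grid.
import numpy as np

grid_points = 12
f = 4.0
x = np.linspace(0, 1, grid_points + 2)          # sample points k/(grid_points+1)
y1 = np.sin(2 * np.pi * f * x)
y2 = np.sin(2 * np.pi * (f + grid_points + 1) * x)
print(np.max(np.abs(y1 - y2)))  # ~1e-14: the two signals alias
```

The phase of the second signal differs from the first by 2*pi*k at every sample point, so the sampled values coincide exactly up to floating-point error.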
# Import matplotlib (plotting), skimage (image processing) and interact (user interfaces)
# This enables their use in the Notebook.
%matplotlib inline
from matplotlib import pyplot as plt
from skimage import data
from skimage.feature import blob_doh
from skimage.color import rgb2gray
from ipywidgets import interact, fixed
# Extract the first 500px square of the Hubble Deep Field.
image = data.hubble_deep_field()[0:500, 0:500]
image_gray = rgb2gray(image)
def plot_blobs(max_sigma=30, threshold=0.1, gray=False):
    """Plot the image and the blobs that have been found."""
blobs = blob_doh(image_gray, max_sigma=max_sigma, threshold=threshold)
fig, ax = plt.subplots(figsize=(8,8))
ax.set_title('Galaxies in the Hubble Deep Field')
if gray:
ax.imshow(image_gray, interpolation='nearest', cmap='gray_r')
circle_color = 'red'
else:
ax.imshow(image, interpolation='nearest')
circle_color = 'yellow'
for blob in blobs:
y, x, r = blob
c = plt.Circle((x, y), r, color=circle_color, linewidth=2, fill=False)
ax.add_patch(c)
# Use interact to explore the galaxy detection algorithm.
interact(plot_blobs, max_sigma=(10, 40, 2), threshold=(0.005, 0.02, 0.001));
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Above, you should see a plot of $y=x^2$.
Step4: Counting galaxies in the Hubble deep field
|
8,925
|
<ASSISTANT_TASK:>
Python Code:
import espressomd
espressomd.assert_features('DIPOLES', 'LENNARD_JONES')
from espressomd.magnetostatics import DipolarP3M
from espressomd.magnetostatic_extensions import DLC
from espressomd.cluster_analysis import ClusterStructure
from espressomd.pair_criteria import DistanceCriterion
import numpy as np
# Lennard-Jones parameters
lj_sigma = 1
lj_epsilon = 1
lj_cut = 2**(1./6.) * lj_sigma
# Particles
N = 1000
# Area fraction of the mono-layer
phi = 0.1
# Dipolar interaction parameter lambda = mu_0 m^2 /(4 pi sigma^3 kT)
dip_lambda = 4
# Temperature
kT =1.0
# Friction coefficient
gamma = 1.0
# Time step
dt = 0.01
# System setup
box_size = (N * np.pi * (lj_sigma/2.)**2 /phi)**0.5
print("Box size",box_size)
# Note that the dipolar P3M and dipolar layer correction need a cubic
# simulation box for technical reasons.
system = espressomd.System(box_l=(box_size,box_size,box_size))
system.time_step = dt
# tune verlet list skin
system.cell_system.tune_skin(min_skin=0.4, max_skin=2., tol=0.2, int_steps=100)
system.thermostat.set_langevin(kT=kT,gamma=gamma,seed=1)
# Lennard-Jones interaction
system.non_bonded_inter[0,0].lennard_jones.set_params(epsilon=lj_epsilon,sigma=lj_sigma,cutoff=lj_cut, shift="auto")
# Random dipole moments
np.random.seed(seed = 1)
dip_phi = np.random.random((N,1)) *2. * np.pi
dip_cos_theta = 2*np.random.random((N,1)) -1
dip_sin_theta = np.sin(np.arccos(dip_cos_theta))
dip = np.hstack((
dip_sin_theta *np.sin(dip_phi),
dip_sin_theta *np.cos(dip_phi),
dip_cos_theta))
# Random positions in the monolayer
pos = box_size* np.hstack((np.random.random((N,2)), np.zeros((N,1))))
# Add particles
system.part.add(pos=pos,rotation=N*[(1,1,1)],dip=dip,fix=N*[(0,0,1)])
# Remove overlap between particles by means of the steepest descent method
system.integrator.set_steepest_descent(
f_max=0,gamma=0.1,max_displacement=0.05)
while system.analysis.energy()["total"] > 5*kT*N:
system.integrator.run(20)
# Switch to velocity Verlet integrator
system.integrator.set_vv()
# Setup dipolar P3M and dipolar layer correction
dp3m = DipolarP3M(accuracy=5E-4,prefactor=dip_lambda*lj_sigma**3*kT)
dlc = DLC(maxPWerror=1E-4, gap_size=box_size-lj_sigma)
system.actors.add(dp3m)
system.actors.add(dlc)
# tune verlet list skin again
system.cell_system.tune_skin(min_skin=0.4, max_skin=2., tol=0.2, int_steps=100)
# print skin value
print('tuned skin = {}'.format(system.cell_system.skin))
# Equilibrate
print("Equilibration...")
equil_rounds = 20
equil_steps = 1000
for i in range(equil_rounds):
system.integrator.run(equil_steps)
print("progress: {:3.0f}%, dipolar energy: {:9.2f}".format(
(i + 1) * 100. / equil_rounds, system.analysis.energy()["dipolar"]), end="\r")
print("\nEquilibration done")
# Setup cluster analysis
cs=ClusterStructure(pair_criterion=DistanceCriterion(cut_off=1.3*lj_sigma))
n_clusters = []
cluster_sizes = []
loops = 100
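The `DistanceCriterion` above groups particles into a cluster whenever a pair is closer than the cutoff. A minimal union-find sketch of that single-linkage idea (illustration only, not the espressomd implementation):

```python
# Single-linkage clustering by a distance criterion, via union-find.
import numpy as np

def cluster_sizes_by_distance(pos, cutoff):
    n = len(pos)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i
    # Merge every pair closer than the cutoff
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(pos[i] - pos[j]) < cutoff:
                parent[find(i)] = find(j)
    sizes = {}
    for i in range(n):
        r = find(i)
        sizes[r] = sizes.get(r, 0) + 1
    return sorted(sizes.values(), reverse=True)

demo = np.array([[0., 0.], [0.5, 0.], [1.0, 0.], [5., 5.], [5.5, 5.]])
print(cluster_sizes_by_distance(demo, cutoff=1.3))  # [3, 2]
```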
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from tempfile import NamedTemporaryFile
import base64
VIDEO_TAG = """<video controls>
 <source src="data:video/x-m4v;base64,{0}" type="video/mp4">
 Your browser does not support the video tag.
</video>"""
def anim_to_html(anim):
if not hasattr(anim, '_encoded_video'):
with NamedTemporaryFile(suffix='.mp4') as f:
anim.save(f.name, fps=20, extra_args=['-vcodec', 'libx264'])
with open(f.name, "rb") as g:
video = g.read()
anim._encoded_video = base64.b64encode(video).decode('ascii')
plt.close(anim._fig)
return VIDEO_TAG.format(anim._encoded_video)
animation.Animation._repr_html_ = anim_to_html
def init():
# Set x and y range
ax.set_ylim(0, box_size)
ax.set_xlim(0, box_size)
xdata, ydata = [], []
part.set_data(xdata, ydata)
return part,
def run(i):
# Run cluster analysis
cs.run_for_all_pairs()
# Gather statistics:
n_clusters.append(len(cs.clusters))
for c in cs.clusters:
cluster_sizes.append(c[1].size())
system.integrator.run(100)
# Save current system state as a plot
xdata, ydata = system.part[:].pos_folded[:,0], system.part[:].pos_folded[:,1]
ax.figure.canvas.draw()
part.set_data(xdata, ydata)
print("progress: {:3.0f}%".format((i + 1) * 100. / loops), end="\r")
return part,
fig, ax = plt.subplots(figsize=(10,10))
part, = ax.plot([],[], 'o')
animation.FuncAnimation(fig, run, frames=loops, blit=True, interval=0, repeat=False, init_func=init)
for i in range(loops):
# Run cluster analysis
cs.run_for_all_pairs()
# Gather statistics:
n_clusters.append(len(cs.clusters))
for c in cs.clusters:
cluster_sizes.append(c[1].size())
system.integrator.run(100)
print("progress: {:3.0f}%".format((float(i)+1)/loops * 100), end="\r")
import matplotlib.pyplot as plt
plt.figure(figsize=(10,10))
plt.xlim(0, box_size)
plt.ylim(0, box_size)
plt.xlabel('x-position', fontsize=20)
plt.ylabel('y-position', fontsize=20)
plt.plot(system.part[:].pos_folded[:,0], system.part[:].pos_folded[:,1], 'o')
plt.show()
size_dist=np.histogram(cluster_sizes,range=(2,21),bins=19)
plt.figure(figsize=(10,10))
plt.grid()
plt.xticks(range(0,20))
plt.plot(size_dist[1][:-2],size_dist[0][:-1]/float(loops))
plt.xlabel('size of clusters',fontsize=20)
plt.ylabel('distribution',fontsize=20)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now we set up all simulation parameters.
Step2: Note that we declared a <tt>lj_cut</tt>. This will be used as the cut-off radius of the Lennard-Jones potential to obtain a purely repulsive WCA potential.
Step3: Now we set up the interaction between the particles as a non-bonded interaction and use the Lennard-Jones potential as the interaction potential. Here we use the above mentioned cut-off radius to get a purely repulsive interaction.
Step4: Now we generate random positions and orientations of the particles and their dipole moments.
Step5: Now we add the particles with their positions and orientations to our system. Thereby we activate all degrees of freedom for the orientation of the dipole moments. As we want a two dimensional system we only allow the particles to translate in x- and y-direction and not in z-direction by using the <tt>fix</tt> argument.
Step6: Be aware that we do not commit the magnitude of the magnetic dipole moments to the particles. As in our case all particles have the same dipole moment it is possible to rewrite the dipole-dipole interaction potential to
Step7: For the simulation of our system we choose the velocity Verlet integrator
Step8: To calculate the dipole-dipole interaction we use the Dipolar P3M method (see Ref. <a href='#[1]'>[1]</a>) which is based on the Ewald summation. By default the boundary conditions of the system are set to conducting which means the dielectric constant is set to infinity for the surrounding medium. As we want to simulate a two dimensional system we additionally use the dipolar layer correction (DLC) (see Ref. <a href='#[2]'>[2]</a>). As we add <tt>DipolarP3M</tt> to our system as an actor, a tuning function is started automatically which tries to find the optimal parameters for Dipolar P3M and prints them to the screen. The last line of the output is the value of the tuned skin.
Step9: Now we equilibrate the dipole-dipole interaction for some time
Step10: As we are interested in the cluster sizes and the number of clusters we now set up the cluster analysis with the distance criterion that a particle is added to a cluster if the nearest neighbors are closer than $1.3\cdot\sigma_{lj}$.
Step11: Sampling
Step12: We sample over 100 loops
Step14: As the system is two dimensional, we can simply do a scatter plot to get a visual representation of a system state. To get a better insight of how a ferrofluid system develops during time we will create a video of the development of our system during the sampling. If you only want to sample the system simply go to Sampling without animation
Step15: Now we use the <tt>animation</tt> class of <tt>matplotlib</tt> to save snapshots of the system as frames of a video which is then displayed after the sampling is finished. Between two frames are 100 integration steps.
Step16: Sampling without animation
Step17: You may want to get a visualization of the current state of the system. For that we plot the particle positions folded to the simulation box using <tt>matplotlib</tt>.
Step18: In the plot chain-like and ring-like clusters should be visible. Some of them are connected via Y- or X-links to each other. Also some monomers should be present.
Step19: Now we can plot this histogram and should see an exponential decrease in the number of particles in a cluster along the size of a cluster, i.e. the number of monomers in it
|
8,926
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from sympy import *
import matplotlib.pyplot as plt
import numpy as np
init_printing(use_unicode=True)
r, u, v, c, r_c, u_c, v_c, E, p, r_p, u_p, v_p, e, a, b, q, b_0, b_1, b_2, b_3, q_0, q_1, q_2, q_3, q_4, q_5, beta, rho, epsilon, delta, d, K_3, Omega, Lambda, lamda, C, mu, Gamma, tau, nu, xi, P_x, eta, varep, gamma, P_0, theta_0, z, a_0, alpha_0, alpha, T_p, T_c, T = symbols('r u v c r_c u_c v_c E p r_p u_p v_p e a b q b_0 b_1 b_2 b_3 q_0 q_1 q_2 q_3 q_4 q_5 beta rho epsilon delta d K_3 Omega Lambda lamda C mu Gamma tau nu xi_2 P_x eta varepsilon gamma P_0 theta_0 z a_0 alpha_0 alpha T_p T_c T')
eptil, atil, btil, ctil, Ctil, gtil, thetatil, Ptil, gprm, thetaprm, Pprm, Tprm, chiprm = symbols('epsilontilde atilde btilde ctilde Ctilde gtilde_{0} thetatilde_{0} Ptilde_{0} gprm thetaprm Pprm Tprm chiprm')
def g1(a,b,q,xi,P_x,varep,eta,C,Omega):
return (a*xi**2)/2+(b*xi**4)/4+(q*xi**6)/6+P_x**2/(2*varep)+(eta*P_x**4)/4+C*P_x*xi-(Omega*(P_x*xi)**2)/2
g1(a,b,q,xi,P_x,varep,eta,C,Omega)
g = (a*xi**2)/2+(b*xi**4)/4+(q*xi**6)/6+P_x**2/(2*epsilon)+(eta*P_x**4)/4+C*P_x*xi-(Omega*(P_x*xi)**2)/2
g
g.subs([(epsilon,1/r_p),(eta,u_p),(C,-gamma),(Omega,2*e),(xi,c),(P_x,p)])
fc = a*c**2+b*c**4+q*c**6
fp = alpha*p**2+beta*p**4+gamma*p**6-E*p
fcp = -Omega*(p*c)**2
fc
fp
fcp
# fc = fc.subs([(a_0*(T-T_c),a)])#,6e4),(b,10**4),(q,9e7)])
# fc
fp = fp.subs([(gamma,0)])#,(alpha_0*(T-T_p),alpha),(beta,10**4)])
fp
collect((fc+fp+fcp).diff(c),c)
pc = solve((fc+fp+fcp).diff(c),p)[1]
pc
solve((fc+fp+fcp).diff(p),E)[0]
(solve((fc+fp+fcp).diff(p),E)[0]).subs(p,pc)
simplify(series((solve((fc+fp+fcp).diff(p),E)[0]).subs(p,pc),c,n=7))
def Ecc(a,b,q,c,Omega,alpha,beta):
return [2*np.sqrt((a+2*b*c**2+3*q*c**4)/Omega)*(alpha-Omega*c**2+(2*beta/Omega)*(a+2*b*c**2+3*q*c**4)),
(Omega*a**3*(4*beta*(a/Omega)**(3/2)+2*alpha*np.sqrt(a/Omega))
+2*(a*c)**2*np.sqrt(a/Omega)*(-Omega**2*a+Omega*alpha*b+6*a*b*beta)
+a*c**4*np.sqrt(a/Omega)*(Omega*a*(-2*Omega*b+3*alpha*q)-Omega*alpha*b**2+18*a**2*beta*q+6*a*b**2*beta)
            +c**6*np.sqrt(a/Omega)*(-3*q*(Omega*a)**2+Omega*a*b*(Omega*b-3*alpha*q)+Omega*alpha*b**3+18*a**2*b*beta*q-2*a*b**3*beta))/(Omega*a**3)]
def coeff0(a,alpha,beta,Omega):
return 4*beta*(a/Omega)**(3/2)+2*alpha*np.sqrt(a/Omega)
def coeff2(a,Omega,alpha,b,beta):
return 2*a**2*np.sqrt(a/Omega)*(-Omega**2*a+Omega*alpha*b+6*a*b*beta)/(Omega*a**3)
def coeff4(a, Omega, b, alpha, q, beta):
return a*np.sqrt(a/Omega)*(Omega*a*(-2*Omega*b+3*alpha*q)-Omega*alpha*b**2+18*a**2*beta*q+6*a*b**2*beta)/(Omega*a**3)
def coeff6(a, Omega, b, alpha, q, beta):
return np.sqrt(a/Omega)*(-3*q*(Omega*a)**2+Omega*a*b*(Omega*b-3*alpha*q)+Omega*alpha*b**3+18*a**2*b*beta*q-2*a*b**3*beta)/(Omega*a**3)
a = 6e5*(116-114.1)
Omega = 8e4
q = 1
b = 1.01e3
alpha = 0.5e3
beta = 1.05e6
coeff0(a,alpha,beta,Omega)
coeff2(a,Omega,alpha,b,beta)
coeff4(a, Omega, b, alpha, q, beta)
coeff6(a, Omega, b, alpha, q, beta)
plt.figure(figsize=(11,8))
plt.plot(np.linspace(-17.5,17.5,201),Ecc(a,b,q,np.linspace(-17.5,17.5,201),Omega,alpha,beta)[0],label='$\mathregular{E}$')
# plt.plot(np.linspace(-17.5,17.5,201),Ecc(a,b,q,np.linspace(-17.5,17.5,201),Omega,alpha,beta)[1])
plt.scatter(0,Ecc(a,b,q,0,Omega,alpha,beta)[0],label='$\mathregular{E_{th}}$')
plt.ylim(2.259e8,2.2595e8),plt.xlim(-4,4)
plt.xlabel(c,fontsize=18)
plt.ylabel(E,fontsize=18,rotation='horizontal',labelpad=25)
plt.legend(loc='lower right',fontsize=18);
150000+2.258e8
plt.figure(figsize=(11,8))
plt.scatter(np.linspace(-17.5,17.5,201),Ecc(6e4*(116-114.1),10**4,9e7,np.linspace(-17.5,17.5,201),10**10,0.6e9,10**4)[1],color='r',label='expand')
plt.ylim(-0.5e14,0.25e14),plt.xlim(-5,5)
plt.xlabel('c',fontsize=18)
plt.ylabel('E',rotation='horizontal',labelpad=25,fontsize=18);
def quadterm(alpha,Omega,Eth,E,a):
return [np.sqrt((Eth-E)/np.sqrt(4*Omega*a)),-np.sqrt((Eth-E)/np.sqrt(4*Omega*a))]
# np.linspace(0,60e6,1000)
plt.plot(np.linspace(0,60e6,1000),quadterm(0.6e9,10**10,23663663.66366366,np.linspace(0,60e6,1000),6e4*(116-114.1))[0],'b')
plt.plot(np.linspace(0,60e6,1000),quadterm(0.6e9,10**10,23663663.66366366,np.linspace(0,60e6,1000),6e4*(116-114.1))[1],'r');
plt.figure(figsize=(11,8))
plt.scatter(Ecc(6e4*(116-114.1),10**4,9e7,np.linspace(7.5,17.5),10**10,0.6e9,10**4)[0],np.linspace(7.5,17.5),color='r',label='no expand')
plt.ylim(0,35)
plt.xlabel('E',fontsize=18)
plt.ylabel('c',rotation='horizontal',labelpad=25,fontsize=18);
(fc+fp+fcp).diff(p)
solve((fc+fp+fcp).diff(p),c)[1]
expand(simplify(pc.subs(c,solve((fc+fp+fcp).diff(p),c)[1])),p)
simplify(solve(expand(simplify(pc.subs(c,solve((fc+fp+fcp).diff(p),c)[1])),p)-p,E)[1])
def Epp(a,b,q,alpha,beta,Omega,p):
return (2*p*(Omega*b+3*alpha*q+6*beta*q*p**2)+2*Omega*p*np.sqrt(3*Omega*q*p**2-3*a*q+b**2))/(3*q)
plt.figure(figsize=(11,8))
plt.scatter(Epp(6e4*(116-114.1),10**4,9e7,0.6e9,10**4,10**11,np.linspace(0,0.005))/10**7,np.linspace(0,0.005),color='r',label='T = 116')
plt.scatter(Epp(6e4*(110-114.1),10**4,9e7,0.6e9,10**4,10**11,np.linspace(0,0.005))/10**7,np.linspace(0,0.005),color='g',label='T = 110')
plt.scatter(Epp(6e4*(105-114.1),10**4,9e7,0.6e9,10**4,10**11,np.linspace(0,0.005))/10**7,np.linspace(0,0.005),color='b',label='T = 105')
plt.ylim(0,0.006),plt.xlim(0,20)
plt.xlabel('E',fontsize=18)
plt.ylabel('p',rotation='horizontal',labelpad=25,fontsize=18)
plt.legend(loc='upper right',fontsize=16);
def ppp(a,b,q,Omega,c):
return np.sqrt((a+2*b*c**2+3*q*c**4)/Omega)
plt.figure(figsize=(11,8))
plt.scatter(np.linspace(0,35),ppp(6e4*(116-114.1),10**4,9e7,10**10,np.linspace(0,35)))
plt.ylim(-2,220),plt.xlim(-1,36)
plt.xlabel('c',fontsize=18)
plt.ylabel('p',rotation='horizontal',labelpad=25,fontsize=18);
simplify((solve((fp+fcp).diff(p),E)[0]).subs(p,solve((fc+fp+fcp).diff(c),p)[1]))
def Ec(om, a, b, c, q, alp, beta):
return (2/om)*np.sqrt((a+2*b*c**2+3*q*c**4)/om)*(alp*om-(c*om)**2+2*beta*a+4*b*beta*c**2+6*beta*q*c**4)
expand((alpha-E/(2*p)+2*beta*p**2)**2)
plt.scatter(Ec(10**11,6e4,10**4,np.linspace(0,15,200),9e7,0.6e9,10**4),np.linspace(0,15,200),marker='s');
def Eth(T):
E = []
for i in T:
if i > 114.1:
E.append(0.64e6*(i-100)*np.sqrt(i-114.1))
else:
E.append(0)
return E
plt.plot(np.linspace(113,119,10),Eth(np.linspace(113,119,10)))
plt.xlim(113,122),plt.ylim(0,3e7);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Generalized Landau Model of Ferroelectric Liquid Crystals
Step2: $f(c,p) = \dfrac{1}{2}r_{c}c^{2}+\dfrac{1}{4}u_{c}c^{4}+\dfrac{1}{6}v_{c}c^{6}+\dfrac{1}{2}r_{p}p^{2}+\dfrac{1}{4}u_{p}p^{4}+\dfrac{1}{6}v_{p}p^{6}-\gamma cp-\dfrac{1}{2}ec^{2}p^{2}-Ep$
Step3: Electrically Induced Tilt in Achiral Bent-Core Liquid Crystals
Step4: Mapping
Step5: $\dfrac{\partial f}{\partial c} = 0$
Step6: Solve for $p$
Step7: Solve $\dfrac{\partial f}{\partial p} = 0$ for $E$
Step8: Sub $p(c)$ into $E(c,p)$
Step9: Define above equation for plotting
Step10: $\dfrac{\partial f}{\partial p} = 0$
Step11: Solve for $c$
Step12: Sub $c(p)$ into $p(c)$
Step13: Solve for $E(p)$
Step14: Define above equation for plotting
|
8,927
|
<ASSISTANT_TASK:>
Python Code:
import pints
import pints.toy as toy
import numpy as np
import matplotlib.pyplot as plt
# Load a forward model
model = toy.LogisticModel()
# Create some toy data
r = 0.015
k = 500
real_parameters = [r, k]
times = np.linspace(0, 1000, 100)
signal_values = model.simulate(real_parameters, times)
# Add independent Gaussian noise
sigma = 10
observed_values = signal_values + pints.noise.independent(sigma, signal_values.shape)
# Plot
plt.plot(times,signal_values,label = 'signal')
plt.plot(times,observed_values,label = 'observed')
plt.xlabel('Time')
plt.ylabel('Values')
plt.legend()
plt.show()
# Create an object with links to the model and time series
problem = pints.SingleOutputProblem(model, times, observed_values)
# Create a log-likelihood function (adds an extra parameter!)
log_likelihood = pints.GaussianLogLikelihood(problem)
# Create a uniform prior over both the parameters and the new noise variable
log_prior = pints.UniformLogPrior(
[0.01, 400, sigma * 0.5],
[0.02, 600, sigma * 1.5])
# Create a nested ellipsoidal rejectection sampler
sampler = pints.NestedController(log_likelihood, log_prior, method=pints.NestedEllipsoidSampler)
# Set number of iterations
sampler.set_iterations(8000)
# Set the number of posterior samples to generate
sampler.set_n_posterior_samples(1600)
# Do proposals in parallel
sampler.set_parallel(True)
# Use dynamic enlargement factor
sampler._sampler.set_dynamic_enlargement_factor(1)
samples = sampler.run()
print('Done!')
# Plot output
import pints.plot
pints.plot.histogram([samples], ref_parameters=[r, k, sigma])
plt.show()
pints.plot.series(samples[:100], problem)
plt.show()
print('marginal log-likelihood = ' + str(sampler.marginal_log_likelihood())
+ ' ± ' + str(sampler.marginal_log_likelihood_standard_deviation()))
print('effective sample size = ' + str(sampler.effective_sample_size()))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create the nested sampler that will be used to sample from the posterior.
Step2: Run the sampler!
Step3: Plot posterior samples versus true parameter values (dashed lines)
Step4: Plot posterior predictive simulations versus the observed data
Step5: Marginal likelihood estimate
Step6: Effective sample size
|
8,928
|
<ASSISTANT_TASK:>
Python Code:
import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
with open('anna.txt', 'r') as f:
text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
encoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
text[:100]
encoded[:100]
len(vocab)
def get_batches(arr, n_seqs, n_steps):
'''Create a generator that returns batches of size
n_seqs x n_steps from arr.
Arguments
---------
arr: Array you want to make batches from
n_seqs: Batch size, the number of sequences per batch
n_steps: Number of sequence steps per batch
'''
# Get the number of characters per batch and number of batches we can make
characters_per_batch = n_seqs * n_steps
n_batches = len(arr)//characters_per_batch
# Keep only enough characters to make full batches
arr = arr[:n_batches * characters_per_batch]
# Reshape into n_seqs rows
arr = arr.reshape((n_seqs, -1))
for n in range(0, arr.shape[1], n_steps):
# The features
x = arr[:, n:n+n_steps]
# The targets, shifted by one
y = np.zeros_like(x)
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
yield x, y
batches = get_batches(encoded, 10, 50)
x, y = next(batches)
print('x\n', x[:10, :10])
print('\ny\n', y[:10, :10])
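A quick sanity check of the batching logic, as a self-contained copy of the reshaping steps in `get_batches` on a toy array:

```python
# Verify batch shapes and the one-step target shift on a toy array.
import numpy as np

arr = np.arange(103)            # deliberately not a multiple of the batch size
n_seqs, n_steps = 2, 10
per_batch = n_seqs * n_steps
n_batches = len(arr) // per_batch                 # 5 full batches
arr = arr[:n_batches * per_batch].reshape((n_seqs, -1))
x = arr[:, 0:n_steps]                             # first batch of features
y = np.zeros_like(x)
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]           # targets shifted by one
print(x.shape, y[0, :3])
```

The targets are the features shifted left by one step, with the last column wrapped around to the sequence start.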
def build_inputs(batch_size, num_steps):
''' Define placeholders for inputs, targets, and dropout
Arguments
---------
batch_size: Batch size, number of sequences per batch
num_steps: Number of sequence steps in a batch
'''
# Declare placeholders we'll feed into the graph
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
# Keep probability placeholder for drop out layers
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
return inputs, targets, keep_prob
def build_lstm(lstm_size, num_layers, batch_size, keep_prob):
''' Build LSTM cell.
Arguments
---------
keep_prob: Scalar tensor (tf.placeholder) for the dropout keep probability
lstm_size: Size of the hidden layers in the LSTM cells
num_layers: Number of LSTM layers
batch_size: Batch size
'''
### Build the LSTM Cell
    # Build each layer as its own cell with dropout; reusing a single cell
    # object via [drop] * num_layers would make the layers share weights and
    # fails on newer TF 1.x releases.
    def build_cell():
        lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
        return tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
    # Stack up multiple LSTM layers, for deep learning
    cell = tf.contrib.rnn.MultiRNNCell([build_cell() for _ in range(num_layers)])
initial_state = cell.zero_state(batch_size, tf.float32)
return cell, initial_state
def build_output(lstm_output, in_size, out_size):
''' Build a softmax layer, return the softmax output and logits.
Arguments
---------
x: Input tensor
in_size: Size of the input tensor, for example, size of the LSTM cells
out_size: Size of this softmax layer
'''
# Reshape output so it's a bunch of rows, one row for each step for each sequence.
# That is, the shape should be batch_size*num_steps rows by lstm_size columns
seq_output = tf.concat(lstm_output, axis=1)
x = tf.reshape(seq_output, [-1, in_size])
# Connect the RNN outputs to a softmax layer
with tf.variable_scope('softmax'):
softmax_w = tf.Variable(tf.truncated_normal((in_size, out_size), stddev=0.1))
softmax_b = tf.Variable(tf.zeros(out_size))
# Since output is a bunch of rows of RNN cell outputs, logits will be a bunch
# of rows of logit outputs, one for each step and sequence
logits = tf.matmul(x, softmax_w) + softmax_b
# Use softmax to get the probabilities for predicted characters
out = tf.nn.softmax(logits, name='predictions')
return out, logits
def build_loss(logits, targets, lstm_size, num_classes):
''' Calculate the loss from the logits and the targets.
Arguments
---------
logits: Logits from final fully connected layer
targets: Targets for supervised learning
lstm_size: Number of LSTM hidden units
num_classes: Number of classes in targets
'''
# One-hot encode targets and reshape to match logits, one row per batch_size per step
y_one_hot = tf.one_hot(targets, num_classes)
y_reshaped = tf.reshape(y_one_hot, logits.get_shape())
# Softmax cross entropy loss
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped)
loss = tf.reduce_mean(loss)
return loss
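The loss above one-hot encodes the targets and averages the softmax cross-entropy over all rows. The same math in plain numpy (a sketch, not TensorFlow):

```python
# Softmax cross-entropy with one-hot targets, mean over rows.
import numpy as np

logits = np.array([[2.0, 0.5, 0.1],
                   [0.2, 1.5, 0.3]])
targets = np.array([0, 1])
num_classes = 3

y_one_hot = np.eye(num_classes)[targets]
shifted = logits - logits.max(axis=1, keepdims=True)   # numerical stability
log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
loss = -(y_one_hot * log_probs).sum(axis=1).mean()
print(loss)
```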
def build_optimizer(loss, learning_rate, grad_clip):
''' Build optmizer for training, using gradient clipping.
Arguments:
loss: Network loss
learning_rate: Learning rate for optimizer
'''
# Optimizer for training, using gradient clipping to control exploding gradients
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
return optimizer
class CharRNN:
def __init__(self, num_classes, batch_size=64, num_steps=50,
lstm_size=128, num_layers=2, learning_rate=0.001,
grad_clip=5, sampling=False):
# When we're using this network for sampling later, we'll be passing in
# one character at a time, so providing an option for that
if sampling == True:
batch_size, num_steps = 1, 1
else:
batch_size, num_steps = batch_size, num_steps
tf.reset_default_graph()
# Build the input placeholder tensors
self.inputs, self.targets, self.keep_prob = build_inputs(batch_size, num_steps)
# Build the LSTM cell
cell, self.initial_state = build_lstm(lstm_size, num_layers, batch_size, self.keep_prob)
### Run the data through the RNN layers
# First, one-hot encode the input tokens
x_one_hot = tf.one_hot(self.inputs, num_classes)
# Run each sequence step through the RNN and collect the outputs
outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=self.initial_state)
self.final_state = state
# Get softmax predictions and logits
self.prediction, self.logits = build_output(outputs, lstm_size, num_classes)
# Loss and optimizer (with gradient clipping)
self.loss = build_loss(self.logits, self.targets, lstm_size, num_classes)
self.optimizer = build_optimizer(self.loss, learning_rate, grad_clip)
batch_size = 100 # Sequences per batch
num_steps = 100 # Number of sequence steps per batch
lstm_size = 512 # Size of hidden layers in LSTMs
num_layers = 2 # Number of LSTM layers
learning_rate = 0.001 # Learning rate
keep_prob = 0.5 # Dropout keep probability
epochs = 20
# Save every N iterations
save_every_n = 200
model = CharRNN(len(vocab), batch_size=batch_size, num_steps=num_steps,
lstm_size=lstm_size, num_layers=num_layers,
learning_rate=learning_rate)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/______.ckpt')
counter = 0
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for x, y in get_batches(encoded, batch_size, num_steps):
counter += 1
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: keep_prob,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.loss,
model.final_state,
model.optimizer],
feed_dict=feed)
end = time.time()
print('Epoch: {}/{}... '.format(e+1, epochs),
'Training Step: {}... '.format(counter),
'Training loss: {:.4f}... '.format(batch_loss),
'{:.4f} sec/batch'.format((end-start)))
if (counter % save_every_n == 0):
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
tf.train.get_checkpoint_state('checkpoints')
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
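The `pick_top_n` helper above zeroes out all but the `top_n` largest probabilities, renormalizes, and samples from the result. A standalone sketch of the same trick (NumPy only; the function name and toy probabilities are illustrative):

```python
import numpy as np

def top_n_sample(preds, vocab_size, top_n=5):
    # Keep only the top_n largest probabilities, renormalize, then sample one index.
    p = np.squeeze(np.array(preds, dtype=float))
    p[np.argsort(p)[:-top_n]] = 0
    p = p / np.sum(p)
    return np.random.choice(vocab_size, 1, p=p)[0]

# With top_n=2, only the two most likely classes (indices 2 and 3) can ever be drawn.
probs = [0.05, 0.10, 0.25, 0.60]
drawn = {int(top_n_sample(probs, 4, top_n=2)) for _ in range(200)}
```

Restricting sampling to the top few classes keeps generated text coherent while still injecting randomness.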
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
samples = [c for c in prime]
model = CharRNN(len(vocab), lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
tf.train.latest_checkpoint('checkpoints')
checkpoint = tf.train.latest_checkpoint('checkpoints')
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i600_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i1200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
Step2: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
Step3: And we can see the characters encoded as integers.
Step4: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
Step5: Making training mini-batches
Step6: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
Step7: If you implemented get_batches correctly, the above output should look something like
Step8: LSTM Cell
Step9: RNN Output
Step10: Training loss
Step11: Optimizer
Step12: Build the network
Step13: Hyperparameters
Step14: Time for training
Step15: Saved checkpoints
Step16: Sampling
Step17: Here, pass in the path to a checkpoint and sample from the network.
|
8,929
|
<ASSISTANT_TASK:>
Python Code:
# Pure python modules and jupyter notebook functionality
# first you should import the third-party python modules which you'll use later on
# the first line enables that figures are shown inline, directly in the notebook
%pylab inline
import os
import datetime as dt
from os import path
import sys
from matplotlib import pyplot as plt
# try to auto-configure the path, -will work in all cases where doc and data
# are checked out at same level
shyft_data_path = path.abspath("../../../shyft-data")
if path.exists(shyft_data_path) and 'SHYFT_DATA' not in os.environ:
os.environ['SHYFT_DATA']=shyft_data_path
from shyft.time_series import IntVector, DoubleVector
import numpy as np
# works:
iv = IntVector([0, 1, 4, 5])
print(iv)
# won't work:
# iv[2] = 2.2
# see the DoubleVector
dv = DoubleVector([1.0, 3, 4.5, 10.110293])
print(dv)
dv[0] = 2.3
IV1 = IntVector([int(i) for i in np.arange(1000)])
# once the shyft_path is set correctly, you should be able to import shyft modules
import shyft
# if you have problems here, it may be related to having your LD_LIBRARY_PATH
# pointing to the appropriate libboost_python libraries (.so files)
from shyft import api
from shyft.repository.default_state_repository import DefaultStateRepository
from shyft.orchestration.configuration.yaml_configs import YAMLSimConfig
from shyft.orchestration.simulators.config_simulator import ConfigSimulator
# here is the *.yaml file that configures the simulation:
config_file_path = os.path.abspath("../nea-example/nea-config/neanidelva_simulation.yaml")
cfg = YAMLSimConfig(config_file_path, "neanidelva")
simulator = ConfigSimulator(cfg)
# run the model, and we'll just pull the `api.model` from the `simulator`
simulator.run()
model = simulator.region_model
# First, we can also plot the statistical distribution of the
# discharges over the sub-catchments
from shyft.time_series import TsVector,IntVector,TimeAxis,Calendar,time,UtcPeriod
# api.TsVector() is a a strongly typed list of time-series,that supports time-series vector operations.
discharge_ts = TsVector() # except from the type, it just works as a list()
# loop over each catchment, and extract the time-series (we keep them as such for now)
for cid in model.catchment_ids: # fill in discharge time series for all subcatchments
discharge_ts.append(model.statistics.discharge([int(cid)]))
# get the percentiles we want, note -1 = arithmetic average
percentiles= IntVector([10,25,50,-1,75,90])
# create a Daily(for the fun of it!) time-axis for the percentile calculations
# (our simulation could be hourly)
ta_statistics = TimeAxis(model.time_axis.time(0), Calendar.DAY, 365)
# then simply get out a new set of time-series, corresponding to the percentiles we specified
# note that discharge_ts is of the TsVector type, not a simple list as in our first example above
discharge_percentiles = discharge_ts.percentiles(ta_statistics, percentiles)
#utilize that we know that all the percentile time-series share a common time-axis
ts_timestamps = [dt.datetime.utcfromtimestamp(p.start) for p in ta_statistics]
# Then we can make another plot of the percentile data for the sub-catchments
fig, ax = plt.subplots(figsize=(20,15))
# plot each discharge percentile in the discharge_percentiles
for i,ts_percentile in enumerate(discharge_percentiles):
clr='k'
if percentiles[i] >= 0.0:
clr= str(float(percentiles[i]/100.0))
ax.plot(ts_timestamps, ts_percentile.values, label = "{}".format(percentiles[i]), color=clr)
# also plot catchment discharge along with the statistics
# notice that we use .average(ta_statistics) to properly align true-average values to time-axis
ax.plot(ts_timestamps, discharge_ts[0].average(ta_statistics).values,
label = "CID {}".format(model.catchment_ids[0]),
linewidth=2.0, alpha=0.7, color='b')
fig.autofmt_xdate()
ax.legend(title="Percentiles")
ax.set_ylabel("discharge [m3 s-1]")
# a simple percentile plot, from orchestration looks nicer
from shyft.orchestration import plotting as splt
oslo = Calendar('Europe/Oslo')
fig, ax = plt.subplots(figsize=(16,8))
splt.set_calendar_formatter(oslo)
h, ph = splt.plot_np_percentiles(ts_timestamps,[ p.values for p in discharge_percentiles],
base_color=(0.03,0.01,0.3))
ax = plt.gca()
ax.set_ylabel("discharge [m3 s-1]")
plt.title("CID {}".format(model.catchment_ids[0]))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. The key classes within the api and being "pythonic"
Step2: Note, however, that these containers are very basic lists. They don't have methods such as .pop and .index. In generally, they are meant just to be used as first class containers for which you'll pass data into before passing it to Shyft. For those familiar with python and numpy, you can think of it similar to the advantages of using numpy arrays over pure python lists. The numpy array is far more efficient. In the case of Shyft, it is similar, and the IntVector, DoubleVector and other specialized vector types are much more efficient.
Step3: TODO
Step4: 1. shyft.time_series.TimeSeries
|
8,930
|
<ASSISTANT_TASK:>
Python Code:
# The Developer Key is used to retrieve a discovery document containing the
# non-public Full Circle Query v2 API. This is used to build the service used
# in the samples to make API requests. Please see the README for instructions
# on how to configure your Google Cloud Project for access to the Full Circle
# Query v2 API.
DEVELOPER_KEY = 'xxxx' #'INSERT_DEVELOPER_KEY_HERE'
# The client secrets file can be downloaded from the Google Cloud Console.
CLIENT_SECRETS_FILE = 'adh-key.json' #'Make sure you have correctly renamed this file and you have uploaded it in this colab'
import json
import sys
import argparse
import pprint
import random
import datetime
import pandas as pd
import plotly.plotly as py
import plotly.graph_objs as go
from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient import discovery
from oauthlib.oauth2.rfc6749.errors import InvalidGrantError
from google.auth.transport.requests import AuthorizedSession
from google.auth.transport.requests import Request
from google.oauth2.credentials import Credentials
from plotly.offline import iplot
from plotly.graph_objs import Contours, Histogram2dContour, Marker, Scatter
from googleapiclient.errors import HttpError
from google.colab import auth
auth.authenticate_user()
print('Authenticated')
# Allow plot images to be displayed
%matplotlib inline
# Functions
def enable_plotly_in_cell():
import IPython
from plotly.offline import init_notebook_mode
display(IPython.core.display.HTML('''
<script src="/static/components/requirejs/require.js"></script>
'''))
init_notebook_mode(connected=False)
#!/usr/bin/python
#
# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Utilities used to step through OAuth 2.0 flow.
These are intended to be used for stepping through samples for the Full Circle
Query v2 API.
_APPLICATION_NAME = 'ADH Campaign Overlap'
_CREDENTIALS_FILE = 'fcq-credentials.json' #just to rewrite a new credential file, not for users to provide
_SCOPES = 'https://www.googleapis.com/auth/adsdatahub'
_DISCOVERY_URL_TEMPLATE = 'https://%s/$discovery/rest?version=%s&key=%s'
_FCQ_DISCOVERY_FILE = 'fcq-discovery.json'
_FCQ_SERVICE = 'adsdatahub.googleapis.com'
_FCQ_VERSION = 'v1'
_REDIRECT_URI = 'urn:ietf:wg:oauth:2.0:oob'
_SCOPE = ['https://www.googleapis.com/auth/adsdatahub']
_TOKEN_URI = 'https://accounts.google.com/o/oauth2/token'
MAX_PAGE_SIZE = 50
def _GetCredentialsFromInstalledApplicationFlow():
Get new credentials using the installed application flow.
flow = InstalledAppFlow.from_client_secrets_file(
CLIENT_SECRETS_FILE, scopes=_SCOPE)
flow.redirect_uri = _REDIRECT_URI # Set the redirect URI used for the flow.
auth_url, _ = flow.authorization_url(prompt='consent')
print ('Log into the Google Account you use to access the adsdatahub Query '
'v1 API and go to the following URL:\n%s\n' % auth_url)
    print('After approving the token, enter the verification code (if specified).')
    code = input('Code: ')
try:
flow.fetch_token(code=code)
except InvalidGrantError as ex:
        print('Authentication has failed: %s' % ex)
sys.exit(1)
credentials = flow.credentials
_SaveCredentials(credentials)
return credentials
def _LoadCredentials():
Loads and instantiates Credentials from JSON credentials file.
with open(_CREDENTIALS_FILE, 'rb') as handler:
stored_creds = json.loads(handler.read())
creds = Credentials(client_id=stored_creds['client_id'],
client_secret=stored_creds['client_secret'],
token=None,
refresh_token=stored_creds['refresh_token'],
token_uri=_TOKEN_URI)
return creds
def _SaveCredentials(creds):
Save credentials to JSON file.
stored_creds = {
'client_id': getattr(creds, '_client_id'),
'client_secret': getattr(creds, '_client_secret'),
'refresh_token': getattr(creds, '_refresh_token')
}
    with open(_CREDENTIALS_FILE, 'w') as handler:
handler.write(json.dumps(stored_creds))
def GetCredentials():
Get stored credentials if they exist, otherwise return new credentials.
If no stored credentials are found, new credentials will be produced by
stepping through the Installed Application OAuth 2.0 flow with the specified
client secrets file. The credentials will then be saved for future use.
Returns:
A configured google.oauth2.credentials.Credentials instance.
try:
creds = _LoadCredentials()
creds.refresh(Request())
except IOError:
creds = _GetCredentialsFromInstalledApplicationFlow()
return creds
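`GetCredentials` follows a load-or-create pattern: try the cached credentials, refresh them, and fall back to the interactive flow on any failure. A minimal, library-free sketch of that caching pattern (file name and token fields are illustrative, not the real ADH credential format):

```python
import json
import os
import tempfile

def load_or_create_token(path, create_fn):
    # Try the cached token file first; on any failure, mint a new one
    # via create_fn() and cache it for the next run.
    try:
        with open(path) as fh:
            return json.load(fh)
    except (OSError, ValueError):
        token = create_fn()
        with open(path, 'w') as fh:
            json.dump(token, fh)
        return token

# First call invokes create_fn and writes the cache; second call reads it back.
cache_path = os.path.join(tempfile.mkdtemp(), 'token.json')
first = load_or_create_token(cache_path, lambda: {'refresh_token': 'abc'})
second = load_or_create_token(cache_path, lambda: {'refresh_token': 'NEVER'})
```

The second call never runs its `create_fn`, which is exactly why the sample only prompts for an OAuth code once.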
def GetDiscoveryDocument():
Downloads the adsdatahub v1 discovery document.
Downloads the adsdatahub v1 discovery document to fcq-discovery.json
if it is accessible. If the file already exists, it will be overwritten.
Raises:
ValueError: raised if the discovery document is inaccessible for any reason.
credentials = GetCredentials()
discovery_url = _DISCOVERY_URL_TEMPLATE % (
_FCQ_SERVICE, _FCQ_VERSION, DEVELOPER_KEY)
auth_session = AuthorizedSession(credentials)
discovery_response = auth_session.get(discovery_url)
if discovery_response.status_code == 200:
        with open(_FCQ_DISCOVERY_FILE, 'w') as handler:
            handler.write(discovery_response.text)
else:
        raise ValueError('Unable to retrieve discovery document for api name "%s" '
                         'and version "%s" via discovery URL: %s'
                         % (_FCQ_SERVICE, _FCQ_VERSION, discovery_url))
def GetService():
Builds a configured adsdatahub v1 API service.
Returns:
A googleapiclient.discovery.Resource instance configured for the adsdatahub v1 service.
credentials = GetCredentials()
discovery_url = _DISCOVERY_URL_TEMPLATE % (
_FCQ_SERVICE, _FCQ_VERSION, DEVELOPER_KEY)
service = discovery.build(
'adsdatahub', _FCQ_VERSION, credentials=credentials,
discoveryServiceUrl=discovery_url)
return service
def GetServiceFromDiscoveryDocument():
Builds a configured Full Circle Query v2 API service via discovery file.
Returns:
A googleapiclient.discovery.Resource instance configured for the Full Circle
Query API v2 service.
credentials = GetCredentials()
with open(_FCQ_DISCOVERY_FILE, 'rb') as handler:
discovery_doc = handler.read()
service = discovery.build_from_document(
service=discovery_doc, credentials=credentials)
return service
try:
full_circle_query = GetService()
except IOError as ex:
print ('Unable to create ads data hub service - %s' % ex)
print ('Did you specify the client secrets file in samples_util.py?')
sys.exit(1)
try:
# Execute the request.
response = full_circle_query.customers().list().execute()
except HttpError as e:
print (e)
sys.exit(1)
if 'customers' in response:
print ('ADH API Returned {} Ads Data Hub customers for the current user!'.format(len(response['customers'])))
for customer in response['customers']:
print(json.dumps(customer))
else:
print ('No customers found for current user.')
#@title Define ADH configuration parameters
customer_id = 1 #@param
query_name = 'test1' #@param
big_query_project = 'adh-scratch' #@param Destination Project ID
big_query_dataset = 'test' #@param Destination Dataset
big_query_destination_table = 'freqanalysis_test' #@param Destination Table
big_query_destination_table_affinity = 'freqanalysis_test_affinity' #@param Affinity Destination Table
big_query_destination_table_inmarket = 'freqanalysis_test_inmarket' #@param In-market Destination Table
big_query_destination_table_age_gender = 'freqanalysis_test_age_gender' #@param Age/Gender Destination Table
start_date = '2019-12-01' #@param {type:"date", allow-input: true}
end_date = '2019-12-31' #@param {type:"date", allow-input: true}
max_freq = 100 #@param {type:"integer", allow-input: true}
cpm = 12#@param {type:"number", allow-input: true}
id_type = "campaign_id" #@param ["", "advertiser_id", "campaign_id", "placement_id", "ad_id"] {type: "string", allow-input: false}
IDs = "12345678" #@param {type: "string", allow-input: true}
def df_calc_fields(df):
df['ctr'] = df.clicks / df.impressions
df['cpc'] = df.cost / df.clicks
df['cumulative_clicks'] = df.clicks.cumsum()
df['cumulative_impressions'] = df.impressions.cumsum()
df['cumulative_reach'] = df.reach.cumsum()
df['cumulative_cost'] = df.cost.cumsum()
df['coverage_clicks'] = df.cumulative_clicks / df.clicks.sum()
df['coverage_impressions'] = df.cumulative_impressions / df.impressions.sum()
df['coverage_reach'] = df.cumulative_reach / df.reach.sum()
return df
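`df_calc_fields` derives CTR, CPC, and cumulative coverage shares from the raw per-frequency counts; by construction each coverage column ends at 1.0. A toy check of the core calculations (column names mirror the ADH output schema above, values are made up):

```python
import pandas as pd

# Toy per-frequency counts, one row per frequency bucket.
toy = pd.DataFrame({
    'frequency':   [1, 2, 3],
    'impressions': [100, 60, 40],
    'clicks':      [10, 9, 1],
    'reach':       [100, 30, 13],
    'cost':        [5.0, 3.0, 2.0],
})
toy['ctr'] = toy.clicks / toy.impressions
toy['cumulative_impressions'] = toy.impressions.cumsum()
toy['coverage_impressions'] = toy.cumulative_impressions / toy.impressions.sum()
```

The same `cumsum` / `sum` pattern applies to clicks, reach, and cost.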
# Build the query
dc = {}
if (IDs == ""):
dc['ID_filters'] = ""
else:
dc['id_type'] = id_type
dc['IDs'] = IDs
dc['ID_filters'] = '''AND {id_type} IN ({IDs})'''.format(**dc)
#create global query list
global_query_name = []
q1 =
WITH
imp_u_clicks AS (
SELECT
user_id,
query_id.time_usec AS interaction_time,
0 AS cost,
'imp' AS interaction_type
FROM
adh.google_ads_impressions
WHERE
user_id != '0' {ID_filters}
q2 =
UNION ALL (
SELECT
user_id,
click_id.time_usec AS interaction_time,
advertiser_click_cost_usd AS cost,
'click' AS interaction_type
FROM
adh.google_ads_clicks
WHERE
user_id != '0' AND impression_data.{id_type} IN ({IDs}) ) ),
q3 =
user_level_data AS (
SELECT
user_id,
SUM(IF(interaction_type = 'imp',
1,
0)) AS impressions,
SUM(IF(interaction_type = 'click',
1,
0)) AS clicks,
SUM(cost) AS cost
FROM
imp_u_clicks
GROUP BY
user_id)
q4 =
SELECT
impressions AS frequency,
SUM(clicks) AS clicks,
SUM(impressions) AS impressions,
COUNT(*) AS reach,
SUM(cost) AS cost
FROM
user_level_data
GROUP BY
1
ORDER BY
frequency ASC
query_text = (q1 + q2 + q3 + q4).format(**dc)
print(query_text)
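The query text is assembled by first rendering the filter clause from the parameter dict, then splicing it into the larger SQL template with `str.format(**dc)`. A minimal illustration of the pattern (fragment text is illustrative, not the real ADH schema):

```python
# The filter clause is itself rendered from the dict, then spliced into the
# larger template; extra dict keys passed to format() are simply ignored.
params = {'id_type': 'campaign_id', 'IDs': '12345678'}
params['ID_filters'] = 'AND {id_type} IN ({IDs})'.format(**params)
template = "SELECT user_id FROM impressions WHERE user_id != '0' {ID_filters}"
query = template.format(**params)
```

Because the IDs are interpolated as plain text, this style is only safe for trusted, operator-supplied values.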
import datetime
try:
full_circle_query = GetService()
except IOError as ex:
    print('Unable to create ads data hub service - %s' % ex)
    print('Did you specify the client secrets file?')
sys.exit(1)
d = datetime.datetime.today()
query_create_body = {
'name': query_name + '_' + d.strftime('%d-%m-%Y') + '_freq',
'title': query_name + '_' + d.strftime('%d-%m-%Y') + '_freq',
'queryText': query_text
}
try:
# Execute the request.
new_query = full_circle_query.customers().analysisQueries().create(body=query_create_body, parent='customers/' + str(customer_id)).execute()
global_query_name.append(new_query["name"])
except HttpError as e:
    print(e)
sys.exit(1)
print('New query %s created for customer ID "%s":' % (query_name, customer_id))
print(json.dumps(new_query))
qb1 =
SELECT
a.affinity_category,
COUNT(imp.query_id.time_usec) AS impression,
COUNT(clk.click_id.time_usec) AS clicks,
COUNT(DISTINCT imp.user_id) AS reach,
COUNT(imp.query_id.time_usec)/COUNT(DISTINCT imp.user_id) AS frequency,
COUNT(clk.click_id.time_usec) /COUNT(imp.query_id.time_usec) AS ctr
FROM
adh.google_ads_impressions imp, UNNEST(affinity) aff
LEFT JOIN adh.google_ads_clicks clk USING (query_id)
JOIN adh.affinity a on aff = a.affinity_id
WHERE a.affinity_category != '' {ID_filters}
GROUP BY 1
ORDER BY 3 DESC
qb2 =
SELECT
a.in_market_category,
COUNT(imp.query_id.time_usec) AS impression,
COUNT(clk.click_id.time_usec) AS clicks,
COUNT(DISTINCT imp.user_id) AS reach,
COUNT(imp.query_id.time_usec)/COUNT(DISTINCT imp.user_id) AS frequency,
COUNT(clk.click_id.time_usec) /COUNT(imp.query_id.time_usec) AS ctr
FROM
adh.google_ads_impressions imp, UNNEST(in_market) aff
LEFT JOIN adh.google_ads_clicks clk USING (query_id)
JOIN adh.in_market a on aff = a.in_market_id
WHERE a.in_market_category != '' {ID_filters}
GROUP BY 1
ORDER BY 3 DESC
qb3 =
SELECT
gen.gender_name,
age.age_group_name,
COUNT(imp.query_id.time_usec) AS impression,
COUNT(clk.click_id.time_usec) AS clicks,
COUNT(DISTINCT imp.user_id) AS reach,
COUNT(imp.query_id.time_usec)/COUNT(DISTINCT imp.user_id) AS frequency,
COUNT(clk.click_id.time_usec) /COUNT(imp.query_id.time_usec) AS ctr
FROM
adh.google_ads_impressions imp
LEFT JOIN adh.google_ads_clicks clk USING (query_id)
LEFT JOIN adh.age_group age ON imp.demographics.age_group = age_group_id
LEFT JOIN adh.gender gen ON imp.demographics.gender = gender_id
WHERE {id_type} IN ({IDs})
GROUP BY 1,2
ORDER BY 1,2 DESC
import datetime
try:
full_circle_query = GetService()
except IOError as ex:
    print('Unable to create ads data hub service - %s' % ex)
    print('Did you specify the client secrets file?')
sys.exit(1)
# create request body method
d = datetime.datetime.today()
demoQuery = [qb1, qb2, qb3]
def createRequestBody(arg):
data = {}
data['name'] = query_name + '_' + d.strftime('%d-%m-%Y') + '_demo_' +str(arg)
data['title'] = query_name + '_' + d.strftime('%d-%m-%Y') + '_demo_' + str(arg)
data['queryText'] = demoQuery[arg].format(**dc)
return data
#create multiple query
for i in range(len(demoQuery)):
try:
# Execute the request.
queryBody = createRequestBody(i)
new_query = full_circle_query.customers().analysisQueries().create(body=queryBody, parent='customers/' + str(customer_id)).execute()
global_query_name.append(new_query["name"])
except HttpError as e:
        print(e)
sys.exit(1)
    print('New query %s created for customer ID "%s":' % (queryBody['name'], customer_id))
destination_table_full_path = big_query_project + '.' + big_query_dataset + '.' + big_query_destination_table
destination_table_full_path_affinity = big_query_project + '.' + big_query_dataset + '.' + big_query_destination_table_affinity
destination_table_full_path_inmarket = big_query_project + '.' + big_query_dataset + '.' + big_query_destination_table_inmarket
destination_table_full_path_age_gender = big_query_project + '.' + big_query_dataset + '.' + big_query_destination_table_age_gender
CUSTOMER_ID = customer_id
QUERY_NAME = query_name
DEST_TABLES = [destination_table_full_path, destination_table_full_path_affinity, destination_table_full_path_inmarket, destination_table_full_path_age_gender]
#Dates
format_str = '%Y-%m-%d' # The format
start_date_obj = datetime.datetime.strptime(start_date, format_str)
end_date_obj = datetime.datetime.strptime(end_date, format_str)
START_DATE = {
"year": start_date_obj.year,
"month": start_date_obj.month,
"day": start_date_obj.day
}
END_DATE = {
"year": end_date_obj.year,
"month": end_date_obj.month,
"day": end_date_obj.day
}
try:
full_circle_query = GetService()
except IOError as ex:
print('Unable to create ads data hub service - %s' % ex)
print('Did you specify the client secrets file?')
sys.exit(1)
query_start_body = {
'spec': {
'startDate': START_DATE,
'endDate': END_DATE
},
'destTable': '',
'customerId': CUSTOMER_ID
}
#run all queries
for i in range(len(global_query_name)):
try:
# Execute the request.
query_start_body['destTable'] = DEST_TABLES[i]
        operation = full_circle_query.customers().analysisQueries().start(body=query_start_body, name=global_query_name[i]).execute()
except HttpError as e:
print(e)
sys.exit(1)
print('Running query with name "%s" via the following operation:' % query_name)
print(json.dumps(operation))
import time
statusDone = False
while statusDone is False:
print("waiting for the job to complete...")
updatedOperation = full_circle_query.operations().get(name=operation['name']).execute()
    if updatedOperation.get('done') == True:
statusDone = True
time.sleep(5)
print("Job completed... Getting results")
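The loop above polls the long-running operation until its status reports `done`. A generic sketch of the same poll-until-done pattern with a timeout guard (the fake status function simulates an operation that finishes on the third poll):

```python
import time

def wait_until_done(get_status, poll_interval=0.01, timeout=5.0):
    # Poll get_status() until it reports done=True or the timeout elapses.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status.get('done'):
            return status
        time.sleep(poll_interval)
    raise TimeoutError('operation did not finish in time')

# Simulated operation that completes on the third status check.
calls = {'n': 0}
def fake_status():
    calls['n'] += 1
    return {'done': calls['n'] >= 3}

result = wait_until_done(fake_status)
```

A timeout keeps a stuck job from hanging the notebook indefinitely, which the bare `while` loop above does not guard against.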
#run bigQuery query
dc = [big_query_dataset + '.' + big_query_destination_table,
big_query_dataset + '.' + big_query_destination_table_affinity,
big_query_dataset + '.' + big_query_destination_table_inmarket,
big_query_dataset + '.' + big_query_destination_table_age_gender]
qs = ['select * from {}'.format(q) for q in dc]
# Run each query and save the results as dataframes
dfs = [pd.io.gbq.read_gbq(q, project_id=big_query_project, dialect='standard', reauth=True) for q in qs]
for i in range(4):
print(dfs[i])
# Save the original dataframe as a csv file in case you need to recover the original data
dfs[0].to_csv('data_reach_freq.csv', index=False)
dfs[1].to_csv('data_affinity.csv', index=False)
dfs[2].to_csv('data_inmarket.csv', index=False)
dfs[3].to_csv('data_age_gender.csv', index=False)
#prepare reach & frequency data
df = pd.read_csv('data_reach_freq.csv')
print(df.head())
#prepare affinity data
df3 = pd.read_csv('data_affinity.csv')
print(df3.head())
#prepare in_market data
df4 = pd.read_csv('data_inmarket.csv')
print(df4.head())
#prepare age & gender data
df5 = pd.read_csv('data_age_gender.csv')
print(df5.head())
df = df[:max_freq+1] # Reduces the dataframe to have the size you set as the maximum frequency (max_freq)
df = df_calc_fields(df)
df2 = df.copy() # Copy the dataframe you calculated the fields in case you need to recover it
graphs = [] # Variable to save all graphics
df.head()
# Save all data into a list to plot the graphics
impressions = dict(type='bar', x=df.frequency, y=df.impressions,
name='impressions',
marker=dict(color='rgb(0, 29, 255)',
line=dict(width=1)))
ctr = dict(
type='scatter',
x=df.frequency,
y=df.ctr,
name='ctr',
marker=dict(color='rgb(255, 148, 0)', line=dict(width=1)),
xaxis='x1',
yaxis='y2',
)
layout = dict(
title='Impressions and CTR Comparison on Each Frequency',
autosize=True,
legend=dict(x=1.15, y=1),
hovermode='x',
xaxis=dict(tickangle=-45, autorange=True, tickfont=dict(size=10),
title='frequency', type='category'),
yaxis=dict(showgrid=True, title='impressions'),
yaxis2=dict(overlaying='y', anchor='x', side='right',
showgrid=False, title='ctr'),
)
fig = dict(data=[impressions, ctr], layout=layout)
graphs.append(fig)
clicks = dict(type='bar',
x= df.frequency,
y= df.clicks,
name='Clicks',
marker=dict(color= 'rgb(0, 29, 255)', line= dict(width= 1))
)
ctr = dict(type='scatter',
x= df.frequency,
y= df.cpc,
name='cpc',
marker=dict(color= 'rgb(255, 148, 0)', line= dict(width= 1)),
xaxis='x1',
yaxis='y2'
)
layout = dict(autosize= True,
title='Clicks and CPC Comparison on Each Frequency',
legend= dict(x= 1.15,
y= 1
),
hovermode='x',
xaxis=dict(tickangle= -45,
autorange=True,
tickfont=dict(size= 10),
title= 'frequency',
type= 'category'
),
yaxis=dict(
showgrid=True,
title= 'clicks'
),
yaxis2=dict(
overlaying= 'y',
anchor= 'x',
side= 'right',
showgrid= False,
title= 'cpc'
)
)
fig = dict(data=[clicks, ctr], layout=layout)
graphs.append(fig)
ctr = dict(type='scatter',
x= df.frequency,
y= df.ctr,
name='ctr',
marker=dict(color= 'rgb(0, 29, 255)', line= dict(width= 1))
)
cpc = dict(type='scatter',
x= df.frequency,
y= df.cpc,
name='cpc',
marker=dict(color= 'rgb(255, 148, 0)', line= dict(width= 1)),
xaxis='x1',
yaxis='y2'
)
layout = dict(autosize= True,
title='CTR and CPC Comparison on Each Frequency',
legend= dict(x= 1.15,
y= 1
),
hovermode='x',
xaxis=dict(tickangle= -45,
autorange=True,
tickfont=dict(size= 10),
title= 'frequency',
type= 'category',
showgrid =False
),
yaxis=dict(
showgrid=False,
title= 'ctr'
),
yaxis2=dict(
overlaying= 'y',
anchor= 'x',
side= 'right',
showgrid= False,
title= 'cpc'
)
)
fig = dict(data=[ctr, cpc], layout=layout)
graphs.append(fig)
pareto = dict(type='scatter',
x= df.frequency,
y= df.coverage_clicks,
name='Cumulative % Clicks',
marker=dict(color= 'rgb(0, 29, 255)', line= dict(width= 1))
)
cpc = dict(type='scatter',
x= df.frequency,
y= df.cpc,
name='cpc',
marker=dict(color= 'rgb(255, 148, 0)', line= dict(width= 1)),
xaxis='x1',
yaxis='y2'
)
layout = dict(autosize= True,
title='Cumulative Clicks and CPC Comparison on Each Frequency',
legend= dict(x= 1.15,
y= 1
),
hovermode='x',
xaxis=dict(tickangle= -45,
autorange=True,
tickfont=dict(size= 10),
title= 'frequency',
type= 'category'
),
yaxis=dict(
showgrid=True,
title= 'cum clicks'
),
yaxis2=dict(
overlaying= 'y',
anchor= 'x',
side= 'right',
showgrid= False,
title= 'cpc'
)
)
fig = dict(data=[pareto, cpc], layout=layout)
graphs.append(fig)
pareto = dict(type='scatter',
x= df.frequency,
y= df.coverage_clicks,
name='Cumulative % Clicks',
marker=dict(color= 'rgb(0, 29, 255)', line= dict(width= 1))
)
cpc = dict(type='scatter',
x= df.frequency,
y= df.ctr,
name='ctr',
marker=dict(color= 'rgb(255, 148, 0)', line= dict(width= 1)),
xaxis='x1',
yaxis='y2'
)
layout = dict(autosize= True,
title='Cumulative Clicks and CTR Comparison on Each Frequency',
legend= dict(x= 1.15,
y= 1
),
hovermode='x',
xaxis=dict(tickangle= -45,
autorange=True,
tickfont=dict(size= 10),
title= 'frequency',
type= 'category'
),
yaxis=dict(
showgrid=True,
title= 'cum clicks'
),
yaxis2=dict(
overlaying= 'y',
anchor= 'x',
side= 'right',
showgrid= False,
title= 'ctr'
)
)
fig = dict(data=[pareto, cpc], layout=layout)
graphs.append(fig)
pareto = dict(type='scatter',
x= df.frequency,
y= df.coverage_reach,
name='Cumulative % Reach',
marker=dict(color= 'rgb(0, 29, 255)', line= dict(width= 1))
)
cpc = dict(type='scatter',
x= df.frequency,
y= df.cost,
name='cost',
marker=dict(color= 'rgb(255, 148, 0)', line= dict(width= 1)),
xaxis='x1',
yaxis='y2'
)
layout = dict(autosize= True,
title='Cumulative Reach and Cost Comparison on Each Frequency',
legend= dict(x= 1.15,
y= 1
),
hovermode='x',
xaxis=dict(tickangle= -45,
autorange=True,
tickfont=dict(size= 10),
title= 'frequency',
type= 'category'
),
yaxis=dict(
showgrid=True,
                      title= 'cumulative reach'
),
yaxis2=dict(
overlaying= 'y',
anchor= 'x',
side= 'right',
showgrid= False,
title= 'cost'
)
)
fig = dict(data=[pareto, cpc], layout=layout)
graphs.append(fig)
# Show the first 5 rows of the dataframe (data matrix) with the final data
df.head()
# Export the whole dataframe to a csv file that can be used in an external environment
df.to_csv('freq_analysis.csv', index=False)
enable_plotly_in_cell()
iplot(graphs[0])
enable_plotly_in_cell()
iplot(graphs[1])
enable_plotly_in_cell()
iplot(graphs[2])
enable_plotly_in_cell()
iplot(graphs[3])
enable_plotly_in_cell()
iplot(graphs[4])
#Understand the logic behind calculation
graphs2 = []
pareto = dict(type='scatter',
x= df.frequency,
y= df.coverage_reach,
              name='Cumulative % Reach',
marker=dict(color= 'rgb(0, 29, 255)', line= dict(width= 1))
)
ccm_imp = dict(type='scatter',
x= df.frequency,
y= df.coverage_impressions,
              name='Cumulative % Impressions',
marker=dict(color= 'rgb(255, 148, 0)', line= dict(width= 1)),
xaxis='x1',
yaxis='y'
)
layout = dict(autosize= True,
              title='Cumulative Impressions and Cumulative Reach on Each Frequency',
legend= dict(x= 1.15,
y= 1
),
hovermode='x',
xaxis=dict(tickangle= -45,
autorange=True,
tickfont=dict(size= 10),
title= 'frequency',
type= 'category'
),
yaxis=dict(
showgrid=True,
                      title= 'cumulative %'
)
)
fig = dict(data=[pareto, ccm_imp], layout=layout)
graphs2.append(fig)
pareto = dict(type='scatter',
x= df.frequency,
y= df.coverage_clicks,
              name='Cumulative % Clicks',
marker=dict(color= 'rgb(0, 29, 255)', line= dict(width= 1))
)
ccm_imp = dict(type='scatter',
x= df.frequency,
y= df.coverage_impressions,
              name='Cumulative % Impressions',
marker=dict(color= 'rgb(255, 148, 0)', line= dict(width= 1)),
xaxis='x1',
yaxis='y'
)
layout = dict(autosize= True,
              title='Cumulative Impressions and Cumulative Clicks on Each Frequency',
legend= dict(x= 1.15,
y= 1
),
hovermode='x',
xaxis=dict(tickangle= -45,
autorange=True,
tickfont=dict(size= 10),
title= 'frequency',
type= 'category'
),
yaxis=dict(
showgrid=True,
title= 'cumulative %'
)
)
fig = dict(data=[pareto, ccm_imp], layout=layout)
graphs2.append(fig)
enable_plotly_in_cell()
iplot(graphs2[0])
enable_plotly_in_cell()
iplot(graphs2[1])
#@title 1.1 - Optimal Frequency
optimal_freq = 3#@param {type:"integer", allow-input: true}
slider_value = 1 #@param test
if optimal_freq > len(df2):
raise Exception('Your optimal frequency is higher than the maximum frequency in your campaign; please make sure it is lower than {}'.format(len(df2)))
from __future__ import division
df2 = df_calc_fields(df2)
df_opt, df_not_opt = df[:optimal_freq], df[optimal_freq:]
total_impressions = list(df2.cumulative_impressions)[-1]
total_imp_not_opt = list(df_not_opt.cumulative_impressions)[-1] - list(df_opt.cumulative_impressions)[-1]
imp_not_opt_ratio = total_imp_not_opt / total_impressions
total_clicks = list(df2.cumulative_clicks)[-1]
total_clicks_not_opt = list(df_not_opt.cumulative_clicks)[-1] - list(df_opt.cumulative_clicks)[-1]
clicks_within_opt_ratio = 1-(total_clicks_not_opt / total_clicks)
print("{:.1f}% of your total impressions are out of the optimal frequency.".format(imp_not_opt_ratio*100))
print("{:,} of your impressions are out of the optimal frequency".format(total_imp_not_opt))
print("At a CPM of {} - preventing these would result in a cost saving of {:,}".format(cpm, cpm*total_imp_not_opt))
print("")
print("If you limited frequency to {}, you would still achieve {:.1f}% of your clicks").format(optimal_freq, clicks_within_opt_ratio*100)
# calculate the color scale
diff_affinity = df3.ctr.max(0) - df3.ctr.min(0)
df3['colorscale'] = (df3.ctr - df3.ctr.min(0))/ diff_affinity *40 + 130
diff_in_market = df4.ctr.max(0) - df4.ctr.min(0)
df4['colorscale'] = (df4.ctr - df4.ctr.min(0))/ diff_in_market *40 + 130
diff_in_market = df5.ctr.max(0) - df5.ctr.min(0)
df5['colorscale'] = (df5.ctr - df5.ctr.min(0))/ diff_in_market *40 + 130
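The three colorscale lines above repeat the same min-max normalization into the range [130, 170]; the transformation can be sketched as a standalone helper (an added illustration; the function name is ours, not the notebook's):

```python
def to_colorscale(values, lo=130, span=40):
    # Min-max normalize a sequence of numbers into [lo, lo + span].
    vmin, vmax = min(values), max(values)
    return [(v - vmin) / (vmax - vmin) * span + lo for v in values]

print(to_colorscale([0.0, 0.5, 1.0]))  # [130.0, 150.0, 170.0]
```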
# Prepare the bubble charts; rerun this cell if no graph is shown
#affinity
graphs3 = []
size1=df3.reach
affinity = dict(type='scatter',
x= df3.frequency,
y= df3.ctr,
text= df3.affinity_category,
name='Cumulative % Reach',
marker=dict(color= df3.colorscale,
size=size1,
sizemode='area',
sizeref=2.*max(size1)/(40.**2),
sizemin=4,
showscale=True),
mode='markers'
)
layout = dict(autosize= True,
title='affinity bubble chart',
xaxis=dict(autorange=True,
title= 'frequency'
),
yaxis=dict(autorange=True,
title= 'CTR'
)
)
fig = dict(data=[affinity], layout=layout)
graphs3.append(fig)
#in market
size2=df4.reach
in_market = dict(type='scatter',
x= df4.frequency,
y= df4.ctr,
text= df4.in_market_category,
name='Cumulative % Reach',
marker=dict(color= df4.colorscale,
size=size2,
sizemode='area',
sizeref=2.*max(size2)/(40.**2),
sizemin=4,
showscale=True),
mode='markers'
)
layout = dict(autosize= True,
title='in_market bubble chart',
xaxis=dict(autorange=True,
title= 'frequency'
),
yaxis=dict(autorange=True,
title= 'CTR'
)
)
fig = dict(data=[in_market], layout=layout)
graphs3.append(fig)
#age & demo
male = dict(type='scatter',
x= df5.frequency[df5.gender_name == 'male'],
y= df5.ctr[df5.gender_name == 'male'],
text= df5.age_group_name[df5.gender_name == 'male'],
name='male',
marker=dict(size=df5.reach[df5.gender_name == 'male'],
sizemode='area',
sizeref=2.*max(df5.reach[df5.gender_name == 'male'])/(40.**2),
sizemin=4),
mode='markers'
)
female = dict(type='scatter',
x= df5.frequency[df5.gender_name == 'female'],
y= df5.ctr[df5.gender_name == 'female'],
text= df5.age_group_name[df5.gender_name == 'female'],
name='female',
marker=dict(size=df5.reach[df5.gender_name == 'female'],
sizemode='area',
sizeref=2.*max(df5.reach[df5.gender_name == 'female'])/(40.**2),
sizemin=4
),
mode='markers'
)
unknown = dict(type='scatter',
x= df5.frequency[df5.gender_name == 'unknown'],
y= df5.ctr[df5.gender_name == 'unknown'],
text= df5.age_group_name[df5.gender_name == 'unknown'],
name='unknown',
marker=dict(size=df5.reach[df5.gender_name == 'unknown'],
sizemode='area',
sizeref=2.*max(df5.reach[df5.gender_name == 'unknown'])/(40.**2),
sizemin=4),
mode='markers'
)
layout = dict(autosize= True,
title='age & gender bubble chart',
xaxis=dict(autorange=True,
title= 'frequency'
),
yaxis=dict(autorange=True,
title= 'CTR'
)
)
fig = dict(data=[male,female,unknown], layout=layout)
graphs3.append(fig)
enable_plotly_in_cell()
iplot(graphs3[0])
enable_plotly_in_cell()
iplot(graphs3[1])
enable_plotly_in_cell()
iplot(graphs3[2])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Install Dependencies
Step2: Define function to enable charting library
Step11: Authenticate against the ADH API
Step12: Frequency Analysis
Step13: Step 2 - Create a function for the final calculations
Step14: Step 3 - Build the query
Step 3a - Query for reach & frequency
Step16: Part 1 - Find all impressions from the impression table
Step18: Part 2 - Find all clicks from the clicks table
Step20: output example
Step22: output example
Step23: output example
Step24: Create the query required for ADH
Step26: Step 3b - Query for demographics & interest
Step28: output example
Step30: output example
Step31: output example
Step32: Step 5 - Retrieve the table from BigQuery
Step33: We are using the pandas library to run the query.
Step34: Save the output as a CSV
Step35: Setup up the dataframe and preview to check the data.
Step36: Step 6 - Set up the data and all the charts that will be plotted
Step37: Analysis 1
Step38: Step 2
Step39: Output
Step40: Clicks and CPC Comparison on Each Frequency
Step41: CTR and CPC Comparison on Each Frequency
Step42: Cumulative Clicks and CPC Comparison on Each Frequency
Step43: Cumulative Clicks and CTR Comparison on Each Frequency
Step44: Analysis 2
Step45: Output
Step46: Cumulative Impressions and Cumulative Clicks on Each Frequency
Step47: Analysis 3
Step48: Output
Step49: Analysis 4
Step50: Output
Step51: check whether big bubble is in the bottom left corner, those are the affinities with lowest efficency
Step52: check whether big bubble is in the bottom left corner, those are the in market segment with lowest efficency
|
8,931
|
<ASSISTANT_TASK:>
Python Code:
def say_hello():
print('hello, world!')
say_hello()
def hi(name):
print('hi', name)
hi("pythonistas")
def double(value):
return value*2
print(double(4))
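The accompanying description poses an exercise: write a function `stars` that takes a number and returns that many asterisks. One possible solution (ours, not from the original notebook):

```python
def stars(n):
    # Return a string made of n asterisks.
    return '*' * n

print(stars(3))  # ***
```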
"abcde"[:2]
empty_list = []
list_with_numbers = [0, 1, 2, 3, 4, 5, 6]
list_with_mixed = ["zero", 1, "TWO", 3, 4, "FIVE", "Six"]
list_with_lists = ["this", "contains", "a", ["list", "of", ["lists"] ]]
def some_func(message):
print('message is ', message)
list_with_function = [some_func]
# Let's call the function
list_with_function[0]('hello from a list')
iter_example = ['a', 'b', 'c', 'd', 'e']
for elem in iter_example:
print(elem)
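The description also mentions list comprehensions, which do not appear in the code above; here is a minimal added example of the same loop written as a comprehension:

```python
iter_example = ['a', 'b', 'c', 'd', 'e']
# Build a new list by applying an expression to each element of an iterable.
upper_list = [elem.upper() for elem in iter_example]
print(upper_list)  # ['A', 'B', 'C', 'D', 'E']
```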
list_with_dupes = [1, 2, 3, 3, 4, 5]
print(set(list_with_dupes)) #removes duplicate 3
a_dict = {'some': 'value', 'another': 'value'}
print(a_dict['another'])
'another'.__hash__()
mixed_keys = {4 : 'somevalue'}
print(mixed_keys[4])
# New elements are added or changed just by referencing them; the del keyword deletes entries.
changing_dict = {'foo': 'bar', 'goner': "gone soon"}
changing_dict['foo'] = 'biv' #update existing value
changing_dict['notfound'] = 'found now!' #insert new value
del changing_dict['goner'] #removes key / value
print(changing_dict)
# You can iterate over the keys of a dictionary using a for loop.
starter_dictionary = {'a': 1, 'b': 2, 'c': 3}
for key in starter_dictionary:
print(key)
for val in starter_dictionary.values():
print(val)
class ExampleClass:
pass
ex = ExampleClass() #construct an instance of the ExampleClass
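ExampleClass above is intentionally empty; to show what a class with state and behavior looks like, here is a small added example (not part of the original notebook):

```python
class Greeter:
    def __init__(self, name):
        # Store the name as an instance attribute.
        self.name = name

    def greet(self):
        # Use the stored attribute in a method.
        return 'hi ' + self.name

g = Greeter('pythonistas')
print(g.greet())  # hi pythonistas
```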
# foo.py
def foo():
return 42
# test_foo.py
import unittest
class TestFoo(unittest.TestCase):
def test_foo_returns_42(self):
expected = 42
actual = foo()
self.assertTrue(expected == actual)
# Below doesn't work in a jupyter notebook
# if __name__ == '__main__':
# unittest.main()
if __name__ == '__main__':
unittest.main(argv=['first-arg-is-ignored'], exit=False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Functions are invoked using parentheses (). Arguments are passed within the parentheses.
Step2: Functions can use the return keyword to stop execution and send a value back to the caller.
Step3: Exercise: write a function, stars, that takes a number and returns that number of * (asterisks).
Step4: Lists
Step5: Any type of element, including functions and other lists, can be stored in a list.
Step6: Lists are a class of data structure called iterables that allow for easily going over all the elements of the list
Step7: List Comprehension
Step8: Dictionaries (dicts)
Step9: The dictionary works by calculating the hash of the key. You can see this using the __hash__ function.
Step10: Any value that is hashable can be used as a key, including numbers.
Step11: You can iterate over the values using the values() method.
Step12: We'll cover these if there's time
Step13: Unit Testing
|
8,932
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nasa-giss', 'sandbox-2', 'aerosol')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 2. Key Properties --> Software Properties
Step12: 2.2. Code Version
Step13: 2.3. Code Languages
Step14: 3. Key Properties --> Timestep Framework
Step15: 3.2. Split Operator Advection Timestep
Step16: 3.3. Split Operator Physical Timestep
Step17: 3.4. Integrated Timestep
Step18: 3.5. Integrated Scheme Type
Step19: 4. Key Properties --> Meteorological Forcings
Step20: 4.2. Variables 2D
Step21: 4.3. Frequency
Step22: 5. Key Properties --> Resolution
Step23: 5.2. Canonical Horizontal Resolution
Step24: 5.3. Number Of Horizontal Gridpoints
Step25: 5.4. Number Of Vertical Levels
Step26: 5.5. Is Adaptive Grid
Step27: 6. Key Properties --> Tuning Applied
Step28: 6.2. Global Mean Metrics Used
Step29: 6.3. Regional Metrics Used
Step30: 6.4. Trend Metrics Used
Step31: 7. Transport
Step32: 7.2. Scheme
Step33: 7.3. Mass Conservation Scheme
Step34: 7.4. Convention
Step35: 8. Emissions
Step36: 8.2. Method
Step37: 8.3. Sources
Step38: 8.4. Prescribed Climatology
Step39: 8.5. Prescribed Climatology Emitted Species
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Step41: 8.7. Interactive Emitted Species
Step42: 8.8. Other Emitted Species
Step43: 8.9. Other Method Characteristics
Step44: 9. Concentrations
Step45: 9.2. Prescribed Lower Boundary
Step46: 9.3. Prescribed Upper Boundary
Step47: 9.4. Prescribed Fields Mmr
Step48: 9.5. Prescribed Fields Mmr
Step49: 10. Optical Radiative Properties
Step50: 11. Optical Radiative Properties --> Absorption
Step51: 11.2. Dust
Step52: 11.3. Organics
Step53: 12. Optical Radiative Properties --> Mixtures
Step54: 12.2. Internal
Step55: 12.3. Mixing Rule
Step56: 13. Optical Radiative Properties --> Impact Of H2o
Step57: 13.2. Internal Mixture
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Step59: 14.2. Shortwave Bands
Step60: 14.3. Longwave Bands
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Step62: 15.2. Twomey
Step63: 15.3. Twomey Minimum Ccn
Step64: 15.4. Drizzle
Step65: 15.5. Cloud Lifetime
Step66: 15.6. Longwave Bands
Step67: 16. Model
Step68: 16.2. Processes
Step69: 16.3. Coupling
Step70: 16.4. Gas Phase Precursors
Step71: 16.5. Scheme Type
Step72: 16.6. Bulk Scheme Species
|
8,933
|
<ASSISTANT_TASK:>
Python Code:
from datascience import Table
import numpy as np
import matplotlib.pyplot as plt

demo_tb = Table()
demo_tb['Study_Hours'] = [2.0, 6.9, 1.6, 7.8, 3.1, 5.8, 3.4, 8.5, 6.7, 1.6, 8.6, 3.4, 9.4, 5.6, 9.6, 3.2, 3.5, 5.9, 9.7, 6.5]
demo_tb['Grade'] = [67.0, 83.6, 35.4, 79.2, 42.4, 98.2, 67.6, 84.0, 93.8, 64.4, 100.0, 61.6, 100.0, 98.4, 98.4, 41.8, 72.0, 48.6, 90.8, 100.0]
demo_tb['Pass'] = [0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1]
demo_tb.show()
demo_tb.scatter('Study_Hours','Grade')
demo_tb.scatter('Study_Hours','Grade', fit_line=True)
from sklearn.linear_model import LinearRegression
linreg = LinearRegression()
X = demo_tb['Study_Hours'].reshape(-1,1)
X
y = demo_tb['Grade'].reshape(len(demo_tb['Grade']),)
y
linreg.fit(X, y)
B0, B1 = linreg.intercept_, linreg.coef_[0]
B0, B1
y_pred = linreg.predict(X)
print(X)
print(y_pred)
plt.scatter(X, y)
plt.plot(X, y_pred)
linreg.score(X, y)
linreg.predict([[5]])
linreg.predict([[20]])
demo_tb.scatter('Study_Hours','Pass')
def logistic(p):
return 1 / (1 + np.exp(-p))
B0, B1 = 0, 1
xmin, xmax = -10,10
xlist = [float(x)/int(1e4) for x in range(xmin*int(1e4), xmax*int(1e4))] # just a lot of points on the x-axis
ylist = [logistic(B0 + B1*x) for x in xlist]
plt.axis([-10, 10, -0.1,1.1])
plt.plot(xlist,ylist)
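As a quick added sanity check on the logistic function defined above: it maps 0 to exactly 0.5 and saturates toward 0 and 1 in the tails (written here with the standard-library math module so it runs standalone):

```python
import math

def logistic(p):
    # Same logistic curve as above, using math.exp instead of np.exp.
    return 1 / (1 + math.exp(-p))

print(logistic(0))  # 0.5
print(logistic(10) > 0.999, logistic(-10) < 0.001)  # True True
```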
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
X = demo_tb['Study_Hours'].reshape(-1,1)
y = demo_tb['Pass'].reshape(len(demo_tb['Pass']),)
X, y
lr.fit(X, y)
B0, B1 = lr.intercept_[0], lr.coef_[0][0]
B0, B1
xmin, xmax = 0,10
xlist = [float(x)/int(1e4) for x in range(xmin*int(1e4), xmax*int(1e4))]
ylist = [logistic(B0 + B1*x) for x in xlist]
plt.plot(xlist,ylist)
# add our "observed" data points
plt.scatter(demo_tb['Study_Hours'],demo_tb['Pass'])
X_train = demo_tb.column('Study_Hours')[:-2]
y_train = demo_tb.column('Pass')[:-2]
X_test = demo_tb.column('Study_Hours')[-2:]
y_test = demo_tb.column('Pass')[-2:]
print(X_test, y_test)
lr.fit(X_train.reshape(-1,1),y_train.reshape(len(y_train),))
B0, B1 = lr.intercept_[0], lr.coef_[0][0]
fitted = [logistic(B1*th + B0) for th in X_test]
fitted
prediction = [pred >.5 for pred in fitted]
prediction
lr.predict(X_test.reshape(-1, 1))
prediction_eval = [prediction[i]==y_test[i] for i in range(len(prediction))]
float(sum(prediction_eval)/len(prediction_eval))
lr.score(X_test.reshape(-1, 1), y_test.reshape(len(y_test),))
import nltk
nltk.download("movie_reviews")
from nltk.corpus import movie_reviews
reviews = [movie_reviews.raw(fileid) for fileid in movie_reviews.fileids()]
judgements = [movie_reviews.categories(fileid)[0] for fileid in movie_reviews.fileids()]
print(reviews[0])
print(judgements[0])
from sklearn.utils import shuffle
np.random.seed(1)
X, y = shuffle(reviews, judgements, random_state=0)
X[0], y[0]
LogisticRegression?
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer, TfidfTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split
text_clf = Pipeline([('vect', CountVectorizer(ngram_range=(1, 2))),
('tfidf', TfidfTransformer()),
('clf', LogisticRegression(random_state=0, penalty='l2', C=1000))
])
scores = cross_val_score(text_clf, X, y, cv=5)
print(scores, np.mean(scores))
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=50)
# get tfidf values
tfidf = TfidfVectorizer()
tfidf.fit(X)
X_train = tfidf.transform(X_train)
X_test = tfidf.transform(X_test)
# build and test logit
logit_class = LogisticRegression(random_state=0, penalty='l2', C=1000)
model = logit_class.fit(X_train, y_train)
model.score(X_test, y_test)
from nltk.util import ngrams
ngs = ngrams("Text analysis is so cool. I can really see why classification can be a valuable tool.".split(), 2)
list(ngs)
ngs = ngrams("Text analysis is so cool. I can really see why classification can be a valuable tool.".split(), 3)
list(ngs)
feature_names = tfidf.get_feature_names()
top10pos = np.argsort(model.coef_[0])[-10:]
print("Top features for positive reviews:")
print(list(feature_names[j] for j in top10pos))
print()
print("Top features for negative reviews:")
top10neg = np.argsort(model.coef_[0])[:10]
print(list(feature_names[j] for j in top10neg))
new_bad_review = "This movie really sucked. I can't believe how long it dragged on. The actors are absolutely terrible. They should rethink their career paths"
features = tfidf.transform([new_bad_review])
model.predict(features)
new_good_review = "I loved this film! The cinematography was incredible, and Leonardo Dicarpio is flawless. Super cute BTW."
features = tfidf.transform([new_good_review])
model.predict(features)
CountVectorizer?
TfidfTransformer?
LogisticRegression?
text_clf = Pipeline([('vect', CountVectorizer(ngram_range=(1, 2))),
('tfidf', TfidfTransformer()),
('clf', LogisticRegression(random_state=0))
])
scores = cross_val_score(text_clf, X, y, cv=5)
print(scores, np.mean(scores))
from sklearn.datasets import fetch_20newsgroups
fetch_20newsgroups(subset="train").target_names
train = fetch_20newsgroups(subset="train", categories=['sci.electronics', 'rec.autos'])
train.data[0]
train.target[0]
len(train.data)
test = fetch_20newsgroups(subset="test", categories=['sci.electronics', 'rec.autos'])
test.data[0]
test.target[0]
len(test.data)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Intuiting the Linear Regression Model
Step2: In the example above, we're interested in Study_Hours and Grade. This is a natural "input" "output" situation. To plot the regression line, or best-fit, we can feed in fit_line=True to the scatter method
Step3: The better this line fits the points, the better we can predict one's Grade based on their Study_Hours, even if we've never seen anyone put in that number of study hours before.
Step4: Before we go any further, sklearn likes our data in a very specific format. The X must be in an array of arrays, each sub array is an observation. Because we only have one independent variable, we'll have sub arrays of len 1. We can do that with the reshape method
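A minimal sketch of that reshape idiom, using made-up study-hours values rather than the notebook's data:

```python
import numpy as np

# sklearn expects X as a 2-D array: one row per observation, one column per feature
hours = np.array([1, 2, 3, 4])   # made-up study-hours values
X = hours.reshape(-1, 1)         # -1 lets numpy infer the number of rows

print(X.shape)  # (4, 1)
```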
Step5: Your output, or dependent variable, is just one array with no sub arrays.
Step6: We then use the fit method to fit the model. This happens in-place, so we don't have to reassign the variable
Step7: We can get back the intercept_ and $\beta$ coef_ with attributes of the linreg object
Step8: So this means
Step9: We can evaluate how great our model is with the score method. We need to give it the X and observed y values, and it will predict its own y values and compare
Step10: For the Linear Regression, sklearn returns an R-squared from the score method. The R-squared tells us how much of the variation in the data can be explained by our model, .559 isn't that bad, but obviously more goes into your Grade than just Study_Hours.
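A rough sketch of what that R-squared measures, with made-up observed and predicted values standing in for the notebook's grades:

```python
import numpy as np

y_obs = np.array([2.0, 4.0, 6.0, 8.0])    # made-up observed grades
y_pred = np.array([2.5, 3.5, 6.5, 7.5])   # made-up model predictions

ss_res = np.sum((y_obs - y_pred) ** 2)        # residual sum of squares
ss_tot = np.sum((y_obs - y_obs.mean()) ** 2)  # total sum of squares
r_squared = 1 - ss_res / ss_tot
print(r_squared)  # 0.95 for these values
```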
Step11: Maybe I should study more?
Step12: Wow! I rocked it.
Step13: How would we fit a line to that? That's where the logistic function can be handy. The general logistic function is
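A minimal plain-Python sketch of that function; the helper name `logistic` matches the one the notebook code calls later:

```python
import math

def logistic(t):
    # squashes any real number into the open interval (0, 1)
    return 1 / (1 + math.exp(-t))

print(logistic(0))  # 0.5 — the midpoint of the curve
```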
Step14: We'll also need to assign a couple $\beta$ coefficients for the intercept and variable just like we saw in linear regression
Step15: Let's plot the logistic curve
Step16: When things get complicated, however, with several independent variables, we don't want to write our own code. Someone has done that for us. We'll go back to sklearn.
Step17: We'll reshape our arrays again too, since we know how sklearn likes them
Step18: We can use the fit function again on our X and y
Step19: We can get those $\beta$ coefficients back out from sklearn for our grade data
Step20: Then we can plot the curve just like we did earlier, and we'll add our points
Step21: How might this curve be used for a binary classification task?
Step22: Let's see the observations we're setting aside for later
Step23: Now we'll fit our model again but only on the _train data, and get out the $\beta$ coefficients
Step24: We can send these coefficients back into the logistic function we wrote earlier to get the probability that a student would pass given our X_test values
Step25: We can take the probability and change this to a binary outcome based on probability > or < .5
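The thresholding step on its own, with made-up probabilities standing in for the fitted values:

```python
probs = [0.12, 0.47, 0.55, 0.93]    # made-up pass probabilities
passed = [p > 0.5 for p in probs]   # True/False outcomes, as in the notebook's comprehension
print(passed)  # [False, False, True, True]
```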
Step26: The sklearn built-in methods can make this predict process faster
Step27: To see how accurate our model is, we'd predict on the "unseen" _testing data and see how many we got correct. In this case there's only two, so not a whole lot to test with
Step28: We can do this quickly in sklearn too with the score method like in the linear regression example
Step29: Classification of Textual Data
Step30: Now we import the movie_reviews object
Step31: As you might expect, this is a corpus of IMDB movie reviews. Someone went through and read each review, labeling it as either "positive" or "negative". The task we have before us is to create a model that can accurately predict whether a never-before-seen review is positive or negative. This is analogous to Underwood and Sellers looking at whether a poem volume was reviewed or randomly selected.
Step32: Let's read the first review
Step33: Do you consider this a positive or negative review? Let's see what the human annotator said
Step34: So right now we have a list of movie reviews in the reviews variable and a list of their corresponding judgements in the judgements variable. Awesome. What does this sound like to you? Independent and dependent variables? You'd be right!
Step35: If you don't believe me that all we did is reassign and shuffle
Step36: To get meaningful independent variables (words) we have to do some processing too (think DTM!). With sklearn's text pipelines, we can quickly build a text classifier in only a few lines of Python
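A toy version of such a pipeline — the four-document corpus below is made up to keep the sketch self-contained, and default hyperparameters are used rather than the notebook's:

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.linear_model import LogisticRegression

# a tiny made-up corpus standing in for the movie reviews
texts = ["great movie loved it", "terrible film hated it",
         "loved the acting", "hated the plot"]
labels = ["pos", "neg", "pos", "neg"]

clf = Pipeline([("vect", CountVectorizer()),     # counts -> document-term matrix
                ("tfidf", TfidfTransformer()),   # reweight counts by tf-idf
                ("clf", LogisticRegression())])  # fit a logit on the tf-idf features
clf.fit(texts, labels)
print(clf.predict(["loved this film"]))
```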
Step37: Whoa! What just happened?!? The pipeline tells us three things happened
Step38: The concise code we first ran actually uses "cross validation", where we split up testing and training data k number of times and average our score on all of them. This is a more reliable metric than just testing the accuracy once. It's possible that your random train/test split just didn't provide a good split, so averaging it over multiple splits is preferred.
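A sketch of the splitting idea behind cv=5, using a hypothetical helper rather than sklearn's internals:

```python
import numpy as np

def kfold_indices(n_samples, k):
    # split the sample indices into k roughly equal folds; each fold takes a turn
    # as the test set while the remaining folds train the model
    return np.array_split(np.arange(n_samples), k)

folds = kfold_indices(10, 5)
print([len(f) for f in folds])  # [2, 2, 2, 2, 2]
```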
Step39: Trigram
Step40: You get the point. This helps us combat this "bag of words" idea, but doesn't completely save us. For our purposes here, just as we counted the frequency of individual words, we've added counting the frequency of groups of 2s and 3s.
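A plain-Python stand-in for `nltk.util.ngrams` plus a frequency count, on a made-up sentence:

```python
from collections import Counter

def ngrams(tokens, n):
    # zip staggered copies of the token list to produce n-grams
    return zip(*(tokens[i:] for i in range(n)))

tokens = "the cat sat on the mat the cat slept".split()
bigram_counts = Counter(ngrams(tokens, 2))
print(bigram_counts[("the", "cat")])  # 2
```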
Step41: Prediction
Step42: Homework
Step43: I've copied the cell from above below. Try playing with the parameters to these objects and see if you can improve the cross_val_score for the model.
Step44: Why do you think your score improved (or didn't)?
Step45: Let's see what categories they have
Step46: The subset parameter will give you training and testing data. You can also use the categories parameter to choose only certain categories.
Step47: The list of documents (strings) is in the .data property, we can access the first one like so
Step48: And here is the assigment category
Step49: How many training documents are there?
Step50: We can do the same for the testing data
|
8,934
|
<ASSISTANT_TASK:>
Python Code:
import cv2
import numpy as np
import pandas as pd
import urllib
import math
import boto3
import os
import copy
from tqdm import tqdm
from matplotlib import pyplot as plt
%matplotlib inline
# Temporarily load from np arrays
chi_photos_np = np.load('chi_photos_np_0.03_compress.npy')
lars_photos_np = np.load('lars_photos_np_0.03_compress.npy')
# View shape of numpy array
chi_photos_np.shape
# Set width var
width = chi_photos_np.shape[-1]
width
# Try out scaler on a manually set data (min of 0, max of 255)
from sklearn.preprocessing import MinMaxScaler
# Set test data list to train on (min of 0, max of 255)
test_list = np.array([0, 255]).reshape(-1, 1)
test_list
# Initialize scaler
scaler = MinMaxScaler()
# Fit test list
scaler.fit(test_list)
chi_photos_np.reshape(-1, width, width, 1).shape
# Reshape to prepare for scaler
chi_photos_np_flat = chi_photos_np.reshape(1, -1)
chi_photos_np_flat[:10]
# Scale
chi_photos_np_scaled = scaler.transform(chi_photos_np_flat)
chi_photos_np_scaled[:10]
# Reshape to prepare for scaler
lars_photos_np_flat = lars_photos_np.reshape(1, -1)
lars_photos_np_scaled = scaler.transform(lars_photos_np_flat)
# Reshape
chi_photos_reshaped = chi_photos_np_scaled.reshape(-1, width, width, 1)
lars_photos_reshaped = lars_photos_np_scaled.reshape(-1, width, width, 1)
print('{} has shape: {}'. format('chi_photos_reshaped', chi_photos_reshaped.shape))
print('{} has shape: {}'. format('lars_photos_reshaped', lars_photos_reshaped.shape))
# Create copy of chi's photos to start populating x_input
x_input = copy.deepcopy(chi_photos_reshaped)
print('{} has shape: {}'. format('x_input', x_input.shape))
# Concatentate lars' photos to existing x_input
x_input = np.append(x_input, lars_photos_reshaped, axis = 0)
print('{} has shape: {}'. format('x_input', x_input.shape))
# Create label arrays
y_chi = np.array([[1, 0] for i in chi_photos_reshaped])
y_lars = np.array([[0, 1] for i in lars_photos_reshaped])
print('{} has shape: {}'. format('y_chi', y_chi.shape))
print('{} has shape: {}'. format('y_lars', y_lars.shape))
# Preview the first few elements
y_chi[:5]
y_lars[:5]
# Create copy of chi's labels to start populating y_input
y_input = copy.deepcopy(y_chi)
print('{} has shape: {}'. format('y_input', y_input.shape))
# Concatentate lars' labels to existing y_input
y_input = np.append(y_input, y_lars, axis = 0)
print('{} has shape: {}'. format('y_input', y_input.shape))
# TFlearn libraries
import tflearn
from tflearn.layers.conv import conv_2d, max_pool_2d
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.estimator import regression
# sentdex's code to build the neural net using tflearn
# Input layer --> conv layer w/ max pooling --> conv layer w/ max pooling --> fully connected layer --> output layer
convnet = input_data(shape = [None, width, width, 1], name = 'input')
convnet = conv_2d(convnet, 32, 10, activation = 'relu', name = 'conv_1')
convnet = max_pool_2d(convnet, 2, name = 'max_pool_1')
convnet = conv_2d(convnet, 64, 10, activation = 'relu', name = 'conv_2')
convnet = max_pool_2d(convnet, 2, name = 'max_pool_2')
convnet = fully_connected(convnet, 1024, activation = 'relu', name = 'fully_connected_1')
convnet = dropout(convnet, 0.8, name = 'dropout_1')
convnet = fully_connected(convnet, 2, activation = 'softmax', name = 'fully_connected_2')
convnet = regression(convnet, optimizer = 'sgd', learning_rate = 0.01, loss = 'categorical_crossentropy', name = 'targets')
# Import library
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in newer scikit-learn
print(x_input.shape)
print(y_input.shape)
# Perform train test split
x_train, x_test, y_train, y_test = train_test_split(x_input, y_input, test_size = 0.1, stratify = y_input)
# Train with data
model = tflearn.DNN(convnet)
model.fit(
{'input': x_train},
{'targets': y_train},
n_epoch = 3,
validation_set = ({'input': x_test}, {'targets': y_test}),
snapshot_step = 500,
show_metric = True
)
# Save model
model.save('model_4_epochs_0.03_compression_99.6_named.tflearn')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Scaling Inputs
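The scaling fit on [0, 255] above boils down to simple arithmetic — a hand-rolled sketch of what that MinMaxScaler computes:

```python
def minmax_scale(x, lo=0.0, hi=255.0):
    # map a pixel value from [lo, hi] onto [0, 1]
    return (x - lo) / (hi - lo)

print(minmax_scale(0), minmax_scale(255), minmax_scale(127.5))  # 0.0 1.0 0.5
```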
Step2: Reshaping 3D Array To 4D Array
Step3: Putting It All Together
Step4: Now let's reshape.
Step5: Preparing Labels
Step6: Training
Step7: Train Test Split
Step8: Training
|
8,935
|
<ASSISTANT_TASK:>
Python Code:
def make_a_pile(n):
return [n + 2*i for i in range(n)]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
8,936
|
<ASSISTANT_TASK:>
Python Code:
from os.path import join, expandvars
from tax_credit.simulated_communities import generate_simulated_communities
# Project directory
project_dir = expandvars("$HOME/Desktop/projects/short-read-tax-assignment/")
# Directory containing reference sequence databases
reference_database_dir = join(project_dir, 'data', 'ref_dbs')
# simulated communities directory
sim_dir = join(project_dir, "data", "simulated-community")
dataset_reference_combinations = [
# (community_name, ref_db)
('sake', 'gg_13_8_otus'),
('wine', 'unite_20.11.2016')
]
reference_dbs = {'gg_13_8_otus' : (join(reference_database_dir, 'gg_13_8_otus/99_otus_clean_515f-806r_trim250.fasta'),
join(reference_database_dir, 'gg_13_8_otus/99_otu_taxonomy_clean.tsv')),
'unite_20.11.2016' : (join(reference_database_dir, 'unite_20.11.2016/sh_refs_qiime_ver7_99_20.11.2016_dev_clean_BITSf-B58S3r_trim250.fasta'),
join(reference_database_dir, 'unite_20.11.2016/sh_taxonomy_qiime_ver7_99_20.11.2016_dev_clean.tsv'))}
generate_simulated_communities(sim_dir, dataset_reference_combinations, reference_dbs, strain_max=5)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In the following cell, we define the natural datasets that we want to use for simulated community generation. The directory for each dataset is located in sim_dir, and contains the files expected-composition.txt, containing the taxonomic composition of each sample, and map.txt, containing sample metadata.
Step2: The following cell will generate
|
8,937
|
<ASSISTANT_TASK:>
Python Code:
from sympy import *
from sympy.vector import CoordSys3D
N = CoordSys3D('N')
x1, x2, x3 = symbols("x_1 x_2 x_3")
alpha1, alpha2, alpha3 = symbols("alpha_1 alpha_2 alpha_3")
R, L, ga, gv = symbols("R L g_a g_v")
init_printing()
a1 = pi / 2 + (L / 2 - alpha1)/R
x = R * cos(a1)
y = alpha2
z = R * sin(a1)
r = x*N.i + y*N.j + z*N.k
r
r1 = r.diff(alpha1)
r2 = r.diff(alpha2)
k1 = trigsimp(r1.magnitude()**3)
k2 = trigsimp(r2.magnitude()**3)
r1 = r1/k1
r2 = r2/k2
r1
r2
n = r1.cross(r2)
n = trigsimp(n.normalize())
n
dr1=r1.diff(alpha1)
k1 = trigsimp(r1.cross(dr1).magnitude()/k1)
k1
dr2=r2.diff(alpha2)
k2 = trigsimp(r2.cross(dr2).magnitude()/k2)
k2
n.diff(alpha1)
r1.diff(alpha1)
R_alpha=r+alpha3*n
R_alpha
R1=R_alpha.diff(alpha1)
R2=R_alpha.diff(alpha2)
R3=R_alpha.diff(alpha3)
trigsimp(R1)
R2
R3
eps=trigsimp(R1.dot(R2.cross(R3)))
R_1=simplify(trigsimp(R2.cross(R3)/eps))
R_2=simplify(trigsimp(R3.cross(R1)/eps))
R_3=simplify(trigsimp(R1.cross(R2)/eps))
R_1
R_2
R_3
dx1da1=R1.dot(N.i)
dx1da2=R2.dot(N.i)
dx1da3=R3.dot(N.i)
dx2da1=R1.dot(N.j)
dx2da2=R2.dot(N.j)
dx2da3=R3.dot(N.j)
dx3da1=R1.dot(N.k)
dx3da2=R2.dot(N.k)
dx3da3=R3.dot(N.k)
A=Matrix([[dx1da1, dx1da2, dx1da3], [dx2da1, dx2da2, dx2da3], [dx3da1, dx3da2, dx3da3]])
simplify(A)
A_inv = trigsimp(A**-1)
simplify(trigsimp(A_inv))
trigsimp(A.det())
g11=R1.dot(R1)
g12=R1.dot(R2)
g13=R1.dot(R3)
g21=R2.dot(R1)
g22=R2.dot(R2)
g23=R2.dot(R3)
g31=R3.dot(R1)
g32=R3.dot(R2)
g33=R3.dot(R3)
G=Matrix([[g11, g12, g13],[g21, g22, g23], [g31, g32, g33]])
G=trigsimp(G)
G
g_11=R_1.dot(R_1)
g_12=R_1.dot(R_2)
g_13=R_1.dot(R_3)
g_21=R_2.dot(R_1)
g_22=R_2.dot(R_2)
g_23=R_2.dot(R_3)
g_31=R_3.dot(R_1)
g_32=R_3.dot(R_2)
g_33=R_3.dot(R_3)
G_con=Matrix([[g_11, g_12, g_13],[g_21, g_22, g_23], [g_31, g_32, g_33]])
G_con=trigsimp(G_con)
G_con
G_inv = G**-1
G_inv
dR1dalpha1 = trigsimp(R1.diff(alpha1))
dR1dalpha1
dR1dalpha2 = trigsimp(R1.diff(alpha2))
dR1dalpha2
dR1dalpha3 = trigsimp(R1.diff(alpha3))
dR1dalpha3
dR2dalpha1 = trigsimp(R2.diff(alpha1))
dR2dalpha1
dR2dalpha2 = trigsimp(R2.diff(alpha2))
dR2dalpha2
dR2dalpha3 = trigsimp(R2.diff(alpha3))
dR2dalpha3
dR3dalpha1 = trigsimp(R3.diff(alpha1))
dR3dalpha1
dR3dalpha2 = trigsimp(R3.diff(alpha2))
dR3dalpha2
dR3dalpha3 = trigsimp(R3.diff(alpha3))
dR3dalpha3
u1=Function('u^1')
u2=Function('u^2')
u3=Function('u^3')
q=Function('q') # q(alpha3) = 1+alpha3/R
K = Symbol('K') # K = 1/R
u1_nabla1 = u1(alpha1, alpha2, alpha3).diff(alpha1) + u3(alpha1, alpha2, alpha3) * K / q(alpha3)
u2_nabla1 = u2(alpha1, alpha2, alpha3).diff(alpha1)
u3_nabla1 = u3(alpha1, alpha2, alpha3).diff(alpha1) - u1(alpha1, alpha2, alpha3) * K * q(alpha3)
u1_nabla2 = u1(alpha1, alpha2, alpha3).diff(alpha2)
u2_nabla2 = u2(alpha1, alpha2, alpha3).diff(alpha2)
u3_nabla2 = u3(alpha1, alpha2, alpha3).diff(alpha2)
u1_nabla3 = u1(alpha1, alpha2, alpha3).diff(alpha3) + u1(alpha1, alpha2, alpha3) * K / q(alpha3)
u2_nabla3 = u2(alpha1, alpha2, alpha3).diff(alpha3)
u3_nabla3 = u3(alpha1, alpha2, alpha3).diff(alpha3)
# $\nabla_2 u^2 = \frac { \partial u^2 } { \partial \alpha_2}$
grad_u = Matrix([[u1_nabla1, u2_nabla1, u3_nabla1],[u1_nabla2, u2_nabla2, u3_nabla2], [u1_nabla3, u2_nabla3, u3_nabla3]])
grad_u
G_s = Matrix([[q(alpha3)**2, 0, 0],[0, 1, 0], [0, 0, 1]])
grad_u_down=grad_u*G_s
expand(simplify(grad_u_down))
B = zeros(9, 12)
B[0,1] = (1+alpha3/R)**2
B[0,8] = (1+alpha3/R)/R
B[1,2] = (1+alpha3/R)**2
B[2,0] = (1+alpha3/R)/R
B[2,3] = (1+alpha3/R)**2
B[3,5] = S(1)
B[4,6] = S(1)
B[5,7] = S(1)
B[6,9] = S(1)
B[6,0] = -(1+alpha3/R)/R
B[7,10] = S(1)
B[8,11] = S(1)
B
E=zeros(6,9)
E[0,0]=1
E[1,4]=1
E[2,8]=1
E[3,1]=1
E[3,3]=1
E[4,2]=1
E[4,6]=1
E[5,5]=1
E[5,7]=1
E
Q=E*B
Q=simplify(Q)
Q
T=zeros(12,6)
T[0,0]=1
T[0,2]=alpha3
T[1,1]=1
T[1,3]=alpha3
T[3,2]=1
T[8,4]=1
T[9,5]=1
T
Q=E*B*T
Q=simplify(Q)
Q
from sympy import MutableDenseNDimArray
C_x = MutableDenseNDimArray.zeros(3, 3, 3, 3)
for i in range(3):
for j in range(3):
for k in range(3):
for l in range(3):
elem_index = 'C^{{{}{}{}{}}}'.format(i+1, j+1, k+1, l+1)
el = Symbol(elem_index)
C_x[i,j,k,l] = el
C_x
C_x_symmetry = MutableDenseNDimArray.zeros(3, 3, 3, 3)
def getCIndecies(index):
if (index == 0):
return 0, 0
elif (index == 1):
return 1, 1
elif (index == 2):
return 2, 2
elif (index == 3):
return 0, 1
elif (index == 4):
return 0, 2
elif (index == 5):
return 1, 2
for s in range(6):
for t in range(s, 6):
i,j = getCIndecies(s)
k,l = getCIndecies(t)
elem_index = 'C^{{{}{}{}{}}}'.format(i+1, j+1, k+1, l+1)
el = Symbol(elem_index)
C_x_symmetry[i,j,k,l] = el
C_x_symmetry[i,j,l,k] = el
C_x_symmetry[j,i,k,l] = el
C_x_symmetry[j,i,l,k] = el
C_x_symmetry[k,l,i,j] = el
C_x_symmetry[k,l,j,i] = el
C_x_symmetry[l,k,i,j] = el
C_x_symmetry[l,k,j,i] = el
C_x_symmetry
C_isotropic = MutableDenseNDimArray.zeros(3, 3, 3, 3)
C_isotropic_matrix = zeros(6)
mu = Symbol('mu')
la = Symbol('lambda')
for s in range(6):
for t in range(s, 6):
if (s < 3 and t < 3):
if(t != s):
C_isotropic_matrix[s,t] = la
C_isotropic_matrix[t,s] = la
else:
C_isotropic_matrix[s,t] = 2*mu+la
C_isotropic_matrix[t,s] = 2*mu+la
elif (s == t):
C_isotropic_matrix[s,t] = mu
C_isotropic_matrix[t,s] = mu
for s in range(6):
for t in range(s, 6):
i,j = getCIndecies(s)
k,l = getCIndecies(t)
el = C_isotropic_matrix[s, t]
C_isotropic[i,j,k,l] = el
C_isotropic[i,j,l,k] = el
C_isotropic[j,i,k,l] = el
C_isotropic[j,i,l,k] = el
C_isotropic[k,l,i,j] = el
C_isotropic[k,l,j,i] = el
C_isotropic[l,k,i,j] = el
C_isotropic[l,k,j,i] = el
C_isotropic
def getCalpha(C, A, q, p, s, t):
res = S(0)
for i in range(3):
for j in range(3):
for k in range(3):
for l in range(3):
res += C[i,j,k,l]*A[q,i]*A[p,j]*A[s,k]*A[t,l]
return simplify(trigsimp(res))
C_isotropic_alpha = MutableDenseNDimArray.zeros(3, 3, 3, 3)
for i in range(3):
for j in range(3):
for k in range(3):
for l in range(3):
c = getCalpha(C_isotropic, A_inv, i, j, k, l)
C_isotropic_alpha[i,j,k,l] = c
C_isotropic_alpha[0,0,0,0]
C_isotropic_matrix_alpha = zeros(6)
for s in range(6):
for t in range(6):
i,j = getCIndecies(s)
k,l = getCIndecies(t)
C_isotropic_matrix_alpha[s,t] = C_isotropic_alpha[i,j,k,l]
C_isotropic_matrix_alpha
C_orthotropic = MutableDenseNDimArray.zeros(3, 3, 3, 3)
C_orthotropic_matrix = zeros(6)
for s in range(6):
for t in range(s, 6):
elem_index = 'C^{{{}{}}}'.format(s+1, t+1)
el = Symbol(elem_index)
if ((s < 3 and t < 3) or t == s):
C_orthotropic_matrix[s,t] = el
C_orthotropic_matrix[t,s] = el
for s in range(6):
for t in range(s, 6):
i,j = getCIndecies(s)
k,l = getCIndecies(t)
el = C_orthotropic_matrix[s, t]
C_orthotropic[i,j,k,l] = el
C_orthotropic[i,j,l,k] = el
C_orthotropic[j,i,k,l] = el
C_orthotropic[j,i,l,k] = el
C_orthotropic[k,l,i,j] = el
C_orthotropic[k,l,j,i] = el
C_orthotropic[l,k,i,j] = el
C_orthotropic[l,k,j,i] = el
C_orthotropic
def getCalpha(C, A, q, p, s, t):
res = S(0)
for i in range(3):
for j in range(3):
for k in range(3):
for l in range(3):
res += C[i,j,k,l]*A[q,i]*A[p,j]*A[s,k]*A[t,l]
return simplify(trigsimp(res))
C_orthotropic_alpha = MutableDenseNDimArray.zeros(3, 3, 3, 3)
for i in range(3):
for j in range(3):
for k in range(3):
for l in range(3):
c = getCalpha(C_orthotropic, A_inv, i, j, k, l)
C_orthotropic_alpha[i,j,k,l] = c
C_orthotropic_alpha[0,0,0,0]
C_orthotropic_matrix_alpha = zeros(6)
for s in range(6):
for t in range(6):
i,j = getCIndecies(s)
k,l = getCIndecies(t)
C_orthotropic_matrix_alpha[s,t] = C_orthotropic_alpha[i,j,k,l]
C_orthotropic_matrix_alpha
P=eye(12,12)
P[0,0]=1/(1+alpha3/R)
P[1,1]=1/(1+alpha3/R)
P[2,2]=1/(1+alpha3/R)
P[3,0]=-1/(R*(1+alpha3/R)**2)
P[3,3]=1/(1+alpha3/R)
P
Def=simplify(E*B*P)
Def
rows, cols = Def.shape
D_p=zeros(rows, cols)
q = 1+alpha3/R
for i in range(rows):
ratio = 1
if (i==0):
ratio = q*q
elif (i==3 or i == 4):
ratio = q
for j in range(cols):
D_p[i,j] = Def[i,j] / ratio
D_p = simplify(D_p)
D_p
C_isotropic_alpha_p = MutableDenseNDimArray.zeros(3, 3, 3, 3)
q=1+alpha3/R
for i in range(3):
for j in range(3):
for k in range(3):
for l in range(3):
fact = 1
if (i==0):
fact = fact*q
if (j==0):
fact = fact*q
if (k==0):
fact = fact*q
if (l==0):
fact = fact*q
C_isotropic_alpha_p[i,j,k,l] = simplify(C_isotropic_alpha[i,j,k,l]*fact)
C_isotropic_matrix_alpha_p = zeros(6)
for s in range(6):
for t in range(6):
i,j = getCIndecies(s)
k,l = getCIndecies(t)
C_isotropic_matrix_alpha_p[s,t] = C_isotropic_alpha_p[i,j,k,l]
C_isotropic_matrix_alpha_p
C_orthotropic_alpha_p = MutableDenseNDimArray.zeros(3, 3, 3, 3)
q=1+alpha3/R
for i in range(3):
for j in range(3):
for k in range(3):
for l in range(3):
fact = 1
if (i==0):
fact = fact*q
if (j==0):
fact = fact*q
if (k==0):
fact = fact*q
if (l==0):
fact = fact*q
C_orthotropic_alpha_p[i,j,k,l] = simplify(C_orthotropic_alpha[i,j,k,l]*fact)
C_orthotropic_matrix_alpha_p = zeros(6)
for s in range(6):
for t in range(6):
i,j = getCIndecies(s)
k,l = getCIndecies(t)
C_orthotropic_matrix_alpha_p[s,t] = C_orthotropic_alpha_p[i,j,k,l]
C_orthotropic_matrix_alpha_p
D_p_T = D_p*T
K = Symbol('K')
D_p_T = D_p_T.subs(R, 1/K)
simplify(D_p_T)
theta, h1, h2=symbols('theta h_1 h_2')
square_geom=theta/2*(R+h2)**2-theta/2*(R+h1)**2
expand(simplify(square_geom))
square_int=integrate(integrate(1+alpha3/R, (alpha3, h1, h2)), (alpha1, 0, theta*R))
expand(simplify(square_int))
simplify(D_p.T*C_isotropic_matrix_alpha_p*D_p)
W = simplify(D_p_T.T*C_isotropic_matrix_alpha_p*D_p_T*(1+alpha3*K)**2)
W
h=Symbol('h')
E=Symbol('E')
v=Symbol('nu')
W_a3 = integrate(W, (alpha3, -h/2, h/2))
W_a3 = simplify(W_a3)
W_a3.subs(la, E*v/((1+v)*(1-2*v))).subs(mu, E/((1+v)*2))
A_M = zeros(3)
A_M[0,0] = E*h/(1-v**2)
A_M[1,1] = 5*E*h/(12*(1+v))
A_M[2,2] = E*h**3/(12*(1-v**2))
Q_M = zeros(3,6)
Q_M[0,1] = 1
Q_M[0,4] = K
Q_M[1,0] = -K
Q_M[1,2] = 1
Q_M[1,5] = 1
Q_M[2,3] = 1
W_M=Q_M.T*A_M*Q_M
W_M
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Cylindrical coordinates
Step2: Mid-surface coordinates is defined with the following vector $\vec{r}=\vec{r}(\alpha_1, \alpha_2)$
Step3: Tangent to curve
Step4: Normal to curve
Step5: Curvature
Step6: Derivative of base vectors
Step7: $ \frac { d\vec{n} } { d\alpha_1} = -\frac {1}{R} \vec{v} = -k \vec{v} $
Step8: $ \frac { d\vec{v} } { d\alpha_1} = \frac {1}{R} \vec{n} = k \vec{n} $
Step9: Base Vectors $\vec{R}^1, \vec{R}^2, \vec{R}^3$
Step10: Jacobi matrix
Step11: Metric tensor
Step12: ${\displaystyle \hat{G}=\sum_{i,j} g_{ij}\vec{R}^i\vec{R}^j}$
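For this cylindrical shell the metric works out to be diagonal; writing out what the trigsimp'd G above evaluates to (consistent with the G_s matrix defined later in the notebook, where $q = 1+\alpha_3/R$):

```latex
G = \begin{pmatrix}
\left(1 + \dfrac{\alpha_3}{R}\right)^2 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{pmatrix}
```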
Step13: Derivatives of vectors
Step14: $ \frac { d\vec{R_1} } { d\alpha_1} = -\frac {1}{R} \left( 1+\frac{\alpha_3}{R} \right) \vec{R_3} $
Step15: $ \frac { d\vec{R_1} } { d\alpha_3} = \frac {1}{R} \frac {1}{1+\frac{\alpha_3}{R}} \vec{R_1} $
Step16: $ \frac { d\vec{R_3} } { d\alpha_1} = \frac {1}{R} \frac {1}{1+\frac{\alpha_3}{R}} \vec{R_1} $
Step17: $ \frac { d\vec{R_3} } { d\alpha_3} = \vec{0} $
Step18: $
Step19: Deformations tensor
Step20: Tymoshenko theory
Step21: Elasticity tensor(stiffness tensor)
Step22: Include symmetry
Step23: Isotropic material
Step24: Orthotropic material
Step25: Orthotropic material in shell coordinates
Step26: Physical coordinates
Step27: Stiffness tensor
Step28: Tymoshenko
Step29: Square of segment
Step30: ${\displaystyle A=\int_{0}^{L}\int_{h_1}^{h_2} \left( 1+\frac{\alpha_3}{R} \right) d \alpha_1 d \alpha_3}, L=R \theta$
Step31: Virtual work
Step32: Isotropic material physical coordinates - Tymoshenko
|
8,938
|
<ASSISTANT_TASK:>
Python Code:
# triple quotes restored so the multi-line string assignment parses
header = """
My President Was Black
A history of the first African American White House—and of what came next
By Ta-Nehisi Coates
Photograph by Ian Allen
"""
?repr
print(repr(header))
header_list = header.split('\n')
print(header_list)
#Removing extra white spaces in each element of our list
for i in range(len(header_list)):
header_list[i] = header_list[i].strip()
print(header_list)
##EXAMPLE of remove not doing what I want.
c = ['a', 'a', 'a', 'a', 'a', '', '', '', '']
for element in c:
if element == '':
c.remove(element)
print(c)
header_list = list(filter(lambda item: item!='' , header_list))
print(header_list)
title = header_list[0]
intro = header_list[1]
print('The title is: ', title)
print('The intro is: ', intro)
print('The title is: {}. \nThe introduction is: {}.'.format(title, intro))
author = header_list[2].strip('By ')
photographer = header_list[3].strip('Photograph by ')
print('The author is: {}. \nThe photographer is: {}.'.format(author, photographer))
print('Title : ', title)
print('Introduction: ', intro)
print('Author : ', author)
print('Photographer: ', photographer)
# triple quotes restored so the multi-line string assignment parses
first_paragraph = """
In the waning days of President Barack Obama’s administration, he and his wife, Michelle, hosted a farewell party, the full import of which no one could then grasp. It was late October, Friday the 21st, and the president had spent many of the previous weeks, as he would spend the two subsequent weeks, campaigning for the Democratic presidential nominee, Hillary Clinton. Things were looking up. Polls in the crucial states of Virginia and Pennsylvania showed Clinton with solid advantages. The formidable GOP strongholds of Georgia and Texas were said to be under threat. The moment seemed to buoy Obama. He had been light on his feet in these last few weeks, cracking jokes at the expense of Republican opponents and laughing off hecklers. At a rally in Orlando on October 28, he greeted a student who would be introducing him by dancing toward her and then noting that the song playing over the loudspeakers—the Gap Band’s “Outstanding”—was older than she was. “This is classic!” he said. Then he flashed the smile that had launched America’s first black presidency, and started dancing again. Three months still remained before Inauguration Day, but staffers had already begun to count down the days. They did this with a mix of pride and longing—like college seniors in early May. They had no sense of the world they were graduating into. None of us did.
"""
repr(first_paragraph)
paragraph_list = first_paragraph.split()
print(paragraph_list)
#We want to keep first_paragraph without changes, so we create a copy and work with that.
revised_paragraph = first_paragraph
for element in revised_paragraph:
if element == '—':
revised_paragraph = revised_paragraph.replace(element, ' ')
print(revised_paragraph)
words_list = revised_paragraph.split()
words = len(words_list)
print('The amount of words in first_paragraph is: ', words)
#We want to keep first_paragraph without changes, so we create a copy, in this case replacing ? by .
#and work with that copy.
sentence_paragraph = first_paragraph.replace('?', '.')
sentence_paragraph = sentence_paragraph.replace('\n', '')
#Split the paragraph into sentences
sentence_list = sentence_paragraph.split('.')
print(sentence_list)
#Let's remove the '' elements
sentence_list = list(filter(lambda item: item!='' , sentence_list))
print(sentence_list)
sentences = len(sentence_list)
print('The amount of sentences in first_paragraph is: ', sentences)
obama_count = 0
for word in words_list:
if 'Obama' in word:
obama_count +=1
print(obama_count)
print('The word Obama in first_paragraph appears {} times.'.format(obama_count))
#Lower case the whole paragraph
lower_paragraph = first_paragraph.lower()
print(lower_paragraph)
#First we import the string constants available in python.
import string
#Let's print the string punctuation.
print(string.punctuation)
#loop in the character of string.punctuation and we replace the characters that appear
#in our lower_paragraph for a space.
for character in string.punctuation:
lower_paragraph = lower_paragraph.replace(character, ' ')
print(lower_paragraph)
more_punct = '’‘“”—\n'
for character in more_punct:
lower_paragraph = lower_paragraph.replace(character, ' ')
print(lower_paragraph)
no_punctuation_list = lower_paragraph.split()
words_no_punctuation = len(no_punctuation_list)
print('The amount of words in the paragraph with no punctuation is: ', words_no_punctuation)
print('The amount of words in first_paragraph is: ', words)
print('The amount of sentences in first_paragraph is: ', sentences)
print('The word Obama in first_paragraph appears {} times.'.format(obama_count))
print('The amount of words in the paragraph with no punctuation is: ', words_no_punctuation)
name = input('Enter file name with its path location: ')
with open(name, 'r') as file:
article = file.read()
print(article)
#Let's create a string with all the punctuation we want to remove to count words.
all_punct = string.punctuation + more_punct
all_punct
#We will modify article_no_punct but we want to keep intact article. So
article_no_punct = article
for char in all_punct:
if char in article_no_punct:
article_no_punct = article_no_punct.replace(char, ' ')
article_no_punct
words_list = article_no_punct.split()
words_total = len(words_list)
print('The total amount of words is: {}'.format(words_total))
count = {}
for word in words_list:
count[word] = count.get(word,0) + 1
print(count)
#We will modify article_sentences but we want to keep intact article. So
article_sentences = article
article_sentences = article_sentences.replace('\n','.')
sentences_article_list = article_sentences.split('.')
print(sentences_article_list)
list_clean = list(filter(lambda item: len(item)>3 , sentences_article_list))
print(list_clean)
sentence_total = len(list_clean)
print('The total amount of sentences is: {}'.format(sentence_total))
words_lower = []
for word in words_list:
words_lower.append(word.lower())
indx_white = [i for i, x in enumerate(words_lower) if x == "white"]
indx_black = [i for i, x in enumerate(words_lower) if x == "black"]
print('idx white:\n',indx_white)
print('idx black:\n',indx_black)
#Looking for the words that follow white
lst_white =[]
for i in indx_white:
lst_white.append(words_lower[i+1])
#Looking for the words that follow white
lst_black =[]
for i in indx_black:
lst_black.append(words_lower[i+1])
print('Words that follows white:\n',lst_white)
print('Words that follows black:\n',lst_black)
follows_white = {}
for word in lst_white:
follows_white[word] = follows_white.get(word,0) + 1
print(follows_white)
follows_black = {}
for word in lst_black:
follows_black[word] = follows_black.get(word,0) + 1
print(follows_black)
most_follow_white = max(follows_white, key=follows_white.get)
most_follow_black = max(follows_black, key=follows_black.get)
print("The most used word that follows 'white' is: ",most_follow_white)
print("The most used word that follows 'black' is: ",most_follow_black)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: DC Python Lab - Class 02/18/2017
Step2: Let's use the hint
Step3: Splitting the header
Step4: Let's clean up a little this data.
Step5: You can avoid removing the empty spaces, but if you want to remove them then you can follow the next lines.
Step6: Coming back and deleting the ''
Step7: Now our data is clean and neat, let's get the things we were assigned to. Refreshing the exercise statement, we need
Step8: Let's print them!
Step9: Another way of formatting print statements is by doing the following. Remember that \n means "go to the next line".
Step10: Getting the author and the photographer
Step11: Let's print them all together.
Step13: Exercise 2
Step14: How many words are in the first paragraph?
Step15: Let's take a look at the split paragraph.
Step16: As you can notice, the punctuation doesn't affect the word count because the symbols are attached to the previous word. However, the em dash '—' symbol links two words and that will affect the count, so let's replace it with a space so that splitting by spaces does what we want.
Step17: Now our revised_paragraph is the same as the first paragraph but without the "em dashes". Let's split it and then count the words.
Step18: To count the words we just need to know the length of our list. To do that we use len()
Step19: How many sentences are in the first paragraph?
Step20: Now our sentences_list just contains the sentences, let's use len() to count the amount of sentences.
Step21: How many times is the word 'Obama' is in the first paragraph?
Step22: If you remove all of the punctuation and lower case all text, how many words?
Step23: Remove punctuation.
Step24: It seems that our paragraph is not using the curly quotation marks, single and double. Neither those nor the "em dash" are in our string.punctuation constant. Let's create another constant string with all the remaining characters we want to remove and take them out. We might want to also add the '\n' symbol.
Step25: Now that we removed all kinds of punctuation, let's create our list and count the elements to know the amount of words.
Step26: Printing all together
Step27: Exercise 3
Step28: How many words are in part one?
Step29: Now we replace all the punctuation characters with a space. We create a copy of the original article to modify.
Step30: Now we split and count by using len()
Step31: What if we want to know how many words of each kind there are?
Step32: If we print the dictionary, we should see every word as a key, with its value indicating how many times that word appears.
Step33: How many sentences are in part one?
Step34: Let's omit the '' and the elements that look like "sentences" but are actually just two letters left over from an abbreviation. For example '— F' or '"'. These elements have the particularity that their length is never greater than 3, so let's filter on that.
Step35: Which words follow 'black' and 'white' in the text? Which ones are used the most for each?
Step36: We want to know where in the list the word white appears and where the word black does. We look for those indices as follows.
Step37: Let's look for the word that follows each repetition of white/black and save them into lists.
Step38: Let's count for each list the repetitions of each word using dictionaries and the get method.
Step39: Let's get the word in each dictionary that has the biggest value.
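The get-based counting and max-by-value lookup described in the steps above can also be sketched with the standard library's collections.Counter (an alternative to the notebook's plain-dict loop, shown here with hypothetical sample words):

```python
from collections import Counter

words = ['white', 'house', 'white', 'paper', 'white', 'house']
count = Counter(words)                  # same tallies as the get(word, 0) loop
print(count['white'])                   # 3
print(count.most_common(1)[0][0])       # 'white', like max(d, key=d.get)
```

Counter.most_common(1) returns the single highest-count (word, count) pair, which is equivalent to taking the key with the biggest value.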
|
8,939
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
x = np.linspace(0,10,23)
f = np.sin(x)
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(x,f,'o-')
plt.plot(4,0,'ro')
# f1 = f[1:-1] * f[:]
print(np.shape(f[:-1]))
print(np.shape(f[1:]))
ff = f[:-1] * f[1:]
print(ff.shape)
x_zero = x[np.where(ff < 0)]
x_zero2 = x[np.where(ff < 0)[0] + 1]
f_zero = f[np.where(ff < 0)]
f_zero2 = f[np.where(ff < 0)[0] + 1]
print(x_zero)
print(f_zero)
Dx = x_zero2 - x_zero
df = np.abs(f_zero)
Df = np.abs(f_zero - f_zero2)
print(Dx)
print(df)
print(Df)
xz = x_zero + (df * Dx) / Df
xz
plt.plot(x,f,'o-')
plt.plot(x_zero,f_zero,'ro')
plt.plot(x_zero2,f_zero2,'go')
plt.plot(xz,np.zeros_like(xz),'yo-')
np.where(ff < 0)[0] + 1
Z = np.random.random(30)
x = np.linspace(0,3,64)
y = np.linspace(0,3,64)
X,Y = np.meshgrid(x,y)
X
Y
np.sin(X**2+Y**2)
plt.contourf(X,Y,np.sin(X**2+Y**2))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 9. Create a 3x3 matrix
Step2: 12. 2D grid.
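The zero-crossing code above can be condensed to a few lines; this is a sketch of the same secant (linear) interpolation between the samples that straddle each sign change, with hypothetical variable names:

```python
import numpy as np

x = np.linspace(0, 10, 23)
f = np.sin(x)
ff = f[:-1] * f[1:]                 # negative exactly where the sign flips
i = np.where(ff < 0)[0]
# Secant interpolation of each crossing between x[i] and x[i+1]
xz = x[i] - f[i] * (x[i + 1] - x[i]) / (f[i + 1] - f[i])
print(xz)                           # close to pi, 2*pi, 3*pi
```

Because sin is nearly linear around its roots, the interpolated crossings land within about 1e-3 of the exact values even on this coarse 23-point grid.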
|
8,940
|
<ASSISTANT_TASK:>
Python Code:
#!pip install -I "phoebe>=2.3,<2.4"
import phoebe
from phoebe import u
import numpy as np
import matplotlib.pyplot as plt
phoebe.devel_on() # needed to use WD-style meshing, which isn't fully supported yet
logger = phoebe.logger()
b = phoebe.default_binary()
b['q'] = 0.7
b['requiv@secondary'] = 0.7
b.add_dataset('lc', times=np.linspace(0,1,101), dataset='lc01')
b.add_dataset('rv', times=np.linspace(0,1,101), dataset='rvdyn')
b.add_dataset('rv', times=np.linspace(0,1,101), dataset='rvnum')
b.add_compute(compute='phoebe2marching', irrad_method='none', mesh_method='marching')
b.add_compute(compute='phoebe2wd', irrad_method='none', mesh_method='wd', eclipse_method='graham')
b.add_compute('legacy', compute='phoebe1', irrad_method='none')
b.set_value_all('rv_method', dataset='rvdyn', value='dynamical')
b.set_value_all('rv_method', dataset='rvnum', value='flux-weighted')
b.set_value_all('atm', 'extern_planckint')
b.set_value_all('gridsize', 30)
b.set_value_all('ld_mode', 'manual')
b.set_value_all('ld_func', 'logarithmic')
b.set_value_all('ld_coeffs', [0.,0.])
b.set_value_all('rv_grav', False)
b.set_value_all('ltte', False)
b.run_compute(compute='phoebe2marching', model='phoebe2marchingmodel')
b.run_compute(compute='phoebe2wd', model='phoebe2wdmodel')
b.run_compute(compute='phoebe1', model='phoebe1model')
colors = {'phoebe2marchingmodel': 'g', 'phoebe2wdmodel': 'b', 'phoebe1model': 'r'}
afig, mplfig = b['lc01'].plot(c=colors, legend=True, show=True)
artist, = plt.plot(b.get_value('fluxes@lc01@phoebe2marchingmodel') - b.get_value('fluxes@lc01@phoebe1model'), 'g-')
artist, = plt.plot(b.get_value('fluxes@lc01@phoebe2wdmodel') - b.get_value('fluxes@lc01@phoebe1model'), 'b-')
artist = plt.axhline(0.0, linestyle='dashed', color='k')
ylim = plt.ylim(-0.003, 0.003)
afig, mplfig = b.filter(dataset='rvdyn', model=['phoebe2wdmodel', 'phoebe1model']).plot(c=colors, legend=True, show=True)
artist, = plt.plot(b.get_value('rvs@rvdyn@primary@phoebe2wdmodel') - b.get_value('rvs@rvdyn@primary@phoebe1model'), color='b', ls=':')
artist, = plt.plot(b.get_value('rvs@rvdyn@secondary@phoebe2wdmodel') - b.get_value('rvs@rvdyn@secondary@phoebe1model'), color='b', ls='-.')
artist = plt.axhline(0.0, linestyle='dashed', color='k')
ylim = plt.ylim(-1.5e-12, 1.5e-12)
afig, mplfig = b.filter(dataset='rvnum').plot(c=colors, show=True)
artist, = plt.plot(b.get_value('rvs@rvnum@primary@phoebe2marchingmodel', ) - b.get_value('rvs@rvnum@primary@phoebe1model'), color='g', ls=':')
artist, = plt.plot(b.get_value('rvs@rvnum@secondary@phoebe2marchingmodel') - b.get_value('rvs@rvnum@secondary@phoebe1model'), color='g', ls='-.')
artist, = plt.plot(b.get_value('rvs@rvnum@primary@phoebe2wdmodel', ) - b.get_value('rvs@rvnum@primary@phoebe1model'), color='b', ls=':')
artist, = plt.plot(b.get_value('rvs@rvnum@secondary@phoebe2wdmodel') - b.get_value('rvs@rvnum@secondary@phoebe1model'), color='b', ls='-.')
artist = plt.axhline(0.0, linestyle='dashed', color='k')
ylim = plt.ylim(-1e-2, 1e-2)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As always, let's do imports and initialize a logger and a new bundle.
Step2: Adding Datasets and Compute Options
Step3: Let's add compute options for phoebe using both the new (marching) method for creating meshes as well as the WD method which imitates the format of the mesh used within legacy.
Step4: Now we add compute options for the 'legacy' backend.
Step5: And set the two RV datasets to use the correct methods (for both compute options)
Step6: Let's use the external atmospheres available for both phoebe1 and phoebe2
Step7: Let's make sure both 'phoebe1' and 'phoebe2wd' use the same value for gridsize
Step8: Let's also disable other special effect such as heating, gravity, and light-time effects.
Step9: Finally, let's compute all of our models
Step10: Plotting
Step11: Now let's plot the residuals between these two models
Step12: Dynamical RVs
Step13: And also plot the residuals of both the primary and secondary RVs (notice the scale on the y-axis)
Step14: Numerical (flux-weighted) RVs
|
8,941
|
<ASSISTANT_TASK:>
Python Code:
# numpy provides python tools to easily load comma separated files.
import numpy as np
# use numpy to load disease #1 data
d1 = np.loadtxt(open("../31_Data_ML-IV/D1.csv", "rb"), delimiter=",")
# features are all rows for columns before 200
# The canonical way to name this is that X is our matrix of
# examples by features.
X1 = d1[:,:200]
# labels are in all rows at the 200th column
# The canonical way to name this is that y is our vector of
# labels.
y1 = d1[:,200]
# use numpy to load disease #2 data
d2 = np.loadtxt(open("../31_Data_ML-IV/D2.csv", "rb"), delimiter=",")
# features are all rows for columns before 200
X2 = d2[:,:200]
# labels are in all rows at the 200th column
y2 = d2[:,200]
# DATASET 1 CLASSIFIER CODE GOES HERE
# DATASET 2 CLASSIFIER CODE GOES HERE
d1_test = np.loadtxt(open("../32_Data_ML-V/D1_test.csv", "rb"), delimiter=",")
X1_test = d1_test[:,:200]
y1_test = d1_test[:,200]
d2_test = np.loadtxt(open("../32_Data_ML-V/D2_test.csv", "rb"), delimiter=",")
X2_test = d2_test[:,:200]
y2_test = d2_test[:,200]
d1_score = d1_classifier.score(X1_test, y1_test)
print("D1 Testing Accuracy: " + str(d1_score))
d2_score = d2_classifier.score(X2_test, y2_test)
print("D2 Testing Accuracy: " + str(d2_score))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Train your classifiers
Step2: Evaluate your classifiers
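The two "CLASSIFIER CODE GOES HERE" cells are left for the reader. As a hand-rolled illustration of the fit/score loop (not the intended solution, and using hypothetical names), a nearest-centroid baseline in plain NumPy could look like:

```python
import numpy as np

class NearestCentroid:
    # Minimal baseline: predict the class whose mean feature vector is closest
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        d = ((X[:, None, :] - self.centroids_[None, :, :]) ** 2).sum(axis=2)
        return self.classes_[d.argmin(axis=1)]

    def score(self, X, y):
        return (self.predict(X) == y).mean()

X = np.array([[0., 0.], [0., 1.], [5., 5.], [5., 6.]])
y = np.array([0, 0, 1, 1])
clf = NearestCentroid().fit(X, y)
print(clf.score(X, y))  # 1.0
```

Any classifier exposing the same fit/predict/score interface will drop into the evaluation cells below unchanged.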
|
8,942
|
<ASSISTANT_TASK:>
Python Code:
from sympy import init_session
init_session()
%matplotlib notebook
f=sin(x)*sin(y)
f
from sympy.simplify.fu import *
g=TR8(f) # TR8 is a trigonometric expression function from Fu paper
Eq(f, g)
s = 0.03 # slip
fs = 50 # stator frequency in Hz
fr = (1-s)*fs # rotor frequency in Hz
fr
alpha=2*pi*fs*t
beta=2*pi*fr*t
# Create the plot of the stator rotation frequency in blue:
p1=plot(sin(alpha), (t, 0, 1), show=False, line_color='b', adaptive=False, nb_of_points=5000)
# Create the plot of the rotor rotation frequency in green:
p2=plot(0.5*sin(beta), (t, 0, 1), show=False, line_color='g', adaptive=False, nb_of_points=5000)
# Create the plot of the combined flux in red:
p3=plot(0.5*f.subs([(x, alpha), (y, beta)]), (t, 0, 1),
show=False, line_color='r', adaptive=False, nb_of_points=5000)
# Make the second and third one a part of the first one.
p1.extend(p2)
p1.extend(p3)
# Display the modified plot object.
p1.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Look at a superposition of two sinusoidal signals
Step2: This product can be rewritten as a sum using trigonometric equalities. SymPy has a special function for those
Step3: Now let us take an example with two different angular frequencies
Step4: Now plot the product of both frequencies
|
8,943
|
<ASSISTANT_TASK:>
Python Code:
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import hashes
import base64 # to produce human readable encoding of the bytes
digest = hashes.Hash(hashes.SHA256(), backend=default_backend())
digest.update(b"PyCon")
digest.update(b"2017")
msg_digest = digest.finalize()
# Notice the output size of the digest
print ("msg_digest:", len(msg_digest), len(msg_digest) * 8)
print ("base64 encoding:", base64.b64encode(msg_digest))
print()
digest = hashes.Hash(hashes.SHA256(), backend=default_backend())
digest.update(b"PyCon2017")
msg_digest = digest.finalize()
# Notice the output size of the digest
print ("msg_digest:", len(msg_digest), len(msg_digest) * 8)
print ("base64 encoding:", base64.b64encode(msg_digest))
print()
digest = hashes.Hash(hashes.SHA256(), backend=default_backend())
digest.update(b"PyCon 2017")
msg_digest = digest.finalize()
# Notice the output size of the digest
print ("msg_digest:", len(msg_digest), len(msg_digest) * 8)
print ("base64 encoding:", base64.b64encode(msg_digest))
import hashlib
sha256 = hashlib.sha256()
sha256.update(b"PyCon2017")
msg_digest = sha256.digest()
# Notice the output size of the digest
print ("msg_digest:", len(msg_digest), len(msg_digest) * 8)
print ("base64 encoding:", base64.b64encode(msg_digest))
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import hmac, hashes
import os
import base64
hmc_key = k = os.urandom(16)
hmc = hmac.HMAC(hmc_key, hashes.SHA1(), default_backend())
hmc.update(b"PyCon2017")
hmc_sig = hmc.finalize()
print (base64.b64encode(hmc_sig))
# Verification Successufl
hmc = hmac.HMAC(hmc_key, hashes.SHA1(), default_backend())
hmc.update(b"PyCon2017")
hmc.verify(hmc_sig)
# Verification Fails
hmc = hmac.HMAC(hmc_key, hashes.SHA1(), default_backend())
hmc.update(b"PyCon2017")
hmc.verify(hmc_sig+b"1")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: *<font color=" #6495ED">Exercise</font> *
Step2: *<font color=" #6495ED">Exercise</font> *
Step3: HMAC Verification
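The same tag-then-verify pattern is available in the standard library's hmac module; a minimal sketch (stdlib API, independent of the cryptography package used above, with hypothetical key/message values):

```python
import hmac
import hashlib
import os

key = os.urandom(16)
msg = b"PyCon2017"
tag = hmac.new(key, msg, hashlib.sha256).digest()

def verify(key, msg, tag):
    # Recompute the tag and compare in constant time to avoid timing attacks
    expected = hmac.new(key, msg, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

print(verify(key, msg, tag))         # True
print(verify(key, msg + b"1", tag))  # False
```

hmac.compare_digest plays the role of the cryptography package's verify(): a mismatch is reported without leaking how many bytes matched.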
|
8,944
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import math
import matplotlib.pyplot as plt
%matplotlib inline
import torch
assert torch.__version__ >= '1.0.0'
import tqdm
help(torch.sqrt)
# to close the Jupyter help bar, press `Esc` or `q`
?torch.cat
theta = torch.linspace(-math.pi, math.pi, 1000)
assert theta.shape == (1000,)
rho = (1 + 0.9 * torch.cos(8 * theta)) * (1 + 0.1 * torch.cos(24 * theta)) * (0.9 + 0.05 * torch.cos(200 * theta)) * (1 + torch.sin(theta))
assert torch.is_same_size(rho, theta)
x = rho * torch.cos(theta)
y = rho * torch.sin(theta)
# Run this cell and make sure the plot is correct
plt.figure(figsize=[6,6])
plt.fill(x.numpy(), y.numpy(), color='green')
plt.grid()
from scipy.signal import correlate2d as conv2d
def numpy_update(alive_map):
# Count neighbours with convolution
conv_kernel = np.array([[1,1,1],
[1,0,1],
[1,1,1]])
num_alive_neighbors = conv2d(alive_map, conv_kernel, mode='same')
# Apply game rules
born = np.logical_and(num_alive_neighbors == 3, alive_map == 0)
survived = np.logical_and(np.isin(num_alive_neighbors, [2,3]), alive_map == 1)
np.copyto(alive_map, np.logical_or(born, survived))
def torch_update(alive_map):
    """Game of Life update function that does to `alive_map` exactly the same as `numpy_update`.
    :param alive_map: `torch.tensor` of shape `(height, width)` and dtype `torch.float32`
        containing 0s (dead) and 1s (alive)
    """
conv_kernel = torch.Tensor([[[[1, 1, 1], [1, 0, 1], [1, 1, 1]]]])
neighbors_map = torch.conv2d(alive_map.unsqueeze(0).unsqueeze(0),
conv_kernel, padding=1).squeeze()
born = (neighbors_map == 3) & (alive_map == 0)
survived = ((neighbors_map == 2) | (neighbors_map == 3)) & (alive_map == 1)
alive_map.copy_(born | survived)
# Generate a random initial map
alive_map_numpy = np.random.choice([0, 1], p=(0.5, 0.5), size=(100, 100))
alive_map_torch = torch.tensor(alive_map_numpy).float().clone()
numpy_update(alive_map_numpy)
torch_update(alive_map_torch)
# results should be identical
assert np.allclose(alive_map_torch.numpy(), alive_map_numpy), \
"Your PyTorch implementation doesn't match numpy_update."
print("Well done!")
%matplotlib notebook
plt.ion()
# initialize game field
alive_map = np.random.choice([0, 1], size=(100, 100))
alive_map = torch.tensor(alive_map).float()
fig = plt.figure()
ax = fig.add_subplot(111)
fig.show()
for _ in range(100):
torch_update(alive_map)
# re-draw image
ax.clear()
ax.imshow(alive_map.numpy(), cmap='gray')
fig.canvas.draw()
# A fun setup for your amusement
alive_map = np.arange(100) % 2 + np.zeros([100, 100])
alive_map[48:52, 50] = 1
alive_map = torch.tensor(alive_map).float()
fig = plt.figure()
ax = fig.add_subplot(111)
fig.show()
for _ in range(150):
torch_update(alive_map)
ax.clear()
ax.imshow(alive_map.numpy(), cmap='gray')
fig.canvas.draw()
np.random.seed(666)
torch.manual_seed(666)
from notmnist import load_notmnist
letters = 'ABCDEFGHIJ'
X_train, y_train, X_test, y_test = map(torch.tensor, load_notmnist(letters=letters))
X_train.squeeze_()
X_test.squeeze_();
%matplotlib inline
fig, axarr = plt.subplots(2, 10, figsize=(15,3))
for idx, ax in enumerate(axarr.ravel()):
ax.imshow(X_train[idx].numpy(), cmap='gray')
ax.axis('off')
ax.set_title(letters[y_train[idx]])
class NeuralNet:
def __init__(self, lr):
# Your code here
self.lr = lr
self.EPS = 1e-15
# First linear layer
self.linear1w = torch.randn(784, 300, dtype=torch.float32, requires_grad=True)
self.linear1b = torch.randn(1, 300, dtype=torch.float32, requires_grad=True)
# Second linear layer
self.linear2w = torch.randn(300, 10, dtype=torch.float32, requires_grad=True)
self.linear2b = torch.randn(1, 10, dtype=torch.float32, requires_grad=True)
def predict(self, images):
        """
        images: `torch.tensor` of shape `batch_size x height x width`
            and dtype `torch.float32`.
        returns: `output`, a `torch.tensor` of shape `batch_size x 10`,
            where `output[i][j]` is the probability of `i`-th
            batch sample to belong to `j`-th class.
        """
def log_softmax(input):
input = input - torch.max(input, dim=1, keepdim=True)[0]
return input - torch.log(torch.sum(torch.exp(input), dim=1, keepdim=True))
linear1_out = torch.add(images @ self.linear1w, self.linear1b).clamp(min=0)
linear2_out = torch.add(linear1_out @ self.linear2w, self.linear2b)
return log_softmax(linear2_out)
def get_loss(self, input, target):
def nll(input, target):
return -torch.sum(target * input) /input.shape[0]
return nll(input, target)
def zero_grad(self):
with torch.no_grad():
self.linear1w.grad.zero_()
self.linear1b.grad.zero_()
self.linear2w.grad.zero_()
self.linear2b.grad.zero_()
def update_weights(self, loss):
loss.backward()
with torch.no_grad():
self.linear1w -= self.lr * self.linear1w.grad
self.linear1b -= self.lr * self.linear1b.grad
self.linear2w -= self.lr * self.linear2w.grad
self.linear2b -= self.lr * self.linear2b.grad
self.zero_grad()
def one_hot_encode(input, classes=10):
return torch.eye(classes)[input]
def accuracy(model, images, labels):
    """
    model: `NeuralNet`
    images: `torch.tensor` of shape `N x height x width`
        and dtype `torch.float32`
    labels: `torch.tensor` of shape `N` and dtype `torch.int64`. Contains
        class index for each sample
    returns:
        fraction of samples from `images` correctly classified by `model`
    """
with torch.no_grad():
labels_pred = model.predict(images)
numbers = labels_pred.argmax(dim=-1)
numbers_target = labels.argmax(dim=-1)
return (numbers == numbers_target).float().mean()
class batch_generator:
def __init__(self, images, batch_size):
dataset_size = images[0].size()[0]
permutation = torch.randperm(dataset_size)
self.images = images[0][permutation]
self.targets = images[1][permutation]
self.images = self.images.split(batch_size, dim=0)
self.targets = self.targets.split(batch_size, dim=0)
self.current = 0
self.high = len(self.targets)
def __iter__(self):
return self
def __next__(self):
if self.current >= self.high:
raise StopIteration
else:
self.current += 1
return self.images[self.current - 1], self.targets[self.current - 1]
train_size, _, _ = X_train.shape
test_size, _, _ = X_test.shape
X_train = X_train.reshape(train_size, -1)
X_test = X_test.reshape(test_size, -1)
y_train_oh = one_hot_encode(y_train)
y_test_oh = one_hot_encode(y_test)
print("Train size: ", X_train.shape)
print("Test size: ", X_test.shape)
model = NeuralNet(1e-2)
batch_size = 128
epochs = 50
loss_history = torch.Tensor(epochs)
for epoch in tqdm.trange(epochs):
# Update weights
for X_batch, y_batch in batch_generator((X_train, y_train_oh), batch_size):
predicted = model.predict(X_batch)
loss = model.get_loss(predicted, y_batch)
model.update_weights(loss)
# Calculate loss
test_predicted = model.predict(X_test)
loss = model.get_loss(test_predicted, y_test_oh)
loss_history[epoch] = loss
model.zero_grad()
plt.figure(figsize=(14, 7))
plt.title("Loss")
plt.xlabel("#epoch")
plt.ylabel("Loss")
plt.plot(loss_history.detach().numpy(), label="Validation loss")
plt.legend(loc='best')
plt.grid()
plt.show()
train_acc = accuracy(model, X_train, y_train_oh) * 100
test_acc = accuracy(model, X_test, y_test_oh) * 100
print("Train accuracy: %.2f, test accuracy: %.2f" % (train_acc, test_acc))
assert test_acc >= 82.0, "You have to do better"
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To learn best practices - for example,
Step2: Task 1 (3 points)
Step4: Task 2 (7 points)
Step5: More fun with Game of Life
Step7: The cell below has an example layout for encapsulating your neural network. Feel free to modify the interface if you need to (add arguments, add return values, add methods etc.). For example, you may want to add a method do_gradient_step() that executes one optimization algorithm (SGD / Adadelta / Adam / ...) step.
Step9: Define subroutines for one-hot encoding, accuracy calculation and batch generation
Step10: Prepare dataset
Step11: Define model and train
Step12: Plot loss
Step13: Final evaluation
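The one-hot and accuracy helpers above can be mirrored in plain NumPy (an equivalent sketch of the same identity-matrix trick and argmax comparison, not the notebook's torch code):

```python
import numpy as np

def one_hot_encode(labels, classes=10):
    # Row i of the identity matrix is the one-hot vector for class i
    return np.eye(classes)[labels]

def accuracy(pred_onehot, true_onehot):
    # Fraction of samples whose argmax class matches the target
    return (pred_onehot.argmax(axis=1) == true_onehot.argmax(axis=1)).mean()

y_true = np.array([0, 2, 1])
y_pred = np.array([0, 2, 2])
print(one_hot_encode(y_true, classes=3))
print(accuracy(one_hot_encode(y_pred, classes=3), one_hot_encode(y_true, classes=3)))  # 2/3
```

torch.eye(classes)[input] in the notebook works the same way, because indexing an identity matrix by a label vector selects one one-hot row per label.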
|
8,945
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
from sklearn import __version__ as sklearn_version
print('Sklearn version:', sklearn_version)
from sklearn import datasets
iris = datasets.load_iris()
print(iris.DESCR)
# Print some data lines
print(iris.data[:10])
print(iris.target)
#Randomize and separate train & test
from sklearn.utils import shuffle
X, y = shuffle(iris.data, iris.target, random_state=0)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)
print(X_train.shape, X_test.shape, y_train.shape, y_test.shape)
# Linear model
from sklearn.linear_model import LogisticRegression
# Define classifier
clf_logistic = LogisticRegression()
# Fit classifier
clf_logistic.fit(X_train, y_train)
# Evaluate accuracy in test
from sklearn.metrics import accuracy_score
# Predict test data
y_test_pred = clf_logistic.predict(X_test)
# Evaluate accuracy
print('Accuracy test: ', accuracy_score(y_test, y_test_pred))
from sklearn import tree
# Define classifier (use max_depth=3)
clf_tree = tree.DecisionTreeClassifier(max_depth=3)
# Fit over train data
clf_tree.fit(X_train, y_train)
# Evaluate test accuracy with accuracy_score
print('Tree accuracy test: ', accuracy_score(y_test, clf_tree.predict(X_test)))
# Evaluate ROC area with roc_auc_score (labels binarized one-vs-rest)
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import label_binarize
print('Tree average ROC area: ', roc_auc_score(label_binarize(y_test, classes=[0, 1, 2]), clf_tree.predict_proba(X_test)))
# Configure model
from sklearn import svm
clf_svc = svm.LinearSVC()
# Fit over train
clf_svc.fit(X_train, y_train)
# Accuracy score over test
print('SVC accuracy test: ', accuracy_score(y_test, clf_svc.predict(X_test)))
# ROC area
# Print probabilities
y_test_proba = clf_logistic.predict_proba(X_test)
print(y_test_proba[:5])
#Recode y from multiclass labels to binary labels
from sklearn import preprocessing
lb = preprocessing.LabelBinarizer()
lb.fit(y_train)
print('Test classes: ',lb.classes_)
y_test_bin = lb.transform(y_test)
print(y_test_bin[:5])
# Roc curve
from sklearn.metrics import roc_auc_score
print('Average ROC area: ', roc_auc_score(y_test_bin, y_test_proba))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load data
Step2: Linear model
Step3: Decision tree
Step4: Test another classifier
Step5: ROC area
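LabelBinarizer's transform amounts to a one-vs-rest indicator matrix; a plain-NumPy sketch of the same encoding (an illustration with a hypothetical helper name, not sklearn's implementation):

```python
import numpy as np

def label_binarize(y, classes):
    # One indicator column per class: 1 where the sample's label equals that class
    return (np.asarray(y)[:, None] == np.asarray(classes)[None, :]).astype(int)

print(label_binarize([0, 2, 1, 2], classes=[0, 1, 2]))
# [[1 0 0]
#  [0 0 1]
#  [0 1 0]
#  [0 0 1]]
```

roc_auc_score can then pair each indicator column with the matching column of predict_proba to average the per-class ROC areas.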
|
8,946
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
def conv(f, h):
f, h = np.asarray(f), np.asarray(h,float)
if len(f.shape) == 1: f = f[np.newaxis,:]
if len(h.shape) == 1: h = h[np.newaxis,:]
if f.size < h.size:
f, h = h, f
g = np.zeros(np.array(f.shape) + np.array(h.shape) - 1)
if f.ndim == 2:
H,W = f.shape
for (r,c) in np.transpose(np.nonzero(h)):
g[r:r+H, c:c+W] += f * h[r,c]
if f.ndim == 3:
D,H,W = f.shape
for (d,r,c) in np.transpose(np.nonzero(h)):
g[d:d+D, r:r+H, c:c+W] += f * h[d,r,c]
return g
testing = (__name__ == "__main__")
if testing:
! jupyter nbconvert --to python conv.ipynb
import numpy as np
import sys,os
ia898path = os.path.abspath('../../')
if ia898path not in sys.path:
sys.path.append(ia898path)
import ia898.src as ia
if testing:
f = np.zeros((5,5))
f[2,2] = 1
print('f:\n', f)
h = np.array([[1,2,3],
[4,5,6]])
print('h=\n',h)
a1 = ia.conv(f,h)
print('a1.dtype',a1.dtype)
print('a1=f*h:\n',a1)
a2 = ia.conv(h,f)
print('a2=h*f:\n',a2)
if testing:
f = np.array([[1,0,0,0],
[0,0,0,0]])
print(f)
h = np.array([1,2,3])
print(h)
a = ia.conv(f,h)
print(a)
if testing:
f = np.array([[1,0,0,0,0,0],
[0,0,0,0,0,0]])
print(f)
h = np.array([1,2,3,4])
print(h)
a = ia.conv(f,h)
print(a)
if testing:
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
f = mpimg.imread('../data/cameraman.tif')
h = np.array([[ 1, 2, 1],
[ 0, 0, 0],
[-1,-2,-1]])
g = ia.conv(f,h)
gn = ia.normalize(g, [0,255])
ia.adshow(f,title='input')
ia.adshow(gn,title='filtered')
if testing:
print('testing conv')
print(repr(ia.conv(np.array([[1,0,1,0],[0,0,0,0]]), np.array([1,2,3]))) == repr(np.array(
[[1., 2., 4., 2., 3., 0.],
[0., 0., 0., 0., 0., 0.]])))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Description
Step2: Example 1
Step3: Example 2
Step4: Example 3
Step5: Example 4
Step6: Limitations
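The shift-and-accumulate idea behind conv() can be sketched for the 2D case alone (a simplified version of the function above, with a hypothetical name): every nonzero kernel coefficient adds a shifted, scaled copy of the image into the "full"-size output.

```python
import numpy as np

def conv2d_full(f, h):
    # "Full" 2D convolution: output grows by the kernel size minus one
    g = np.zeros((f.shape[0] + h.shape[0] - 1, f.shape[1] + h.shape[1] - 1))
    for (r, c) in np.argwhere(h):
        # Add a copy of f shifted by (r, c) and scaled by the kernel coefficient
        g[r:r + f.shape[0], c:c + f.shape[1]] += f * h[r, c]
    return g

f = np.array([[1., 0., 1., 0.],
              [0., 0., 0., 0.]])
h = np.array([[1., 2., 3.]])
print(conv2d_full(f, h))
```

This reproduces the example checked at the end of the notebook: the first row comes out as [1, 2, 4, 2, 3, 0].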
|
8,947
|
<ASSISTANT_TASK:>
Python Code:
# pycl imports
from pycl import *
#Std lib imports
import datetime
from glob import glob
from pprint import pprint as pp
from os.path import basename
from os import listdir, remove, rename
from os.path import abspath, basename, isdir
from collections import OrderedDict
# Third party import
import numpy as np
import scipy.stats as stats
import pylab as pl
from Bio import Entrez
# Jupyter specific imports
from IPython.core.display import display, HTML, Markdown, Image
# Pyplot tweaking
%matplotlib inline
pl.rcParams['figure.figsize'] = 30, 10 # that's default image size for this interactive session
# Jupyter display tweaking
toogle_code()
larger_display(75)
# Simplify warning reporting to lighten the notebook style
import warnings
warnings.formatwarning=(lambda message, *args: "{}\n".format(message))
# Allow to use R directly
%load_ext rpy2.ipython
# Specific helper functions
def generate_header (PMID, cell, modification, method):
h = "# Data cleaned, converted to BED6, coordinate converted to hg38 using liftOver\n"
h+= "# Maurits Evers (maurits.evers@anu.edu.au)\n"
h+= "# Data cleaned and standardized. {}\n".format(str (datetime.datetime.today()))
h+= "# Adrien Leger (aleg@ebi.ac.uk)\n"
h+= "# RNA_modification={}|Cell_type={}|Analysis_method={}|Pubmed_ID={}\n".format(modification, cell, method, PMID)
h+= "# chrom chromstart chromend modif|cell_type|method|PMID|loci score strand\n"
return h
def file_summary(file, separator=["\t", "|"], max_items=10):
n_line = fastcount(file)
print("Filename:\t{}".format(file))
print("Total lines:\t{}\n".format(n_line))
linerange(file, range_list=[[0,9],[n_line-5, n_line-1]])
print(colsum(file, header=False, ignore_hashtag_line=True, separator=separator, max_items=max_items))
def distrib_peak_len (file, range=None, bins=50, normed=True):
h = []
for line in open (file):
if line[0] != "#":
ls = line.split("\t")
delta = abs(int(ls[1])-int(ls[2]))
h.append(delta)
h.sort()
pl.hist(h,normed=normed, range=range, bins=bins)
pl.show()
def pubmed_fetch(pmid):
Entrez.email = 'your.email@example.com'
handle = Entrez.efetch (db='pubmed', id=pmid, retmode='xml', )
return Entrez.read(handle)[0]
def pmid_to_info(pmid):
results = pubmed_fetch(pmid)
try:
title = results['MedlineCitation']['Article']['ArticleTitle']
except (KeyError, IndexError) as E:
title = "NA"
try:
first_name = results['MedlineCitation']['Article']['AuthorList'][0]['LastName']
except (KeyError, IndexError) as E:
first_name = "NA"
try:
Year = results['MedlineCitation']['Article']['ArticleDate'][0]['Year']
except (KeyError, IndexError) as E:
Year = "NA"
try:
Month = results['MedlineCitation']['Article']['ArticleDate'][0]['Month']
except (KeyError, IndexError) as E:
Month = "NA"
try:
Day = results['MedlineCitation']['Article']['ArticleDate'][0]['Day']
except (KeyError, IndexError) as E:
Day = "NA"
d = {"title":title, "first_name":first_name, "Year":Year, "Month":Month, "Day":Day}
return d
file_summary("./PTM_Original_Datasets/DARNED_human_hg19_all_sites.txt")
# chrom chromstart chromend modif|cell_type|method|PMID|loci score strand\n"
header = "# DARNED database Human all sites hg19 coordinates\n"
header+= "# Data cleaned, filtered for Inosine editing, standardized and converted to BED6 format\n"
header+= "# Adrien Leger (aleg@ebi.ac.uk) - {}\n".format(str (datetime.datetime.today()))
header+= "# chrom chromstart chromend modif|cell_type|method|PMID|loci score strand\n"
reformat_table(
input_file = "./PTM_Original_Datasets/DARNED_human_hg19_all_sites.txt",
output_file = "./PTM_Original_Datasets/DARNED_human_hg19_inosine.bed",
init_template = [0,"\t",1,"\t",2,"\t",3,"\t",4,"\t",5,"\t",6,"\t",7,"\t",8,"\t",9],
final_template = ["chr",0,"\t",1,"\t",1,"\t",3,">",4,"|",8,"|-|",9,"|",5,"\t0\t",2],
keep_original_header = False,
header = header,
replace_internal_space = '_',
replace_null_val = "-",
filter_dict = {0:["-"],1:["-"],2:["?"],3:["T","G","C"],4:["C","T","A","U"],8:["-"],9:["-"]},
subst_dict = {4:{"G":"I"}}
)
file_summary("./PTM_Original_Datasets/DARNED_human_hg19_inosine.bed")
d={}
with open("./PTM_Original_Datasets/DARNED_human_hg19_inosine.bed", "r") as f:
for line in f:
if line [0] !="#":
ls = supersplit(line, ["\t","|"])
n_tissue = len(ls[4].split(","))
n_PMID = len(ls[6].split(","))
key="{}:{}".format(n_tissue,n_PMID)
if key not in d:
d[key]=0
d[key]+=1
print (d)
infile = "./PTM_Original_Datasets/DARNED_human_hg19_inosine.bed"
outclean = "./PTM_Original_Datasets/DARNED_human_hg19_inosine_cleaned.bed"
outunclean = "./PTM_Original_Datasets/DARNED_human_hg19_inosine_unclean.bed"
with open(infile, "r") as inf, open(outclean, "w") as outf_clean, open(outunclean, "w") as outf_unclean:
init_sites = uniq = several_tissue = several_pmid = several_all = final_sites = 0
for line in inf:
if line [0] == "#":
outf_clean.write(line)
outf_unclean.write(line)
else:
init_sites += 1
ls = supersplit(line, ["\t","|"])
tissue_list = ls[4].split(",")
PMID_list = ls[6].split(",")
n_tissue = len(tissue_list)
n_PMID = len(PMID_list)
if n_tissue == 1:
# 1 PMID, 1 tissue = no problem
if n_PMID == 1:
uniq += 1
final_sites += 1
outf_clean.write(line)
# Several PMID, 1 tissue = demultiplex PMID lines
else:
several_pmid += 1
for PMID in PMID_list:
final_sites += 1
outf_clean.write("{0}\t{1}\t{2}\t{3}|{4}|{5}|{6}|{7}\t{8}\t{9}".format(
ls[0],ls[1],ls[2],ls[3],ls[4],ls[5],PMID.strip(),ls[7],ls[8],ls[9]))
else:
# 1 PMID, several tissues = demultiplex tissues lines
if n_PMID == 1:
several_tissue += 1
for tissue in tissue_list:
final_sites += 1
outf_clean.write("{0}\t{1}\t{2}\t{3}|{4}|{5}|{6}|{7}\t{8}\t{9}".format(
ls[0],ls[1],ls[2],ls[3],tissue.strip().strip("_").strip("."),ls[5],ls[6],ls[7],ls[8],ls[9]))
# Several PMID, several tissues = extract the line in uncleanable datasets
else:
several_all += 1
outf_unclean.write(line)
print("Initial sites: ", init_sites)
print("Final clean sites: ", final_sites)
print("1 PMID 1 tissu: ", uniq)
print("1 PMID ++ tissue: ", several_tissue)
print("++ PMID 1 tissue: ", several_pmid)
print("++ PMID ++ tissue: ", several_all)
file_summary(outclean)
file_summary(outunclean)
# Conversion to hg38 with Crossmap/liftover
liftover_chainfile = "../LiftOver_chain_files/hg19ToHg38.over.chain.gz"
input_bed = "./PTM_Original_Datasets/DARNED_human_hg19_inosine_cleaned.bed"
temp_bed = "./PTM_Original_Datasets/DARNED_human_hg38_inosine_temp.bed"
cmd = "CrossMap.py bed {} {} {}".format(liftover_chainfile, input_bed, temp_bed)
bash(cmd)
# Rewriting and updating of the header removed by Crossmap
final_bed = "./PTM_Original_Datasets/DARNED_human_hg38_inosine_cleaned.bed"
header = "# DARNED database Human all sites hg38 coordinates\n"
header+= "# Data cleaned, filtered for Inosine editing, standardized, converted to BED6 format and updated to hg38 coordinates\n"
header+= "# Adrien Leger (aleg@ebi.ac.uk) - {}\n".format(str (datetime.datetime.today()))
header+= "# chrom chromstart chromend modif|cell_type|method|PMID|loci score strand\n"
with open (temp_bed, "r") as infile, open (final_bed, "w") as outfile:
outfile.write (header)
for line in infile:
outfile.write (line)
file_summary(final_bed)
file_summary("./PTM_Original_Datasets/RADAR_human_hg19_v2_primary.txt", separator=["\t"])
file_summary("./PTM_Original_Datasets/RADAR_human_hg19_v2_secondary.txt")
# Create a structured dict of dict to parse the main database file
from collections import OrderedDict
def parse_RADAR_main (file):
# Define the top level access dict
radar_dict = OrderedDict()
for line in open (file, "r"):
if line[0] != "#":
sl = line.split("\t")
assert len(sl) == 11
chromosome, position, gene, strand = sl[0].strip(), int(sl[1].strip()), sl[2].strip(), sl[3].strip()
if chromosome not in radar_dict:
radar_dict[chromosome] = OrderedDict()
# There should be only one line per position
assert position not in radar_dict[chromosome]
radar_dict[chromosome][position] = {"gene":gene,"strand":strand}
return radar_dict
# Create a class to store a line of the additional file.
from collections import OrderedDict
class Site (object):
#~~~~~~~CLASS FIELDS~~~~~~~#
# Table of correspondence reference => PMID
TITLE_TO_PMID = {
"Peng et al 2012":"22327324",
"Bahn et al 2012":"21960545",
"Ramaswami et al 2012":"22484847",
"Ramaswami et al 2013":"23291724",
}
# Table of correspondence tissue => sample name
TISSUE_TO_SAMPLE = {
"Brain":"Brain",
"Illumina Bodymap":"Illumina_Bodymap",
"Lymphoblastoid cell line":"YH",
"U87 cell line":"U87MG"
}
#~~~~~~~FUNDAMENTAL METHODS~~~~~~~#
# Parse a line of the additional information file
def __init__(self, line):
sl = line.strip().split("\t")
self.chromosome = sl[0].split(":")[0].strip()
self.position = int(sl[0].split(":")[1].strip())
self.PMID = self.TITLE_TO_PMID[sl[1].strip()]
self.tissue = self.TISSUE_TO_SAMPLE[sl[2].strip()]
self.coverage = sl[3].strip()
self.editing_level = sl[4].strip()
# Fundamental class methods str and repr
def __repr__(self):
msg = "SITE CLASS\n"
# list all values in object dict in alphabetical order
keylist = [key for key in self.__dict__.keys()]
keylist.sort()
for key in keylist:
msg+="\t{}\t{}\n".format(key, self.__dict__[key])
return (msg)
def __str__(self):
return self.__repr__()
a = Site("chr1:1037916 Peng et al 2012 Lymphoblastoid cell line 9 66.67")
print (a)
# Create a structured dict of dict to parse the secondary database file
def parse_RADAR_secondary (file):
# Define a list to store Site object (not a dict because of redundancy)
radar_list = []
for line in open (file, "r"):
if line[0] != "#":
radar_list.append(Site(line))
# return a list sorted by chromosome and positions
return sorted(radar_list, key=lambda Site: (Site.chromosome, Site.position))
# chrom chromstart chromend modif|cell_type|method|PMID|loci score strand
def reformat_RADAR (main_file, secondary_file, outfile, errfile, header):
# Read and structure the 2 database files
print("Parse the main database file")
main = parse_RADAR_main(main_file)
print("Parse the secondary database file")
secondary = parse_RADAR_secondary (secondary_file)
print("Combine the data together in a new bed formated file")
with open (outfile, "w+") as csvout, open (errfile, "w+") as errout:
# rewrite header
csvout.write(header)
fail = success = 0
for total, site in enumerate(secondary):
try:
line = "{0}\t{1}\t{1}\t{2}|{3}|{4}|{5}|{6}\t{7}\t{8}\n".format(
site.chromosome,
site.position,
"A>I",
site.tissue,
"-",
site.PMID,
main[site.chromosome][site.position]["gene"],
site.editing_level,
main[site.chromosome][site.position]["strand"],
)
csvout.write(line)
success += 1
except KeyError as E:
line = "{0}\t{1}\t{2}\t{3}\t{4}\n".format(
site.chromosome,
site.position,
site.tissue,
site.PMID,
site.editing_level
)
errout.write(line)
fail += 1
print ("{} Sites processed\t{} Sites pass\t{} Sites fail".format(total, success, fail))
header = "# RADAR database Human v2 all sites hg19 coordinates\n"
header+= "# Data cleaned, standardized and converted to BED6 format\n"
header+= "# Adrien Leger (aleg@ebi.ac.uk) - {}\n".format(str (datetime.datetime.today()))
header+= "# chrom chromstart chromend modif|cell_type|method|PMID|loci score strand\n"
reformat_RADAR(
main_file = "./PTM_Original_Datasets/RADAR_human_hg19_v2_primary.txt",
secondary_file = "./PTM_Original_Datasets/RADAR_human_hg19_v2_secondary.txt",
outfile = "./PTM_Original_Datasets/RADAR_Human_hg19_inosine_cleaned.bed",
errfile = "./PTM_Original_Datasets/RADAR_Human_hg19_inosine_orphan.bed",
header = header)
file_summary("./PTM_Original_Datasets/RADAR_Human_hg19_inosine_cleaned.bed")
# Conversion to hg38 with Crossmap/liftover
liftover_chainfile = "../LiftOver_chain_files/hg19ToHg38.over.chain.gz"
input_bed = "./PTM_Original_Datasets/RADAR_Human_hg19_inosine_cleaned.bed"
temp_bed = "./PTM_Original_Datasets/RADAR_Human_hg38_inosine_temp.bed"
cmd = "CrossMap.py bed {} {} {}".format(liftover_chainfile, input_bed, temp_bed)
bash(cmd)
# Rewriting and updating of the header removed by Crossmap
final_bed = "./PTM_Original_Datasets/RADAR_Human_hg38_inosine_cleaned.bed"
header = "# RADAR database Human v2 all sites hg38 coordinates\n"
header+= "# Data cleaned, standardized, converted to BED6 format and updated to hg38 coordinates\n"
header+= "# Adrien Leger (aleg@ebi.ac.uk) - {}\n".format(str (datetime.datetime.today()))
header+= "# chrom chromstart chromend modif|cell_type|method|PMID|loci score strand\n"
with open (temp_bed, "r") as infile, open (final_bed, "w") as outfile:
outfile.write (header)
for line in infile:
outfile.write (line)
file_summary(final_bed)
listdir("./PTM_Original_Datasets/")
infile="./PTM_Original_Datasets/editing_Peng_hg38.bed"
PMID = "22327324"
cell = "YH"
modification = "A>I"
method = "A_to_I_editing"
author = "Peng"
outfile = "./PTM_Clean_Datasets/{}_{}_{}_hg38_cleaned.bed".format(author, modification, cell)
file_summary(infile)
print(colsum(infile, colrange=[9], header=False, ignore_hashtag_line=True, separator=["\t", "|"], max_items=20, ret_type="report"))
# chrom chromstart chromend modif|cell_type|method|PMID|loci score strand
init_template=[0,"\t",1,"\t",2,"\t",3,"|",4,"|",5,"|",6,"|",7,"|",8,"|",9,"|",10,"%|",11,"|",12,"|",13,"|",14,"|",15,"|",16,"|",17,"|",18,"\t",19,"\t",20]
final_template=[0,"\t",1,"\t",2,"\t",9,"|",cell,"|",method,"|",PMID,"|",18,"\t",10,"\t",20]
# Filter out everything except the A>G transitions, which are the inosine transitions
filter_dict={9:["T->C","G->A","C->T","T->G","C->G","G->C","A->C","T->A","C->A","G->T","A->T"]}
# Reformat the field value A->G to A>I for standardization
subst_dict={9:{"A->G":"A>I"}}
reformat_table(
input_file=infile,
output_file=outfile,
init_template=init_template,
final_template=final_template,
keep_original_header = False,
header = generate_header(PMID, cell, modification, method),
replace_internal_space='_',
replace_null_val="-",
subst_dict = subst_dict,
filter_dict = filter_dict )
file_summary(outfile)
distrib_peak_len(outfile)
infile = "./PTM_Original_Datasets/editing_Sakurai_hg38.bed"
PMID = "24407955"
cell = "Brain"
modification = "A>I"
method = "ICE_seq"
author = "Sakurai"
outfile = "./PTM_Clean_Datasets/{}_{}_{}_hg38_cleaned.bed".format(author, modification, cell)
file_summary(infile)
# chrom chromstart chromend modif|cell_type|method|PMID|loci score strand
init_template=[0,"\t",1,"\t",2,"\t",3,"|",4,"|",5,"|",6,"\t",7,"\t",8]
final_template=[0,"\t",1,"\t",2,"\t",modification,"|",cell,"|",method,"|",PMID,"|",4,"\t",7,"\t",8]
reformat_table(
input_file=infile,
output_file=outfile,
init_template=init_template,
final_template=final_template,
keep_original_header = False,
header = generate_header(PMID, cell, modification, method),
replace_internal_space='_',
replace_null_val="-")
file_summary(outfile)
distrib_peak_len(outfile)
infile = "./PTM_Original_Datasets/m5C_Hussain_hg38.bed"
PMID = "23871666"
cell = "HEK293"
modification = "m5C"
method = "miCLIP"
author = "Hussain"
outfile = "./PTM_Clean_Datasets/{}_{}_{}_hg38_cleaned.bed".format(author, modification, cell)
file_summary(infile)
# chrom chromstart chromend modif|cell_type|method|PMID|loci score strand
init_template=[0,"\t",1,"\t",2,"\t",3,"|",4,"\t",5,"\t",6]
final_template=[0,"\t",1,"\t",2,"\t",modification,"|",cell,"|",method,"|",PMID,"|-\t",5,"\t",6]
reformat_table(
input_file=infile,
output_file=outfile,
init_template=init_template,
final_template=final_template,
keep_original_header = False,
header = generate_header(PMID, cell, modification, method),
replace_internal_space='_',
replace_null_val="-")
file_summary(outfile)
distrib_peak_len(outfile)
infile="./PTM_Original_Datasets/m5C_Khoddami_hg38.bed"
PMID = "23604283"
cell = "MEF"
modification = "m5C"
method = "AzaIP"
author = "Khoddami"
outfile = "./PTM_Clean_Datasets/{}_{}_{}_hg38_cleaned.bed".format(author, modification, cell)
file_summary(infile)
# chrom chromstart chromend modif|cell_type|method|PMID|loci score strand
init_template=[0,"\t",1,"\t",2,"\t",3,"|",4,"\t",5,"\t",6]
final_template=[0,"\t",1,"\t",2,"\t",modification,"|",cell,"|",method,"|",PMID,"|",4,"\t",5,"\t",6]
reformat_table(
input_file=infile,
output_file=outfile,
init_template=init_template,
final_template=final_template,
keep_original_header = False,
header = generate_header(PMID, cell, modification, method),
replace_internal_space='_',
replace_null_val="-")
file_summary(outfile)
distrib_peak_len(outfile)
infile="./PTM_Original_Datasets/m5C_Squires_hg38.bed"
PMID = "22344696"
cell = "HeLa"
modification = "m5C"
method = "bisulfite_seq"
author = "Squires"
outfile = "./PTM_Clean_Datasets/{}_{}_{}_hg38_cleaned.bed".format(author, modification, cell)
file_summary(infile)
# chrom chromstart chromend modif|cell_type|method|PMID|loci score strand
init_template=[0,"\t",1,"\t",2,"\t",3,"|",4,"\t",5,"\t",6]
final_template=[0,"\t",1,"\t",2,"\t",modification,"|",cell,"|",method,"|",PMID,"|-\t",5,"\t",6]
reformat_table(
input_file=infile,
output_file=outfile,
init_template=init_template,
final_template=final_template,
keep_original_header = False,
header = generate_header(PMID, cell, modification, method),
replace_internal_space='_',
replace_null_val="-")
file_summary(outfile)
distrib_peak_len(outfile)
infile="./PTM_Original_Datasets/m6A_Dominissini_hg38.bed"
PMID = "22575960"
cell = "HepG2"
modification = "m6A"
method = "M6A_seq"
author = "Dominissini"
outfile = "./PTM_Clean_Datasets/{}_{}_{}_hg38_cleaned.bed".format(author, modification, cell)
file_summary(infile)
distrib_peak_len(infile, normed=False, bins=500)
infile="./PTM_Original_Datasets/m6A_Dominissini_hg19_original_table.csv"
file_summary(infile)
distrib_peak_len(infile, normed=False, bins=500)
distrib_peak_len(infile, normed=False, range=[1,2000], bins=500)
distrib_peak_len(infile, normed=False, range=[1,200], bins=200)
# chrom chromstart chromend modif|cell_type|method|PMID|loci score strand
infile="./PTM_Original_Datasets/m6A_Dominissini_hg19_original_table.csv"
outfile = "./PTM_Clean_Datasets/Dominissini_m6A_HepG2_hg19_cleaned.bed"
init_template=[0,"\t",1,"\t",2,"\t",3,"\t",4]
final_template=[0,"\t",1,"\t",2,"\t",modification,"|",cell,"|",method,"|",PMID,"|",4,"\t-\t",3]
# Predicate function to filter out large peaks
predicate = lambda val_list: abs(int(val_list[1])-int(val_list[2])) <= 1000
reformat_table(
input_file=infile,
output_file=outfile,
init_template=init_template,
final_template=final_template,
keep_original_header = False,
header = generate_header(PMID, cell, modification, method),
replace_internal_space='_',
replace_null_val="-",
predicate = predicate)
file_summary(outfile)
# Conversion to hg38 with Crossmap/liftover
liftover_chainfile = "../LiftOver_chain_files/hg19ToHg38.over.chain.gz"
input_bed = "./PTM_Clean_Datasets/Dominissini_m6A_HepG2_hg19_cleaned.bed"
temp_bed = "./PTM_Clean_Datasets/Dominissini_m6A_HepG2_hg38_temp.bed"
cmd = "CrossMap.py bed {} {} {}".format(liftover_chainfile, input_bed, temp_bed)
bash(cmd)
# Rewriting and updating of the header removed by Crossmap
final_bed = "./PTM_Clean_Datasets/Dominissini_m6A_HepG2_hg38_cleaned.bed"
header = generate_header(PMID, cell, modification, method)
with open (temp_bed, "r") as infile, open (final_bed, "w") as outfile:
outfile.write (header)
for line in infile:
outfile.write (line)
file_summary(final_bed)
distrib_peak_len(final_bed, normed=False, bins=200)
infile="./PTM_Original_Datasets/m6A_Meyer_hg38.bed"
PMID = "22608085"
cell = "HEK293"
modification = "m6A"
method = "MeRIP_Seq"
author = "Meyer"
outfile = "./PTM_Clean_Datasets/{}_{}_{}_hg38_cleaned.bed".format(author, modification, cell)
file_summary(infile)
# chrom chromstart chromend modif|cell_type|method|PMID|loci score strand
init_template=[0,"\t",1,"\t",2,"\t",3,"|",4,"|",5,"\t",6,"\t",7]
final_template=[0,"\t",1,"\t",2,"\t",modification,"|",cell,"|",method,"|",PMID,"|",4,"\t",6,"\t",7]
reformat_table(
input_file=infile,
output_file=outfile,
init_template=init_template,
final_template=final_template,
keep_original_header = False,
header = generate_header(PMID, cell, modification, method),
replace_internal_space='_',
replace_null_val="-")
file_summary(outfile)
distrib_peak_len(outfile, bins = 100)
infile="./PTM_Original_Datasets/miCLIP_m6A_Linder2015_hg38.bed"
PMID = "26121403"
cell = "HEK293"
modification = "m6A:m6Am"
method = "miCLIP"
author = "Linder"
outfile = "./PTM_Clean_Datasets/{}_{}_{}_hg38_cleaned.bed".format(author, modification, cell)
file_summary(infile)
# chrom chromstart chromend modif|cell_type|method|PMID|loci score strand
init_template=[0,"\t",1,"\t",2,"\t",3,"\t",4,"\t",5]
final_template=[0,"\t",1,"\t",2,"\t",modification,"|",cell,"|",method,"|",PMID,"|-\t",4,"\t",5]
reformat_table(
input_file=infile,
output_file=outfile,
init_template=init_template,
final_template=final_template,
keep_original_header = False,
header = generate_header(PMID, cell, modification, method),
replace_internal_space='_',
replace_null_val="-")
file_summary(outfile)
distrib_peak_len(outfile)
infile="./PTM_Original_Datasets/MeRIPseq_m1A_Dominissini2016_hg38.bed"
PMID = "26863196"
cell = "HeLa:HEK293:HepG2"
modification = "m1A"
method = "M1A_seq"
author = "Dominissini"
outfile = "./PTM_Clean_Datasets/{}_{}_{}_hg38_cleaned.bed".format(author, modification, cell)
file_summary(infile)
# chrom chromstart chromend modif|cell_type|method|PMID|loci score strand
init_template=[0,"\t",1,"\t",2,"\t",3,"|",4,"|",5,"\t",6,"\t",7]
final_template=[0,"\t",1,"\t",2,"\t",modification,"|",5,"|",method,"|",PMID,"|",3,"\t",6,"\t",7]
# Filter out the HepG2 stress-condition and total-RNA samples, keeping only the common mRNA sample
filter_dict={5:["HEPG2_heat_shock_4h","HEPG2_Glucose_starv_4h","HEPG2_common_total_RNA"]}
# Rename the remaining HEPG2_common_mRNA sample to HepG2 for standardization
subst_dict={5:{"HEPG2_common_mRNA":"HepG2"}}
reformat_table(
input_file=infile,
output_file=outfile,
init_template=init_template,
final_template=final_template,
keep_original_header = False,
header = generate_header(PMID, cell, modification, method),
replace_internal_space='_',
replace_null_val="-",
subst_dict = subst_dict,
filter_dict = filter_dict
)
file_summary(outfile)
distrib_peak_len(outfile)
infile="./PTM_Original_Datasets/pseudoU_Carlile_hg38.bed"
PMID = "25192136"
cell = "HeLa"
modification = "Y"
method = "Pseudo_seq"
author = "Carlile"
outfile = "./PTM_Clean_Datasets/{}_{}_{}_hg38_cleaned.bed".format(author, modification, cell)
file_summary(infile)
# chrom chromstart chromend modif|cell_type|method|PMID|loci score strand
init_template=[0,"\t",1,"\t",2,"\t",3,"|",4,"\t",5,"\t",6]
final_template=[0,"\t",1,"\t",2,"\t",modification,"|",cell,"|",method,"|",PMID,"|",4,"\t",5,"\t",6]
reformat_table(
input_file=infile,
output_file=outfile,
init_template=init_template,
final_template=final_template,
keep_original_header = False,
header = generate_header(PMID, cell, modification, method),
replace_internal_space='_',
replace_null_val="-")
file_summary(outfile)
distrib_peak_len(outfile)
infile="./PTM_Original_Datasets/pseudoU_Li_hg38.bed"
PMID = "26075521"
cell = "HEK293"
modification = "Y"
method = "CeU_Seq"
author = "Li"
outfile = "./PTM_Clean_Datasets/{}_{}_{}_hg38_cleaned.bed".format(author, modification, cell)
file_summary(infile)
# chrom chromstart chromend modif|cell_type|method|PMID|loci score strand
init_template=[0,"\t",1,"\t",2,"\t",3,"|",4,"|",5,"|",6,"\t",7,"\t",8]
final_template=[0,"\t",1,"\t",2,"\t",modification,"|",cell,"|",method,"|",PMID,"|",4,"\t",7,"\t",8]
reformat_table(
input_file=infile,
output_file=outfile,
init_template=init_template,
final_template=final_template,
keep_original_header = False,
header = generate_header(PMID, cell, modification, method),
replace_internal_space='_',
replace_null_val="-")
file_summary(outfile)
distrib_peak_len(outfile)
infile="./PTM_Original_Datasets/pseudoU_Schwartz_hg38.bed"
PMID = "25219674"
cell = "HEK293:Fibroblast"
modification = "Y"
method = "Psi-seq"
author = "Schwartz"
outfile = "./PTM_Clean_Datasets/{}_{}_{}_hg38_cleaned.bed".format(author, modification, cell)
file_summary(infile)
# chrom chromstart chromend modif|cell_type|method|PMID|loci score strand
init_template=[0,"\t",1,"\t",2,"\t",3,"|",4,"\t",5,"\t",6]
final_template=[0,"\t",1,"\t",2,"\t",modification,"|",cell,"|",method,"|",PMID,"|",4,"\t",5,"\t",6]
reformat_table(
input_file=infile,
output_file=outfile,
init_template=init_template,
final_template=final_template,
keep_original_header = False,
header = generate_header(PMID, cell, modification, method),
replace_internal_space='_',
replace_null_val="-")
file_summary(outfile)
distrib_peak_len(outfile)
for f in sorted(glob("./PTM_Clean_Datasets/*.bed")):
print (f)
linerange(f, [[10,12]])
for f in sorted(glob("./PTM_Clean_Datasets/*.bed")):
print ("\n", "-"*100)
print ("Dataset Name\t{}".format(basename(f)))
print ("Number sites\t{}".format(simplecount(f, ignore_hashtag_line=True)))
a = colsum(
f,
colrange = [3,4,5,6],
header=False,
ignore_hashtag_line=True,
separator=["\t", "|"],
max_items=20,
ret_type="dict"
)
# Get more info via pubmed
print ("PMID")
for pmid,count in a[6].items():
pubmed_info = pmid_to_info(pmid)
print ("\t*{}\t{}\n\t {}. et al, {}\{}\{}\n\t {}".format(
pmid,count,
pubmed_info["first_name"],
pubmed_info["Year"],
pubmed_info["Month"],
pubmed_info["Day"],
pubmed_info["title"]))
# Simple listing for the other fields
for title, col in [["RNA PTM",3],["Tissue/cell",4],["Method",5]]:
print (title)
print(dict_to_report(a[col], ntab=1, max_items=10, tab="\t", sep="\t"))
# New dir to create annotated files
mkdir("PTM_Annotated_Datasets")
mkdir("Test")
help(reformat_table)
head("../../Reference_Annotation/gencode_v24.gff3.gz")
import pybedtools
annotation_file = "../../Reference_Annotation/gencode_v24.gff3"
peak_file = "./PTM_Clean_Datasets/Dominissini_m6A_HepG2_hg38_cleaned.bed"
output_file = "./test.bed"
peak = pybedtools.BedTool(peak_file)
annotation = pybedtools.BedTool(annotation_file)
intersection = peak.intersect(annotation, wo=True, s=True)
# Reformat the file generated by pybedtools to a simple Bed format
init_template=[0,"\t",1,"\t",2,"\t",3,"|",4,"|",5,"|",6,"|",7,"\t",8,"\t",9,"\t",10,"\t",11,"\t",12,"\t",13,
"\t",14,"\t",15,"\t",16,"\t",17,"\tID=",18,";gene_id=",19,";gene_type=",20,";gene_status=",21,
";gene_name=",22,";level=",23,";havana_gene=",24]
final_template=[0,"\t",1,"\t",2,"\t",3,"|",4,"|",5,"|",6,"|",18,"|",20,"|",22,"\t",8,"\t",9]
print(intersection.head())
print("Post processing results")
reformat_table(
input_file=intersection.fn,
output_file=output_file,
init_template=init_template,
final_template=final_template,
replace_internal_space='_',
replace_null_val="-",
keep_original_header = False,
predicate = lambda v: v[12] == "gene"
)
head(output_file)
import pybedtools
def intersect_extract_genecodeID (annotation_file, peak_file, outdir):
output_file = "{}/{}_{}.bed".format(outdir, file_basename(peak_file), file_basename(annotation_file))
genecount_file = "{}/{}_{}_uniq-gene.csv".format(outdir, file_basename(peak_file), file_basename(annotation_file))
site_file = "{}/{}_{}_uniq-sites.csv".format(outdir, file_basename(peak_file), file_basename(annotation_file))
peak = pybedtools.BedTool(peak_file)
annotation = pybedtools.BedTool(annotation_file)
# Intersect the 2 files with pybedtools
print("Intersecting {} with {}".format(file_basename(peak_file), file_basename(annotation_file)))
intersection = peak.intersect(annotation, wo=True, s=True)
# Reformat the file generated by pybedtools to a simple Bed format
init_template=[0,"\t",1,"\t",2,"\t",3,"|",4,"|",5,"|",6,"|",7,"\t",8,"\t",9,"\t",10,"\t",11,"\t",12,"\t",13,
"\t",14,"\t",15,"\t",16,"\t",17,"\tID=",18,";gene_id=",19,";gene_type=",20,";gene_status=",21,
";gene_name=",22,";level=",23,";havana_gene=",24]
final_template=[0,"\t",1,"\t",2,"\t",3,"|",4,"|",5,"|",6,"|",18,"|",20,"|",22,"\t",8,"\t",9]
h = "# Data cleaned, converted to BED6, standardized and coordinates converted to hg38 using liftOver\n"
h+= "# Overlaping gene annotation with gencodev24\n"
h+= "# Adrien Leger (aleg@ebi.ac.uk) {}\n".format(str (datetime.datetime.today()))
h+= "# chrom\tchromstart\tchromend\tmodif|cell_type|method|PMID|ensembl_id|gene_type|gene_name\tscore\tstrand\n"
print("Post processing results")
reformat_table(
input_file=intersection.fn,
output_file=output_file,
init_template=init_template,
final_template=final_template,
replace_internal_space='_',
replace_null_val="-",
header = h,
keep_original_header = False,
predicate = lambda v: v[12] == "gene"
)
# Count the number of lines in the initial and final peak files
i, j = simplecount(peak_file, ignore_hashtag_line=True), simplecount(output_file, ignore_hashtag_line=True)
print("Total initial positions: {}\tTotal final positions: {}".format(i, j))
# Count uniq gene id and uniq positions found in the dataset
geneid_dict = OrderedDict()
coord_dict = OrderedDict()
with open (output_file, "r") as fp:
for line in fp:
if line[0] != "#":
sl= supersplit(line, separator=["\t", "|"])
# write gene id, gene_type,
gene_id = "{}\t{}\t{}".format(sl[7],sl[9],sl[8])
if gene_id not in geneid_dict:
geneid_dict[gene_id] = 0
geneid_dict[gene_id] += 1
coord = "{}:{}-{}".format(sl[0],sl[1],sl[2])
if coord not in coord_dict:
coord_dict[coord] = 0
coord_dict[coord] += 1
print ("Uniq genes found\t{}\nUniq position found\t{}\n".format(
len(geneid_dict.values()), len(coord_dict.values()) ))
# Write each gene id found with the number of time found
with open (genecount_file, "w") as fp:
fp.write(dict_to_report (geneid_dict, max_items=0, sep="\t"))
# Write each gene id found with the number of time found
with open (site_file, "w") as fp:
fp.write(dict_to_report (coord_dict, max_items=0, sep="\t"))
annotation_file = "../../Reference_Annotation/gencode_v24.gff3"
peak_file = "./PTM_Clean_Datasets/DARNED_human_hg38_inosine_cleaned.bed"
outdir = "./"
output_file = "./DARNED_human_hg38_inosine_cleaned_gencode_v24.bed"
genecount_file = "./DARNED_human_hg38_inosine_cleaned_gencode_v24_uniq-gene.csv"
site_file = "./DARNED_human_hg38_inosine_cleaned_gencode_v24_uniq-sites.csv"
intersect_extract_genecodeID (annotation_file, peak_file, outdir)
file_summary(output_file, separator=["\t","|"])
head (genecount_file)
head (site_file)
remove(output_file)
remove(genecount_file)
remove(site_file)
# Annotation vs gencode v24 lncRNA genes
annotation_file = '/home/aleg/Data/Reference_Annotation/gencode_v24_lncRNAs.gff3'
for peak_file in sorted(glob("./PTM_Clean_Datasets/*.bed")):
outdir = "./PTM_Annotated_Datasets"
intersect_extract_genecodeID (annotation_file, peak_file, outdir)
# Annotation vs gencode v24 all genes
annotation_file = "/home/aleg/Data/Reference_Annotation/gencode_v24.gff3"
for peak_file in sorted(glob("./PTM_Clean_Datasets/*.bed")):
outdir = "./PTM_Annotated_Datasets"
intersect_extract_genecodeID (annotation_file, peak_file, outdir)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Overview of the datasets
Step2: DARNED is a little messy and hard to convert, since positions sharing the same PMID and/or sample type were fused into a single site, which makes the file difficult to parse. I think it would be better to duplicate the sites that carry several PMIDs or sample types. I just need to verify that the number of fields in the PMID and cell type columns is the same and that they correspond to each other, i.e. first in cell type = first in PMID
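The consistency check described above can be sketched with a small hypothetical helper (not part of the original notebook): it assumes tab-separated lines where the tissue and PMID columns each hold a comma-separated list, with tissue in column 4 and PMID in column 6 as in the cleaned BED-like lines.

```python
def fields_match(line, tissue_col=4, pmid_col=6, sep="\t"):
    """Return True when the tissue and PMID fields of a fused DARNED
    line contain the same number of comma-separated items."""
    cols = line.rstrip("\n").split(sep)
    tissues = cols[tissue_col].split(",")
    pmids = cols[pmid_col].split(",")
    return len(tissues) == len(pmids)
```

When this returns False for some lines, the tissue and PMID lists cannot be paired one-to-one, which is exactly what motivates the demultiplexing strategy below.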
Step3: I reformatted and filtered the database with reformat_table to make it BED6 compatible. I removed the entries lacking a chromosome, position, tissue or PMID, as well as those with an unknown strand. In addition, I selected only A>I and A>G transitions (the same thing). This filtering step eliminated 47965 sites
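The `filter_dict` passed to the author's `reformat_table` helper amounts to dropping any row whose column value appears in a per-column blacklist. A minimal sketch of that logic (a hypothetical stand-in, assuming rows are lists of strings indexed by column number):

```python
def keep_row(row, filter_dict):
    """Return False when any listed column holds a blacklisted value,
    mirroring the filter_dict semantics of reformat_table."""
    return all(row[col] not in bad for col, bad in filter_dict.items())
```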
Step4: I checked whether the PMID and tissue fields always have the same length, so that I could demultiplex the fused positions. The answer is no: the numbers of PMIDs and tissues can differ. However, 272551 positions have only 1 tissue and 1 PMID. These are maybe not the most reliable positions, but they are the easiest to interpret. The sites with only 1 PMID but several tissues can also be used, and likewise for several PMIDs and 1 tissue: I will demultiplex them so as to have only 1 PMID and 1 tissue per site. The sites with more than 1 PMID and more than 1 tissue will be extracted into a separate backup file.
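The demultiplexing rule above can be sketched as a hypothetical helper (the notebook implements the same logic inline while writing the output files): sites with a single tissue or a single PMID expand into one record per value, and sites that are multi-valued on both sides are rejected as uncleanable.

```python
def demultiplex(tissues, pmids):
    """Expand a fused site into (tissue, pmid) pairs, or return None
    when both fields are multi-valued (uncleanable site)."""
    if len(tissues) > 1 and len(pmids) > 1:
        return None
    if len(tissues) == 1:
        return [(tissues[0], p) for p in pmids]
    return [(t, pmids[0]) for t in tissues]
```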
Step5: I only lost the 4270 sites with several PMIDs and several tissues. Some sites were demultiplexed, and I now have 290018 sites with 1 PMID and 1 tissue
Step6: The conversion resulted in the loss of 16 sites, which is negligible compared with the 290002 sites in the database
Step7: I am not interested in the conservation fields, the annotation or the repetitive nature, but I will keep the chromosome, position, gene and strand.
Step8: The operation is quite complex, since I have to fuse the 2 files and extract only specific values, so I need to write a dedicated parser. The main RADAR file is parsed into a simple nested dict. Parsing starts from the secondary file, which is the more important one, and the complementary information is looked up in the main database dict. Each site is added to a list of Site objects that is subsequently iterated to combine it with the main database before writing a new BED-formatted file.
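The join strategy described above, indexing the primary file by chromosome and position and looking each secondary site up in it, can be sketched as follows. `join_sites` is a hypothetical helper, not the notebook's `reformat_RADAR`, but it shows the same dict-lookup-with-orphans pattern:

```python
def join_sites(primary, secondary_sites):
    """primary: {chrom: {pos: info}} as built from the main RADAR file.
    secondary_sites: iterable of (chrom, pos, payload) tuples.
    Returns (matched, orphans): sites found in the primary index,
    and sites absent from it (written to the orphan file in the notebook)."""
    matched, orphans = [], []
    for chrom, pos, payload in secondary_sites:
        try:
            matched.append((chrom, pos, payload, primary[chrom][pos]))
        except KeyError:
            orphans.append((chrom, pos, payload))
    return matched, orphans
```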
Step9: Read the original file, reformat the fields and write a new BED6-compliant file.
Step10: After combining the information, out of the 1343463 sites in the RADAR secondary file and the 2576460 in the RADAR primary file, 1342423 consistent sites were found in both files, i.e. half of the database sites were filtered out because they were not present in both files.
Step11: Around 10000 additional sites were lost during the conversion from hg19 to hg38
Step12: editing_Peng_hg38
Step13: Column 9 contains more than just inosine transitions (A>G); it also lists all the other editing sites they found. Here I will focus on inosine only, so I need to filter out all the other values
Step14: The file contains a lot of fields, some of which I cannot even guess the content of. The dataset was not filtered and contains more than just the A>G (A>I) transitions. There is a total of 22686 sites, but only 21111 are A>G transitions. I filtered out all the other modifications and retained only the A>G transitions.
Step15: No problem with this dataset; I kept the gene locus name for future comparison after reannotation with GENCODE
Step16: No problem with this dataset. Since there is no gene locus, I just filled the field with a dash to indicate that it is empty
Step17: No problem with this dataset. It seems to be focused on tRNA genes, which are clearly over-represented in the gene list
Step18: 1-nt-wide peaks = no problem with this dataset. Since there are no gene loci, I just filled the field with a dash to indicate that it is empty
Step19: Apparently there is a problem in the data, since some of the peaks can be up to 3 500 000 bp long, which is much longer than initially described in the paper. I downloaded the original peak-calling file from the supplementary data of the paper to compare it with this datafile.
Step20: The same problem is also found in the original data, though the values only go up to 2 500 000... I think that the long peaks were improperly called. Looking at the data in detail, most of the peaks fall in the 1 to 130 bp range, with a second, smaller mode around 1000 bp. Looking at the original article, the mapping was done on the human transcriptome and not on the genome; the coordinates were apparently converted to the genome afterwards, which might explain this discrepancy. I have 2 options: start from scratch with a recent genome build, but the dataset seems quite tricky and I am not sure I could do it as well as the original authors; or be restrictive and keep only the small peaks, i.e. at most 1000 bp. I will start with the second alternative and go back to the data again if needed. To be sure of the quality of the data, I will start from the original data and do the liftOver conversion myself.
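The length cutoff is applied in the notebook through the `predicate` lambda passed to `reformat_table`; the same idea can be sketched as a standalone filter over BED lines (a hypothetical helper, assuming at least 3 tab-separated columns and `#` comment lines):

```python
def short_peaks(bed_lines, max_len=1000):
    """Yield BED lines whose interval length (end - start) is at most
    max_len bp, skipping '#' comment lines."""
    for line in bed_lines:
        if line.startswith("#"):
            continue
        fields = line.split("\t")
        if int(fields[2]) - int(fields[1]) <= max_len:
            yield line
```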
Step21: The filtering based on peak length removed a lot of peaks: nearly 90% of the dataset
Step22: It is much better, but I only have 2894 sites out of the 25000 initially in the dataset. For this first exploratory study it should be OK, but I will probably have to go back to the original data later
Step23: The original dataset was OK and clean. Contrary to the previous dataset, the peak widths are between 100 and 220 bp, which is clearly better. It is however interesting to notice that the peak lengths are not randomly distributed
Step24: The name field is unusable since it contains a variable number of fields, so I cannot parse it easily. That is not a big problem, since I will reannotate the data based on the latest GENCODE release; I did not save any of the information contained in the original name field. With miCLIP data the peaks are 1 nt wide
Step25: I do not understand some of the categories for HepG2 cells
Step26: pseudoU_Carlile_hg38
Step27: Dataset OK, but it contains only 8 peaks in lncRNAs, one of which is really wide (40000 bp). The dataset contains only the 8 peaks identified in lncRNAs, and the coordinates correspond to the gene coordinates rather than to the peaks themselves. Use the dataset? Is it really worth remapping everything for such a low number of eventual peaks?
Step28: No problem with this dataset
Step29: No problem with this dataset
Step30: OK for all the datasets
Step31: Gene annotation of the PTM datasets
Step32: I found a Python wrapper package for bedtools to manipulate BED files. I will use it to intersect my BED files containing the PTM positions (or peaks) with the GFF3 annotation files, which will give me a gene name for each position
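Conceptually, the strand-aware intersection performed by pybedtools (`peak.intersect(annotation, wo=True, s=True)`) keeps a peak/gene pair only when the intervals overlap and the strands match. A minimal sketch of that predicate (a hypothetical helper, assuming 0-based half-open BED coordinates):

```python
def overlaps_same_strand(peak, gene):
    """peak and gene are (chrom, start, end, strand) tuples with
    0-based half-open coordinates, as in BED files."""
    return (peak[0] == gene[0]
            and peak[3] == gene[3]
            and peak[1] < gene[2]
            and gene[1] < peak[2])
```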
Step33: Add the gene name and ensembl gene ID to the bed name field
Step34: It is working ok > looping over all the cleaned PTM files
Step35: Between 1 and 16% (excluding the weird carlile dataset) of the peaks are found in lncRNA annotated in gencode v24.
|
8,948
|
<ASSISTANT_TASK:>
Python Code:
import os, sys, inspect, io
cmd_folder = os.path.realpath(
os.path.dirname(
os.path.abspath(os.path.split(inspect.getfile( inspect.currentframe() ))[0])))
if cmd_folder not in sys.path:
sys.path.insert(0, cmd_folder)
from transitions import *
from transitions.extensions import GraphMachine
from IPython.display import Image, display, display_png
class Model():
# graph object is created by the machine
def show_graph(self, **kwargs):
stream = io.BytesIO()
self.get_graph(**kwargs).draw(stream, prog='dot', format='png')
display(Image(stream.getvalue()))
class Matter(Model):
def alert(self):
pass
def resume(self):
pass
def notify(self):
pass
def is_valid(self):
return True
def is_not_valid(self):
return False
def is_also_valid(self):
return True
extra_args = dict(initial='solid', title='Matter is Fun!',
show_conditions=True, show_state_attributes=True)
transitions = [
{ 'trigger': 'melt', 'source': 'solid', 'dest': 'liquid' },
{ 'trigger': 'evaporate', 'source': 'liquid', 'dest': 'gas', 'conditions':'is_valid' },
{ 'trigger': 'sublimate', 'source': 'solid', 'dest': 'gas', 'unless':'is_not_valid' },
{ 'trigger': 'ionize', 'source': 'gas', 'dest': 'plasma',
'conditions':['is_valid','is_also_valid'] }
]
states=['solid', 'liquid', {'name': 'gas', 'on_exit': ['resume', 'notify']},
{'name': 'plasma', 'on_enter': 'alert', 'on_exit': 'resume'}]
model = Matter()
machine = GraphMachine(model=model, states=states, transitions=transitions,
show_auto_transitions=True, **extra_args)
model.show_graph()
machine.auto_transitions_markup = False # hide auto transitions
model.show_graph(force_new=True) # rerender graph
model.melt()
model.show_graph()
model.evaporate()
model.show_graph()
model.ionize()
model.show_graph()
# multimodel test
model1 = Matter()
model2 = Matter()
machine = GraphMachine(model=[model1, model2], states=states, transitions=transitions, **extra_args)
model1.melt()
model1.show_graph()
model2.sublimate()
model2.show_graph()
# show only region of interest which is previous state, active state and all reachable states
model2.show_graph(show_roi=True)
from transitions.extensions.states import Timeout, Tags, add_state_features
@add_state_features(Timeout, Tags)
class CustomMachine(GraphMachine):
pass
states = ['new', 'approved', 'ready', 'finished', 'provisioned',
{'name': 'failed', 'on_enter': 'notify', 'on_exit': 'reset',
'tags': ['error', 'urgent'], 'timeout': 10, 'on_timeout': 'shutdown'},
'in_iv', 'initializing', 'booting', 'os_ready', {'name': 'testing', 'on_exit': 'create_report'},
'provisioning']
transitions = [{'trigger': 'approve', 'source': ['new', 'testing'], 'dest':'approved',
'conditions': 'is_valid', 'unless': 'abort_triggered'},
['fail', '*', 'failed'],
['add_to_iv', ['approved', 'failed'], 'in_iv'],
['create', ['failed','in_iv'], 'initializing'],
['init', 'in_iv', 'initializing'],
['finish', 'approved', 'finished'],
['boot', ['booting', 'initializing'], 'booting'],
['ready', ['booting', 'initializing'], 'os_ready'],
['run_checks', ['failed', 'os_ready'], 'testing'],
['provision', ['os_ready', 'failed'], 'provisioning'],
['provisioning_done', 'provisioning', 'os_ready']]
class CustomModel(Model):
def is_valid(self):
return True
def abort_triggered(self):
return False
extra_args['title'] = "System State"
extra_args['initial'] = "new"
model = CustomModel()
machine = CustomMachine(model=model, states=states, transitions=transitions, **extra_args)
model.approve()
model.show_graph()
# thanks to @dan-bar-dov (https://github.com/pytransitions/transitions/issues/367)
model = Model()
transient_states = ['T1', 'T2', 'T3']
target_states = ['G1', 'G2']
fail_states = ['F1', 'F2']
transitions = [['eventA', 'INITIAL', 'T1'], ['eventB', 'INITIAL', 'T2'], ['eventC', 'INITIAL', 'T3'],
['success', ['T1', 'T2'], 'G1'], ['defered', 'T3', 'G2'], ['fallback', ['T1', 'T2'], 'T3'],
['error', ['T1', 'T2'], 'F1'], ['error', 'T3', 'F2']]
machine = GraphMachine(model, states=transient_states + target_states + fail_states,
transitions=transitions, initial='INITIAL', show_conditions=True,
show_state_attributes=True)
machine.machine_attributes['ratio'] = '0.471'
machine.style_attributes['node']['fail'] = {'fillcolor': 'brown1'}
machine.style_attributes['node']['transient'] = {'fillcolor': 'gold'}
machine.style_attributes['node']['target'] = {'fillcolor': 'chartreuse'}
model.eventC()
# customize node styling
for s in transient_states:
machine.model_graphs[model].set_node_style(s, 'transient')
for s in target_states:
machine.model_graphs[model].set_node_style(s, 'target')
for s in fail_states:
machine.model_graphs[model].set_node_style(s, 'fail')
# draw the whole graph ...
model.show_graph()
from enum import Enum, auto, unique
@unique
class States(Enum):
ONE = auto()
TWO = auto()
THREE = auto()
model = Model()
machine = GraphMachine(model, states=States, auto_transitions=False, ordered_transitions=True, initial=States.THREE)
model.next_state()
model.show_graph()
from transitions.extensions.diagrams import GraphMachine
states = ['A', 'B', 'C', 'D']
state_translations = {
'A': 'Start',
'B': 'Error',
'C': 'Pending',
'D': 'Done'
}
transitions = [['go', 'A', 'B'], ['process', 'A', 'C'], ['go', 'C', 'D']]
model = Model()
m = GraphMachine(model, states=states, transitions=transitions, initial='A')
graph = model.get_graph()
for node in graph.iternodes():
node.attr['label'] = state_translations[node.attr['label']]
model.show_graph()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The Matter graph
Step2: Hide auto transitions
Step3: Previous state and transition notation
Step4: One Machine and multiple models
Step5: Show only the current region of interest
Step6: Example graph from Readme.md
Step7: Custom styling
Step8: Enum states
Step9: Editing the graph object directly
|
8,949
|
<ASSISTANT_TASK:>
Python Code:
import sys
import os
sys.path.append(os.environ.get('NOTEBOOK_ROOT'))
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
def bathymetry_index(df, m0 = 1, m1 = 0):
return m0*(np.log(df.blue)/np.log(df.green))+m1
from datacube.utils.aws import configure_s3_access
configure_s3_access(requester_pays=True)
import datacube
dc = datacube.Datacube()
#List the products available on this server/device
dc.list_products()
#create a list of the desired platforms
platform = 'LANDSAT_8'
product = 'ls8_level1_usgs'
# East Coast of Australia
lat_subsect = (-31.7, -32.2)
lon_subsect = (152.4, 152.9)
print('''
Latitude:\t{0}\t\tRange:\t{2} degrees
Longitude:\t{1}\t\tRange:\t{3} degrees
'''.format(lat_subsect,
lon_subsect,
max(lat_subsect)-min(lat_subsect),
max(lon_subsect)-min(lon_subsect)))
from utils.data_cube_utilities.dc_display_map import display_map
display_map(latitude = lat_subsect,longitude = lon_subsect)
%%time
ds = dc.load(lat = lat_subsect,
lon = lon_subsect,
platform = platform,
product = product,
output_crs = "EPSG:32756",
measurements = ["red","blue","green","nir","quality"],
resolution = (-30,30))
ds
from utils.data_cube_utilities.dc_rgb import rgb
rgb(ds.isel(time=6), x_coord='x', y_coord='y')
plt.show()
# Create Bathemtry Index column
ds["bathymetry"] = bathymetry_index(ds)
from utils.data_cube_utilities.dc_water_classifier import NDWI
# (green - nir) / (green + nir)
ds["ndwi"] = NDWI(ds, band_pair=1)
ds
import os
from utils.data_cube_utilities.import_export import export_xarray_to_multiple_geotiffs
unmasked_dir = "geotiffs/landsat8/unmasked"
if not os.path.exists(unmasked_dir):
os.makedirs(unmasked_dir)
export_xarray_to_multiple_geotiffs(ds, unmasked_dir + "/unmasked.tif",
x_coord='x', y_coord='y')
# preview values
np.unique(ds["quality"])
# Tunable threshold for masking the land out
threshold = .05
water = (ds.ndwi>threshold).values
#preview one time slice to determine the effectiveness of the NDWI masking
rgb(ds.where(water).isel(time=6), x_coord='x', y_coord='y')
plt.show()
from utils.data_cube_utilities.dc_mosaic import ls8_oli_unpack_qa
clear_xarray = ls8_oli_unpack_qa(ds.quality, "clear")
full_mask = np.logical_and(clear_xarray, water)
ds = ds.where(full_mask)
plt.figure(figsize=[15,5])
#Visualize the distribution of the remaining data
sns.boxplot(ds['bathymetry'])
plt.show()
#set the quantile range in either direction from the median value
def get_quantile_range(col, quantile_range = .25):
low = ds[col].quantile(.5 - quantile_range,["time","y","x"]).values
high = ds[col].quantile(.5 + quantile_range,["time","y","x"]).values
return low,high
#Custom function for a color mapping object
from matplotlib.colors import LinearSegmentedColormap
def custom_color_mapper(name = "custom", val_range = (1.96,1.96), colors = "RdGnBu"):
custom_cmap = LinearSegmentedColormap.from_list(name,colors=colors)
min, max = val_range
step = max/10.0
Z = [min,0],[0,max]
levels = np.arange(min,max+step,step)
cust_map = plt.contourf(Z, 100, cmap=custom_cmap)
plt.clf()
return cust_map.cmap
def mean_value_visual(ds, col, figsize = [15,15], cmap = "GnBu", low=None, high=None):
if low is None: low = np.min(ds[col]).values
if high is None: high = np.max(ds[col]).values
ds.reduce(np.nanmean,dim=["time"])[col].plot.imshow(figsize = figsize, cmap=cmap,
vmin=low, vmax=high)
mean_value_visual(ds, "bathymetry", cmap="GnBu")
# create range using the 10th and 90th quantile
low, high = get_quantile_range("bathymetry", .40)
custom = custom_color_mapper(val_range=(low,high),
colors=["darkred","red","orange","yellow","green",
"blue","darkblue","black"])
mean_value_visual(ds, "bathymetry", cmap=custom, low=low, high=high)
# create range using the 5th and 95th quantile
low, high = get_quantile_range("bathymetry", .45)
custom = custom_color_mapper(val_range=(low,high),
colors=["darkred","red","orange","yellow","green",
"blue","darkblue","black"])
mean_value_visual(ds, "bathymetry", cmap = custom, low=low, high = high)
# create range using the 2nd and 98th quantile
low, high = get_quantile_range("bathymetry", .48)
custom = custom_color_mapper(val_range=(low,high),
colors=["darkred","red","orange","yellow","green",
"blue","darkblue","black"])
mean_value_visual(ds, "bathymetry", cmap=custom, low=low, high=high)
# create range using the 1st and 99th quantile
low, high = get_quantile_range("bathymetry", .49)
custom = custom_color_mapper(val_range=(low,high),
colors=["darkred","red","orange","yellow","green",
"blue","darkblue","black"])
mean_value_visual(ds, "bathymetry", cmap=custom, low=low, high=high)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <span id="Shallow_Water_Bathymetry_import">Import Dependencies and Connect to the Data Cube ▴</span>
Step2: <span id="Shallow_Water_Bathymetry_plat_prod">Choose the Platform and Product ▴</span>
Step3: <span id="Shallow_Water_Bathymetry_define_extents">Define the Extents of the Analysis ▴</span>
Step4: Display
Step5: <span id="Shallow_Water_Bathymetry_retrieve_data">Retrieve the Data ▴</span>
Step6: Preview the Data
Step7: <span id="Shallow_Water_Bathymetry_bathymetry">Calculate the Bathymetry and NDWI Indices ▴</span>
Step8: <hr>
Step9: <span id="Shallow_Water_Bathymetry_export_unmasked">Export Unmasked GeoTIFF ▴</span>
Step10: <span id="Shallow_Water_Bathymetry_mask">Mask the Dataset Using the Quality Column and NDWI ▴</span>
Step11: Use NDWI to Mask Out Land
Step12: <span id="Shallow_Water_Bathymetry_vis_func">Create a Visualization Function ▴</span>
Step13: <b>Interpretation
Step14: <span id="Shallow_Water_Bathymetry_bath_vis">Visualize the Bathymetry ▴</span>
Step15: <span id="Shallow_Water_Bathymetry_bath_vis_better">Visualize the Bathymetry With Adjusted Contrast ▴</span>
|
8,950
|
<ASSISTANT_TASK:>
Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
poeme = """
A noir, E blanc, I rouge, U vert, O bleu, voyelles,
Je dirai quelque jour vos naissances latentes.
A, noir corset velu des mouches éclatantes
Qui bombillent autour des puanteurs cruelles,
Golfe d'ombre; E, candeur des vapeurs et des tentes,
Lance des glaciers fiers, rois blancs, frissons d'ombelles;
I, pourpres, sang craché, rire des lèvres belles
Dans la colère ou les ivresses pénitentes;
U, cycles, vibrements divins des mers virides,
Paix des pâtis semés d'animaux, paix des rides
Que l'alchimie imprime aux grands fronts studieux;
O, suprême clairon plein de strideurs étranges,
Silences traversés des Mondes et des Anges:
—O l'Oméga, rayon violet de Ses Yeux!
"""
def extract_words(text):
# ce n'est pas la plus efficace des fonctions mais ça fait ce qu'on veut
spl = text.lower().replace("!", "").replace(",", "").replace(
";", "").replace(".", "").replace(":", "").replace("'", " ").split()
return(spl)
print(extract_words(poeme))
def plus_grand_suffix_commun(mots):
longueur_max = max([len(m) for m in mots])
meilleure_paire = None
meilleur_suffix = None
# On peut parcourir les tailles de suffixe dans un sens croissant
# mais c'est plus efficace dans un sens décroissant dans la mesure
# où le premier suffixe trouvé est alors nécessairement le plus long.
for i in range(longueur_max - 1, 0, -1):
for m1 in mots:
for m2 in mots: # ici, on pourrait ne parcourir qu'une partie des mots
# car m1,m2 ou m2,m1, c'est pareil.
if m1 == m2:
continue
if len(m1) < i or len(m2) < i:
continue
suffixe = m1[-i:]
if m2[-i:] == suffixe:
meilleur_suffix = suffixe
meilleure_paire = m1, m2
return meilleur_suffix, meilleure_paire
mots = extract_words(poeme)
plus_grand_suffix_commun(mots)
mots = extract_words(poeme)
suffix_map = {}
for mot in mots:
lettre = mot[-1]
if lettre in suffix_map:
suffix_map[lettre].append(mot)
else:
suffix_map[lettre] = [mot]
suffix_map
def plus_grand_suffix_commun_dictionnaire(mots):
suffix_map = {}
for mot in mots:
lettre = mot[-1]
if lettre in suffix_map:
suffix_map[lettre].append(mot)
else:
suffix_map[lettre] = [mot]
tout = []
for cle, valeur in suffix_map.items():
suffix = plus_grand_suffix_commun(valeur)
        if suffix[0] is None:  # plus_grand_suffix_commun returns a (suffix, pair) tuple
continue
tout.append((len(suffix[0]), suffix[0], suffix[1]))
return max(tout)
mots = extract_words(poeme)
plus_grand_suffix_commun_dictionnaire(mots)
from time import perf_counter
mots = extract_words(poeme)
debut = perf_counter()
for i in range(100):
plus_grand_suffix_commun(mots)
perf_counter() - debut
debut = perf_counter()
for i in range(100):
plus_grand_suffix_commun_dictionnaire(mots)
perf_counter() - debut
def build_trie(liste):
trie = {}
for mot in liste:
noeud = trie
for i in range(0, len(mot)):
lettre = mot[len(mot) - i - 1]
if lettre not in noeud:
noeud[lettre] = {}
noeud = noeud[lettre]
noeud['FIN'] = 0
return trie
liste = ['zabc', 'abc']
t = build_trie(liste)
t
mots = extract_words(poeme)
trie = build_trie(mots)
trie
trie['s']['e']['t']
def build_dot(trie, predecessor=None, root_name=None, depth=0):
rows = []
root = trie
if predecessor is None:
rows.append('digraph{')
rows.append('%s%d [label="%s"];' % (
root_name or 'ROOT', id(trie), root_name or 'ROOT'))
rows.append(build_dot(trie, root_name or 'ROOT', depth=depth))
rows.append("}")
elif isinstance(trie, dict):
for k, v in trie.items():
rows.append('%s%d [label="%s"];' % (k, id(v), k))
rows.append("%s%d -> %s%d;" % (predecessor, id(trie), k, id(v)))
rows.append(build_dot(v, k, depth=depth+1))
return "\n".join(rows)
text = build_dot(trie['s']['e']['t'], root_name='set')
print(text)
from jyquickhelper import RenderJsDot
RenderJsDot(text, width="100%")
def plus_grand_suffix_commun_dictionnaire_trie(mots):
whole_trie = build_trie(mots)
def walk(trie):
best = []
for k, v in trie.items():
if isinstance(v, int):
continue
r = walk(v)
if len(r) > 0 and len(r) + 1 > len(best):
best = [k] + r
if len(best) > 0:
return best
if len(trie) >= 2:
return ['FIN']
return []
return walk(whole_trie)
res = plus_grand_suffix_commun_dictionnaire_trie(mots)
res
res = plus_grand_suffix_commun_dictionnaire(mots)
res
debut = perf_counter()
for i in range(100):
plus_grand_suffix_commun_dictionnaire(mots)
perf_counter() - debut
debut = perf_counter()
for i in range(100):
plus_grand_suffix_commun_dictionnaire_trie(mots)
perf_counter() - debut
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Enoncé
Step3: Exercice 1
Step4: Exercice 2
Step5: Exercice 3
Step6: Exercice 4
Step7: Exercice 5
Step8: C'est illisible. On ne montre que les mots se terminant par tes.
Step9: Toujours pas très partique. On veut représenter l'arbre visuellement ou tout du moins une sous-partie. On utilise le langage DOT.
Step10: Le résultat est différent car le dictionnaire ne garantit pas que les éléments seront parcourus dans l'ordre alphabétique.
|
8,951
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
import tensorflow as tf
import numpy as np
from datetime import date
date.today()
author = "kyubyong. https://github.com/Kyubyong/tensorflow-exercises"
tf.__version__
np.__version__
sess = tf.InteractiveSession()
out = tf.zeros([2, 3])
print(out.eval())
assert np.allclose(out.eval(), np.zeros([2, 3]))
# tf.zeros == np.zeros
_X = np.array([[1,2,3], [4,5,6]])
X = tf.convert_to_tensor(_X)
out = tf.zeros_like(X)
print(out.eval())
assert np.allclose(out.eval(), np.zeros_like(_X))
# tf.zeros_like == np.zeros_like
out = tf.ones([2, 3])
print(out.eval())
assert np.allclose(out.eval(), np.ones([2, 3]))
# tf.ones == np.ones
_X = np.array([[1,2,3], [4,5,6]])
X = tf.convert_to_tensor(_X)
out = tf.ones_like(X)
print(out.eval())
assert np.allclose(out.eval(), np.ones_like(_X))
# tf.ones_like == np.ones_like
out1 = tf.fill([3, 2], 5)
out2 = tf.ones([3, 2]) * 5
out3 = tf.constant(5, shape=[3, 2])
assert np.allclose(out1.eval(), out2.eval())
assert np.allclose(out1.eval(), out3.eval())
assert np.allclose(out1.eval(), np.full([3, 2], 5))
print(out1.eval())
out = tf.constant([[1, 3, 5], [4, 6, 8]], dtype=tf.float32)
print(out.eval())
assert np.allclose(out.eval(), np.array([[1, 3, 5], [4, 6, 8]], dtype=np.float32))
out = tf.constant(4, shape=[2, 3])
print(out.eval())
assert np.allclose(out.eval(), np.full([2, 3], 4))
out = tf.linspace(5., 10., 50)
print(out.eval())
assert np.allclose(out.eval(), np.linspace(5., 10., 50))
# tf.linspace == np.linspace
out = tf.range(10, 101, 2)
print(out.eval())
assert np.allclose(out.eval(), np.arange(10, 101, 2))
# tf.range == np.arange
# Note that the end is exlcuded unlike tf.linspace
X = tf.random_normal([3, 2], 0, 2.)
print(X.eval())
# tf.random_normal is almost equivalent to np.random.normal
# But the order of the arguments is differnt.
# _X = np.random.normal(0, 2., [3, 2])
out = tf.truncated_normal([3, 2])
print(out.eval())
out = tf.random_uniform([3, 2], 0, 2)
print(out.eval())
# tf.random_uniform is almost equivalent to np.random.uniform
# But the order of the arguments is differnt.
# _X = np.random.uniform(0, 2., [3, 2])
_X = np.array([[1, 2], [3, 4], [5, 6]])
X = tf.constant(_X)
out = tf.random_shuffle(X)
print(out.eval())
# tf.random_shuffle() is not a in-place function unlike np.random_shuffle().
# np.random.shuffle(_X)
# print(_X)
X = tf.random_normal([10, 10, 3])
out = tf.random_crop(X, [5, 5, 3])
print(out.eval())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: NOTE on notation
Step2: Q2. Let X be a tensor of [[1,2,3], [4,5,6]]. <br />Create a tensor of the same shape and dtype as X with all elements set to zero.
Step3: Q3. Create a tensor of shape [2, 3] with all elements set to one.
Step4: Q4. Let X be a tensor of [[1,2,3], [4,5,6]]. <br />Create a tensor of the same shape and dtype as X with all elements set to one.
Step5: Q5. Create a tensor of the shape [3, 2], with all elements of 5.
Step6: Q6. Create a constant tensor of [[1, 3, 5], [4, 6, 8]], with dtype=float32
Step7: Q7. Create a constant tensor of the shape [2, 3], with all elements set to 4.
Step8: Sequences
Step9: Q9. Create a tensor which looks like [10, 12, 14, 16, ..., 100].
Step10: Random Tensors
Step11: Q11. Create a random tensor of the shape [3, 2], with elements from a normal distribution of mean=0, standard deviation=1 such that any values don't exceed 2 standard deviations from the mean.
Step12: Q12. Create a random tensor of the shape [3, 2], with all elements from a uniform distribution that ranges from 0 to 2 (exclusive).
Step13: Q13. Let X be a tensor of [[1, 2], [3, 4], [5, 6]]. Shuffle X along its first dimension.
Step14: Q14. Let X be a random tensor of the shape [10, 10, 3], with elements from a unit normal distribution. Crop X with the shape of [5, 5, 3].
|
8,952
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import networkx as nx
K_5=nx.complete_graph(5)
nx.draw(K_5)
def complete_deg(n):
    """Return the integer valued degree matrix D for the complete graph K_n."""
kn=np.eye((n),dtype=np.int)
kn=kn*(n-1)
return kn
# a=np.eye(5)
# a=a*5
# a
# a=complete_deg(5)
# a
D = complete_deg(5)
assert D.shape==(5,5)
assert D.dtype==np.dtype(int)
assert np.all(D.diagonal()==4*np.ones(5))
assert np.all(D-np.diag(D.diagonal())==np.zeros((5,5),dtype=int))
def complete_adj(n):
    """Return the integer valued adjacency matrix A for the complete graph K_n."""
kn=np.ones((n),dtype=np.int)-np.eye((n),dtype=np.int)
return kn
kn=complete_adj(5)
kn
A = complete_adj(5)
assert A.shape==(5,5)
assert A.dtype==np.dtype(int)
assert np.all(A+np.eye(5,dtype=int)==np.ones((5,5),dtype=int))
# YOUR CODE HERE
# kn=complete_adj(1)
n=3
D = complete_deg(n)
A= complete_adj(n) #Kn
L=D-A
x=np.linalg.eigvals(L)
# print(kn)
# print(D)
# print(A)
print(L)
print(x)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Complete graph Laplacian
Step3: The Laplacian Matrix is a matrix that is extremely important in graph theory and numerical analysis. It is defined as $L=D-A$. Where $D$ is the degree matrix and $A$ is the adjecency matrix. For the purpose of this problem you don't need to understand the details of these matrices, although their definitions are relatively simple.
Step5: The adjacency matrix for $K_n$ is an $n \times n$ matrix with zeros along the diagonal and ones everywhere else. Write a function to compute the adjacency matrix for $K_n$ using NumPy.
Step6: Use NumPy to explore the eigenvalues or spectrum of the Laplacian L of $K_n$. What patterns do you notice as $n$ changes? Create a conjecture about the general Laplace spectrum of $K_n$.
|
8,953
|
<ASSISTANT_TASK:>
Python Code:
import timeit
import time

setup_sum = 'sum=0'
run_sum = """
for i in range(1,1000):
    if i % 3 == 0:
        sum = sum + i
"""
print(timeit.Timer(run_sum, setup="sum=0").repeat(1,10000))
t=timeit.timeit(run_sum,setup_sum,number=10000)
print("Time for built-in sum(): {}".format(t))
start=time.time()
sum=0
for i in range(1,10000):
if i % 3==0:
sum+=i
end=time.time()
print("Time for trading way to count the time is %f"%(end-start))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: repeat(1,10000)重复一次,每次10000遍
Step2: 这个就没有重复多少次,就一次,一次10000遍
|
8,954
|
<ASSISTANT_TASK:>
Python Code:
import urllib3
import pandas as pd
url = "https://raw.githubusercontent.com/jpatokal/openflights/master/data/airports.dat"
#load the csv
airports = pd.read_csv(url,header=None)
print("Check DataFrame types")
display(airports.dtypes)
import numpy as np
print("-> Original DF")
display(airports.head())
#we can add a name to each variable
h = ["airport_id","name","city","country","IATA","ICAO","lat","lon","alt","tz","DST","tz_db"]
airports = airports.iloc[:,:12]
airports.columns = h
print("-> Original DF with proper names")
display(airports.head())
print("-> With the proper names it is easier to check correctness")
display(airports.dtypes)
airports.alt.describe()
airports.alt = airports.alt * 0.3048
airports.dtypes
airports.isnull().sum(axis=0)
# we can create a new label whoch corresponds to not having data
airports.IATA.fillna("Blank", inplace=True)
airports.ICAO = airports.ICAO.fillna("Blank")
airports.isnull().sum(axis=0)
((airports.lat > 90) & (airports.lat < -90)).any()
((airports.lon > 180) & (airports.lon < -180)).any()
airports.alt.describe()
qtls = airports.alt.quantile([.05,.5,.95],interpolation="higher")
qtls
# check how many of them are below the median
(airports.alt <= qtls[0.5]).sum()
#check how many of them are above of the median
(airports.alt >= qtls[0.5]).sum()
#check how many of them are below the .05 percentile
(airports.alt <= qtls[0.05]).sum()
#check how many of them are above the .95 percentile
(airports.alt >= qtls[0.95]).sum()
airports.shape[0]*.05
print("-> Check which airports are out of 5% range")
display(airports[(airports.alt < qtls[0.05])].head(10))
print("-> Showing a sample of ten values")
airports.sample(n=10)
print("-> Showing the airports in higher positions")
airports.sort_values(by="alt",ascending=True)[:10]
airports.tz_db
airports["continent"] = airports.tz_db.str.split("/").str[0]
airports.continent.unique()
airports.continent.value_counts()
(airports.continent.value_counts()/airports.continent.value_counts().sum())*100
airports[airports.continent == "\\N"].shape
airports.continent = airports.continent.replace('\\N',"unknown")
airports.tz_db = airports.tz_db.replace('\\N',"unknown")
airports.continent.unique()
airports[airports.continent == "unknown"].head()
hem_select = lambda x: "South" if x < 0 else "North"
airports["hemisphere"] = airports.lat.apply(hem_select)
(airports.hemisphere.value_counts() / airports.shape[0]) * 100
(airports.continent.value_counts() / airports.shape[0]) * 100
((airports.country.value_counts() / airports.shape[0]) * 100).sample(10)
((airports.country.value_counts() / airports.shape[0]) * 100).head(10)
type(airports.country.value_counts())
airports["alt_type"] = pd.cut(airports.alt,bins=3,labels=["low","med","high"])
airports.head()
airp_group = airports.groupby(["continent","alt_type"])
airp_group.groups.keys()
airp_group.size()
airp_group["alt"].agg({"max":np.max,"min":np.min,"mean":np.mean}).head()
airports.alt.hist(bins=100)
airp_group["alt"].sum().unstack()
airports.pivot_table(index="hemisphere",values="alt",aggfunc=np.mean)
airports.groupby("hemisphere").alt.mean()
my_df = pd.DataFrame(np.ones(100),columns=["y"])
my_df.head(10)
import matplotlib.pyplot as plt
%matplotlib inline
import matplotlib
matplotlib.style.use('ggplot')
plt.rcParams['figure.figsize'] = [10, 8]
my_df.plot()
my_df["z"] = my_df.y.cumsum()
my_df.plot()
my_df.y = my_df.z ** 2
my_df.plot()
my_df.z = np.log(my_df.y)
my_df.z.plot()
airports.groupby("continent").size().plot.bar()
airports.groupby("continent").alt.agg({"max":np.max,"min":np.min,"mean":np.mean}).plot(kind="bar")
airports.groupby("continent").alt.agg({"max":np.max,"min":np.min,"mean":np.mean}).plot(kind="bar",stacked=True)
airports.groupby("continent").alt.agg({"max":np.max,"min":np.min,"mean":np.mean}).plot(kind="barh",stacked=True)
airports.alt.plot(kind="hist",bins=100)
airports.loc[:,["alt"]].plot(kind="hist")
airports.loc[:,["lat"]].plot(kind="hist",bins=100)
airports.loc[:,["lon"]].plot(kind="hist",bins=100)
airports.plot.box()
airports.alt.plot.box()
airports.pivot(columns="continent").alt.plot.box()
sp_airp = airports[airports.country=="Spain"]
spain_alt = sp_airp.sort_values(by="alt").alt
spain_alt.index = range(spain_alt.size)
spain_alt.plot.area()
airports.plot.scatter(y="lat",x="lon")
airports.plot.scatter(y="lat",x="lon",c="alt")
airports.plot.scatter(y="lat",x="lon",s=airports["alt"]/20)
airports.plot.hexbin(x="lon",y="lat",C="alt",gridsize=20)
airports.alt.plot.kde()
airports.lat.plot.kde()
airports.lon.plot.kde()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Here you can find an explanation of each variable
Step2: Convert alt to m
Step3: Check if we have nans.
Step4: Let's check errors.
Step5: We can chech outliers in the altitude
Step6: let's explore 5 and 95 percentiles
Step7: Additionaly to what we have seen, we have extra functions to see how shaped and what values our data has.
Step8: We can create new variables
Step9: We can place hemisfere
Step10: We can calculate percentages.
Step11: Let's transformate alt into qualitative
Step12: Let's group data
Step13: The groups attribute is a dict whose keys are the computed unique groups and corresponding values being the axis labels belonging to each group. In the above example we have
Step14: Once the GroupBy object has been created, several methods are available to perform a computation on the grouped data.
Step15: Pandas has a handy .unstack() method—use it to convert the results into a more readable format and store that as a new variable
Step16: Remember that we also saw how to pivot table
Step17: Visualizing data
Step18: We can plot with different plot types
Step19: Multiple Bars
Step20: Histogram
Step21: Box Plots
Step22: Area Plots
Step23: Scatter Plot
Step24: Hex Bins
Step25: Density Plot
|
8,955
|
<ASSISTANT_TASK:>
Python Code:
print('hello world!')
import json
# hit Tab at end of this to see all methods
json.
# hit Shift-Tab within parenthesis of method to see full docstring
json.loads()
?sum()
import json
?json
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In this figure are a few labels of notebook parts I will refer to
|
8,956
|
<ASSISTANT_TASK:>
Python Code:
from matplotlib import pyplot as plt #plotting library (lets us draw graphs)
%matplotlib inline
from sklearn import datasets #the datasets from sklearn
digits = datasets.load_digits() #load the digits into the variable 'digits'
digits.data.shape
digits.data[35,:]
#code to reshape the 64 numbers into an 8x8 matrix and then draw it
plt.matshow(digits.data[35,:].reshape(8,8),cmap='gray')
#Exercise 1: Your code here!
digits.target[35]
#Exercise 2: Your code here!
#Your code here
training_data = digits.data[0:-10,:] #this means all but the last 10 rows should be put in training_data
training_target = digits.target[0:-10] #this puts all but the last 10 elements of the labels (targets) into training_target
#similarly this takes the last digit and puts that in test_data and test_target
test_data = digits.data[-10:,:]
test_target = digits.target[-10:]
from sklearn import neighbors #import the library that we need
nn = neighbors.KNeighborsClassifier(n_neighbors=1) #this is our model (with just one nearest neighbour)
nn.fit(training_data,training_target); #fit our model to the training data
nn.predict(test_data)
test_target
plt.matshow(test_data[3].reshape(8,8),cmap='gray')
#Exercise 4: Answer here
import numpy as np
#classification libraries
from sklearn import neighbors
from sklearn import svm
from sklearn import naive_bayes
from sklearn import tree
from sklearn import ensemble
from sklearn.cross_validation import KFold
#prepare k-fold cross validation
kf = KFold(len(digits.target), n_folds=5)
#variables to count up how many we got right
tally_correct = 0
tally_total = 0
for train_index, test_index in kf:
#here we split the dataset up into training and test sets, these change each iteration
training_data = digits.data[train_index,:]
training_target = digits.target[train_index]
test_data = digits.data[test_index,:]
test_target = digits.target[test_index]
#TODO: Uncomment one of these classifiers to see how it does
#csf = tree.DecisionTreeClassifier()
#csf = ensemble.RandomForestClassifier(n_estimators=50, min_samples_split=1, max_depth=None, max_features=16)
#csf = ensemble.ExtraTreesClassifier(n_estimators=100, min_samples_split=1, max_depth=None, max_features=8)
csf = neighbors.KNeighborsClassifier(n_neighbors=1)
#csf= svm.LinearSVC(C=0.05) #Linear Support Vector Machine classifier
#csf = naive_bayes.GaussianNB()
csf.fit(training_data,training_target)
predictions = csf.predict(test_data)
number_correct = np.sum(predictions==test_target)
total_number = len(predictions)
print("%d of %d correct" % (number_correct,total_number))
tally_correct += number_correct
tally_total += total_number
print " "
print "Total: %d of %d correct (%0.2f%%)" % (tally_correct, tally_total, 100.0*tally_correct/tally_total)
bc = datasets.load_breast_cancer()
bc.data[4,:] #data from row number four.
#print bc['DESCR'] #uncomment and run to print a description of the dataset
bc = datasets.load_breast_cancer()
import numpy as np
#classification libraries
from sklearn import neighbors
from sklearn import svm
from sklearn import naive_bayes
from sklearn import tree
from sklearn import ensemble
from sklearn.cross_validation import KFold
#prepare k-fold cross validation
kf = KFold(len(bc.target), n_folds=5)
#variables to count up how many we got right
tally_correct = 0
tally_total = 0
for train_index, test_index in kf:
#here we split the dataset up into training and test sets, these change each iteration
training_data = bc.data[train_index,:]
training_target = bc.target[train_index]
test_data = bc.data[test_index,:]
test_target = bc.target[test_index]
#TODO: Uncomment one of these classifiers to see how it does
#csf = tree.DecisionTreeClassifier()
#csf = ensemble.RandomForestClassifier(n_estimators=10, min_samples_split=1, max_depth=None, max_features=5)
#csf = ensemble.ExtraTreesClassifier(n_estimators=100, min_samples_split=1, max_depth=None, max_features=2)
csf = neighbors.KNeighborsClassifier(n_neighbors=1)
#csf= svm.LinearSVC(C=1)
#csf = naive_bayes.GaussianNB()
csf.fit(training_data,training_target)
predictions = csf.predict(test_data)
number_correct = np.sum(predictions==test_target)
total_number = len(predictions)
print("%d of %d correct" % (number_correct,total_number))
tally_correct += number_correct
tally_total += total_number
print " "
print "Total: %d of %d correct (%0.2f%%)" % (tally_correct, tally_total, 100.0*tally_correct/tally_total)
zscore=np.array([-1.59,-0.06,-2.11,0.57,1.35,0.03,0.11,-0.37,2.66,-1.24,-0.03,0.03,-0.53,3.06,1.97,1.01,0.51,-1.36,-1.44,1.45,2.55,0.4,1.03,1.72,1.,0.67,1.19,0.59,0.86,-2.16,0.87,-2.27,0.04,1.14,-0.78,1.76,-1.05,-0.7,1.58,0.11,-0.34,-2.89,0.37,0.77,0.61,-0.68,0.,-1.33])
muac=np.array([84.5,86.6,87.2,88.5,91.3,92.4,92.4,92.8,93.3,94.4,95.2,97.4,101.4,101.5,106.1,109.5,110.8,110.9,113.3,113.6,113.6,114.2,114.8,116.,116.8,117.9,119.1,119.8,122.,122.7,123.7,124.5,124.8,125.7,126.3,129.5,130.3,131.,132.5,132.5,136.5,138.,140.,140.4,143.6,146.5,146.7,146.9])
ok=np.array([False,False,False,False,False,False,False,False,True,False,False,False,False,True,True,True,True,False,False,True,True,False,True,True,True,True,True,True,True,False,True,False,True,True,False,True,True,True,True,True,True,False,True,True,True,True,True,True])
#data for later exercise...
edema=np.array([True,True,True,True,True,True,True,True,False,True,False,True,True,False,False,False,False,True,True,False,False,True,False,False,False,False,False,False,False,True,False,True,False,False,True,False,True,True,False,False,True,True,False,False,False,True,False,False])
#Your code here!
#Your code here
data = np.vstack([zscore,muac]).T #Here I combine the zscores and MUAC. # <<< Modify for exercise 7
target = np.array([1 if k else 0 for k in ok])
data.shape
import numpy as np
#classification libraries
from sklearn import neighbors
from sklearn import svm
from sklearn import naive_bayes
from sklearn import tree
from sklearn import ensemble
from sklearn.cross_validation import KFold
#prepare k-fold cross validation
kf = KFold(len(target), n_folds=5)
#variables to count up how many we got right
tally_correct = 0
tally_total = 0
for train_index, test_index in kf:
#here we split the dataset up into training and test sets, these change each iteration
training_data = data[train_index,:]
training_target = target[train_index]
test_data = data[test_index,:]
test_target = target[test_index]
#TODO: Uncomment one of these classifiers to see how it does
#csf = tree.DecisionTreeClassifier()
#csf = ensemble.RandomForestClassifier(n_estimators=10, min_samples_split=1, max_depth=None, max_features=5)
#csf = ensemble.ExtraTreesClassifier(n_estimators=100, min_samples_split=1, max_depth=None, max_features=2)
#csf = neighbors.KNeighborsClassifier(n_neighbors=1)
csf= svm.LinearSVC(C=1)
#csf = naive_bayes.GaussianNB()
csf.fit(training_data,training_target)
predictions = csf.predict(test_data)
number_correct = np.sum(predictions==test_target)
total_number = len(predictions)
print("%d of %d correct" % (number_correct,total_number))
tally_correct += number_correct
tally_total += total_number
print " "
print "Total: %d of %d correct (%0.2f%%)" % (tally_correct, tally_total, 100.0*tally_correct/tally_total)
edema
from sklearn.metrics import confusion_matrix
confusion_matrix(ok,edema)
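For reference, a small illustrative sketch (not part of the original exercise) of how `confusion_matrix` arranges its output for two boolean arrays — rows index the true label, columns the predicted label, with labels sorted `[False, True]`:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

truth = np.array([True, True, False, False, True])
pred = np.array([True, False, False, True, True])
cm = confusion_matrix(truth, pred)
# cm[0, 0] counts (False, False) pairs, cm[1, 1] counts (True, True) pairs,
# and the off-diagonal entries count the disagreements.
print(cm)
```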
#Modify code above
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To get an idea of the data we are going to be classifying we'll ask what shape the 'data' matrix is
Step2: This tells us that it has 1797 rows (which are the samples) and 64 columns (which are the 8x8 pixels in the data, and make up the 64 dimensions of the data set).
Step3: Each of these numbers is one of the pixels in the image.
Step4: It looks like a five!
Step5: The problem is a supervised learning problem, which means we need to provide labels for our data points.
Step6: As suspected image 35 is of the digit '5'.
Step7: Exercise 3
Step8: Training
Step9: The training step is quite simple. Here we fit the model to the data.
Step10: We can then predict the results using the predict method
Step11: How many of these were correct?
Step12: Remarkably the classifier has mostly got them correct.
Step13: Exercise 4
Step14: Cross-validation
Step15: The nearest neighbour classifier did particularly well on the digits dataset.
Step16: You can find out more by running this code
Step17: Exercise 6
Step18: The Nutrition (simulated) Dataset and Munging Data
Step19: Exercise 7
Step20: Exercise 8
Step21: Finally we want to try classifying the data. First we need to get it into the matrix form that we used earlier in the notebook.
Step22: We can ask for the shape of the data matrix, so we can confirm we've got it in the correct shape
Step23: We have also been given data about whether the child has edema (fluid build-up). Can we make use of this additional data to improve our predictions?
Step24: To get a quick idea of whether it's useful we can ask for the confusion matrix; this counts the number of times both are true, one is true and the other false, vice versa, and when both are false.
Step25: The top row is the number of children that don't need help who have edema or not (only one child in this category has edema).
|
8,957
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm
def bscall(strike=100,mat=1,fwd=100,sig=0.1,df=1):
lnfs = np.log(1.0*fwd/strike)
sig2t = sig*sig*mat
sigsqrt = sig*np.sqrt(mat)
d1 = (lnfs + 0.5 * sig2t) / sigsqrt
d2 = (lnfs - 0.5 * sig2t) / sigsqrt
fv = fwd * norm.cdf (d1) - strike * norm.cdf (d2)
return df * fv
bscall(fwd=100, strike=100, sig=0.1, mat=1, df=1)
from scipy.stats import norm
def bsput(strike=100,mat=1,fwd=100,sig=0.1,df=1):
lnfs = np.log(1.0*fwd/strike)
sig2t = sig*sig*mat
sigsqrt = sig*np.sqrt(mat)
d1 = (lnfs + 0.5 * sig2t) / sigsqrt
d2 = (lnfs - 0.5 * sig2t) / sigsqrt
fv = strike * norm.cdf (-d2) - fwd * norm.cdf (-d1)
return df * fv
bsput(fwd=100, strike=100, sig=0.1, mat=1, df=1)
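A quick sanity check (an addition, not from the original notebook): with these Black-76-style formulas on forwards, put-call parity requires C − P = df·(F − K). The sketch below restates both pricers self-contained and verifies the identity:

```python
import numpy as np
from scipy.stats import norm

def bs_call(strike, mat, fwd, sig, df):
    d1 = (np.log(fwd / strike) + 0.5 * sig**2 * mat) / (sig * np.sqrt(mat))
    d2 = d1 - sig * np.sqrt(mat)
    return df * (fwd * norm.cdf(d1) - strike * norm.cdf(d2))

def bs_put(strike, mat, fwd, sig, df):
    d1 = (np.log(fwd / strike) + 0.5 * sig**2 * mat) / (sig * np.sqrt(mat))
    d2 = d1 - sig * np.sqrt(mat)
    return df * (strike * norm.cdf(-d2) - fwd * norm.cdf(-d1))

fwd, strike, sig, mat, df = 105.0, 100.0, 0.2, 1.0, 0.97
parity_gap = (bs_call(strike, mat, fwd, sig, df)
              - bs_put(strike, mat, fwd, sig, df)
              - df * (fwd - strike))
print(parity_gap)  # ~0 up to floating-point error
```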
from scipy.stats import norm
def bsdcall(strike=100,mat=1,fwd=100,sig=0.1,df=1):
lnfs = np.log(1.0*fwd/strike)
sig2t = sig*sig*mat
sigsqrt = sig*np.sqrt(mat)
d2 = (lnfs - 0.5 * sig2t) / sigsqrt
fv = norm.cdf(d2)
return df * fv
def bsdput(strike=100,mat=1,fwd=100,sig=0.1,df=1):
lnfs = np.log(1.0*fwd/strike)
sig2t = sig*sig*mat
sigsqrt = sig*np.sqrt(mat)
d2 = (lnfs - 0.5 * sig2t) / sigsqrt
fv = 1.0 - norm.cdf(d2)
return df * fv
bsdcall(fwd=100, strike=100, sig=0.1, mat=1, df=1), bsdput(fwd=100, strike=100, sig=0.1, mat=1, df=1)
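The two digitals pay on complementary events, so for any strike their prices must sum to the discount factor. A self-contained check (an addition, using the same formulas as above):

```python
import numpy as np
from scipy.stats import norm

def bs_digital_call(strike, mat, fwd, sig, df):
    d2 = (np.log(fwd / strike) - 0.5 * sig**2 * mat) / (sig * np.sqrt(mat))
    return df * norm.cdf(d2)

def bs_digital_put(strike, mat, fwd, sig, df):
    d2 = (np.log(fwd / strike) - 0.5 * sig**2 * mat) / (sig * np.sqrt(mat))
    return df * (1.0 - norm.cdf(d2))

df = 0.95
total = (bs_digital_call(90.0, 2.0, 100.0, 0.3, df)
         + bs_digital_put(90.0, 2.0, 100.0, 0.3, df))
print(total)  # equals the discount factor df
```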
from scipy.stats import norm
def bsdrcall(strike=100,mat=1,fwd=100,sig=0.1,df=1):
lnfs = np.log(1.0*fwd/strike)
sig2t = sig*sig*mat
sigsqrt = sig*np.sqrt(mat)
d1 = (lnfs + 0.5 * sig2t) / sigsqrt
fv = fwd * norm.cdf (d1)
return df * fv
def bsdrput(strike=100,mat=1,fwd=100,sig=0.1,df=1):
lnfs = np.log(1.0*fwd/strike)
sig2t = sig*sig*mat
sigsqrt = sig*np.sqrt(mat)
d1 = (lnfs + 0.5 * sig2t) / sigsqrt
fv = fwd * (1.0 - norm.cdf (d1))
return df * fv
bsdrcall(fwd=100, strike=100, sig=0.1, mat=1, df=1), bsdrput(fwd=100, strike=100, sig=0.1, mat=1, df=1)
import sys
print(sys.version)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Black Scholes Pricing - Simple
Step2: European Put Option
Step3: European Digital
Step4: The reverse European digital call option pays one unit of the underlying asset if and only if the spot price at expiry as above the strike $K$. The price is given by the following formula ($d_1$ is defined as above)
Step5: Licence and version
|
8,958
|
<ASSISTANT_TASK:>
Python Code:
import sys,os
%matplotlib inline
ia898path = os.path.abspath('/etc/jupyterhub/ia898_1s2017/')
if ia898path not in sys.path:
sys.path.append(ia898path)
import ia898.src as ia
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from numpy.fft import fft2
f = np.ones((100, 200))
print("Constant image:\n",f)
F = fft2(f)
print("\nDFT of a constant image:\n",F.round(4))
plt.title('Constant image')
plt.imshow(f,cmap='gray')
plt.colorbar()
plt.title('DFT of a constant image')
plt.imshow(ia.dftview(F),cmap='gray')
plt.colorbar()
f = np.zeros((128, 128))
f[63:63+5,63:63+5] = 1
F = fft2(f)
plt.title('Square image')
plt.imshow(f,cmap='gray')
plt.colorbar()
plt.title('DFT of a square image')
plt.imshow(ia.dftview(F),cmap='gray')
plt.colorbar()
f = np.zeros((128, 128))
k = np.array([[1,2,3,4,5,6,5,4,3,2,1]])
k2 = np.dot(k.T, k)
f[63:63+k2.shape[0], 63:63+k2.shape[1]] = k2
F = fft2(f)
plt.title('Pyramid image')
plt.imshow(f,cmap='gray')
plt.colorbar()
plt.title('DFT of a pyramid image')
plt.imshow(ia.dftview(F),cmap='gray')
plt.colorbar()
import numpy as np
def gaussian(s, mu, cov):
d = len(s) # dimension
n = np.prod(s) # n. of samples (pixels)
x = np.indices(s).reshape( (d, n))
xc = x - mu
k = 1. * xc * np.dot(np.linalg.inv(cov), xc)
k = np.sum(k,axis=0) #the sum is only applied to the rows
g = (1./((2 * np.pi)**(d/2.) * np.sqrt(np.linalg.det(cov)))) * np.exp(-1./2 * k)
return g.reshape(s)
f = gaussian((128,128),np.transpose([[65,65]]),[[3*3,0],[0,5*5]])
plt.title('Gaussian image')
plt.imshow(ia.normalize(f),cmap='gray')
plt.colorbar()
F = fft2(f)
plt.title('DFT of a gaussian image')
plt.imshow(ia.dftview(F),cmap='gray')
plt.colorbar()
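A sanity sketch (an addition, not part of the original notebook): because the grid spacing is 1 and the Gaussian sits well inside the 128×128 window, summing the sampled density should give approximately 1:

```python
import numpy as np

def gaussian(s, mu, cov):
    d = len(s)                        # dimension
    n = np.prod(s)                    # number of samples (pixels)
    x = np.indices(s).reshape((d, n))
    xc = x - mu
    k = 1. * xc * np.dot(np.linalg.inv(cov), xc)
    k = np.sum(k, axis=0)             # quadratic form per pixel
    g = (1. / ((2 * np.pi)**(d / 2.) * np.sqrt(np.linalg.det(cov)))) * np.exp(-k / 2.)
    return g.reshape(s)

g = gaussian((128, 128), np.transpose([[65, 65]]), [[3 * 3, 0], [0, 5 * 5]])
total = g.sum()
print(total)  # close to 1
```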
f = ia.comb((128,128),(4,4),(0,0))
plt.title('Impulse image')
plt.imshow(ia.normalize(f),cmap='gray')
plt.colorbar()
F = fft2(f)
plt.title('DFT of a impulse image')
plt.imshow(ia.dftview(F),cmap='gray')
plt.colorbar()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Constant image
Step2: Square image
Step3: Pyramid image
Step4: Gaussian image
Step5: Impulse image
|
8,959
|
<ASSISTANT_TASK:>
Python Code:
name = "YOUR NAME HERE"
print("Hello {0}!".format(name))
%matplotlib inline
from matplotlib import rcParams
rcParams["savefig.dpi"] = 100 # This makes all the plots a little bigger.
import numpy as np
import matplotlib.pyplot as plt
# Load the data from the CSV file.
x, y, yerr = np.loadtxt("linear.csv", delimiter=",", unpack=True)
# Plot the data with error bars.
plt.errorbar(x, y, yerr=yerr, fmt=".k", capsize=0)
plt.xlim(0, 5);
A = np.vander(x, 2) # Take a look at the documentation to see what this function does!
ATA = np.dot(A.T, A / yerr[:, None]**2)
w = np.linalg.solve(ATA, np.dot(A.T, y / yerr**2))
V = np.linalg.inv(ATA)
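In equations, the lines above implement the standard weighted least-squares posterior (a restatement of the code, assuming independent Gaussian noise with per-point uncertainties $\sigma_i$ and a flat prior):

```latex
\mathbf{w} = \left(A^\top \Sigma^{-1} A\right)^{-1} A^\top \Sigma^{-1}\,\mathbf{y},
\qquad
V = \left(A^\top \Sigma^{-1} A\right)^{-1},
\qquad
\Sigma = \operatorname{diag}\!\left(\sigma_1^2,\ldots,\sigma_N^2\right)
```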
plt.errorbar(x, y, yerr=yerr, fmt=".k", capsize=0)
for m, b in np.random.multivariate_normal(w, V, size=50):
plt.plot(x, m*x + b, "g", alpha=0.1)
plt.xlim(0, 5);
def lnlike_linear(theta):
m, b = theta
raise NotImplementedError("Delete this placeholder and implement this function")
p_1, p_2 = (0.0, 0.0), (0.01, 0.01)
ll_1, ll_2 = lnlike_linear(p_1), lnlike_linear(p_2)
if not np.allclose(ll_2 - ll_1, 535.8707738280209):
raise ValueError("It looks like your implementation is wrong!")
print("☺︎")
def lnprior_linear(theta):
m, b = theta
if not (-10 < m < 10):
return -np.inf
if not (-10 < b < 10):
return -np.inf
return 0.0
def lnpost_linear(theta):
return lnprior_linear(theta) + lnlike_linear(theta)
def metropolis_step(lnpost_function, theta_t, lnpost_t, step_cov):
raise NotImplementedError("Delete this placeholder and implement this function")
lptest = lambda x: -0.5 * np.sum(x**2)
th = np.array([0.0])
lp = 0.0
chain = np.array([th for th, lp in (metropolis_step(lptest, th, lp, [[0.3]])
for _ in range(10000))])
if np.abs(np.mean(chain)) > 0.1 or np.abs(np.std(chain) - 1.0) > 0.1:
raise ValueError("It looks like your implementation is wrong!")
print("☺︎")
# Edit these guesses.
m_initial = 2.
b_initial = 0.45
# You shouldn't need to change this plotting code.
plt.errorbar(x, y, yerr=yerr, fmt=".k", capsize=0)
for m, b in np.random.multivariate_normal(w, V, size=24):
plt.plot(x, m_initial*x + b_initial, "g", alpha=0.1)
plt.xlim(0, 5);
# Edit this line to specify the proposal covariance:
step = np.diag([1e-6, 1e-6])
# Edit this line to choose the number of steps you want to take:
nstep = 50000
# Edit this line to set the number steps to discard as burn-in.
nburn = 1000
# You shouldn't need to change any of the lines below here.
p0 = np.array([m_initial, b_initial])
lp0 = lnpost_linear(p0)
chain = np.empty((nstep, len(p0)))
for i in range(len(chain)):
p0, lp0 = metropolis_step(lnpost_linear, p0, lp0, step)
chain[i] = p0
# Compute the acceptance fraction.
acc = float(np.any(np.diff(chain, axis=0), axis=1).sum()) / (len(chain)-1)
print("The acceptance fraction was: {0:.3f}".format(acc))
# Plot the traces.
fig, axes = plt.subplots(2, 1, figsize=(8, 5), sharex=True)
axes[0].plot(chain[:, 0], "k")
axes[0].axhline(w[0], color="g", lw=1.5)
axes[0].set_ylabel("m")
axes[0].axvline(nburn, color="g", alpha=0.5, lw=2)
axes[1].plot(chain[:, 1], "k")
axes[1].axhline(w[1], color="g", lw=1.5)
axes[1].set_ylabel("b")
axes[1].axvline(nburn, color="g", alpha=0.5, lw=2)
axes[1].set_xlabel("step number")
axes[0].set_title("acceptance: {0:.3f}".format(acc));
if np.any(np.abs(np.mean(chain, axis=0)-w)>0.01) or np.any(np.abs(np.cov(chain, rowvar=0)-V)>1e-4):
raise ValueError("It looks like your implementation is wrong!")
print("☺︎")
import triangle
triangle.corner(chain[nburn:, :], labels=["m", "b"], truths=w);
plt.errorbar(x, y, yerr=yerr, fmt=".k", capsize=0)
for m, b in chain[nburn+np.random.randint(len(chain)-nburn, size=50)]:
plt.plot(x, m*x + b, "g", alpha=0.1)
plt.xlim(0, 5);
# Edit these guesses.
alpha_initial = 100
beta_initial = -1
# These are the edges of the distribution (don't change this).
a, b = 1.0, 5.0
# Load the data.
events = np.loadtxt("poisson.csv")
# Make a correctly normalized histogram of the samples.
bins = np.linspace(a, b, 12)
weights = 1.0 / (bins[1] - bins[0]) + np.zeros(len(events))
plt.hist(events, bins, range=(a, b), histtype="step", color="k", lw=2, weights=weights)
# Plot the guess at the rate.
xx = np.linspace(a, b, 500)
plt.plot(xx, alpha_initial * xx ** beta_initial, "g", lw=2)
# Format the figure.
plt.ylabel("number")
plt.xlabel("x");
def lnlike_poisson(theta):
alpha, beta = theta
raise NotImplementedError("Delete this placeholder and implement this function")
p_1, p_2 = (1000.0, -1.), (1500., -2.)
ll_1, ll_2 = lnlike_poisson(p_1), lnlike_poisson(p_2)
if not np.allclose(ll_2 - ll_1, 337.039175916):
raise ValueError("It looks like your implementation is wrong!")
print("☺︎")
def lnprior_poisson(theta):
alpha, beta = theta
if not (0 < alpha < 1000):
return -np.inf
if not (-10 < beta < 10):
return -np.inf
return 0.0
def lnpost_poisson(theta):
return lnprior_poisson(theta) + lnlike_poisson(theta)
# Edit this line to specify the proposal covariance:
step = np.diag([1000., 4.])
# Edit this line to choose the number of steps you want to take:
nstep = 50000
# Edit this line to set the number steps to discard as burn-in.
nburn = 1000
# You shouldn't need to change any of the lines below here.
p0 = np.array([alpha_initial, beta_initial])
lp0 = lnpost_poisson(p0)
chain = np.empty((nstep, len(p0)))
for i in range(len(chain)):
p0, lp0 = metropolis_step(lnpost_poisson, p0, lp0, step)
chain[i] = p0
# Compute the acceptance fraction.
acc = float(np.any(np.diff(chain, axis=0), axis=1).sum()) / (len(chain)-1)
print("The acceptance fraction was: {0:.3f}".format(acc))
# Plot the traces.
fig, axes = plt.subplots(2, 1, figsize=(8, 5), sharex=True)
axes[0].plot(chain[:, 0], "k")
axes[0].set_ylabel("alpha")
axes[0].axvline(nburn, color="g", alpha=0.5, lw=2)
axes[1].plot(chain[:, 1], "k")
axes[1].set_ylabel("beta")
axes[1].axvline(nburn, color="g", alpha=0.5, lw=2)
axes[1].set_xlabel("step number")
axes[0].set_title("acceptance: {0:.3f}".format(acc));
triangle.corner(chain[nburn:], labels=["alpha", "beta"], truths=[500, -2]);
plt.hist(events, bins, range=(a, b), histtype="step", color="k", lw=2, weights=weights)
# Plot the guess at the rate.
xx = np.linspace(a, b, 500)
for alpha, beta in chain[nburn+np.random.randint(len(chain)-nburn, size=50)]:
plt.plot(xx, alpha * xx ** beta, "g", alpha=0.1)
# Format the figure.
plt.ylabel("number")
plt.xlabel("x");
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: If this works, the output should greet you without throwing any errors. If so, that's pretty much all we need so let's get started with some MCMC!
Step2: Now we'll load the datapoints and plot them. When you execute the following cell, you should see a plot of the data. If not, make sure that you run the import cell from above first.
Step3: As I mentioned previously, it is pretty silly to use MCMC to solve this problem because the maximum likelihood and full posterior probability distribution (under infinitely broad priors) for the slope and intercept of the line are known analytically. Therefore, let's compute what the right answer should be before we even start. The analytic result for the posterior probability distribution is a 2-d Gaussian with mean
Step4: We'll save these results for later to compare them to the result computed using MCMC but for now, it's nice to take a look and see what this prediction looks like. To do this, we'll sample 24 slopes and intercepts from this 2d Gaussian and overplot them on the data.
Step5: This plot is a visualization of our posterior expectations for the true underlying line that generated these data. We'll reuse this plot a few times later to test the results of our code.
Step6: After you're satisfied with your implementation, run the following cell. In this cell, we're checking to see if your code is right. If it is, you'll see a smiling face (☺︎) but if not, you'll get an error message.
Step7: If you don't get the ☺︎, go back and try to debug your model. Iterate until your result is correct.
Step8: Metropolis(–Hastings) MCMC
Step9: As before, here's a simple test for this function. When you run the following cell it will either print a smile or throw an exception. Since the algorithm is random, it might occasionally fail this test so if it fails once, try running it again. If it fails a second time, edit your implementation until the test consistently passes.
Step10: Running the Markov Chain
Step11: In the next cell, we'll start from this initial guess for the slope and intercept and walk through parameter space (using the transition probability from above) to generate a Markov Chain of samples from the posterior probability.
Step12: The results of the MCMC run are stored in the array called chain with dimensions (nstep, 2). These are samples from the posterior probability density for the parameters. We know from above that this should be a Gaussian with mean $\mathbf{w}$ and covariance $\mathbf{V}$ so let's compare the sample mean and covariance to the analytic result that we computed above
Step13: If you don't get a smile here, that could mean a few things
Step14: This plot is a representation of our constraints on the posterior probability for the slope and intercept conditioned on the data. The 2-D plot shows the full posterior and the two 1-D plots show the constraints for each parameter marginalized over the other.
Step15: It is always useful to make a plot like this because it lets you see if your model is capable of describing your data or if there is anything catastrophically wrong.
Step16: In the following cell, you need to implement the log-likelihood function for the problem (same as above)
Step17: As before, edit your implementation until the following test passes.
Step18: Once you're happy with this implementation, we'll define the full probabilistic model including a prior. As before, I've chosen a broad flat prior on alpha and beta but you should feel free to change this.
Step19: Now let's run the MCMC for this model. As before, you should tune the parameters of the algorithm until you get a reasonable acceptance fraction ($\sim 25- 40\%$) and the chains seem converged.
Step20: Once you're happy with to convergence of your chain, plot the results as a corner plot (compared to the values that I used to generate the dataset; $\alpha = 500$ and $\beta = -2$) and plot the posterior predictive distribution.
|
8,960
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image
#Since we will work with black-and-white images, take a colour image and convert it to grayscale.
im = Image.open("C:/Users/Zuraya/Pictures/Rossum.jpg", 'r').convert('LA')
mat = np.array(list(im.getdata(band=0)), float)
mat.shape = (im.size[1], im.size[0])
mat = np.matrix(mat)
#Apply the SVD
U, s, V = np.linalg.svd(mat)
#Choose the rank k of the approximation
for k in range (10, 50, 10):
rec = np.matrix(U[:, :k]) * np.diag(s[:k]) * np.matrix(V[:k, :])
plt.imshow(rec, cmap='gray')
plt.savefig("C:/Users/Zuraya/Pictures/Rossum_comp_" + str(k) + ".jpg")
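Behind the compression trick above sits the Eckart-Young theorem: the rank-k truncation is the best rank-k approximation in the Frobenius norm, with error equal to the norm of the discarded singular values. A self-contained sketch on synthetic data (an addition; no image file needed):

```python
import numpy as np

rng = np.random.RandomState(0)
A = rng.randn(60, 40)
U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 10
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]    # rank-k reconstruction
err = np.linalg.norm(A - A_k)                  # measured Frobenius error
err_theory = np.sqrt(np.sum(s[k:]**2))         # Eckart-Young prediction
print(err, err_theory)
```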
import numpy as np
def pseudoinv(A):
U, s, V = np.linalg.svd(A)
Ut = np.mat(U.T)
ss=np.zeros([U.shape[0],V.shape[0]])
for i in range(U.shape[0]):
for j in range(V.shape[0]):
if (i==j and s[i]!=0):
ss[i,j]=1/s[i]
else:
pass
sst=np.mat(ss.T)
Vt =np.mat(V.T)
pseudo=(Vt*sst)*Ut
return pseudo
def solve_pseudo(A,b):
return pseudoinv(A)*np.mat(b)
M=np.array([[1,1], [0,0]])
pseudoinv(M)
b0=np.array([[1],[0]])
solve_pseudo(M,b0)
b=np.array([[1],[1]])
solve_pseudo(M,b)
A=np.array([[1,1],[0,1e-32]])
solve_pseudo(A,b)
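For comparison (an addition): NumPy ships the same SVD-based construction as `np.linalg.pinv`, and any pseudoinverse must satisfy the Moore-Penrose identity A·A⁺·A = A:

```python
import numpy as np

M = np.array([[1.0, 1.0], [0.0, 0.0]])
pinv_M = np.linalg.pinv(M)
max_err = np.max(np.abs(M @ pinv_M @ M - M))  # Moore-Penrose: A A+ A = A
print(pinv_M)
print(max_err)
```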
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The original image in black and white
Step2: (a) Observe what happens if b is in the image (column space) of A (state what that image is) and if it is not (e.g. b = [1,1]).
|
8,961
|
<ASSISTANT_TASK:>
Python Code:
from enoslib import *
import logging
import sys
log = logging.getLogger()
log.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
fileHandler = logging.FileHandler("debug.log", 'a')
fileHandler.setLevel(logging.DEBUG)
fileHandler.setFormatter(formatter)
log.addHandler(fileHandler)
cformat = logging.Formatter("[%(levelname)8s] : %(message)s")
consoleHandler = logging.StreamHandler(sys.stdout)
consoleHandler.setFormatter(cformat)
consoleHandler.setLevel(logging.INFO)
log.addHandler(consoleHandler)
job_name="iotlab_g5k-ipv6"
iotlab_dict = {
"walltime": "01:00",
"job_name": job_name,
"resources": {
"machines": [
{
"roles": ["border_router"],
"archi": "m3:at86rf231",
"site": "saclay",
"number": 1,
"image": "border-router.iotlab-m3",
},
{
"roles": ["sensor"],
"archi": "m3:at86rf231",
"site": "saclay",
"number": 2,
"image": "er-example-server.iotlab-m3",
},
]
},
}
iotlab_conf = IotlabConf.from_dictionary(iotlab_dict)
g5k_dict = {
"job_type": "allow_classic_ssh",
"job_name": job_name,
"resources": {
"machines": [
{
"roles": ["client"],
"cluster": "yeti",
"nodes": 1,
"primary_network": "default",
"secondary_networks": [],
},
],
"networks": [
{"id": "default", "type": "prod", "roles": ["my_network"], "site": "grenoble"}
],
},
}
g5k_conf = G5kConf.from_dictionnary(g5k_dict)
import iotlabcli.auth
iotlab_user, _ = iotlabcli.auth.get_user_credentials()
iotlab_frontend_conf = (
StaticConf()
.add_machine(
roles=["frontend"],
address="saclay.iot-lab.info",
alias="saclay",
user=iotlab_user
)
.finalize()
)
iotlab_provider = Iotlab(iotlab_conf)
iotlab_roles, _ = iotlab_provider.init()
print(iotlab_roles)
g5k_provider = G5k(g5k_conf)
g5k_roles, g5knetworks = g5k_provider.init()
print(g5k_roles)
frontend_provider = Static(iotlab_frontend_conf)
frontend_roles, _ = frontend_provider.init()
print(frontend_roles)
result=run_command("dhclient -6 br0", roles=g5k_roles)
result = run_command("ip address show dev br0", roles=g5k_roles)
print(result['ok'])
iotlab_ipv6_net="2001:660:3207:4c0::"
tun_cmd = "sudo tunslip6.py -v2 -L -a %s -p 20000 %s1/64 > tunslip.output 2>&1" % (iotlab_roles["border_router"][0].alias, iotlab_ipv6_net)
result=run_command(tun_cmd, roles=frontend_roles, asynch=3600, poll=0)
iotlab_roles["border_router"][0].reset()
result = run_command("cat tunslip.output", roles=frontend_roles)
print(result['ok'])
import re
out = result['ok']['saclay']['stdout']
print(out)
match = re.search(rf'Server IPv6 addresses:\n.+({iotlab_ipv6_net}\w{{4}})', out, re.MULTILINE|re.DOTALL)
br_ipv6 = match.groups()[0]
print("Border Router IPv6 address from tunslip output: %s" % br_ipv6)
result = run_command("ping6 -c3 %s" % br_ipv6, pattern_hosts="client*", roles=g5k_roles)
print(result['ok'])
with play_on(roles=g5k_roles) as p:
p.apt(name=["python3-aiocoap", "lynx"], state="present")
result = run_command("lynx -dump http://[%s]" % br_ipv6, roles=g5k_roles)
print(result['ok'])
out = result['ok'][g5k_roles["client"][0].address]['stdout']
print(out)
match = re.search(r'fe80::(\w{4})', out, re.MULTILINE|re.DOTALL)
node_uid = match.groups()[0]
print(node_uid)
result = run_command("aiocoap-client coap://[%s%s]:5683/sensors/light" % (iotlab_ipv6_net, node_uid), roles=g5k_roles)
print(result['ok'])
result = run_command("aiocoap-client coap://[%s%s]:5683/sensors/pressure" % (iotlab_ipv6_net, node_uid), roles=g5k_roles)
print(result['ok'])
result = run_command("pgrep tunslip6 | xargs kill", roles=frontend_roles)
g5k_provider.destroy()
iotlab_provider.destroy()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Configuring logging
Step2: Getting resources
Step3: Grid'5000 provider configuration
Step4: We still need a Static provider to interact with the IoT-LAB frontend machine
Step5: IoT-LAB
Step6: Grid'5000
Step7: Static
Step8: Configuring network connectivity
Step9: Starting tunslip command in frontend.
Step10: Reseting border router
Step11: Get the Border Router IPv6 address from tunslip output
Step12: Checking ping from Grid'5000 to border router node
Step13: Installing and using CoAP clients
Step14: Grab the CoAP server node’s IPv6 address from the BR’s web interface
Step15: For a CoAP server, GET light sensor
Step16: GET pressure for the same sensor
Step17: Clean-up phase
Step18: Destroy jobs in testbeds
|
8,962
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.html import widgets
from IPython.display import Image, HTML, SVG, display
s = """
<svg width="100" height="100">
<circle cx="50" cy="50" r="20" fill="aquamarine" />
</svg>
"""
SVG(s)
def draw_circle(width=100, height=100, cx=25, cy=25, r=5, fill='red'):
# Draw an SVG circle.
# Parameters
# ----------
# width : int
# The width of the svg drawing area in px.
# height : int
# The height of the svg drawing area in px.
# cx : int
# The x position of the center of the circle in px.
# cy : int
# The y position of the center of the circle in px.
# r : int
# The radius of the circle in px.
# fill : str
# The fill color of the circle.
#
C = """
<svg width="%d" height="%d" >
<circle cx="%d" cy="%d" r="%d" fill="%s" />
</svg>
""" % (width, height, cx, cy, r, fill)
display(SVG(C))
draw_circle(cx=10, cy=10, r=10, fill='blue')
assert True # leave this to grade the draw_circle function
w = interactive(draw_circle, width=fixed(300), height=fixed(300), cx=(0,300), cy=(0,300), r=(0,50), fill='red')
c = w.children
assert c[0].min==0 and c[0].max==300
assert c[1].min==0 and c[1].max==300
assert c[2].min==0 and c[2].max==50
assert c[3].value=='red'
display(w)
assert True # leave this to grade the display of the widget
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Interact with SVG display
Step5: Write a function named draw_circle that draws a circle using SVG. Your function should take the parameters of the circle as function arguments and have defaults as shown. You will have to write the raw SVG code as a Python string and then use the IPython.display.SVG object and IPython.display.display function.
Step6: Use interactive to build a user interface for exploing the draw_circle function
Step7: Use the display function to show the widgets created by interactive
|
8,963
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import sys
from sklearn import linear_model
import matplotlib.pyplot as plt
%matplotlib inline
dtype_dict = {'bathrooms':float, 'waterfront':int, 'sqft_above':int, 'sqft_living15':float, 'grade':int, 'yr_renovated':int, 'price':float, 'bedrooms':float, 'zipcode':str, 'long':float, 'sqft_lot15':float, 'sqft_living':float, 'floors':str, 'condition':int, 'lat':float, 'date':str, 'sqft_basement':int, 'yr_built':int, 'id':str, 'sqft_lot':int, 'view':int}
sales = pd.read_csv('kc_house_data.csv', dtype=dtype_dict)
train_data = pd.read_csv('kc_house_train_data.csv', dtype=dtype_dict)
test_data = pd.read_csv('kc_house_test_data.csv', dtype=dtype_dict)
print(sales['sqft_living'].values.dtype)
def get_numpy_data(dataset, features, output_name):
dataset['constant'] = 1
output = dataset[[output_name]].values
return (dataset[['constant'] + features].values.reshape((len(output), len(features) + 1)), output.reshape((len(output), 1)))
(example_features, example_output) = get_numpy_data(sales, ['sqft_living'], 'price')
print(example_features[:5])
print(example_output[:5])
def predict_output(X, w):
return X.dot(w)
def feature_derivative_ridge(errors, feature, weight, l2_penalty, feature_is_constant):
# If feature_is_constant is True, derivative is twice the dot product of errors and feature
derivative = 2*feature.T.dot(errors)
# Otherwise, derivative is twice the dot product plus 2*l2_penalty*weight
if not feature_is_constant:
derivative += 2*l2_penalty*weight
return derivative
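The analytic derivative above can be cross-checked numerically with a central finite difference. The sketch below uses synthetic arrays (the shapes, seed, and `eps` are illustrative, not from the notebook):

```python
import numpy as np

def ridge_cost(X, y, w, l2):
    # RSS plus the L2 penalty on the non-constant weight (w[1])
    errors = X.dot(w) - y
    return (errors.T.dot(errors) + l2 * w[1, 0] ** 2).item()

rng = np.random.default_rng(0)
X = np.hstack([np.ones((20, 1)), rng.normal(size=(20, 1))])
y = rng.normal(size=(20, 1))
w = np.array([[1.0], [10.0]])
l2 = 1.0

# analytic: 2 * feature^T errors + 2 * l2 * weight (the non-constant case)
errors = X.dot(w) - y
analytic = (2 * X[:, 1:2].T.dot(errors) + 2 * l2 * w[1, 0]).item()

# numeric central difference in w[1]
eps = 1e-6
w_hi, w_lo = w.copy(), w.copy()
w_hi[1, 0] += eps
w_lo[1, 0] -= eps
numeric = (ridge_cost(X, y, w_hi, l2) - ridge_cost(X, y, w_lo, l2)) / (2 * eps)

assert np.isclose(analytic, numeric, rtol=1e-4)
```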
(example_features, example_output) = get_numpy_data(sales, ['sqft_living'], 'price')
my_weights = np.array([1., 10.], dtype=np.float16).reshape((2,1))
test_predictions = predict_output(example_features, my_weights)
errors = test_predictions - example_output # prediction errors
# next two lines should print the same values
print(feature_derivative_ridge(errors, example_features[:,1].reshape((len(example_features[:,1]), 1)), my_weights[1], 1, False))
print(np.sum(errors*example_features[:,1].reshape((len(example_features[:,1]), 1)))*2+20.)
# next two lines should print the same values
print(feature_derivative_ridge(errors, example_features[:,0].reshape((len(example_features[:,0]), 1)), my_weights[0], 1, True))
print(np.sum(errors)*2.)
def ridge_regression_gradient_descent(feature_matrix, output, initial_weights, step_size, l2_penalty, max_iterations=100):
weights = np.array(initial_weights).reshape((len(initial_weights), 1)) # make sure it's a numpy array
iteration = 0
while iteration < max_iterations:
#while not reached maximum number of iterations:
# compute the predictions based on feature_matrix and weights using your predict_output() function
predictions = predict_output(feature_matrix, weights)
# compute the errors as predictions - output
errors = predictions - output
old_weights = np.copy(weights)
for i in range(len(weights)): # loop over each weight
# Recall that feature_matrix[:,i] is the feature column associated with weights[i]
# compute the derivative for weight[i].
#(Remember: when i=0, you are computing the derivative of the constant!)
derivative = feature_derivative_ridge(errors, feature_matrix[:, i], old_weights[i,0], l2_penalty, i == 0)
# subtract the step size times the derivative from the current weight
weights[i,0] -= step_size * derivative
iteration += 1
return weights
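The same update rule can be exercised end to end on a tiny synthetic problem. This is a sketch, not the notebook's data: the step size and iteration count are illustrative, and with zero penalty plain gradient descent should recover the true slope.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, size=(100, 1))
X = np.hstack([np.ones_like(x), x])   # constant column plus one feature
y = 3.0 * x                           # true weights: intercept 0, slope 3

w = np.zeros((2, 1))
step_size = 0.1
for _ in range(5000):
    errors = X.dot(w) - y
    gradient = 2.0 * X.T.dot(errors) / len(y)   # no L2 penalty here
    w -= step_size * gradient

assert abs(w[0, 0]) < 0.05 and abs(w[1, 0] - 3.0) < 0.05
```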
simple_features = ['sqft_living']
my_output = 'price'
(simple_feature_matrix, output) = get_numpy_data(train_data, simple_features, my_output)
(simple_test_feature_matrix, test_output) = get_numpy_data(test_data, simple_features, my_output)
initial_weights = np.array([0., 0.])
step_size = 1e-12
max_iterations=1000
l2_penalty = 0
simple_weights_0_penalty = ridge_regression_gradient_descent(simple_feature_matrix, output, initial_weights, step_size, l2_penalty, max_iterations)
l2_penalty = 1e11
simple_weights_high_penalty = ridge_regression_gradient_descent(simple_feature_matrix, output, initial_weights, step_size, l2_penalty, max_iterations)
print(simple_weights_0_penalty)
print(simple_weights_high_penalty)
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(simple_feature_matrix,output,'k.',
simple_feature_matrix,predict_output(simple_feature_matrix, simple_weights_0_penalty),'b-',
simple_feature_matrix,predict_output(simple_feature_matrix, simple_weights_high_penalty),'r-')
(test_features, test_output) = get_numpy_data(test_data, ['sqft_living'], 'price')
no_regularization_prediction = predict_output(test_features, simple_weights_0_penalty)
test_errors = no_regularization_prediction - test_output
RSS_no_penalty = test_errors.T.dot(test_errors)
print(RSS_no_penalty)
high_regularization_prediction = predict_output(test_features, simple_weights_high_penalty)
test_errors = high_regularization_prediction - test_output
RSS_high_penalty = test_errors.T.dot(test_errors)
print(RSS_high_penalty)
print(simple_weights_0_penalty[1,0])
print(simple_weights_high_penalty[1,0])
model_features = ['sqft_living', 'sqft_living15'] # sqft_living15 is the average squarefeet for the nearest 15 neighbors.
my_output = 'price'
(feature_matrix, output) = get_numpy_data(train_data, model_features, my_output)
(test_feature_matrix, test_output) = get_numpy_data(test_data, model_features, my_output)
initial_weights = np.array([0.0,0.0,0.0])
step_size = 1e-12
max_iterations = 1000
l2_penalty=0.0
multiple_weights_0_penalty = ridge_regression_gradient_descent(feature_matrix, output, initial_weights, step_size, l2_penalty, max_iterations)
l2_penalty=1e11
multiple_weights_high_penalty = ridge_regression_gradient_descent(feature_matrix, output, initial_weights, step_size, l2_penalty, max_iterations)
all_zeros_weights = np.array([[0],[0],[0]])
test_predictions_all_zeros = predict_output(test_feature_matrix, all_zeros_weights)
test_errors = test_predictions_all_zeros - test_output
RSS_all_zeros_penalty = test_errors.T.dot(test_errors)
print(RSS_all_zeros_penalty)
test_predictions_no = predict_output(test_feature_matrix, multiple_weights_0_penalty)
test_errors = test_predictions_no - test_output
RSS_no_penalty = test_errors.T.dot(test_errors)
print(RSS_no_penalty)
test_predictions_high = predict_output(test_feature_matrix, multiple_weights_high_penalty)
test_errors = test_predictions_high - test_output
RSS_high_penalty = test_errors.T.dot(test_errors)
print(RSS_high_penalty)
print(test_predictions_no[0] - test_output[0])
print(test_predictions_high[0] - test_output[0])
print(multiple_weights_0_penalty[1])
print(multiple_weights_high_penalty[1])
RSS_no_penalty[0][0]
RSS_high_penalty[0,0]
sales = pd.read_csv('kc_house_data.csv', dtype=dtype_dict)
sales = sales.sort_values(['sqft_living','price'])
l2_small_penalty = 1e-5
def polynomial_sframe(feature, degree):
poly_dataset = pd.DataFrame()
poly_dataset['power_1'] = feature
if degree > 1:
for power in range(2, degree + 1):
column = 'power_' + str(power)
poly_dataset[column] = feature**power
features = poly_dataset.columns.values.tolist()
poly_dataset['constant'] = 1
return (poly_dataset, ['constant'] + features)
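On toy inputs the construction above yields one column per power plus the constant — a numpy sketch of the same idea (the helper name and values are illustrative):

```python
import numpy as np

def polynomial_matrix(feature, degree):
    # constant column followed by powers 1..degree, mirroring polynomial_sframe
    cols = [np.ones_like(feature)] + [feature ** p for p in range(1, degree + 1)]
    return np.column_stack(cols)

M = polynomial_matrix(np.array([1.0, 2.0, 3.0]), 3)
assert M.shape == (3, 4)
assert M[:, 3].tolist() == [1.0, 8.0, 27.0]   # cubes of the inputs
```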
poly_data, features = polynomial_sframe(sales['sqft_living'], 15)
initial_weights = np.zeros((16,1))
step_size = 1e-12
max_iterations=1000
output = sales['price'].values.reshape((len(sales['price']), 1))
ridge_regression_gradient_descent(poly_data[features].values, output, initial_weights, step_size, l2_small_penalty, max_iterations=100)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load in house sales data
Step2: If we want to do any "feature engineering" like creating new features or adjusting existing ones we should do this directly using the SFrames as seen in the first notebook of Week 2. For this notebook, however, we will work with the existing features.
Step3: Also, copy and paste the predict_output() function to compute the predictions for an entire matrix of features given the matrix and the weights
Step4: Computing the Derivative
Step5: To test your feature derivative run the following
Step6: Gradient Descent
Step7: Visualizing effect of L2 penalty
Step8: In this part, we will only use 'sqft_living' to predict 'price'. Use the get_numpy_data function to get a Numpy versions of your data with only this feature, for both the train_data and the test_data.
Step9: Let's set the parameters for our optimization
Step10: First, let's consider no regularization. Set the l2_penalty to 0.0 and run your ridge regression algorithm to learn the weights of your model. Call your weights
Step11: Next, let's consider high regularization. Set the l2_penalty to 1e11 and run your ridge regression algorithm to learn the weights of your model. Call your weights
Step12: This code will plot the two learned models. (The blue line is for the model with no regularization and the red line is for the one with high regularization.)
Step13: Compute the RSS on the TEST data for the following three sets of weights
Step14: QUIZ QUESTIONS
Step15: Running a multiple regression with L2 penalty
Step16: We need to re-inialize the weights, since we have one extra parameter. Let us also set the step size and maximum number of iterations.
Step17: First, let's consider no regularization. Set the l2_penalty to 0.0 and run your ridge regression algorithm to learn the weights of your model. Call your weights
Step18: Next, let's consider high regularization. Set the l2_penalty to 1e11 and run your ridge regression algorithm to learn the weights of your model. Call your weights
Step19: Compute the RSS on the TEST data for the following three sets of weights
Step20: Predict the house price for the 1st house in the test set using the no regularization and high regularization models. (Remember that python starts indexing from 0.) How far is the prediction from the actual price? Which weights perform best for the 1st house?
Step21: QUIZ QUESTIONS
Step22: Estimating 1 assignment
|
8,964
|
<ASSISTANT_TASK:>
Python Code:
from math import pi
def mult_dec_pi(a, b):
# Add the solution here
result = ''
return result
mult_dec_pi(a=2, b=4)
# 20.0
mult_dec_pi(a=5, b=10)
# 45.0
mult_dec_pi(a=14, b=1)
# 9.0
mult_dec_pi(a=6, b=8)
# 10.0
# Bonus
mult_dec_pi(a=16, b=4)
# 'Error'
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
# Import data
raw_data = """
Year,Employed,GNP
1947,60.323,234.289
1948,61.122,259.426
1949,60.171,258.054
1950,61.187,284.599
1951,63.221,328.975
1952,63.639,346.999
1953,64.989,365.385
1954,63.761,363.112
1955,66.019,397.469
1956,67.857,419.18
1957,68.169,442.769
1958,66.513,444.546
1959,68.655,482.704
1960,69.564,502.601
1961,69.331,518.173
1962,70.551,554.894
"""
data = []
for line in raw_data.splitlines()[2:]:
words = line.split(',')
data.append(words)
data = np.array(data, dtype=float)
n_obs = data.shape[0]
plt.plot(data[:, 2], data[:, 1], 'bo')
plt.xlabel("GNP")
plt.ylabel("Employed")
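A first-pass trend line for a scatter like this can be sketched with `np.polyfit`; the toy values below stand in for the GNP and Employed columns:

```python
import numpy as np

gnp = np.array([234.0, 259.0, 284.0, 329.0, 365.0])
employed = np.array([60.3, 61.1, 61.2, 63.2, 65.0])
slope, intercept = np.polyfit(gnp, employed, deg=1)
assert slope > 0   # employment rises with GNP in this toy sample
```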
import pandas as pd
# Load dataset
import zipfile
with zipfile.ZipFile('../datasets/baby-names2.csv.zip', 'r') as z:
f = z.open('baby-names2.csv')
names = pd.io.parsers.read_table(f, sep=',')
names.head()
names[names.year == 1993].head()
boys = names[names.sex == 'boy'].copy()
girls = names[names.sex == 'girl'].copy()
william = boys[boys['name']=='William']
plt.plot(range(william.shape[0]), william['prop'])
plt.xticks(range(william.shape[0])[::5], william['year'].values[::5], rotation='vertical')
plt.ylim([0, 0.1])
plt.show()
Daniel = boys[boys['name']=='Daniel']
plt.plot(range(Daniel.shape[0]), Daniel['prop'])
plt.xticks(range(Daniel.shape[0])[::5], Daniel['year'].values[::5], rotation='vertical')
plt.ylim([0, 0.1])
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Exercise 02.2
Step3: Exercise 02.3
Step4: segment the data into boy and girl names
Step5: Analyzing the popularity of a name over time
|
8,965
|
<ASSISTANT_TASK:>
Python Code:
# Import NumPy and seed random number generator to make generated matrices deterministic
import numpy as np
np.random.seed(1)
# Create a matrix with random entries
A = np.random.rand(4, 4)
# Use QR factorisation of A to create an orthogonal matrix Q (QR is covered in IB)
Q, R = np.linalg.qr(A, mode='complete')
print(Q.dot(Q.T))
import itertools
# Build pairs (0,0), (0,1), . . . (0, n-1), (1, 2), (1, 3), . . .
pairs = itertools.combinations_with_replacement(range(len(Q)), 2)
# Compute dot product of column vectors q_{i} \cdot q_{j}
for p in pairs:
col0, col1 = p[0], p[1]
print ("Dot product of column vectors {}, {}: {}".format(col0, col1, Q[:, col0].dot(Q[:, col1])))
# Compute dot product of row vectors q_{i} \cdot q_{j}
pairs = itertools.combinations_with_replacement(range(len(Q)), 2)
for p in pairs:
row0, row1 = p[0], p[1]
print ("Dot product of row vectors {}, {}: {}".format(row0, row1, Q[row0, :].dot(Q[row1, :])))
print("Determinant of Q: {}".format(np.linalg.det(Q)))
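The two defining properties checked above — QᵀQ = I and |det Q| = 1 — hold for any orthogonal matrix. A quick sketch on a hand-built 2×2 rotation (the angle is arbitrary):

```python
import numpy as np

theta = 0.3
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
assert np.allclose(Q @ Q.T, np.eye(2))
assert np.isclose(abs(np.linalg.det(Q)), 1.0)
```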
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We can now verify that Q is an orthognal matrix. We first check that $\boldsymbol{Q}^{-1} = \boldsymbol{Q}^{T}$ by computing $\boldsymbol{Q}\boldsymbol{Q}^{-1}$
Step2: We can see that $\boldsymbol{Q}\boldsymbol{Q}^{-1} = \boldsymbol{I}$ (within numerical precision). We can check that the colums of $\boldsymbol{Q}$ are orthonormal
Step3: The columns of $\boldsymbol{Q}$ are orthonormal, and $\boldsymbol{Q}^{T}$ is also a rotation matrix and has orthonormal columns. Therefore, the rows of $\boldsymbol{Q}$ are also orthonormal.
Step4: Finally, we check the determinant of $\boldsymbol{Q}$
|
8,966
|
<ASSISTANT_TASK:>
Python Code:
# import stuffs
%matplotlib inline
import numpy as np
import pandas as pd
from pyplotthemes import get_savefig, classictheme as plt
plt.latex = True
from datasets import get_pbc
d = get_pbc(prints=True, norm_in=True, norm_out=False)
durcol = d.columns[0]
eventcol = d.columns[1]
if np.any(d[durcol] < 0):
raise ValueError("Negative times encountered")
# Sort the data before training - handled by ensemble
#d.sort(d.columns[0], inplace=True)
# Example: d.iloc[:, :2] for times, events
d
import ann
from classensemble import ClassEnsemble
mingroup = int(0.25 * d.shape[0])
def get_net(func=ann.geneticnetwork.FITNESS_SURV_KAPLAN_MIN):
hidden_count = 10
outcount = 2
l = (d.shape[1] - 2) + hidden_count + outcount + 1
net = ann.geneticnetwork((d.shape[1] - 2), hidden_count, outcount)
net.fitness_function = func
net.mingroup = mingroup
# Be explicit here even though I changed the defaults
net.connection_mutation_chance = 0.0
net.activation_mutation_chance = 0
# Some other values
net.crossover_method = net.CROSSOVER_UNIFORM
net.selection_method = net.SELECTION_TOURNAMENT
net.population_size = 100
net.generations = 1000
net.weight_mutation_chance = 0.15
net.dropout_hidden_probability = 0.5
net.dropout_input_probability = 0.8
ann.utils.connect_feedforward(net, [5, 5], hidden_act=net.TANH, out_act=net.SOFTMAX)
#c = net.connections.reshape((l, l))
#c[-outcount:, :((d.shape[1] - 2) + hidden_count)] = 1
#net.connections = c.ravel()
return net
net = get_net()
l = (d.shape[1] - 2) + net.hidden_count + 2 + 1
print(net.connections.reshape((l, l)))
hnets = []
lnets = []
netcount = 2
for i in range(netcount):
if i % 2:
n = get_net(ann.geneticnetwork.FITNESS_SURV_KAPLAN_MIN)
hnets.append(n)
else:
n = get_net(ann.geneticnetwork.FITNESS_SURV_KAPLAN_MAX)
lnets.append(n)
e = ClassEnsemble(hnets, lnets)
e.fit(d, durcol, eventcol)
# grouplabels = e.predict_classes
grouplabels, mems = e.label_data(d)
for l, m in mems.items():
print("Group", l, "has", len(m), "members")
from lifelines.plotting import add_at_risk_counts
from lifelines.estimation import KaplanMeierFitter
from lifelines.estimation import median_survival_times
plt.figure()
fitters = []
for g in ['high', 'mid', 'low']:
kmf = KaplanMeierFitter()
fitters.append(kmf)
members = grouplabels == g
kmf.fit(d.loc[members, durcol],
d.loc[members, eventcol],
label='{}'.format(g))
kmf.plot(ax=plt.gca())#, color=plt.colors[mi])
print("End survival rate for", g, ":",kmf.survival_function_.iloc[-1, 0])
if kmf.survival_function_.iloc[-1, 0] <= 0.5:
print("Median survival for", g, ":",
median_survival_times(kmf.survival_function_))
plt.legend(loc='best', framealpha=0.1)
plt.ylim((0, 1))
add_at_risk_counts(*fitters)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load some data
Step2: Create an ANN model
Step3: Train the ANNs
Step4: Plot grouping
|
8,967
|
<ASSISTANT_TASK:>
Python Code:
!pip install -I "phoebe>=2.2,<2.3"
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_star()
b.add_spot(radius=30, colat=80, long=0, relteff=0.9)
print(b['spot'])
times = np.linspace(0, 10, 11)
b.set_value('period', 10)
b.add_dataset('mesh', times=times, columns=['teffs'])
b.run_compute(distortion_method='rotstar', irrad_method='none')
afig, mplfig = b.plot(x='us', y='vs', fc='teffs',
animate=True, save='single_spots_1.gif', save_kwargs={'writer': 'imagemagick'})
b.set_value('t0', 5)
b.run_compute(distortion_method='rotstar', irrad_method='none')
afig, mplfig = b.plot(x='us', y='vs', fc='teffs',
animate=True, save='single_spots_2.gif', save_kwargs={'writer': 'imagemagick'})
b.set_value('incl', 0)
b.run_compute(distortion_method='rotstar', irrad_method='none')
afig, mplfig = b.plot(x='us', y='vs', fc='teffs',
animate=True, save='single_spots_3.gif', save_kwargs={'writer': 'imagemagick'})
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
Step2: Adding Spots
Step3: Spot Parameters
Step4: The 'colat' parameter defines the colatitude on the star measured from its North (spin) Pole. The 'long' parameter measures the longitude of the spot - with longitude = 0 being defined as pointing towards the observer at t0 for a single star. See the spots tutorial for more details.
Step5: If we set t0 to 5 instead of zero, then the spot will cross the line-of-sight at t=5 (since the spot's longitude is 0).
Step6: And if we change the inclination to 0, we'll be looking at the north pole of the star. This clearly illustrates the right-handed rotation of the star. At time=t0=5 the spot will now be pointing in the negative y-direction.
|
8,968
|
<ASSISTANT_TASK:>
Python Code:
import moldesign as mdt
from moldesign import units as u
%matplotlib inline
from matplotlib.pyplot import *
# seaborn is optional -- it makes plots nicer
try: import seaborn
except ImportError: pass
dna_structure = mdt.build_dna_helix('ACTGACTG', helix_type='b')
dna_structure.draw()
dna_structure
ff = mdt.forcefields.DefaultAmber()
ff.assign(dna_structure)
rs = mdt.widgets.ResidueSelector(dna_structure)
rs
if len(rs.selected_residues) == 0:
raise ValueError("You didn't click on anything!")
rs.selected_residues
for residue in rs.selected_residues:
print('Constraining position for residue %s' % residue)
for atom in residue.atoms:
dna_structure.constrain_atom(atom)
dna_structure.set_energy_model(mdt.models.OpenMMPotential,
implicit_solvent='obc')
dna_structure.set_integrator(mdt.integrators.OpenMMLangevin,
timestep=2.0*u.fs,
temperature=300.0*u.kelvin,
frame_interval=1.0*u.ps)
dna_structure.configure_methods()
trajectory = dna_structure.minimize(nsteps=200)
trajectory.draw()
plot(trajectory.potential_energy)
xlabel('steps');ylabel('energy / %s' % trajectory.unit_system.energy)
title('Energy relaxation'); grid('on')
traj = dna_structure.run(run_for=25.0*u.ps)
traj.draw()
plot(traj.time, traj.kinetic_energy, label='kinetic energy')
plot(traj.time, traj.potential_energy - traj.potential_energy[0], label='potential_energy')
xlabel('time / {time.units}'.format(time=traj.time))
ylabel('energy / {energy.units}'.format(energy=traj.kinetic_energy))
title('Energy vs. time'); legend(); grid('on')
# Using the trajectory's 'plot' method will autogenerate axes labels with the appropriate units
traj.plot('time','kinetic_temperature')
title('Temperature'); grid('on')
from ipywidgets import interact_manual
from IPython.display import display
rs = mdt.widgets.ResidueSelector(dna_structure)
def plot_rmsd():
plot(traj.time, traj.rmsd(rs.selected_atoms))
xlabel('time / fs'); ylabel(u'RMSD / Å')
interact_manual(plot_rmsd)
rs
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Contents
Step2: 2. Forcefield
Step3: 3. Constraints
Step4: Of course, fixing the positions of the terminal base pairs is a fairly extreme step. For extra credit, see if you can find a less heavy-handed way to keep the terminal base pairs bonded. (Try using tab-completion to see what other constraint methods are available)
Step5: You can interactively configure these methods
Step6: 5. Minimization
Step7: 6. Dynamics
Step8: 7. Analysis
Step9: This cell sets up an widget that plots the RMSDs of any selected group of atoms.
|
8,969
|
<ASSISTANT_TASK:>
Python Code:
import astropy.table as at
from astropy.time import Time
import astropy.units as u
from astropy.visualization.units import quantity_support
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
import pymc3 as pm
import exoplanet.units as xu
import thejoker as tj
# set up a random number generator to ensure reproducibility
rnd = np.random.default_rng(seed=42)
with pm.Model() as model:
P = xu.with_unit(pm.Normal('P', 50., 1),
u.day)
prior = tj.JokerPrior.default(
sigma_K0=30*u.km/u.s,
sigma_v=100*u.km/u.s,
pars={'P': P})
samples1 = prior.sample(size=100_000, random_state=rnd)
plt.hist(samples1['P'].to_value(u.day), bins=64);
plt.xlabel('$P$ [day]')
with pm.Model() as model:
P = xu.with_unit(pm.Normal('P', 50., 1),
u.day)
K = xu.with_unit(pm.Normal('K', 0., 15),
u.km/u.s)
prior = tj.JokerPrior.default(
sigma_v=100*u.km/u.s,
pars={'P': P, 'K': K})
samples2 = prior.sample(size=100_000, random_state=rnd)
samples2
samples3 = prior.sample(size=100_000, generate_linear=True,
random_state=rnd)
samples3
default_prior = tj.JokerPrior.default(
P_min=1e1*u.day,
P_max=1e3*u.day,
sigma_K0=30*u.km/u.s,
sigma_v=75*u.km/u.s)
default_samples = default_prior.sample(size=20, generate_linear=True,
random_state=rnd,
t_ref=Time('J2000')) # set arbitrary time zero-point
with pm.Model() as model:
K = xu.with_unit(pm.Normal('K', 0., 30),
u.km/u.s)
custom_prior = tj.JokerPrior.default(
P_min=1e1*u.day,
P_max=1e3*u.day,
sigma_v=75*u.km/u.s,
pars={'K': K})
custom_samples = custom_prior.sample(size=len(default_samples),
generate_linear=True,
random_state=rnd,
t_ref=Time('J2000')) # set arbitrary time zero-point
now_mjd = Time.now().mjd
t_grid = Time(np.linspace(now_mjd - 1000, now_mjd + 1000, 16384),
format='mjd')
fig, axes = plt.subplots(2, 1, sharex=True, sharey=True, figsize=(8, 8))
_ = tj.plot_rv_curves(default_samples, t_grid=t_grid,
ax=axes[0], add_labels=False)
_ = tj.plot_rv_curves(custom_samples, t_grid=t_grid,
ax=axes[1])
axes[0].set_ylim(-200, 200)
fig.tight_layout()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Changing one or a few priors from the default prior
Step2: Let's now plot the period samples to make sure they look Gaussian
Step3: Indeed, it looks like the samples were generated by a Gaussian centered on 50 days, as we specified.
Step4: By default, prior.sample() only generates the nonlinear parameters, so you will notice that K does not appear in the returned samples above (variable
Step5: Note that now the samples3 object contains K and v0, the two linear parameters of the default version of The Joker. Next, we will generate full parameter samples (nonlinear and linear parameters) for two different priors and compare orbits computed from these samples.
|
8,970
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import tensorflow as tf
import numpy as np
# Let's use the seaborn library to easily get some data and plot it
import seaborn as sns
sns.set()
# Load 'tips' dataset, only plot 'total_bill', and 'tip' and features (there's a bunch more features in this dataset)
tips = sns.load_dataset("tips")
plt = sns.relplot(x="total_bill", y="tip", data=tips);
# When we're solving a linear regression problem using ML is basically trying to find the slope/gradient of the
# equation `y = W*X+b`.
# Or put differently: given a bunch of correlated datapoints for `y` and `X`,
# try to find the right values for `W` and `b` that minimize a give loss function (e.g. mean squared error).
# We know the values of our inputs X (=total_bill) and what we want to predict Y (=tip).
# Since we know the values from our dataset, we should use tf.placeholder
# We will feed the actual data in when we do training (in session.run)
X = tf.placeholder(dtype=np.float64)
Y = tf.placeholder(dtype=np.float64)
# W and b are the things we're trying to find, so we need to define them as tf.Variable
# We assign these random numbers as default values (=general best practice)
W = tf.Variable(np.random.randn(), dtype=np.float64)
b = tf.Variable(np.random.randn(), dtype=np.float64)
# The trend line we are trying to find.
# We could've also used tf shorthand notation: `pred= W * X + b`
pred = tf.add(tf.multiply(W, X), b)
# The loss function gives us an idea of how close our predicted model is to the actual data
# This is the metric that we're going to try and minimize
# Joris: I took this loss function from https://github.com/aymericdamien/TensorFlow-Examples/blob/84c99e3de1114c3b67c00b897eb9bbc1f7c618fc/examples/2_BasicModels/linear_regression.py#L39
# I'm a bit confused here about 2 things:
# 1) Why does `loss = tf.reduce_mean(tf.pow(pred-Y, 2))` not work (it gives NaN results).
# I think it's mathematically identical? tf.mean was used in a TF course I followed, but doesn't seem to work here
# Note: When changing the optimizer below to the AdamOptimizer, this problem resolves itself,
# so this must have something to do with how the optimizer does its calculations.
# 2) Why the denominator is 2N and not just N (which I believe is the definition of the mean).
n_samples = len(tips['total_bill'])
loss = tf.reduce_sum(tf.pow(pred-Y, 2))/(2*n_samples) # MSQE
# loss = tf.reduce_mean(tf.pow(pred-Y, 2)) # doesn't work with SGD optimizer
# Define optimizer (=Stochastic Gradient Descent = SGD) and training function
optimizer = tf.train.GradientDescentOptimizer(0.05) # 0.01 is a hyper-parameter
# Note, minimize() knows to modify W and b because Variable objects are trainable=True by default
train = optimizer.minimize(loss)
# Start TF session, init variables, print current values of W and b
sess = tf.Session()
sess.run(tf.global_variables_initializer()) # Assign variables with their default values
EPOCHS=100
for i in range(EPOCHS):
# Feed every observation into the trainer, one by one
for (x, y) in zip(tips['total_bill'], tips['tip']):
sess.run(train, {X: x, Y: y})
    if i % (EPOCHS/10) == 0:
        l = sess.run(loss, {X: tips['total_bill'], Y: tips['tip']})
        w_val = sess.run(W)
        b_val = sess.run(b)
        print "{}% LOSS={} W={}, b={}".format(i, l, w_val, b_val)
# Recompute after the final epoch so the 100% line reflects the trained values
l = sess.run(loss, {X: tips['total_bill'], Y: tips['tip']})
w_val = sess.run(W)
b_val = sess.run(b)
print "100% LOSS={} W={}, b={}".format(l, w_val, b_val)
import pandas as pd
fit = pd.DataFrame()
fit['total_bill'] = tips['total_bill']
fit['res'] = w_val * tips['total_bill'] + b_val
plt = sns.relplot(x="total_bill", y="tip", data=tips);
plt = sns.lineplot(x='total_bill', y ="res", data=fit, color="darkred");
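The slope and intercept found by gradient descent can be sanity-checked against the closed-form least-squares solution. The sketch below uses toy arrays standing in for the tips data:

```python
import numpy as np

x = np.array([10.0, 15.0, 20.0, 25.0, 30.0])
y = np.array([1.5, 2.0, 3.0, 3.5, 4.5])
A = np.vstack([x, np.ones_like(x)]).T
sol = np.linalg.lstsq(A, y, rcond=None)[0]
w, b = sol
pred = w * x + b
baseline = np.full_like(y, y.mean())
# the fitted line should beat the mean-only predictor on squared error
assert np.mean((pred - y) ** 2) < np.mean((baseline - y) ** 2)
```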
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Clearly there's a linear relationship between the tip amount and the total bill. Let's try to find the trend line here using linear regression in Tensorflow.
Step2: Let's plot the line defined by W and b
|
8,971
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# execute dummy code here
from sklearn import datasets
from sklearn.ensemble import RandomForestClassifier
iris = datasets.load_iris()
RFclf = RandomForestClassifier().fit(iris.data, iris.target)
type(iris)
iris.keys()
print(np.shape(iris.data))
print(iris.data)
print(np.shape(iris.target))
print(iris.target)
print(iris.feature_names) # shows that sepal length is first feature and sepal width is second feature
plt.scatter(iris.data[:,0], iris.data[:,1], c = iris.target, s = 30, edgecolor = "None", cmap = "viridis")
plt.xlabel('sepal length')
plt.ylabel('sepal width')
from sklearn.cluster import KMeans
Kcluster = KMeans(n_clusters = 2)
Kcluster.fit(iris.data)
plt.figure()
plt.scatter(iris.data[:,0], iris.data[:,1], c = Kcluster.labels_, s = 30, edgecolor = "None", cmap = "viridis")
plt.xlabel('sepal length')
plt.ylabel('sepal width')
Kcluster = KMeans(n_clusters = 3)
Kcluster.fit(iris.data)
plt.figure()
plt.scatter(iris.data[:,0], iris.data[:,1], c = Kcluster.labels_, s = 30, edgecolor = "None", cmap = "viridis")
plt.xlabel('sepal length')
plt.ylabel('sepal width')
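The assignment step that KMeans iterates is just nearest-centroid labelling — a minimal numpy sketch on toy 1-D points (the values are illustrative):

```python
import numpy as np

X = np.array([[0.0], [0.2], [5.0], [5.1]])
centroids = np.array([[0.1], [5.05]])
dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=-1)
labels = dists.argmin(axis=1)
assert labels.tolist() == [0, 0, 1, 1]
```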
rs = 14
Kcluster1 = KMeans(n_clusters = 3, n_init = 1, init = 'random', random_state = rs)
Kcluster1.fit(iris.data)
plt.figure()
plt.scatter(iris.data[:,0], iris.data[:,1], c = Kcluster1.labels_, s = 30, edgecolor = "None", cmap = "viridis")
plt.xlabel('sepal length')
plt.ylabel('sepal width')
print("feature\t\t\tmean\tstd\tmin\tmax")
for featnum, feat in enumerate(iris.feature_names):
print("{:s}\t{:.2f}\t{:.2f}\t{:.2f}\t{:.2f}".format(feat, np.mean(iris.data[:,featnum]),
np.std(iris.data[:,featnum]), np.min(iris.data[:,featnum]),
np.max(iris.data[:,featnum])))
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().fit(iris.data)
Kcluster = KMeans(n_clusters = 3)
Kcluster.fit(scaler.transform(iris.data))
plt.figure()
plt.scatter(iris.data[:,0], iris.data[:,1], c = Kcluster.labels_, s = 30, edgecolor = "None", cmap = "viridis")
plt.xlabel('sepal length')
plt.ylabel('sepal width')
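What `StandardScaler` does per feature is just mean-removal and variance-scaling — a one-column sketch with made-up values:

```python
import numpy as np

x = np.array([4.0, 5.0, 6.0, 7.0, 8.0])
scaled = (x - x.mean()) / x.std()
assert np.isclose(scaled.mean(), 0.0)
assert np.isclose(scaled.std(), 1.0)
```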
# execute this cell
from sklearn.cluster import DBSCAN
dbs = DBSCAN(eps = 0.7, min_samples = 7)
dbs.fit(scaler.transform(iris.data)) # best to use re-scaled data since eps is in absolute units
dbs_outliers = dbs.labels_ == -1
plt.figure()
plt.scatter(iris.data[:,0], iris.data[:,1], c = dbs.labels_, s = 30, edgecolor = "None", cmap = "viridis")
plt.scatter(iris.data[:,0][dbs_outliers], iris.data[:,1][dbs_outliers], s = 30, c = 'k')
plt.xlabel('sepal length')
plt.ylabel('sepal width')
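The outlier mask above relies on DBSCAN labelling low-density points -1. The density test itself is simple enough to sketch in plain numpy (toy data; `eps` and `min_samples` are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.05, size=(20, 2)), [[5.0, 5.0]]])
eps, min_samples = 0.5, 5

dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
neighbour_counts = (dists <= eps).sum(axis=1)   # each point counts itself
is_noise = neighbour_counts < min_samples

assert is_noise[-1]             # the isolated point is flagged
assert not is_noise[:-1].any()  # the dense cluster is not
```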
from astroquery.sdss import SDSS # enables direct queries to the SDSS database
GALquery = """SELECT TOP 10000
p.dered_u - p.dered_g as ug, p.dered_g - p.dered_r as gr,
p.dered_g - p.dered_i as gi, p.dered_g - p.dered_z as gz,
p.petroRad_i, p.petroR50_i, p.deVAB_i
FROM PhotoObjAll AS p JOIN specObjAll s ON s.bestobjid = p.objid
WHERE p.mode = 1 AND s.sciencePrimary = 1 AND p.clean = 1 AND p.type = 3
"""
SDSSgals = SDSS.query_sql(GALquery)
SDSSgals
Xgal = np.array(SDSSgals.to_pandas())
galScaler = StandardScaler().fit(Xgal)
dbs = DBSCAN(eps = .25, min_samples=55)
dbs.fit(galScaler.transform(Xgal))
cluster_members = dbs.labels_ != -1
outliers = dbs.labels_ == -1
plt.figure(figsize = (10,8))
plt.scatter(Xgal[:,0][outliers], Xgal[:,3][outliers],
c = "k",
s = 4, alpha = 0.1)
plt.scatter(Xgal[:,0][cluster_members], Xgal[:,3][cluster_members],
c = dbs.labels_[cluster_members],
alpha = 0.4, edgecolor = "None", cmap = "viridis")
plt.xlim(-1,5)
plt.ylim(-0,3.5)
from sklearn.neighbors import KNeighborsClassifier
KNNclf = KNeighborsClassifier(n_neighbors = 3).fit(iris.data, iris.target)
preds = KNNclf.predict(iris.data)
plt.figure()
plt.scatter(iris.data[:,0], iris.data[:,1],
c = preds, cmap = "viridis", s = 30, edgecolor = "None")
KNNclf = KNeighborsClassifier(n_neighbors = 10).fit(iris.data, iris.target)
preds = KNNclf.predict(iris.data)
plt.figure()
plt.scatter(iris.data[:,0], iris.data[:,1],
c = preds, cmap = "viridis", s = 30, edgecolor = "None")
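The vote behind KNeighborsClassifier in miniature: find the k closest training points and take the majority label. A toy 1-D sketch (values are illustrative):

```python
import numpy as np

X_train = np.array([0.0, 0.1, 0.2, 1.0, 1.1])
y_train = np.array([0, 0, 0, 1, 1])
query, k = 0.15, 3

nearest = np.argsort(np.abs(X_train - query))[:k]
pred = np.bincount(y_train[nearest]).argmax()
assert pred == 0   # the three nearest neighbours all carry label 0
```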
from sklearn.model_selection import cross_val_predict
CVpreds = cross_val_predict(KNeighborsClassifier(n_neighbors=5), iris.data, iris.target)
plt.figure()
plt.scatter(iris.data[:,0], iris.data[:,1],
c = CVpreds, cmap = "viridis", s = 30, edgecolor = "None")
print("The accuracy of the kNN = 5 model is ~{:.4}".format( sum(CVpreds == iris.target)/len(CVpreds) ))
CVpreds50 = cross_val_predict(KNeighborsClassifier(n_neighbors=50), iris.data, iris.target)
print("The accuracy of the kNN = 50 model is ~{:.4}".format( sum(CVpreds50 == iris.target)/len(CVpreds50) ))
for iris_type in range(3):
iris_acc = sum( (CVpreds50 == iris_type) & (iris.target == iris_type)) / sum(iris.target == iris_type)
print("The accuracy for class {:s} is ~{:.4f}".format(iris.target_names[iris_type], iris_acc))
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(iris.target, CVpreds50)
print(cm)
normalized_cm = cm.astype('float')/cm.sum(axis = 1)[:,np.newaxis]
normalized_cm
plt.imshow(normalized_cm, interpolation = 'nearest', cmap = 'bone_r')# complete
tick_marks = np.arange(len(iris.target_names))
plt.xticks(tick_marks, iris.target_names, rotation=45)
plt.yticks(tick_marks, iris.target_names)
plt.ylabel( 'True')# complete
plt.xlabel( 'Predicted' )# complete
plt.colorbar()
plt.tight_layout()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Problem 1) Introduction to scikit-learn
Step2: Generally speaking, the procedure for scikit-learn is uniform across all machine-learning algorithms. Models are accessed via the various modules (ensemble, SVM, neighbors, etc), with user-defined tuning parameters. The features (or data) for the models are stored in a 2D array, X, with rows representing individual sources and columns representing the corresponding feature values. [In a minority of cases, X, represents a similarity or distance matrix where each entry represents the distance to every other source in the data set.] In cases where there is a known classification or scalar value (typically supervised methods), this information is stored in a 1D array y.
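The X/y layout described here can be sketched without scikit-learn at all. The toy 1-nearest-neighbour classifier below is a stdlib-only illustration (not scikit-learn's implementation) of the convention: each row of X is one source, each column one feature value, and y holds one known label per row.

```python
def nearest_neighbor_predict(X_train, y_train, x_new):
    """Return the label of the training row closest to x_new (squared Euclidean)."""
    def sq_dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    best_idx = min(range(len(X_train)), key=lambda i: sq_dist(X_train[i], x_new))
    return y_train[best_idx]

# rows = individual sources, columns = feature values (made-up numbers)
X = [[5.1, 3.5], [4.9, 3.0], [6.7, 3.1], [6.3, 2.5]]
y = [0, 0, 1, 1]  # known classification, one entry per row of X
print(nearest_neighbor_predict(X, y, [5.0, 3.4]))  # -> 0
```

In scikit-learn the same layout is consumed uniformly as `model.fit(X, y)` followed by `model.predict(X_new)`, regardless of which algorithm `model` is.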
Step3: You likely haven't encountered a scikit-learn Bunch before. Its functionality is essentially the same as a dictionary's.
Step4: Most importantly, iris contains data and target values. These are all you need for scikit-learn, though the feature and target names and description are useful.
Step5: Problem 1d What is the shape and content of the iris target?
Step6: Finally, as a baseline for the exercises that follow, we will now make a simple 2D plot showing the separation of the 3 classes in the iris dataset. This plot will serve as the reference for examining the quality of the clustering algorithms.
Step7: Problem 2) Unsupervised Machine Learning
Step8: With 3 clusters the algorithm does a good job of separating the three classes. However, without the a priori knowledge that there are 3 different types of iris, the 2 cluster solution would appear to be superior.
Step9: A random aside that is not particularly relevant here
Step10: Petal length has the largest range and standard deviation, thus, it will have the most "weight" when determining the $k$ clusters.
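The usual remedy is to standardise each feature before clustering. A minimal stdlib sketch of what `StandardScaler` computes per column (subtract the column mean, divide by the column standard deviation), on made-up values at roughly the iris petal-length scale:

```python
import math

def zscore(column):
    """Scale one feature column to zero mean and unit standard deviation."""
    mean = sum(column) / len(column)
    std = math.sqrt(sum((x - mean) ** 2 for x in column) / len(column))
    return [(x - mean) / std for x in column]

petal_length = [1.4, 1.3, 4.7, 4.5, 6.0, 5.1]
print(zscore(petal_length))
```

After scaling, every feature contributes comparably to the Euclidean distances that $k$-means minimises, instead of the widest-ranging feature dominating.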
Step11: These results are almost identical to those obtained without scaling. This is due to the simplicity of the iris data set.
Step13: I was unable to obtain 3 clusters with DBSCAN. While these results are, on the surface, worse than what we got with $k$-means, my suspicion is that the 4 features do not adequately separate the 3 classes. [See - a naysayer can always make that argument.] This is not a problem for DBSCAN as an algorithm, but rather, evidence that no single algorithm works well in all cases.
Step14: I have used my own domain knowledge to specifically choose features that may be useful when clustering galaxies. If you know a bit about SDSS and can think of other features that may be useful feel free to add them to the query.
Step15: Note - I was unable to get the galaxies to cluster using DBSCAN.
Step16: These results are almost identical to the training classifications. However, we have cheated! In this case we are evaluating the accuracy of the model (98% in this case) using the same data that defines the model. Thus, what we have really evaluated here is the training error. The relevant parameter, however, is the generalization error
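`cross_val_predict` automates this separation of training and evaluation data. A stdlib sketch of the core splitting step it relies on (a simplified stand-in — scikit-learn additionally shuffles and stratifies; the folds here are contiguous for clarity):

```python
def kfold_indices(n_samples, k):
    """Yield (train, test) index lists for k contiguous folds."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n_samples))
        yield train, test
        start += size

for train, test in kfold_indices(10, 3):
    print(test)  # each sample appears in exactly one test fold
```

Because every source is predicted by a model that never saw it during training, the averaged score estimates the generalization error rather than the training error.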
Step17: While it is useful to understand the overall accuracy of the model, it is even more useful to understand the nature of the misclassifications that occur.
Step18: We just found that the classifier does a much better job classifying setosa and versicolor than it does for virginica. The main reason for this is that some virginica flowers lie far outside the main virginica locus, and within predominantly versicolor "neighborhoods". In addition to knowing the accuracy for the individual classes, it is also useful to know class predictions for the misclassified sources, or in other words where there is "confusion" for the classifier. The best way to summarize this information is with a confusion matrix. In a confusion matrix, one axis shows the true class and the other shows the predicted class. For a perfect classifier all of the power will be along the diagonal, while confusion is represented by off-diagonal signal.
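The bookkeeping behind a confusion matrix is small enough to write out by hand; this stdlib sketch mirrors what `sklearn.metrics.confusion_matrix` returns (rows index the true class, columns the predicted class), on toy labels rather than the iris data:

```python
def confusion(true_labels, pred_labels, n_classes):
    """Count how often true class t was predicted as class p."""
    cm = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(true_labels, pred_labels):
        cm[t][p] += 1
    return cm

true_y = [0, 0, 1, 1, 2, 2, 2]
pred_y = [0, 0, 1, 2, 2, 2, 1]
for row in confusion(true_y, pred_y, 3):
    print(row)
```

All power on the diagonal means a perfect classifier; off-diagonal entries such as `cm[1][2]` count the confusions (true class 1 predicted as class 2).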
Step19: From this representation, we see right away that most of the virginica that are being misclassified are being scattered into the versicolor class. However, this representation could still be improved
Step20: The normalization makes it easier to compare the classes, since each class has a different number of sources. Now we can procede with a visual representation of the confusion matrix. This is best done using imshow() within pyplot. You will also need to plot a colorbar, and labeling the axes will also be helpful.
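The normalisation step itself is a one-liner per row; a numpy-free sketch of what the `cm.astype('float')/cm.sum(axis = 1)[:,np.newaxis]` expression computes, on toy counts:

```python
cm = [[50, 0, 0], [0, 47, 3], [0, 10, 40]]
normalized = [[count / sum(row) for count in row] for row in cm]
for row in normalized:
    print([round(v, 2) for v in row])  # every row now sums to 1
```

Each row then shows per-class fractions, so classes with different numbers of sources become directly comparable.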
|
8,972
|
<ASSISTANT_TASK:>
Python Code:
from lingpy import *
seq1, seq2, seq3, seq4, seq5 = "th o x t a", "thoxta", "apfəl", "tʰoxtɐ", "dɔːtər"
print(seq1, "\t->\t", '\t'.join(ipa2tokens(seq1)))
print(seq2, " \t->\t", '\t'.join(ipa2tokens(seq2)))
print(seq2, " \t->\t", '\t'.join(ipa2tokens(seq2, semi_diacritics="h")))
print(seq3, " \t->\t", '\t'.join(ipa2tokens(seq3)))
print(seq3, " \t->\t", '\t'.join(ipa2tokens(seq3, semi_diacritics="f")))
print(seq4, " \t->\t", '\t'.join(ipa2tokens(seq4)))
print(seq5, " \t->\t", '\t'.join(ipa2tokens(seq5)))
word = "θiɣatɛra"
segs = ipa2tokens(word)
# iterate over sound class models and write them in converted version
for m in ['dolgo', 'sca', 'asjp', 'art']:
    print(word, ' -> ', ''.join(tokens2class(segs, m)), '({0})'.format(m))
from collections import defaultdict
def check_sequence(seq):
    """Takes a segmented string as input and returns erroneously converted segments."""
    cls = tokens2class(seq, 'dolgo') # doesn't matter which model to take, all cover the same character range
    errors = defaultdict(int)
    for t, c in zip(seq, cls):
        if c == '0':
            errors[t] += 1
    return errors
word = "θiɣatEra"
seq = ipa2tokens(word)
for error, count in check_sequence(seq).items():
    print("The symbol <{0}> occurs {1} times and is not recognized.".format(error, count))
from lingpy import *
# load the wordlist
wl = Wordlist('polynesian.tsv')
# count number of languages, number of rows, number of concepts
print("Wordlist has {0} languages and {1} concepts across {2} rows.".format(wl.width, wl.height, len(wl)))
# get all indices for concept "hand", `row` refers to the concepts here, while `col` refers to languages
eight = wl.get_dict(row='Eight', entry='value')
for taxon in ['Emae_1030', 'RennellBellona_206', 'Tuvalu_753', 'Sikaiana_243', 'Penrhyn_235', 'Kapingamarangi_217']:
    print('{0:20}'.format(taxon), ' \t', ', '.join(eight[taxon]))
from lingpy.compare.sanity import mutual_coverage_check, mutual_coverage_subset
for i in range(210, 0, -1):
    if mutual_coverage_check(wl, i):
        print("Minimal mutual coverage is at {0} concept pairs.".format(i))
        break
count, results = mutual_coverage_subset(wl, 200)
coverage, languages = results[0]
print('Found {0} languages with an average mutual coverage of {1}.'.format(count, coverage))
# write word list to file
wl.output("tsv", filename="mikronesian", subset=True, rows=dict(doculect = "in "+str(languages)),
ignore='all', prettify=False)
# load the smaller word list
wl = Wordlist('mikronesian.tsv')
# print basic characteristics
print("The new word list has {0} languages and {1} concepts across {2} words.".format(
wl.width, wl.height, len(wl)))
msa = Multiple(['β a r u', 'v a ŋ g u', 'v a l u', 'v a l u', 'v a r u', 'w a l u'])
msa.prog_align()
print(msa)
msa.lib_align()
print(msa)
words = ['j a b l o k o', 'j a b ə l k o', 'j a b l k o', 'j a p k o']
msa = Multiple(words)
msa.prog_align()
print(msa)
print('There is {0} swap in the alignment.'.format('no' if not msa.swap_check(swap_penalty=-1) else 'a'))
lex = LexStat('mikronesian.tsv', check=True, segments='tokens')
wl = Wordlist('mikronesian.tsv')
# add new column "segments" and replace data from column "tokens"
wl.add_entries('segments', 'tokens', lambda x: ['A' if y == 'e' else y for y in x])
lex = LexStat(wl, segments='segments', check=True)
lex = LexStat('mikronesian.tsv', segments='tokens', check=True)
# run the dolgopolsky (turchin) analysis, which is threshold-free
lex.cluster(method='turchin')
# show the cognate sets, stored in "turchinid" for the words for "Eight"
eight = lex.get_dict(row='Eight') # get a dictionary with language as key for concept "eight"
for k, v in eight.items():
    idx = v[0] # index of the word, it gives us access to all data
    print("{0:20} \t {1} \t{2}".format(lex[idx, 'doculect'], lex[idx, 'value'], lex[idx, 'turchinid']))
lex.cluster(method="sca", threshold=0.45)
for k, v in eight.items():
    idx = v[0]
    print("{0:20} \t {1} \t{2} \t {3} ".format(
        lex[idx, 'doculect'],
        lex[idx, 'value'],
        lex[idx, 'turchinid'],
        lex[idx, 'scaid']))
lex.get_scorer(runs=10000)
lex.output('tsv', filename='mikronesian.bin')
lex.cluster(method='lexstat', threshold=0.60)
for k, v in eight.items():
    idx = v[0]
    print("{0:20} \t {1} \t{2} \t {3} \t {4}".format(
        lex[idx, 'doculect'],
        lex[idx, 'value'],
        lex[idx, 'turchinid'],
        lex[idx, 'scaid'],
        lex[idx, 'lexstatid']
    ))
lex.cluster(method="lexstat", threshold=0.55, ref="infomap", cluster_method='infomap')
for k, v in eight.items():
    idx = v[0]
    print("{0:20} \t {1} \t{2} \t {3} \t {4} \t {5}".format(
        lex[idx, 'doculect'],
        lex[idx, 'value'],
        lex[idx, 'turchinid'],
        lex[idx, 'scaid'],
        lex[idx, 'lexstatid'],
        lex[idx, 'infomap']
    ))
lex.output('tsv', filename='mikronesian-lexstat', ignore='all', prettify=False)
lex = LexStat('mikronesian.bin.tsv')
alm = Alignments('mikronesian-lexstat.tsv', ref='infomap', segments='tokens') # `ref` indicates the column with the cognate sets
alm.align(method='progressive', scoredict=lex.cscorer)
for cog in ['1']:
    msa = alm.msa['infomap'][cog]
    for i, idx in enumerate(msa['ID']):
        print(
            '{0:20}'.format(msa['taxa'][i]),
            '\t',
            alm[idx, 'concept'],
            '\t',
            '\t'.join(msa['alignment'][i])
        )
alm.output('tsv', filename='mikronesian-aligned', ignore='all', prettify=False)
from IPython.display import YouTubeVideo
YouTubeVideo('IyZuf6SmQM4')
from lingpy.evaluate.acd import bcubes, diff
wl = Wordlist('mikronesian-lexstat.tsv')
for res in ['turchinid', 'scaid', 'lexstatid', 'infomap']:
    print('{0:10}\t{1[0]:.2f}\t{1[1]:.2f}\t{1[2]:.2f}'.format(
        res,
        bcubes(wl, 'cogid', res, pprint=False)
    ))
diff(wl, 'cogid', 'infomap', pprint=False)
wl = Wordlist('mikronesian-lexstat.tsv')
wl.output('paps.nex', filename='mikronesian', ref='infomap', missing='?')
wl.calculate('dst', ref='infomap', mode='swadesh')
wl.output('dst', filename='mikronesian')
wl.calculate('tree', tree_calc='upgma')
wl.output('tre', filename='mikronesian')
print(wl.tree.asciiArt())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: You can see from these examples, that LingPy's ipa2tokens function automatically identifies diacritics and the like, but that you can also tweak it to some extent. If the sequence contains white space, as in the first example, ipa2tokens will split by white space and assume that the data is already segmented. We won't go into the details of this and other functions here, but you should consider giving the documentation a proper read before you start spending time on segmenting your data manually. At the same time, when trusting LingPy's default algorithm for segmentation, you should always make sure after using it that the segmentations make sense. If they are largely wrong or problematic, you should refine them before running any automatic cognate detection method.
Step3: Note as a last point that the conversion to sound classes is the major check whether LingPy has "understood" your input. If LingPy does not find a class symbol corresponding to a given segment, it will use the default character "0" to indicate this failure of converting a given sound sequence. This zero will be treated as an uninvited guest in most comparisons. It won't be aligned with other elements and will score negatively in the automatic cognate detection routines. You should thus try to avoid this by making sure that your sequences do not contain any errors. When carrying out cognate the detection analysis, we have a specific keyword check which you can set to True to make sure that all sequences with zeros in sound classes are excluded before the analysis is carried out. But you can easily write a Python function to check yourself in only a few lines (write $ python3 autocogs.py errors in commandline)
Step4: Loading the Data into a Wordlist Object
Step5: So by accessing the attribute width we retrieve the number of languages, and with height we retrieve the number of concepts. This follows the logic inherent in the classical format in which linguists prepare their spreadsheets, namely by placing concepts in the first column and languages in the rest of the columns. Classical linguists would thus represent the data from the table above as follows
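The reshaping the Wordlist performs can be sketched with plain dictionaries: the input file stores one word per row (language, concept, form), and the classical tabular view pivots that into concepts-by-languages. The four rows below are invented for illustration.

```python
rows = [
    ("German",  "hand",  "hant"),
    ("English", "hand",  "hænd"),
    ("German",  "eight", "axt"),
    ("English", "eight", "eɪt"),
]

# pivot long format (one word per row) into a concept -> {language: form} table
table = {}
for language, concept, form in rows:
    table.setdefault(concept, {})[language] = form

for concept in sorted(table):
    print(concept, table[concept])
```

`Wordlist.get_dict(row=...)` returns essentially one such inner dictionary (languages as keys, with lists of values).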
Step6: Checking Coverage
Step7: This value is definitely good enough for our purpose, given the rule of thumb which says that below a minimal mutual coverage of 100 one should not do language-specific cognate detection analyses. If the coverage is lower, this does not mean you need to give up automatic cognate detection, but it means you should not use the language-specific LexStat method but rather a language-independent method, which does not require the information on potential sound correspondences (but will also tend to identify more false positives).
Step8: Phonetic Alignment
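Underneath `prog_align()`, progressive multiple alignment repeatedly applies pairwise global alignment along a guide tree. A stdlib sketch of that pairwise building block — Needleman-Wunsch with toy match/mismatch/gap scores, not LingPy's sound-class scoring:

```python
def nw_align(a, b, match=1, mismatch=-1, gap=-1):
    """Globally align two segment lists, returning them padded with '-' gaps."""
    n, m = len(a), len(b)
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            score[i][j] = max(score[i - 1][j - 1] + sub,
                              score[i - 1][j] + gap,
                              score[i][j - 1] + gap)
    out_a, out_b, i, j = [], [], n, m  # traceback from the bottom-right corner
    while i > 0 or j > 0:
        sub = match if i > 0 and j > 0 and a[i - 1] == b[j - 1] else mismatch
        if i > 0 and j > 0 and score[i][j] == score[i - 1][j - 1] + sub:
            out_a.append(a[i - 1]); out_b.append(b[j - 1]); i -= 1; j -= 1
        elif i > 0 and score[i][j] == score[i - 1][j] + gap:
            out_a.append(a[i - 1]); out_b.append('-'); i -= 1
        else:
            out_a.append('-'); out_b.append(b[j - 1]); j -= 1
    return out_a[::-1], out_b[::-1]

a1, a2 = nw_align(list("valu"), list("vangu"))
print(' '.join(a1))
print(' '.join(a2))
```

LingPy replaces the flat match/mismatch scores with sound-class-based substitution scores, which is what makes the alignments phonetically sensible.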
Step9: There are more complicated algorithms available, for example, library-based alignment, following the T-Coffee algorithm (Notredame et al. 2000), based on a so-called "library" which is created before the tree is built. This algorithm can be used like this ($ python3 autocogs.py alignments2 in commandline)
Step10: The results are still the same, but it was shown in List (2014) that this algorithm largely enhances more complex alignments.
Step11: Unfortunately, we do not have enough time to go into all the details of alignment analyses, and especially more complex aspects of different alignment modes (global, local, semi-global, etc.), different basic algorithms (extended algorithm for secondary alignments), but also how sound classes, the internal representation format in LingPy, are integrated into the algorithms, cannot be treated here in full. I can only refer the interested users to both the extensive online documentation at lingpy.org, as well as my aforementioned book on sequence comparison.
Step12: If you have problems in your code, you will be asked if you want to exclude the sequences automatically. As a result, a logfile, called errors.log will be created and point you to all erroneous sequences which contain segments which LingPy does not recognize. Let us quickly introduce some bad sequences by just converting randomly all [e] sounds to the latter A (capitals are never accepted in the normal sound class models of LingPy) and see what we get then. For this, we even do not need to re-write the data, we just add another row where we change the content, give it a random name (we call it "tokens", as this also signals LingPy that the input should be treated as a sequence and not as a string), and specify this for the LexStat instance method as the column in the file where the segments are. We first load the data as Wordlist and then pass that data directly to LexStat ($ python3 autocogs.py trigger-errors in commandline)
Step13: If you now check the file errors.log, you will find a long file with the following first ten lines
Step14: We now do the same for the "sca" method, but since these methods are not threshold-free, we will need to define a threshold. We follow the default value we know from experience, which is 0.45. We then print out the same data, but this time including the cognate judgments by all three methods ($ python3 autocogs.py cognates-sca in commandline)
Step15: We are now ready to do the same analysis with the "lexstat" method. This will take some time due to the permutation test. In order to make sure we do not need to run this all the time, we will save the data immediately after running the permutation to a file which we give the extension "bin.tsv", and which we can load in case we want to carry out further tests, or which we can otherwise also share when publishing results, as it contains all the data needed to re-run the analyses on a different machine. LingPy writes a lot of data into the wordlist objects as a default. If you want to have the plain text file, add the keywords prettify=False and ignore='all' to the output-statement. But in this case, if we want to store the results of the permutation test, we need to store the whole file, including the language-specific scorer ($ python3 autocogs.py cognates-lexstat in commandline)
Step16: You can see that there is not much difference in the results for this very item, but you should not underestimate the different power of the methods, as we will see later on when running an evaluatin analysis. For now, trust me that in general the results are quite different.
Step17: Well, no improvement for "eight", but we will see later in detail, and for now, we just write the data to file, this time in plain text, without the additional information, but with the additional columns with our analyses.
Step18: Aligning the Results
Step19: This was not very spectacular, as we have not yet seen what happened. We can visualize the alignments from the commandline by picking a particular cognate set and printing the alignments on screen. The alignments are added in a specific column called alignments as a default (but which can be modified by specifying another value with the keyword alignments passed to the initialization method for the Alignments class). Additionally, they are in a class attribute, which is called Alignments.msa and which can store multiple different alignment analyses. This code is a bit hacky, due to the nested dictionary, so we won't go into the details right now but just illustrate how we can print a couple of the aligned cognate sets (python3 autocogs.py alignments5 in commandline)
Step20: Again the eight, although this was not planned. But now let's quickly save the data to file, so that we can go on and inspect the findings further (commandline covers this command via python3 autocogs.py alignments4)
Step21: Further Use
Step22: Evaluation with LingPy
Step23: You can see that the "infomap" method is in fact working one point better than the normal "lexstat" method, and you can also see how deep the difference between the correspondence-informed methods and the other methods is. As a last way to inspect the data, we will now use the diff function to create a file that contrasts the expert cognate sets with the ones inferred by Infomap ($ python3 autocogs.py diff in terminal).
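The B-Cubed precision and recall scores reported here can be sketched in a few lines. This is a simplified stand-in for `lingpy.evaluate.acd.bcubes` (which additionally works per concept slot); here a single flat list of toy cognate-set ids is enough to show the idea: for every word, compare the set of words sharing its inferred id with the set sharing its gold id, then average.

```python
def bcubed(gold, pred):
    """Average per-word precision/recall of predicted vs. gold cluster ids."""
    n = len(gold)
    precision = recall = 0.0
    for i in range(n):
        g = {j for j in range(n) if gold[j] == gold[i]}  # gold cognate set of word i
        p = {j for j in range(n) if pred[j] == pred[i]}  # inferred cognate set
        overlap = len(g & p)
        precision += overlap / len(p)
        recall += overlap / len(g)
    return precision / n, recall / n

gold_ids = [1, 1, 1, 2, 2]  # toy cognate-set ids for five words
pred_ids = [1, 1, 2, 2, 2]  # one word placed in the wrong set
p, r = bcubed(gold_ids, pred_ids)
print(round(p, 3), round(r, 3))
```

A perfect clustering scores 1.0 on both; lumping sets hurts precision, splitting them hurts recall.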
Step24: This will create a file called "mikronesian-lexstat.tsv.diff" in your folder. If you open this file, you will see that it contrasts the "cogid" with the "infomap" numbers by putting them with words and languages in two columns
Step25: The file mikronesian.paps.nex then looks like this
Step26: If you open the file, you will see that it follows strictly the Phylip distance format which also cuts off all language names longer than 10 characters (but there are ways to modify this, I can't show them now)
|
8,973
|
<ASSISTANT_TASK:>
Python Code:
import essentia.streaming as ess
import essentia
audio_file = '../../../test/audio/recorded/dubstep.flac'
# Initialize algorithms we will use.
loader = ess.MonoLoader(filename=audio_file)
framecutter = ess.FrameCutter(frameSize=4096, hopSize=2048, silentFrames='noise')
windowing = ess.Windowing(type='blackmanharris62')
spectrum = ess.Spectrum()
spectralpeaks = ess.SpectralPeaks(orderBy='magnitude',
magnitudeThreshold=0.00001,
minFrequency=20,
maxFrequency=3500,
maxPeaks=60)
# Use default HPCP parameters for plots.
# However we will need higher resolution and custom parameters for better Key estimation.
hpcp = ess.HPCP()
hpcp_key = ess.HPCP(size=36, # We will need higher resolution for Key estimation.
referenceFrequency=440, # Assume tuning frequency is 440 Hz.
bandPreset=False,
minFrequency=20,
maxFrequency=3500,
weightType='cosine',
nonLinear=False,
windowSize=1.)
key = ess.Key(profileType='edma', # Use profile for electronic music.
numHarmonics=4,
pcpSize=36,
slope=0.6,
usePolyphony=True,
useThreeChords=True)
# Use pool to store data.
pool = essentia.Pool()
# Connect streaming algorithms.
loader.audio >> framecutter.signal
framecutter.frame >> windowing.frame >> spectrum.frame
spectrum.spectrum >> spectralpeaks.spectrum
spectralpeaks.magnitudes >> hpcp.magnitudes
spectralpeaks.frequencies >> hpcp.frequencies
spectralpeaks.magnitudes >> hpcp_key.magnitudes
spectralpeaks.frequencies >> hpcp_key.frequencies
hpcp_key.hpcp >> key.pcp
hpcp.hpcp >> (pool, 'tonal.hpcp')
key.key >> (pool, 'tonal.key_key')
key.scale >> (pool, 'tonal.key_scale')
key.strength >> (pool, 'tonal.key_strength')
# Run streaming network.
essentia.run(loader)
print("Estimated key and scale:", pool['tonal.key_key'] + " " + pool['tonal.key_scale'])
import IPython
IPython.display.Audio(audio_file)
# Plots configuration.
import matplotlib.pyplot as plt
from pylab import plot, show, figure, imshow
plt.rcParams['figure.figsize'] = (15, 6)
# Plot HPCP.
imshow(pool['tonal.hpcp'].T, aspect='auto', origin='lower', interpolation='none')
plt.title("HPCPs in frames (the 0-th HPCP coefficient corresponds to A)")
show()
print("Estimated key and scale:", pool['tonal.key_key'] + " " + pool['tonal.key_scale'])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The audio we have just analyzed
Step2: Let's plot the resulting HPCP
Step3: Here we have plotted a 12-bin HPCPgram with default parameters and bins corresponding to semitones from A to G#.
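The mapping from a frequency to one of those 12 pitch-class bins is a small log computation. A sketch (assuming the 440 Hz reference used in this notebook, and ignoring HPCP's peak weighting and interpolation):

```python
import math

def pitch_class_bin(freq_hz, reference_hz=440.0, size=12):
    """Map a frequency to a pitch-class bin, with the reference (A) at bin 0."""
    return round(size * math.log2(freq_hz / reference_hz)) % size

print(pitch_class_bin(440.0))   # A            -> 0
print(pitch_class_bin(880.0))   # A, +1 octave -> 0
print(pitch_class_bin(466.16))  # A#/Bb        -> 1
```

Because of the modulo, all octaves of the same note fold into one bin, which is why HPCP profiles are octave-invariant.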
|
8,974
|
<ASSISTANT_TASK:>
Python Code:
#example
import numpy as np
example_data_do_not_use = [4,3,6,3]
print(sum(example_data_do_not_use))
data=[13,13,11,11,12,10,14,14,8,11,14,10,16,11,11,15,12,13,12,11,13,12,14,10,9,12,13,14,14,10,15,13,12,12,13,10,12,10,13,13,14,8,14,11,9,13,10,11,9,9,15,12,14,10,16,14,9,10,12,13,8,11,16,13,10,10,13,10,11,11,14,7,12,14,13,13,9,9,13,10,12,12,13,12,10,10,13,11,15,13,13,17,9,12,12,9,12,9,10,12]
np.mean(data)
np.std(data, ddof=1)
data2=[16,15,14,13,16,12,15,15,9,13,17,13,19,14,16,18,15,14,14,14,14,14,15,14,13,14,16,18,15,13,17,16,14,16,17,13,16,13,17,16,16,11,18,12,12,16,13,15,14,11,15,17,17,15,20,16,11,14,14,15,11,14,19,16,13,11,13,11,13,15,16,9,13,15,15,15,10,11,17,11,15,15,16,15,12,12,16,13,17,17,15,18,11,16,15,11,15,12,14,16]
### BEGIN SOLUTION
np.corrcoef(data, data2)
### END SOLUTION
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data
Step2: Problem 1
Step3: Problem 2
Step4: Problem 3
|
8,975
|
<ASSISTANT_TASK:>
Python Code:
import math
import matplotlib.pyplot as plt
import numpy as np
import scipy
import scipy.stats
TRUE_MEAN = 40
TRUE_STD = 10
X = np.random.normal(TRUE_MEAN, TRUE_STD, 1000)
def normal_mu_MLE(X):
    # Get the number of observations
    T = len(X)
    # Sum the observations
    s = sum(X)
    return 1.0/T * s
def normal_sigma_MLE(X):
    T = len(X)
    # Get the mu MLE
    mu = normal_mu_MLE(X)
    # Sum the square of the differences
    s = np.sum((X - mu) ** 2)
    # Compute sigma^2
    sigma_squared = 1.0/T * s
    return math.sqrt(sigma_squared)
print "Mean Estimation"
print normal_mu_mle(X)
print np.mean(X)
print "Standard Deviation Estimation"
print normal_sigma_mle(X)
print np.std(X)
mu, std = scipy.stats.norm.fit(X)
print "mu estimate: " + str(mu)
print "std estimate: " + str(std)
pdf = scipy.stats.norm.pdf
# We would like to plot our data along an x-axis ranging from 0-80 with 80 intervals
# (increments of 1)
x = np.linspace(0, 80, 80)
h = plt.hist(X, bins=x, normed='true')
l = plt.plot(pdf(x, loc=mu, scale=std))
TRUE_LAMBDA = 5
X = np.random.exponential(TRUE_LAMBDA, 1000)
def exp_lamda_MLE(X):
    T = len(X)
    s = sum(X)
    return s/T
print "lambda estimate: " + str(exp_lamda_MLE(X))
# The scipy version of the exponential distribution has a location parameter
# that can skew the distribution. We ignore this by fixing the location
# parameter to 0 with floc=0
_, l = scipy.stats.expon.fit(X, floc=0)
pdf = scipy.stats.expon.pdf
x = range(0, 80)
h = plt.hist(X, bins=x, normed='true')
l = plt.plot(pdf(x, scale=l))
prices = get_pricing('TSLA', fields='price', start_date='2014-01-01', end_date='2015-01-01')
# This will give us the number of dollars returned each day
absolute_returns = np.diff(prices)
# This will give us the percentage return over the last day's value
# the [:-1] notation gives us all but the last item in the array
# We do this because there are no returns on the final price in the array.
returns = absolute_returns/prices[:-1]
mu, std = scipy.stats.norm.fit(returns)
pdf = scipy.stats.norm.pdf
x = np.linspace(-1,1, num=100)
h = plt.hist(returns, bins=x, normed='true')
l = plt.plot(x, pdf(x, loc=mu, scale=std))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Normal Distribution
Step2: Now we'll define functions that given our data, will compute the MLE for the $\mu$ and $\sigma$ parameters of the normal distribution.
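The estimators those functions implement have a short closed-form derivation; maximising the normal log-likelihood (a standard result, stated here for reference):

```latex
\ell(\mu,\sigma^2)
  = -\frac{T}{2}\log\!\left(2\pi\sigma^2\right)
    - \frac{1}{2\sigma^2}\sum_{t=1}^{T}\left(x_t-\mu\right)^2,
\qquad
\frac{\partial \ell}{\partial \mu} = 0,\;
\frac{\partial \ell}{\partial \sigma^2} = 0
\;\Longrightarrow\;
\hat{\mu} = \frac{1}{T}\sum_{t=1}^{T} x_t,
\quad
\hat{\sigma}^2 = \frac{1}{T}\sum_{t=1}^{T}\left(x_t-\hat{\mu}\right)^2.
```

Note the $1/T$ (not $1/(T-1)$) in $\hat{\sigma}^2$: the MLE is the biased variance estimator, which also matches what `np.std` computes with its default `ddof=0`.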
Step3: Now let's try our functions out on our sample data and see how they compare to the built-in np.mean and np.std
Step4: Now let's estimate both parameters at once with scipy's built in fit() function.
Step5: Now let's plot the distribution PDF along with the data to see how well it fits. We can do that by accessing the pdf provided in scipy.stats.norm.pdf.
Step6: Exponential Distribution
Step7: numpy defines the exponential distribution as
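Whichever parameterisation is used, the MLE derivation is short. For the rate parameterisation $f(x;\lambda)=\lambda e^{-\lambda x}$:

```latex
\ell(\lambda) = T\log\lambda - \lambda\sum_{t=1}^{T} x_t,
\qquad
\frac{\partial \ell}{\partial \lambda}
  = \frac{T}{\lambda} - \sum_{t=1}^{T} x_t = 0
\;\Longrightarrow\;
\hat{\lambda} = \frac{T}{\sum_{t=1}^{T} x_t}.
```

`np.random.exponential` is parameterised by the scale $\beta = 1/\lambda$ instead, whose MLE is the sample mean $\hat{\beta} = \frac{1}{T}\sum_t x_t$ — which is why `exp_lamda_MLE` returns `s/T`.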
Step8: MLE for Asset Returns
Step9: Let's use scipy's fit function to get the $\mu$ and $\sigma$ MLEs.
|
8,976
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt, numpy as np
import dismod_mr
models = {}
#iter=101; burn=0; thin=1 # use these settings to run faster
iter=10_000; burn=5_000; thin=5 # use these settings to make sure MCMC converges
model = dismod_mr.load('pd_sim_data/')
model.keep(areas=['GBR'], sexes=['female', 'total'])
model.setup_model()
%time model.fit(iter=iter, burn=burn, thin=thin)
models['p, i, r, smr'] = model
model.plot()
models
model = dismod_mr.load('pd_sim_data/')
model.keep(areas=['GBR'], sexes=['female', 'total'])
model.input_data = model.input_data[model.input_data.data_type != 'i']
print('kept %d rows' % len(model.input_data.index))
model.setup_model()
%time model.fit(iter=iter, burn=burn, thin=thin)
models['p, r, smr'] = model
model.plot()
model = dismod_mr.load('pd_sim_data/')
model.keep(areas=['GBR'], sexes=['female', 'total'])
model.input_data = model.input_data[model.input_data.data_type == 'p']
print('kept %d rows' % len(model.input_data.index))
model.setup_model()
%time model.fit(iter=iter, burn=burn, thin=thin)
# the above took 20 minutes in 2013
models['p, r'] = model
model.plot()
model = dismod_mr.load('pd_sim_data/')
model.keep(areas=['GBR'], sexes=['female', 'total'])
model.input_data = model.input_data[model.input_data.data_type == 'p']
print('kept %d rows' % len(model.input_data.index))
model.set_level_bounds('r', 0., 1.)
model.set_level_value('r', age_before=0., age_after=101., value=0)
model.setup_model()
%time model.fit(iter=iter, burn=burn, thin=thin)
models['p'] = model
model.plot()
for i, (label, model) in enumerate(models.items()):
    plt.hist(model.vars['p']['mu_age'].trace().mean(1), density=True, histtype='step',
             color=dismod_mr.plot.colors[i%4], linewidth=3, linestyle=['solid','dashed'][i//4],
             label=label)
plt.legend(loc=(1.1,.1))
plt.title('Posterior Distribution Comparison\nCrude Prevalence');
for i, (label, model) in enumerate(models.items()):
    plt.hist(model.vars['i']['mu_age'].trace().mean(1), density=True, histtype='step',
             color=dismod_mr.plot.colors[i%4], linewidth=3, linestyle=['solid','dashed'][i//4],
             label=label)
plt.legend(loc=(1.1,.1))
plt.title('Posterior Distribution Comparison\nCrude Incidence');
model = dismod_mr.load('pd_sim_data/')
model.keep(areas=['GBR'], sexes=['female', 'total'])
model.input_data = model.input_data[model.input_data.data_type != 'p']
print('kept %d rows' % len(model.input_data.index))
model.setup_model()
%time model.fit(iter=iter, burn=burn, thin=thin)
models['i, r, smr'] = model
model.plot()
model = dismod_mr.load('pd_sim_data/')
model.keep(areas=['GBR'], sexes=['female', 'total'])
model.input_data = model.input_data[model.input_data.data_type == 'i']
print('kept %d rows' % len(model.input_data.index))
model.set_level_bounds('r', 0., 1.)
model.setup_model()
%time model.fit(iter=iter, burn=burn, thin=thin)
models['i'] = model
model.plot()
model = dismod_mr.load('pd_sim_data/')
model.keep(areas=['GBR'], sexes=['female', 'total'])
model.input_data = model.input_data[model.input_data.data_type == 'smr']
print('kept %d rows' % len(model.input_data.index))
model.setup_model()
%time model.fit(iter=iter, burn=burn, thin=thin)
models['r, smr'] = model
model.plot()
for i, label in enumerate(['p', 'i, r, smr', 'i', 'r, smr']):
    try:
        plt.hist(models[label].vars['p']['mu_age'].trace().mean(1), density=False, histtype='step',
                 color=dismod_mr.plot.colors[i%4], linewidth=3, linestyle=['solid','dashed'][i//4],
                 label=label)
    except AttributeError as e:
        print(e)
plt.legend(loc=(1.1,.1))
plt.title('Posterior Distribution Comparison\nCrude Prevalence')
plt.axis(xmin=-.001);
!date
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Consistent fit with all data
Step2: Consistent fit without incidence
Step3: Consistent fit without incidence or mortality
Step4: Consistent fit with only prevalence
Step5: Comparison of alternative models
Step6: Consistent fit without prevalence
Step7: Consistent fit with incidence only
Step8: Consistent fit without prevalence or incidence
|
8,977
|
<ASSISTANT_TASK:>
Python Code:
import string

import numpy as np
from bqplot import pyplot as plt

data = np.random.rand(3)
fig = plt.figure(animation_duration=1000)
pie = plt.pie(data, display_labels="outside", labels=list(string.ascii_uppercase))
fig
n = np.random.randint(1, 10)
pie.sizes = np.random.rand(n)
with pie.hold_sync():
    pie.display_values = True
    pie.values_format = ".1f"
pie.sort = True
pie.selected_style = {"opacity": 1, "stroke": "white", "stroke-width": 2}
pie.unselected_style = {"opacity": 0.2}
pie.selected = [1]
pie.selected = None
pie.label_color = "Red"
pie.font_size = "20px"
pie.font_weight = "bold"
fig1 = plt.figure(animation_duration=1000)
pie1 = plt.pie(np.random.rand(6), inner_radius=0.05)
fig1
# As of now, the radius sizes are absolute, in pixels
with pie1.hold_sync():
    pie1.radius = 150
    pie1.inner_radius = 100
# Angles are in degrees, 0 being the top vertical
with pie1.hold_sync():
    pie1.start_angle = -90
    pie1.end_angle = 90
pie1.y = 0.1
pie1.x = 0.6
pie1.radius = 180
pie1.stroke = "brown"
pie1.colors = ["orange", "darkviolet"]
pie1.opacities = [0.1, 1]
fig1
from bqplot import ColorScale, ColorAxis
n = 7
size_data = np.random.rand(n)
color_data = np.random.randn(n)
fig2 = plt.figure()
plt.scales(scales={"color": ColorScale(scheme="Reds")})
pie2 = plt.pie(size_data, color=color_data)
fig2
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Update Data
Step2: Display Values
Step3: Enable sort
Step4: Set different styles for selected slices
Step5: For more on piechart interactions, see the Mark Interactions notebook
Step6: Update pie shape and style
Step7: Change pie dimensions
Step8: Move the pie around
Step9: Change slice styles
Step10: Represent an additional dimension using Color
|
8,978
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ncar', 'sandbox-2', 'seaice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Variables
Step7: 3. Key Properties --> Seawater Properties
Step8: 3.2. Ocean Freezing Point Value
Step9: 4. Key Properties --> Resolution
Step10: 4.2. Canonical Horizontal Resolution
Step11: 4.3. Number Of Horizontal Gridpoints
Step12: 5. Key Properties --> Tuning Applied
Step13: 5.2. Target
Step14: 5.3. Simulations
Step15: 5.4. Metrics Used
Step16: 5.5. Variables
Step17: 6. Key Properties --> Key Parameter Values
Step18: 6.2. Additional Parameters
Step19: 7. Key Properties --> Assumptions
Step20: 7.2. On Diagnostic Variables
Step21: 7.3. Missing Processes
Step22: 8. Key Properties --> Conservation
Step23: 8.2. Properties
Step24: 8.3. Budget
Step25: 8.4. Was Flux Correction Used
Step26: 8.5. Corrected Conserved Prognostic Variables
Step27: 9. Grid --> Discretisation --> Horizontal
Step28: 9.2. Grid Type
Step29: 9.3. Scheme
Step30: 9.4. Thermodynamics Time Step
Step31: 9.5. Dynamics Time Step
Step32: 9.6. Additional Details
Step33: 10. Grid --> Discretisation --> Vertical
Step34: 10.2. Number Of Layers
Step35: 10.3. Additional Details
Step36: 11. Grid --> Seaice Categories
Step37: 11.2. Number Of Categories
Step38: 11.3. Category Limits
Step39: 11.4. Ice Thickness Distribution Scheme
Step40: 11.5. Other
Step41: 12. Grid --> Snow On Seaice
Step42: 12.2. Number Of Snow Levels
Step43: 12.3. Snow Fraction
Step44: 12.4. Additional Details
Step45: 13. Dynamics
Step46: 13.2. Transport In Thickness Space
Step47: 13.3. Ice Strength Formulation
Step48: 13.4. Redistribution
Step49: 13.5. Rheology
Step50: 14. Thermodynamics --> Energy
Step51: 14.2. Thermal Conductivity
Step52: 14.3. Heat Diffusion
Step53: 14.4. Basal Heat Flux
Step54: 14.5. Fixed Salinity Value
Step55: 14.6. Heat Content Of Precipitation
Step56: 14.7. Precipitation Effects On Salinity
Step57: 15. Thermodynamics --> Mass
Step58: 15.2. Ice Vertical Growth And Melt
Step59: 15.3. Ice Lateral Melting
Step60: 15.4. Ice Surface Sublimation
Step61: 15.5. Frazil Ice
Step62: 16. Thermodynamics --> Salt
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Step65: 17.2. Constant Salinity Value
Step66: 17.3. Additional Details
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Step68: 18.2. Constant Salinity Value
Step69: 18.3. Additional Details
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Step72: 20.2. Additional Details
Step73: 21. Thermodynamics --> Melt Ponds
Step74: 21.2. Formulation
Step75: 21.3. Impacts
Step76: 22. Thermodynamics --> Snow Processes
Step77: 22.2. Snow Aging Scheme
Step78: 22.3. Has Snow Ice Formation
Step79: 22.4. Snow Ice Formation Scheme
Step80: 22.5. Redistribution
Step81: 22.6. Heat Diffusion
Step82: 23. Radiative Processes
Step83: 23.2. Ice Radiation Transmission
|
8,979
|
<ASSISTANT_TASK:>
Python Code:
## don't forget to
import numpy as np
## Q10 code
def change(item):
item = 100
print("before", list1)
change(list1[0])
print("after", list1)
## Q11 code
def change_first(collection):
collection[0] = 100
print("before", list1)
change_first(list1)
print("after", list1)
## Q12 code
x = 0
y = x
y = 50
print(x)
## Q13 code
list1 = list(range(5))
list2 = list1
list2[0] = 50
print(list1)
## Q14 code
list3 = list(list1)
list3[0] = 100
print(list1)
## Q15 code
array1 = np.array(range(5))
array2 = array1
array2[0] = 50
print(array1)
## Q16 code
array3 = np.array(array1)
array3[0] = 100
print(array1)
## Q17 code
dictionary1 = {"A":"alpha", "B":"beta", "C":"gamma"}
dictionary2 = dictionary1
dictionary2["A"] = "first letter"
print(dictionary1)
## Q18 code
dictionary3 = dict(dictionary1)
dictionary3["A"] = "T"
print(dictionary1)
## run this code to make a list
original = list(range(5))
print(original)
## run this code to make a new list of squares of original
## this version uses list comprehension
squares_lc = [x**2 for x in original]
print(squares_lc)
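The prompt below ("Write and run code that makes a new list, called squares, that contains the squares of the values in original using a for loop") has no matching cell in this dump; a minimal sketch of that cell, redefining `original` as in the cell above so it is self-contained:

```python
## for-loop version, equivalent to the list comprehension above
original = list(range(5))  # as defined in the earlier cell
squares = []
for x in original:
    squares.append(x**2)
print(squares)
```

This prints `[0, 1, 4, 9, 16]`, matching `squares_lc`.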
## run this code to make a new list of even numbers from original
## this version uses list comprehension
evens_lc = [x for x in original if x%2==0]
print(evens_lc)
## run this code
## list comprehension of loops below
[(x, y) for x in [1,2,3] for y in [3,1,4] if x != y]
## run this code
## loop version of list comprehension above
combs = []
for x in [1,2,3]:
    for y in [3,1,4]:
        if x != y:
            combs.append((x, y))
combs
## run this code
## list comprehension of loops below
[[x+y for x in ['A', 'B']] for y in ['C', 'D']]
## run this code
## loop version of list comprehension above
list_a = ['A', 'B']
list_b = ['C', 'D']
lcombs = []
for y in list_b:
    inner = []
    for x in list_a:
        inner.append(x + y)
    lcombs.append(inner)
lcombs
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 4b. Repeat Question 2a and 2b for a NumPy array
Step2: 10b. Now run the code, did the contents of list1 change?
Step3: 11b. Now run the code, Did the contents of list1 change?
Step4: 12b. Analysis
Step5: 13b. Analysis
Step6: 14b. Analysis
Step7: 15b. Analysis
Step8: 16b. Analysis
Step9: 17b. Analysis
Step10: 18b. Analysis
Step11: Write and run code that makes a new list, called squares that contains the squares of the values in original using a for loop.
Step12: 24a. Explain the results of the list comprehension above.
Step13: 25a. Explain the results of the list comprehension above.
|
8,980
|
<ASSISTANT_TASK:>
Python Code:
!conda install -c conda-forge google-cloud-bigquery google-cloud-bigquery-storage pyarrow pandas numpy matplotlib bokeh -y
!gradle -p ../../timeseries-java-applications forex_example --args='--resampleSec=5 --windowSec=60 --runner=DataflowRunner --workerMachineType=n1-standard-4 --project=<GCPPROJECT> --region=<REGION> --bigQueryTableForTSAccumOutputLocation=<GCPPROJECT>:<DATASET>.<TABLE> --gcpTempLocation=<GCPTMPLOCATION> --tempLocation=<TMPLOCATION> --inputPath=<INPUTDATASETLOCATION>'
import google.auth
from google.cloud import bigquery
from google.cloud import bigquery_storage
import pandas as pd
# Explicitly create a credentials object. This allows you to use the same
# credentials for both the BigQuery and BigQuery Storage clients, avoiding
# unnecessary API calls to fetch duplicate authentication tokens.
credentials, your_project_id = google.auth.default(
scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
# Make clients.
bqclient = bigquery.Client(credentials=credentials, project=your_project_id,)
bqstorageclient = bigquery_storage.BigQueryReadClient(credentials=credentials)
# Download results from BigQuery after the Beam pipeline finishes processing the time series data
query_string_60m = """
WITH BID_MA60_T AS (
SELECT lower_window_boundary, upper_window_boundary,
(SELECT dbl_data from UNNEST(data) WHERE metric = 'SIMPLE_MOVING_AVERAGE') AS BID_MA60_BQ
FROM `<GCPPROJECT>.<DATASET>.<TABLE>`
WHERE timeseries_minor_key = 'BID'
)
SELECT * from BID_MA60_T
"""
bid_ma60_df = (
bqclient.query(query_string_60m)
.result()
.to_dataframe(bqstorage_client=bqstorageclient)
).sort_values(by=['upper_window_boundary'])
bid_ma60_df.index = pd.to_datetime(bid_ma60_df['upper_window_boundary'])
bid_ma60_df.drop(columns=['upper_window_boundary', 'lower_window_boundary'],inplace=True)
query_string_stddev = """
WITH BID_STDDEV_T AS (
SELECT lower_window_boundary, upper_window_boundary,
(SELECT dbl_data from UNNEST(data) WHERE metric = 'STANDARD_DEVIATION') AS BID_STDDEV_BQ
FROM `<GCPPROJECT>.<DATASET>.<TABLE>`
WHERE timeseries_minor_key = 'BID'
)
SELECT * from BID_STDDEV_T
"""
bid_std_dev_df = (
bqclient.query(query_string_stddev)
.result()
.to_dataframe(bqstorage_client=bqstorageclient)
).sort_values(by=['upper_window_boundary'])
bid_std_dev_df.index = pd.to_datetime(bid_std_dev_df['upper_window_boundary'])
bid_std_dev_df.drop(columns=['upper_window_boundary', 'lower_window_boundary'],inplace=True)
# Showing Moving Average sample
bid_ma60_df
# Showing Moving Average sample
bid_std_dev_df
import pandas as pd
import numpy as np  # np.mean / np.std are used in the rolling windows below
eurusd = pd.read_csv("../../timeseries-java-applications/Examples/src/main/resources/EURUSD-2020-05-11_2020-05-11.csv",
index_col=1, names=["Pair", "Timestamp", "Ask", "Bid", "Ask Volume", "Bid Volume"], header=None)
eurusd.index = pd.to_datetime(eurusd.index)
eurusd_bid_ma60 = eurusd.rename(columns={"Bid": "Bid MA60 Pandas"})\
    .resample("5S")\
    .fillna(method='ffill')['Bid MA60 Pandas']\
    .rolling(window=12, min_periods=1)\
    .apply(lambda x: np.mean(x))
eurusd_bid_stddev = eurusd.rename(columns={"Bid": "Bid StdDev Pandas"})\
    .resample("5S")\
    .fillna(method='ffill')['Bid StdDev Pandas']\
    .rolling(window=12, min_periods=1)\
    .apply(lambda x: np.std(x))
bid_ma60_df_utc = bid_ma60_df.tz_convert(None)
bid_ma60_df_utc_join = bid_ma60_df_utc.join(eurusd_bid_ma60)
bid_ma60_df_utc_join['Delta'] = bid_ma60_df_utc_join["Bid MA60 Pandas"] - bid_ma60_df_utc_join["BID_MA60_BQ"]
bid_ma60_df_utc_join[['BID_MA60_BQ','Bid MA60 Pandas']].plot(figsize=(15,6))
bid_std_dev_df_utc = bid_std_dev_df.tz_convert(None)
bid_std_dev_df_utc_join = bid_std_dev_df_utc.join(eurusd_bid_stddev)
bid_std_dev_df_utc_join['Delta'] = bid_std_dev_df_utc_join["Bid StdDev Pandas"] - bid_std_dev_df_utc_join["BID_STDDEV_BQ"]
bid_std_dev_df_utc_join[['BID_STDDEV_BQ','Bid StdDev Pandas']].plot(figsize=(15,6))
bid_ma60_df_utc_join
bid_std_dev_df_utc_join
import numpy as np
from bokeh.io import show, output_notebook
from bokeh.layouts import column
from bokeh.models import RangeTool
from bokeh.plotting import figure
dates = np.array(bid_ma60_df_utc_join.index, dtype=np.datetime64)
p1 = figure(plot_height=300, plot_width=800, tools="xpan", toolbar_location=None,
x_axis_type="datetime", x_axis_location="above",
background_fill_color="#efefef", x_range=(dates[1500], dates[2500]))
p1.line(dates, bid_ma60_df_utc_join['BID_MA60_BQ'], color='#A6CEE3', legend_label='Bid MA60 BQ')
p1.line(dates, bid_ma60_df_utc_join['Bid MA60 Pandas'], color='#FB9A99', legend_label='Bid MA60 Pandas')
p1.yaxis.axis_label = 'Price MA60'
p2 = figure(plot_height=300, plot_width=800, tools="xpan", toolbar_location=None,
x_axis_type="datetime", x_axis_location="above",
background_fill_color="#efefef", x_range=p1.x_range)
p2.line(dates, bid_ma60_df_utc_join['Delta'], color='#A6CEE3', legend_label='Delta')
p2.yaxis.axis_label = 'Delta'
select = figure(title="Drag the middle and edges of the selection box to change the range above",
plot_height=130, plot_width=800, y_range=p1.y_range,
x_axis_type="datetime", y_axis_type=None,
tools="", toolbar_location=None, background_fill_color="#efefef")
range_tool = RangeTool(x_range=p1.x_range)
range_tool.overlay.fill_color = "navy"
range_tool.overlay.fill_alpha = 0.2
select.line(dates, bid_ma60_df_utc_join['BID_MA60_BQ'], color='#A6CEE3')
select.line(dates, bid_ma60_df_utc_join['Bid MA60 Pandas'], color='#FB9A99')
select.ygrid.grid_line_color = None
select.add_tools(range_tool)
select.toolbar.active_multi = range_tool
output_notebook()
show(column(p1, p2, select))
import numpy as np
from bokeh.io import show, output_notebook
from bokeh.layouts import column
from bokeh.models import RangeTool
from bokeh.plotting import figure
dates = np.array(bid_std_dev_df_utc_join.index, dtype=np.datetime64)
p1 = figure(plot_height=300, plot_width=800, tools="xpan", toolbar_location=None,
x_axis_type="datetime", x_axis_location="above",
background_fill_color="#efefef", x_range=(dates[1500], dates[2500]))
p1.line(dates, bid_std_dev_df_utc_join['BID_STDDEV_BQ'], color='#A6CEE3', legend_label='Bid STDDEV BQ')
p1.line(dates, bid_std_dev_df_utc_join['Bid StdDev Pandas'], color='#FB9A99', legend_label='Bid StdDev Pandas')
p1.yaxis.axis_label = 'Standard Deviation'
p2 = figure(plot_height=300, plot_width=800, tools="xpan", toolbar_location=None,
x_axis_type="datetime", x_axis_location="above",
background_fill_color="#efefef", x_range=p1.x_range)
p2.line(dates, bid_std_dev_df_utc_join['Delta'], color='#A6CEE3', legend_label='Delta')
p2.yaxis.axis_label = 'Delta'
select = figure(title="Drag the middle and edges of the selection box to change the range above",
plot_height=130, plot_width=800, y_range=p1.y_range,
x_axis_type="datetime", y_axis_type=None,
tools="", toolbar_location=None, background_fill_color="#efefef")
range_tool = RangeTool(x_range=p1.x_range)
range_tool.overlay.fill_color = "navy"
range_tool.overlay.fill_alpha = 0.2
select.line(dates, bid_std_dev_df_utc_join['BID_STDDEV_BQ'], color='#A6CEE3')
select.line(dates, bid_std_dev_df_utc_join['Bid StdDev Pandas'], color='#FB9A99')
select.ygrid.grid_line_color = None
select.add_tools(range_tool)
select.toolbar.active_multi = range_tool
output_notebook()
show(column(p1, p2, select))
query_string_last = """
WITH LAST AS (
SELECT lower_window_boundary, upper_window_boundary,
(SELECT dbl_data from UNNEST(data) WHERE metric = 'LAST') AS LAST
FROM `<GCPPROJECT>.<DATASET>.<TABLE>`
WHERE timeseries_minor_key = 'BID'
)
SELECT * from LAST
"""
last_bq = (
bqclient.query(query_string_last)
.result()
.to_dataframe(bqstorage_client=bqstorageclient)
).sort_values(by=['upper_window_boundary'])
last_bq.index = pd.to_datetime(last_bq['upper_window_boundary'])
last_bq.drop(columns=['upper_window_boundary', 'lower_window_boundary'],inplace=True)
last_bq
import numpy as np
from bokeh.io import show, output_notebook
from bokeh.layouts import column
from bokeh.models import RangeTool
from bokeh.plotting import figure
dates_unsampled = np.array(eurusd.index, dtype=np.datetime64)
dates_resampled = np.array(last_bq.index, dtype=np.datetime64)
# We position the graph towards the end of day to show a quiet period when forward filling is necessary
p1 = figure(plot_height=300, plot_width=800, tools="xpan", toolbar_location=None,
x_axis_type="datetime", x_axis_location="above",
background_fill_color="#efefef", x_range=(dates_unsampled[dates_unsampled.size-100], dates_unsampled[dates_unsampled.size-1]))
p1.circle(dates_unsampled, eurusd['Bid'], color='#A6CEE3', legend_label='Bid unsampled')
p1.circle(dates_resampled, last_bq['LAST'], color='#FB9A99', legend_label='Bid downsampled')
p1.yaxis.axis_label = 'Downsampling'
select = figure(title="Drag the middle and edges of the selection box to change the range above",
plot_height=130, plot_width=800, y_range=p1.y_range,
x_axis_type="datetime", y_axis_type=None,
tools="", toolbar_location=None, background_fill_color="#efefef")
range_tool = RangeTool(x_range=p1.x_range)
range_tool.overlay.fill_color = "navy"
range_tool.overlay.fill_alpha = 0.2
select.line(dates_unsampled, eurusd['Bid'], color='#A6CEE3')
select.ygrid.grid_line_color = None
select.add_tools(range_tool)
select.toolbar.active_multi = range_tool
output_notebook()
show(column(p1, select))
# from bokeh.io import export_png
# export_png(column(p1, select), filename="img/FILLINGBQ.png")
last_pandas = eurusd.resample("5S").fillna(method='ffill')['Bid']
import numpy as np
from bokeh.io import show, output_notebook
from bokeh.layouts import column
from bokeh.models import RangeTool
from bokeh.plotting import figure
dates_unsampled = np.array(eurusd.index, dtype=np.datetime64)
dates_resampled = np.array(last_pandas.index, dtype=np.datetime64)
# We position the graph towards the end of day to show a quiet period when forward filling is necessary
p1 = figure(plot_height=300, plot_width=800, tools="xpan", toolbar_location=None,
x_axis_type="datetime", x_axis_location="above",
background_fill_color="#efefef", x_range=(dates_unsampled[dates_unsampled.size-100], dates_unsampled[dates_unsampled.size-1]))
p1.circle(dates_unsampled, eurusd['Bid'], color='#A6CEE3', legend_label='Bid unsampled')
p1.circle(dates_resampled, last_pandas, color='#FB9A99', legend_label='Bid downsampled')
p1.yaxis.axis_label = 'Downsampling'
select = figure(title="Drag the middle and edges of the selection box to change the range above",
plot_height=130, plot_width=800, y_range=p1.y_range,
x_axis_type="datetime", y_axis_type=None,
tools="", toolbar_location=None, background_fill_color="#efefef")
range_tool = RangeTool(x_range=p1.x_range)
range_tool.overlay.fill_color = "navy"
range_tool.overlay.fill_alpha = 0.2
select.line(dates_unsampled, eurusd['Bid'], color='#A6CEE3')
select.ygrid.grid_line_color = None
select.add_tools(range_tool)
select.toolbar.active_multi = range_tool
output_notebook()
show(column(p1, select))
# from bokeh.io import export_png
# export_png(column(p1, select), filename="img/FILLINGPANDAS.png")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Using the new Apache Beam time series java framework in Java to compute metrics at scale in GCP Dataflow
Step4: --resampleSec and --windowSec parameters indicate the sample period in seconds, e.g., 5, and the rolling window period to use to compute the metric in seconds, e.g., 60.
Step5: Using the Pandas DataFrame and NumPy APIs to compute the metrics
Step6: In order to calculate the metrics in Pandas using the same constraints we need to
Step7: Once we have the moving average and standard deviation calculated in both frameworks we can do
Step8: We can also sample the joined dataframes to compare the two
Step9: We can see at first glance that the Standard Deviation metric is almost identical between the two frameworks, while the moving average deviates slightly more.
Step10:
Step12: Filling the gaps
Step13: We can see that where the unsampled data points become less frequent, the resampled data points are forward-filled correctly, in this case using the LAST measure from the samples.
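The downsampling and forward-filling steps described above can be sketched in pure Pandas on a tiny synthetic tick series (the values, timestamps, and window sizes here are illustrative stand-ins for the EURUSD data, not the pipeline's actual inputs):

```python
import pandas as pd

# Hypothetical irregular tick data with a quiet gap, standing in
# for the unsampled EURUSD 'Bid' series used above.
ticks = pd.Series(
    [1.10, 1.11, 1.12, 1.13, 1.14],
    index=pd.to_datetime([
        "2020-01-01 00:00:01", "2020-01-01 00:00:03",
        "2020-01-01 00:00:04", "2020-01-01 00:00:21",
        "2020-01-01 00:00:24",
    ]),
    name="Bid",
)

# Downsample to 5-second bars, keeping the LAST observation per bar
# (the '5s' alias mirrors the "5S" resample period in the notebook).
last = ticks.resample("5s").last()

# Forward-fill bars that received no ticks during the quiet period.
filled = last.ffill()

# Rolling metrics on the regular grid: a 60-second window over
# 5-second samples corresponds to 12 periods.
ma = filled.rolling(window=12, min_periods=1).mean()
std = filled.rolling(window=12, min_periods=1).std()
```

This mirrors the `--windowSec 60` / `--resampleSec 5` pairing mentioned earlier, which is why the Pandas and BigQuery metrics can be compared bar-for-bar in the plots.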
|
8,981
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib notebook
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
pd.options.display.max_rows = 8
import datetime
dt = datetime.datetime(year=2016, month=12, day=19, hour=13, minute=30)
dt
print(dt) # .day,...
print(dt.strftime("%d %B %Y"))
ts = pd.Timestamp('2016-12-19')
ts
ts.month
ts + pd.Timedelta('5 days')
pd.to_datetime("2016-12-09")
pd.to_datetime("09/12/2016")
pd.to_datetime("09/12/2016", dayfirst=True)
pd.to_datetime("09/12/2016", format="%d/%m/%Y")
s = pd.Series(['2016-12-09 10:00:00', '2016-12-09 11:00:00', '2016-12-09 12:00:00'])
ts = pd.to_datetime(s)
ts
ts.dt.hour
ts.dt.weekday
pd.Series(pd.date_range(start="2016-01-01", periods=10, freq='3H'))
data = pd.read_csv("data/flowdata.csv")
data.head()
data['Time'] = pd.to_datetime(data['Time'])
data = data.set_index("Time")
data
data = pd.read_csv("data/flowdata.csv", index_col=0, parse_dates=True)
data.index
data.index.day
data.index.dayofyear
data.index.year
data.plot()
data[pd.Timestamp("2012-01-01 09:00"):pd.Timestamp("2012-01-01 19:00")]
data["2012-01-01 09:00":"2012-01-01 19:00"]
data['2013']
data['2012-01':'2012-03']
# %load snippets/05 - Time series data36.py
# %load snippets/05 - Time series data37.py
# %load snippets/05 - Time series data38.py
data = data.drop("months", axis=1)
# %load snippets/05 - Time series data40.py
# %load snippets/05 - Time series data41.py
data.resample('D').mean().head()
data.resample('D').max().head()
data.resample('A').mean().plot() # 10D
# %load snippets/05 - Time series data45.py
# %load snippets/05 - Time series data46.py
# %load snippets/05 - Time series data47.py
# %load snippets/05 - Time series data48.py
# %load snippets/05 - Time series data49.py
# %load snippets/05 - Time series data50.py
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Introduction
Step2: Dates and times in pandas
Step3: Like with datetime.datetime objects, there are several useful attributes available on the Timestamp. For example, we can get the month
Step4: Parsing datetime strings
Step5: A detailed overview of how to specify the format string, see the table in the python documentation
Step6: The to_datetime function can also be used to convert a full series of strings
Step7: Notice the data type of this series
Step8: To quickly construct some regular time series data, the pd.date_range function comes in handy
Step9: Time series data
Step10: We already know how to parse a date column with Pandas
Step11: With set_index('datetime'), we set the column with datetime values as the index, which can be done by both Series and DataFrame.
Step12: The steps above are provided as built-in functionality of read_csv
Step13: <div class="alert alert-info">
Step14: Similar to a Series with datetime data, there are some attributes of the timestamp values available
Step15: The plot method will also adapt its labels (when you zoom in, you can see the different levels of detail of the datetime labels)
Step16: We have too much data to sensibly plot on one figure. Let's see how we can easily select part of the data or aggregate the data to other time resolutions in the next sections.
Step17: But, for convenience, indexing a time series also works with strings
Step18: A nice feature is "partial string" indexing, where we can do implicit slicing by providing a partial datetime string.
Step19: Normally you would expect this to access a column named '2013', but as for a DatetimeIndex, pandas also tries to interpret it as a datetime slice.
Step20: <div class="alert alert-success">
Step21: <div class="alert alert-success">
Step22: <div class="alert alert-success">
Step23: <div class="alert alert-success">
Step24: The power of pandas
Step25: <div class="alert alert-danger">
Step26: <div class="alert alert-info">
Step27: <div class="alert alert-success">
Step28: <div class="alert alert-success">
Step29: <div class="alert alert-success">
Step30: <div class="alert alert-success">
|
8,982
|
<ASSISTANT_TASK:>
Python Code:
#Import libraries
from tweepy.streaming import StreamListener
from tweepy import OAuthHandler
from tweepy import Stream
import time
import csv
import sys
# Create a streamer object
class StdOutListener(StreamListener):
# Define a function that is initialized when the miner is called
def __init__(self, api = None):
# That sets the api
self.api = api
# Create a file with 'data_' and the current time
self.filename = 'data'+'_'+time.strftime('%Y%m%d-%H%M%S')+'.csv'
# Create a new file with that filename
csvFile = open(self.filename, 'w')
# Create a csv writer
csvWriter = csv.writer(csvFile)
# Write a single row with the headers of the columns
csvWriter.writerow(['text',
'created_at',
'geo',
'lang',
'place',
'coordinates',
'user.favourites_count',
'user.statuses_count',
'user.description',
'user.location',
'user.id',
'user.created_at',
'user.verified',
'user.following',
'user.url',
'user.listed_count',
'user.followers_count',
'user.default_profile_image',
'user.utc_offset',
'user.friends_count',
'user.default_profile',
'user.name',
'user.lang',
'user.screen_name',
'user.geo_enabled',
'user.profile_background_color',
'user.profile_image_url',
'user.time_zone',
'id',
'favorite_count',
'retweeted',
'source',
'favorited',
'retweet_count'])
# When a tweet appears
def on_status(self, status):
# Open the csv file created previously
csvFile = open(self.filename, 'a')
# Create a csv writer
csvWriter = csv.writer(csvFile)
# If the tweet is not a retweet
if 'RT @' not in status.text:
# Try to
try:
# Write the tweet's information to the csv file
csvWriter.writerow([status.text,
status.created_at,
status.geo,
status.lang,
status.place,
status.coordinates,
status.user.favourites_count,
status.user.statuses_count,
status.user.description,
status.user.location,
status.user.id,
status.user.created_at,
status.user.verified,
status.user.following,
status.user.url,
status.user.listed_count,
status.user.followers_count,
status.user.default_profile_image,
status.user.utc_offset,
status.user.friends_count,
status.user.default_profile,
status.user.name,
status.user.lang,
status.user.screen_name,
status.user.geo_enabled,
status.user.profile_background_color,
status.user.profile_image_url,
status.user.time_zone,
status.id,
status.favorite_count,
status.retweeted,
status.source,
status.favorited,
status.retweet_count])
# If some error occurs
except Exception as e:
# Print the error
print(e)
# and continue
pass
# Close the csv file
csvFile.close()
# Return nothing
return
# When an error occurs
def on_error(self, status_code):
# Print the error code
print('Encountered error with status code:', status_code)
# If the error code is 401, which is the error for bad credentials
if status_code == 401:
# End the stream
return False
# When a deleted tweet appears
def on_delete(self, status_id, user_id):
# Print message
print("Delete notice")
# Return nothing
return
# When reach the rate limit
def on_limit(self, track):
# Print rate limiting error
print("Rate limited, continuing")
# Continue mining tweets
return True
# When timed out
def on_timeout(self):
# Print timeout message
print('Timeout...', file=sys.stderr)
# Wait 10 seconds
time.sleep(10)
# Return nothing
return
# Create a mining function
def start_mining(queries):
'''
Inputs list of strings. Returns tweets containing those strings.
'''
#Variables that contains the user credentials to access Twitter API
consumer_key = "YOUR_CREDENTIALS"
consumer_secret = "YOUR_CREDENTIALS"
access_token = "YOUR_CREDENTIALS"
access_token_secret = "YOUR_CREDENTIALS"
# Create a listener
l = StdOutListener()
# Create authorization info
auth = OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
# Create a stream object with listener and authorization
stream = Stream(auth, l)
# Run the stream object using the user defined queries
stream.filter(track=queries)
# Start the miner
start_mining(['python', '#Python'])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create A Twitter Stream Miner
Step2: Create A Wrapper For The Miner
Step3: Run The Stream Miner
|
8,983
|
<ASSISTANT_TASK:>
Python Code:
# Author: Alexandre Gramfort <alexandre.gramfort@inria.fr>
#
# License: BSD-3-Clause
import mne
from mne.datasets import sample
from mne.minimum_norm import read_inverse_operator
from mne.viz import set_3d_view
print(__doc__)
data_path = sample.data_path()
subjects_dir = data_path / 'subjects'
meg_path = data_path / 'MEG' / 'sample'
fname_trans = meg_path / 'sample_audvis_raw-trans.fif'
inv_fname = meg_path / 'sample_audvis-meg-oct-6-meg-inv.fif'
inv = read_inverse_operator(inv_fname)
print("Method: %s" % inv['methods'])
print("fMRI prior: %s" % inv['fmri_prior'])
print("Number of sources: %s" % inv['nsource'])
print("Number of channels: %s" % inv['nchan'])
src = inv['src'] # get the source space
# Get access to the triangulation of the cortex
print("Number of vertices on the left hemisphere: %d" % len(src[0]['rr']))
print("Number of triangles on left hemisphere: %d" % len(src[0]['use_tris']))
print("Number of vertices on the right hemisphere: %d" % len(src[1]['rr']))
print("Number of triangles on right hemisphere: %d" % len(src[1]['use_tris']))
fig = mne.viz.plot_alignment(subject='sample', subjects_dir=subjects_dir,
trans=fname_trans, surfaces='white', src=src)
set_3d_view(fig, focalpoint=(0., 0., 0.06))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Show the 3D source space
|
8,984
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cas', 'sandbox-2', 'atmos')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
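Every property above follows the same two-call pattern: `DOC.set_id(...)` selects a property, then one or more `DOC.set_value(...)` calls record choices from its "Valid Choices" list. As an illustration only, the pattern can be sketched with a minimal stand-in recorder — `MiniDoc` below is hypothetical, not the real es-doc notebook API (which also validates choices against the CMIP6 specification), and it deliberately uses its own `doc` variable so the notebook's real `DOC` object is untouched:

```python
class MiniDoc:
    """Hypothetical stand-in recorder illustrating the set_id/set_value pattern."""

    def __init__(self):
        self._current = None   # property id selected by the last set_id call
        self.values = {}       # property id -> list of recorded values

    def set_id(self, prop_id):
        # Select which property subsequent set_value calls apply to.
        self._current = prop_id

    def set_value(self, value):
        # Record one value for the currently selected property.
        self.values.setdefault(self._current, []).append(value)

doc = MiniDoc()
doc.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
doc.set_value("scattering")
doc.set_value("emission/absorption")
```

Multi-valued properties ("PROPERTY VALUE(S)") simply repeat `set_value` under one `set_id`, which is why the template prints the full choice list before each TODO.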
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 2. Key Properties --> Resolution
Step9: 2.2. Canonical Horizontal Resolution
Step10: 2.3. Range Horizontal Resolution
Step11: 2.4. Number Of Vertical Levels
Step12: 2.5. High Top
Step13: 3. Key Properties --> Timestepping
Step14: 3.2. Timestep Shortwave Radiative Transfer
Step15: 3.3. Timestep Longwave Radiative Transfer
Step16: 4. Key Properties --> Orography
Step17: 4.2. Changes
Step18: 5. Grid --> Discretisation
Step19: 6. Grid --> Discretisation --> Horizontal
Step20: 6.2. Scheme Method
Step21: 6.3. Scheme Order
Step22: 6.4. Horizontal Pole
Step23: 6.5. Grid Type
Step24: 7. Grid --> Discretisation --> Vertical
Step25: 8. Dynamical Core
Step26: 8.2. Name
Step27: 8.3. Timestepping Type
Step28: 8.4. Prognostic Variables
Step29: 9. Dynamical Core --> Top Boundary
Step30: 9.2. Top Heat
Step31: 9.3. Top Wind
Step32: 10. Dynamical Core --> Lateral Boundary
Step33: 11. Dynamical Core --> Diffusion Horizontal
Step34: 11.2. Scheme Method
Step35: 12. Dynamical Core --> Advection Tracers
Step36: 12.2. Scheme Characteristics
Step37: 12.3. Conserved Quantities
Step38: 12.4. Conservation Method
Step39: 13. Dynamical Core --> Advection Momentum
Step40: 13.2. Scheme Characteristics
Step41: 13.3. Scheme Staggering Type
Step42: 13.4. Conserved Quantities
Step43: 13.5. Conservation Method
Step44: 14. Radiation
Step45: 15. Radiation --> Shortwave Radiation
Step46: 15.2. Name
Step47: 15.3. Spectral Integration
Step48: 15.4. Transport Calculation
Step49: 15.5. Spectral Intervals
Step50: 16. Radiation --> Shortwave GHG
Step51: 16.2. ODS
Step52: 16.3. Other Flourinated Gases
Step53: 17. Radiation --> Shortwave Cloud Ice
Step54: 17.2. Physical Representation
Step55: 17.3. Optical Methods
Step56: 18. Radiation --> Shortwave Cloud Liquid
Step57: 18.2. Physical Representation
Step58: 18.3. Optical Methods
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Step60: 20. Radiation --> Shortwave Aerosols
Step61: 20.2. Physical Representation
Step62: 20.3. Optical Methods
Step63: 21. Radiation --> Shortwave Gases
Step64: 22. Radiation --> Longwave Radiation
Step65: 22.2. Name
Step66: 22.3. Spectral Integration
Step67: 22.4. Transport Calculation
Step68: 22.5. Spectral Intervals
Step69: 23. Radiation --> Longwave GHG
Step70: 23.2. ODS
Step71: 23.3. Other Flourinated Gases
Step72: 24. Radiation --> Longwave Cloud Ice
Step73: 24.2. Physical Reprenstation
Step74: 24.3. Optical Methods
Step75: 25. Radiation --> Longwave Cloud Liquid
Step76: 25.2. Physical Representation
Step77: 25.3. Optical Methods
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Step79: 27. Radiation --> Longwave Aerosols
Step80: 27.2. Physical Representation
Step81: 27.3. Optical Methods
Step82: 28. Radiation --> Longwave Gases
Step83: 29. Turbulence Convection
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Step85: 30.2. Scheme Type
Step86: 30.3. Closure Order
Step87: 30.4. Counter Gradient
Step88: 31. Turbulence Convection --> Deep Convection
Step89: 31.2. Scheme Type
Step90: 31.3. Scheme Method
Step91: 31.4. Processes
Step92: 31.5. Microphysics
Step93: 32. Turbulence Convection --> Shallow Convection
Step94: 32.2. Scheme Type
Step95: 32.3. Scheme Method
Step96: 32.4. Processes
Step97: 32.5. Microphysics
Step98: 33. Microphysics Precipitation
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Step100: 34.2. Hydrometeors
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Step102: 35.2. Processes
Step103: 36. Cloud Scheme
Step104: 36.2. Name
Step105: 36.3. Atmos Coupling
Step106: 36.4. Uses Separate Treatment
Step107: 36.5. Processes
Step108: 36.6. Prognostic Scheme
Step109: 36.7. Diagnostic Scheme
Step110: 36.8. Prognostic Variables
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Step112: 37.2. Cloud Inhomogeneity
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Step114: 38.2. Function Name
Step115: 38.3. Function Order
Step116: 38.4. Convection Coupling
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Step118: 39.2. Function Name
Step119: 39.3. Function Order
Step120: 39.4. Convection Coupling
Step121: 40. Observation Simulation
Step122: 41. Observation Simulation --> Isscp Attributes
Step123: 41.2. Top Height Direction
Step124: 42. Observation Simulation --> Cosp Attributes
Step125: 42.2. Number Of Grid Points
Step126: 42.3. Number Of Sub Columns
Step127: 42.4. Number Of Levels
Step128: 43. Observation Simulation --> Radar Inputs
Step129: 43.2. Type
Step130: 43.3. Gas Absorption
Step131: 43.4. Effective Radius
Step132: 44. Observation Simulation --> Lidar Inputs
Step133: 44.2. Overlap
Step134: 45. Gravity Waves
Step135: 45.2. Sponge Layer
Step136: 45.3. Background
Step137: 45.4. Subgrid Scale Orography
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Step139: 46.2. Source Mechanisms
Step140: 46.3. Calculation Method
Step141: 46.4. Propagation Scheme
Step142: 46.5. Dissipation Scheme
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Step144: 47.2. Source Mechanisms
Step145: 47.3. Calculation Method
Step146: 47.4. Propagation Scheme
Step147: 47.5. Dissipation Scheme
Step148: 48. Solar
Step149: 49. Solar --> Solar Pathways
Step150: 50. Solar --> Solar Constant
Step151: 50.2. Fixed Value
Step152: 50.3. Transient Characteristics
Step153: 51. Solar --> Orbital Parameters
Step154: 51.2. Fixed Reference Date
Step155: 51.3. Transient Method
Step156: 51.4. Computation Method
Step157: 52. Solar --> Insolation Ozone
Step158: 53. Volcanos
Step159: 54. Volcanos --> Volcanoes Treatment
|
8,985
|
<ASSISTANT_TASK:>
Python Code:
#read data using pandas
import pandas as pd
import numpy as np
boston_df = pd.read_csv('boston.csv')
#verify whether there exist any NaN values
print(np.sum(boston_df.isnull()))
boston_df
boston_df.describe()
#get names
x_var_names = list(boston_df)[:-1]
print(x_var_names)
y_var_names = list(boston_df)[-1]
print(y_var_names)
#get matrix X and vector y
X = boston_df[x_var_names]
y = boston_df[y_var_names]
#some easy checks
print(X.shape)
print(y.shape)
#split
import numpy as np
import sklearn
print(sklearn.__version__)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
print(sum(y))
print(len(y))
print(X_train.shape)
print(X_test.shape)
from sklearn import linear_model
#build the model
rl_model = linear_model.LinearRegression()
#fit the model
rl_model.fit(X_train,y_train)
#Before we test, let's explore the model
coefs = [rl_model.intercept_]
coefs.extend(list(rl_model.coef_))
labels = ['bias']
labels.extend(x_var_names)
"----"
for n,c in zip(labels,coefs):
print
print n,str(round(c,3))
print "---------------"
from sklearn.linear_model import LassoCV
from sklearn.feature_selection import SelectFromModel
#A hyperparameter has to be chosen, which will be lambda
print(X_train.shape)
lscv = LassoCV() #Performs cross-validation internally
lscv.fit(X_train, y_train) #By default 100 lambdas are tried; y-axis = MSE, x-axis = lambda. The lambda at the minimum
# point of the curve is chosen.
models = SelectFromModel(lscv, prefit = True) #Extracts the features that work best for the model passed in
#In this case it is LassoCV
X_new = models.transform(X_train)
print(X_new.shape)
print(lscv.coef_)
coefs = lscv.coef_
labels = ['bias']
labels.extend(x_var_names)
"----"
for n,c in zip(labels,coefs):
print
print n,str(round(c,3))
print "---------------"
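As a complement to the SelectFromModel cell above, a self-contained sketch (toy data and hypothetical feature names, not the Boston columns) of mapping the selector's boolean support mask back to column names:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LassoCV
from sklearn.feature_selection import SelectFromModel

# Toy regression problem standing in for the Boston data.
X, y = make_regression(n_samples=100, n_features=5, n_informative=2,
                       noise=1.0, random_state=0)
names = ["f0", "f1", "f2", "f3", "f4"]  # hypothetical column names

lscv = LassoCV(cv=5).fit(X, y)
selector = SelectFromModel(lscv, prefit=True)
mask = selector.get_support()  # boolean mask over the original columns
kept = [n for n, keep in zip(names, mask) if keep]
print(kept)  # the columns Lasso considers important
```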
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercise 1
|
8,986
|
<ASSISTANT_TASK:>
Python Code:
from phievo.Networks import mutation,deriv2
import random
g = random.Random(20160225) # This define a new random number generator
L = mutation.Mutable_Network(g) # Create an empty network
parameters=[['Degradable',0.5]] ## The species is degradable with a rate 0.5
parameters.append(['Input',0]) ## The species cannot serve as an input for the evolution algorithm
parameters.append(['Complexable']) ## The species can be involved in a complex
parameters.append(['Kinase']) ## The specise can phosphorilate another species.
parameters.append(['TF',1]) ## 1 for activator 0 for repressor
S0 = L.new_Species(parameters)
L = mutation.Mutable_Network(g) ## Clear the network
## Gene 0
parameters=[['Degradable',0.5]]
parameters.append(['TF',1])
parameters.append(['Complexable'])
TM0,prom0,S0 = L.new_gene(0.5,5,parameters) ## Adding a new gene creates a TModule, a CorePromoter and a species
# Gene 1
parameters=[['Degradable',0.5]]
parameters.append(['TF',0])
parameters.append(['Complexable'])
TM1,prom1,S1 = L.new_gene(0.5,5,parameters)
# Gene 2
parameters=[['Degradable',0.5]]
parameters.append(['TF',1])
parameters.append(['Phosphorylable'])
TM2,prom2,S2 = L.new_gene(0.5,5,parameters)
# Gene 3
parameters=[['Degradable',0.5]]
parameters.append(['TF',0])
TM3,prom3,S3 = L.new_gene(0.5,5,parameters)
parameters.append(['Kinase'])
ppi,S4 = L.new_PPI(S0 , S1 , 2.0 , 1.0 , parameters)
S5,phospho = L.new_Phosphorylation(S4,S2,2.0,0.5,1.0,3)
S5.change_type("TF",[1]) # Note this is already the default value for a phosphorilated species
tfhill1 = L.new_TFHill( S3, 1, 0.5, TM1,activity=1)
tfhill2 = L.new_TFHill( S5, 1, 0.5, TM1,activity=1)
L.draw()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create an empty network
Step2: Create a new species S0
Step3: Adding a gene
Step4: Add complexation between S0 and S1.
Step5: Add a phosphorylation of S2 by S4
Step6: Regulate the production of S1 by S3 and S5
Step7: Add a regulation of The production of S0 by S5 and S3
|
8,987
|
<ASSISTANT_TASK:>
Python Code:
import numpy
import os
import pydot
import graphviz
seed = 7
numpy.random.seed(seed)
dataset = numpy.genfromtxt('dataset.csv', delimiter=',', skip_header=1)
X = dataset[:,0:31]
Y = dataset[:,31]
mask = ~numpy.any(numpy.isnan(X), axis=1)
X = X[mask]
Y = Y[mask]
from keras.models import Sequential
from keras.layers import Dense
model = Sequential()
model.add(Dense(62, input_dim=31, kernel_initializer='uniform', activation='relu'))
model.add(Dense(8, kernel_initializer='uniform', activation='relu'))
model.add(Dense(8, kernel_initializer='uniform', activation='relu'))
model.add(Dense(15, kernel_initializer='uniform', activation='relu'))
model.add(Dense(31, kernel_initializer='uniform', activation='relu'))
model.add(Dense(5, kernel_initializer='uniform', activation='relu'))
model.add(Dense(1, kernel_initializer='uniform', activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model.fit(X, Y, validation_split=0.33, epochs=500, batch_size=10)
scores = model.evaluate(X, Y)
print("%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))
model_json = model.to_json()
with open("model_structure.json", "w") as json_file:
    json_file.write(model_json)
model.save_weights("model_weight.h5")
%matplotlib notebook
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
fig_accuracy = plt.figure()
plt.plot(epochs, acc, 'r.', label='training')
plt.plot(epochs, val_acc, 'r', label='validation')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.legend(loc='upper left')
# fig_accuracy.savefig('fine_tuning_plot_accuracy_%d_%d_%d.png' % (EPOCHS, BAT_SIZE, FROZEN_LAYERS))
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Fix seed for reproducibility
Step2: Load dataset
Step3: Split dataset into two variables, X for data and Y for labels
Step4: Create model
Step5: Compile model
Step6: Train model using provided dataset
Step7: Test trained model
Step8: Save model structure and weight
Step9: Plot model training and validation accuracy and loss
|
8,988
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
df = pd.read_csv('LAB_3_large_data_set_cleaned.csv')
ax = sns.violinplot(data=df, palette="pastel")
plt.show()
fig = ax.get_figure()
fig.savefig('sns_violin_plot.png', dpi=300)
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
df = pd.read_csv('LAB_3_large_data_set_cleaned.csv')
ax = sns.violinplot(data=df, palette="pastel")
plt.show()
fig = ax.get_figure()
fig.savefig('sns_violin_plot.png', dpi=300)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read in the data
Step2: Make the violin plot
Step3: Save the figure
Step4: The whole script
|
8,989
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pylab as plt
import seaborn as sns
np.set_printoptions(precision=4, suppress=True)
sns.set_context('notebook')
%matplotlib inline
# True parameter
theta = .5
# Sample size
n = int(1e2)
# Independent variable, N(0,1)
X = np.random.normal(0, 1, n)
# Error term, N(0,1)
e = np.random.normal(0, 1, n)
# Sort data for nice plots
X = np.sort(X)
# Unobservable dependent variable
Ys = X * theta + e
# Generate observable binary variable
Y = np.zeros_like(Ys)
Y[Ys > 0] = 1
plt.figure(figsize=(16,8))
# Unobservables
plt.subplot(2, 1, 1)
plt.plot(X, X * theta, label='True model')
plt.scatter(X[Ys > 0], Ys[Ys > 0], c='red', label='Unobserved > 0')
plt.scatter(X[Ys < 0], Ys[Ys < 0], c='blue', label='Unobserved < 0')
plt.ylabel(r'$Y^*$')
plt.xlabel(r'$X$')
plt.legend()
# Observables
plt.subplot(2, 1, 2)
plt.scatter(X, Y, facecolors='none', edgecolors='C0', lw=2)
plt.ylabel(r'$Y$')
plt.xlabel(r'$X$')
plt.show()
import scipy.optimize as opt
from scipy.stats import norm
# Define objective function
def f(theta, X, Y):
    Q = - np.sum(Y * np.log(1e-3 + norm.cdf(X * theta)) + (1 - Y) * np.log(1e-3 + 1 - norm.cdf(X * theta)))
    return Q
# Run optimization routine
theta_hat = opt.fmin_bfgs(f, 0., args=(X, Y))
print(theta_hat)
# Generate data for objective function plot
th = np.linspace(-3., 3., 100)
Q = [f(z, X, Y) for z in th]
# Plot the data
plt.figure(figsize=(8, 4))
plt.plot(th, Q, label='Q')
plt.xlabel(r'$\theta$')
plt.axvline(x=theta_hat, c='red', label='Estimated')
plt.axvline(x=theta, c='black', label='True')
plt.legend()
plt.show()
from scipy.optimize import fsolve
# Define the first order condition
def df(theta, X, Y):
    return - np.sum(X * norm.pdf(X * theta) * (Y - norm.cdf(X * theta)))
# Solve FOC
theta_hat = fsolve(df, 0., args=(X, Y))
print(theta_hat)
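One subtlety worth flagging: the exact gradient of the negative log-likelihood defined in `f` carries a Φ(Xθ)(1−Φ(Xθ)) denominator that `df` above drops (both estimating equations vanish at the true parameter in expectation). A self-contained sketch with hypothetical toy data, checking the exact form against a central-difference derivative:

```python
import numpy as np
from scipy.stats import norm

# Toy probit sample (hypothetical, only to check the gradient formula).
rng = np.random.default_rng(0)
X = rng.normal(size=50)
Y = (0.5 * X + rng.normal(size=50) > 0).astype(float)

def negloglik(theta):
    p = norm.cdf(X * theta)
    return -np.sum(Y * np.log(p) + (1 - Y) * np.log(1 - p))

def score(theta):
    # exact gradient of negloglik: note the p * (1 - p) weighting
    p = norm.cdf(X * theta)
    return -np.sum(X * norm.pdf(X * theta) * (Y - p) / (p * (1 - p)))

h = 1e-6
numeric = (negloglik(0.3 + h) - negloglik(0.3 - h)) / (2 * h)
print(abs(numeric - score(0.3)))  # small: the analytic and numeric gradients agree
```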
# Generate data for the plot
th = np.linspace(-3., 3., 100)
Q = np.array([df(z, X, Y) for z in th])
# Plot the data
plt.figure(figsize=(8, 4))
plt.plot(th, Q, label='Q')
plt.xlabel(r'$\theta$')
plt.axvline(x=theta_hat, c='red', label='Estimated')
plt.axvline(x=theta, c='black', label='True')
plt.axhline(y=0, c='green')
plt.legend()
plt.show()
plt.figure(figsize=(16, 8))
# Unobservables
plt.subplot(2, 1, 1)
plt.plot(X, X * theta, label='True model')
plt.plot(X, X * theta_hat, label='Fitted model')
plt.scatter(X[Ys > 0], Ys[Ys > 0], c='red', label='Unobserved > 0')
plt.scatter(X[Ys < 0], Ys[Ys < 0], c='blue', label='Unobserved < 0')
plt.ylabel(r'$Y^*$')
plt.xlabel(r'$X$')
plt.legend()
# Observables
plt.subplot(2, 1, 2)
plt.scatter(X, Y, facecolors='none', edgecolors='C0', label='Y')
plt.plot(X, norm.cdf(X * theta), label=r'$\Phi(X\theta)$')
plt.plot(X, norm.cdf(X * theta_hat), label=r'$\Phi(X\hat{\theta})$')
plt.ylabel(r'$Y$')
plt.xlabel(r'$X$')
plt.legend()
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Generate data
Step2: Plot the data and the model
Step3: Maximize log-likelihood
Step4: Plot objective function, true parameter, and the estimate
Step5: Solve first order conditions
Step6: Plot first order condition
Step7: Plot original data and fitted model
|
8,990
|
<ASSISTANT_TASK:>
Python Code:
# Učitaj osnovne biblioteke...
import numpy as np
import sklearn
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings('ignore')
%pylab inline
X = np.array([[0],[1],[2],[4]])
y = np.array([4,1,2,5])
X1 = X
y1 = y
from sklearn.preprocessing import PolynomialFeatures
poly = PolynomialFeatures(1)
phi = poly.fit_transform(X)
print(phi)
# Your code here
from numpy import linalg
pinverse1 = pinv(phi)
pinverse2 = matmul(inv(matmul(transpose(phi), phi)), transpose(phi))
#print(pinverse1)
#print(pinverse2)
w = matmul(pinverse2, y)
print(w)
# Your code here
import sklearn.metrics as mt
wt = w #(np.array([w]))
print(wt)
print(phi)
hx = np.dot(phi, w)
E = mt.mean_squared_error(hx, y)
print(E)
# Your code here
try:
    w = matmul(inv(phi), y)
    print(w)
except linalg.LinAlgError as err:
    print("Exception")
    print(err)
from sklearn.linear_model import LinearRegression
# Your code here
lr = LinearRegression().fit(X, y)
#print(lr.score(X, y))
#print(lr.coef_)
#print(lr.intercept_)
print([lr.intercept_, lr.coef_])
print(wt)
pr = lr.predict(X)
E = mt.mean_squared_error(pr, y)
print(E)
from numpy.random import normal
def make_labels(X, f, noise=0):
    # Your code here
    N = numpy.random.normal
    y = [f(x) + N(0, noise) for x in X]
    return y
def make_instances(x1, x2, N):
    return np.array([np.array([x]) for x in np.linspace(x1, x2, N)])
# Your code here
N = 50
def f(x):
    return 5 + x - 2*x*x - 5*x*x*x
noise = 200
X2 = make_instances(-5, 5, N)
y2 = make_labels(X2, f, noise)
#print(X)
#print(y)
s = scatter(X2, y2)
# Your code here
import sklearn.linear_model as lm
def polyX(d):
    p3 = PolynomialFeatures(d).fit_transform(X2)
    l2 = LinearRegression().fit(p3, y2)
    h2 = l2.predict(p3)
    E = mt.mean_squared_error(h2, y2)
    print('d: ' + str(d) + ' E: ' + str(E))
    plot(X2, h2, label = str(d))
    scatter(X2, y2)
polyX(3)
# Your code here
figure(figsize=(15,10))
scatter(X2, y2)
polyX(1)
polyX(3)
polyX(5)
polyX(10)
polyX(20)
s = plt.legend(loc="center right")
from sklearn.model_selection import train_test_split
# Your code here
xTr, xTest, yTr, yTest = train_test_split(X2, y2, test_size=0.5)
testError = []
trainError = []
for d in range(1, 33):
    polyXTrain = PolynomialFeatures(d).fit_transform(xTr)
    polyXTest = PolynomialFeatures(d).fit_transform(xTest)
    l2 = LinearRegression().fit(polyXTrain, yTr)
    h2 = l2.predict(polyXTest)
    testError.append(mt.mean_squared_error(h2, yTest))
    h2 = l2.predict(polyXTrain)
    trainError.append(mt.mean_squared_error(h2, yTr))
plot(numpy.log(numpy.array(testError)), label='test')
plot(numpy.log(numpy.array(trainError)), label='train')
legend()
# Your code here
figure(figsize=(15,15))
N = 1000
def f(x):
    return 5 + x - 2*x*x - 5*x*x*x
X3 = make_instances(-5, 5, N)
xAllTrain, xAllTest = train_test_split(X3, test_size=0.5)
j = 0
for N in [100, 200, 1000]:
    for noise in [100, 200, 500]:
        j += 1
        xTrain = xAllTrain[:N]
        xTest = xAllTest[:N]
        yTrain = make_labels(xTrain, f, noise)
        yTest = make_labels(xTest, f, noise)
        trainError = []
        testError = []
        for d in range(1, 21):
            polyXTrain = PolynomialFeatures(d).fit_transform(xTrain)
            polyXTest = PolynomialFeatures(d).fit_transform(xTest)
            l2 = LinearRegression().fit(polyXTrain, yTrain)
            h2 = l2.predict(polyXTest)
            testError.append(mt.mean_squared_error(h2, yTest))
            h2 = l2.predict(polyXTrain)
            trainError.append(mt.mean_squared_error(h2, yTrain))
        subplot(3, 3, j, title = "N: " + str(N) + " noise: " + str(noise))
        plot(numpy.log(numpy.array(trainError)), label = 'train')
        plot(numpy.log(numpy.array(testError)), label = 'test')
        plt.legend(loc="center right")
# Your code here
phi4 = PolynomialFeatures(3).fit_transform(X1)
def reg2(lambd):
    w = matmul(matmul(inv(matmul(transpose(phi4), phi4) + lambd * identity(len(phi4))), transpose(phi4)), y1)
    print(w)
reg2(0)
reg2(1)
reg2(10)
from sklearn.linear_model import Ridge
#for s in ['auto', 'svd', 'cholesky', 'lsqr', 'sparse_cg', 'sag', 'saga']:
for l in [0, 1, 10]:
    r = Ridge(l, fit_intercept = False).fit(phi4, y1)
    print(r.coef_)
    print(r.intercept_)
# Your code here
N = 50
figure(figsize = (15, 15))
x123 = scatter(X2, y2)
for lambd in [0, 100]:
    for d in [2, 10]:
        phi2 = PolynomialFeatures(d).fit_transform(X2)
        r = Ridge(lambd).fit(phi2, y2)
        h2 = r.predict(phi2)
        plot(X2, h2, label="lambda " + str(lambd) + " d " + str(d))
x321 = plt.legend(loc="center right")
# Your code here
xTr, xTest, yTr, yTest = train_test_split(X2, y2, test_size=0.5)
figure(figsize=(10,10))
trainError = []
testError = []
for lambd in range(0, 51):
    polyXTrain = PolynomialFeatures(10).fit_transform(xTr)
    polyXTest = PolynomialFeatures(10).fit_transform(xTest)
    l2 = Ridge(lambd).fit(polyXTrain, yTr)
    h2 = l2.predict(polyXTest)
    testError.append(mt.mean_squared_error(h2, yTest))
    h2 = l2.predict(polyXTrain)
    trainError.append(mt.mean_squared_error(h2, yTr))
# store raw MSE and take the log once when plotting
plot(numpy.log(numpy.array(testError)), label="test")
plot(numpy.log(numpy.array(trainError)), label="train")
grid()
legend()
def nonzeroes(coef, tol=1e-6):
    return len(coef) - len(coef[np.isclose(0, coef, atol=tol)])
# Your code here
d = 10
l0 = []
l1 = []
l2 = []
xTr, xTest, yTr, yTest = train_test_split(X2, y2, test_size=0.5)
for lambd in range(0, 101):
    polyXTrain = PolynomialFeatures(10).fit_transform(xTr)
    polyXTest = PolynomialFeatures(10).fit_transform(xTest)
    r = Ridge(lambd).fit(polyXTrain, yTr)
    r.coef_[0] = r.intercept_
    l0.append(nonzeroes(r.coef_))
    l1.append(numpy.linalg.norm(r.coef_, ord=1))
    l2.append(numpy.linalg.norm(r.coef_, ord=2))
figure(figsize=(10,10))
plot(l0, label="l0")
legend()
grid()
figure(figsize=(10,10))
plot(l1, label="l1")
legend()
grid()
figure(figsize=(10,10))
plot(l2, label="l2")
legend()
grid()
# Your code here
d = 10
l0 = []
l1 = []
l2 = []
xTr, xTest, yTr, yTest = train_test_split(X2, y2, test_size=0.5)
for lambd in range(0, 101):
    polyXTrain = PolynomialFeatures(10).fit_transform(xTr)
    polyXTest = PolynomialFeatures(10).fit_transform(xTest)
    r = sklearn.linear_model.Lasso(lambd).fit(polyXTrain, yTr)
    r.coef_[0] = r.intercept_
    l0.append(nonzeroes(r.coef_))
    l1.append(numpy.linalg.norm(r.coef_, ord=1))
    l2.append(numpy.linalg.norm(r.coef_, ord=2))
figure(figsize=(10,10))
plot(l0, label="l0")
legend()
figure(figsize=(10,10))
plot(l1, label="l1")
legend()
figure(figsize=(10,10))
plot(l2, label="l2")
legend()
n_data_points = 500
np.random.seed(69)
# Generate entrance-exam scores using a normal distribution and clip them to the interval [1, 3000].
exam_score = np.random.normal(loc=1500.0, scale = 500.0, size = n_data_points)
exam_score = np.round(exam_score)
exam_score[exam_score > 3000] = 3000
exam_score[exam_score < 0] = 0
# Generate high-school grades using a normal distribution and clip them to the interval [1, 5].
grade_in_highschool = np.random.normal(loc=3, scale = 2.0, size = n_data_points)
grade_in_highschool[grade_in_highschool > 5] = 5
grade_in_highschool[grade_in_highschool < 1] = 1
# Design matrix.
grades_X = np.array([exam_score,grade_in_highschool]).T
# Finally, generate the output values.
rand_noise = np.random.normal(loc=0.0, scale = 0.5, size = n_data_points)
exam_influence = 0.9
grades_y = ((exam_score / 3000.0) * (exam_influence) + (grade_in_highschool / 5.0) \
* (1.0 - exam_influence)) * 5.0 + rand_noise
grades_y[grades_y < 1] = 1
grades_y[grades_y > 5] = 5
# Your code here
figure(figsize=(10,10))
scatter(exam_score, grades_y, label="exam score")
legend()
figure(figsize=(10,10))
scatter(grade_in_highschool, grades_y, label="highschool grade")
legend()
# Your code here
r7b = Ridge(0.01).fit(grades_X, grades_y)
h2 = r7b.predict(grades_X)
E = mt.mean_squared_error(h2, grades_y)
print(E)
from sklearn.preprocessing import StandardScaler
# Your code here
ssX = StandardScaler().fit_transform(grades_X)
ssY = StandardScaler().fit_transform(grades_y.reshape(-1, 1))
r = Ridge(0.01).fit(ssX, ssY)
h2 = r.predict(ssX)
E = mt.mean_squared_error(h2, ssY)
print(E)
# Your code here
grades_X_fixed_colinear = [[x[0], x[1], x[1]] for x in ssX]
# Your code here
r8a = Ridge(0.01).fit(grades_X_fixed_colinear, ssY)
h2 = r8a.predict(grades_X_fixed_colinear)
E = mt.mean_squared_error(h2, ssY)
print(E)
print(r7b.coef_)
print(r8a.coef_)
# Your code here
for lambd in [0.01, 1000]:
    print(lambd)
    ws1 = []
    ws2 = []
    ws3 = []
    for i in range(10):
        xTrain, xTest, yTrain, yTest = train_test_split(grades_X_fixed_colinear, ssY, test_size=0.5)
        l2 = Ridge(lambd).fit(xTrain, yTrain)
        print(l2.coef_)
        ws1.append(l2.coef_[0][0])
        ws2.append(l2.coef_[0][1])
        ws3.append(l2.coef_[0][2])
    print("std dev: " + str(np.std(ws1)))
    print("std dev: " + str(np.std(ws2)))
    print("std dev: " + str(np.std(ws3)))
# Your code here
for l in [0.01, 10]:
    mm = matmul(transpose(grades_X_fixed_colinear), grades_X_fixed_colinear)
    matr = mm + l * identity(len(mm))
    print(matr)
    print(np.linalg.cond(matr))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Tasks
Step2: (a)
Step3: (b)
Step4: For clarity, in what follows the vector $\mathbf{x}$ with the added dummy unit $x_0=1$ is denoted as $\tilde{\mathbf{x}}$.
Step5: (d)
Step6: (e)
Step7: 2. Polynomial regression and the effect of noise
Step8: Display that set using the scatter function.
Step9: (b)
Step10: 3. Model selection
Step11: (b)
Step12: (c)
Step13: Q
Step14: (b)
Step15: 5. Regularized polynomial regression
Step16: (b)
Step17: 6. L1 regularization and L2 regularization
Step18: (a)
Step19: (b)
Step20: 7. Features on different scales
Step21: a) Plot the dependence of the target value (y-axis) on the first and on the second feature (x-axis). Draw two separate plots.
Step22: b) Train an L2-regularized regression model ($\lambda = 0.01$) on the data grades_X and grades_y
Step23: Now repeat the above experiment, but first scale the data grades_X and grades_y and store them in the variables grades_X_fixed and grades_y_fixed. For this purpose, use StandardScaler.
Step24: Q
Step25: Again, train an L2-regularized regression model on this set ($\lambda = 0.01$).
Step26: Q
Step27: Q
|
8,991
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
from scipy.special import gammainc, gammaincinv
import pandas as pd
import pastas as ps
ps.show_versions()
rain = ps.read.read_knmi('../examples/data/etmgeg_260.txt', variables='RH').series
evap = ps.read.read_knmi('../examples/data/etmgeg_260.txt', variables='EV24').series
def gamma_tmax(A, n, a, cutoff=0.999):
    return gammaincinv(n, cutoff) * a

def gamma_step(A, n, a, cutoff=0.999):
    tmax = gamma_tmax(A, n, a, cutoff)
    t = np.arange(0, tmax, 1)
    s = A * gammainc(n, t / a)
    return s

def gamma_block(A, n, a, cutoff=0.999):
    # returns the gamma block response starting at t=0 with intervals of delt = 1
    s = gamma_step(A, n, a, cutoff)
    return np.append(s[0], s[1:] - s[:-1])
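A quick self-contained check (same scipy functions as above) that, by construction, the step response reaches the cutoff fraction of A exactly at tmax:

```python
from scipy.special import gammainc, gammaincinv

A, n, a, cutoff = 800, 1.1, 200, 0.999
tmax = gammaincinv(n, cutoff) * a          # same formula as gamma_tmax
s_at_tmax = A * gammainc(n, tmax / a)      # step response evaluated at tmax
print(s_at_tmax / A)  # the cutoff fraction, 0.999
```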
Atrue = 800
ntrue = 1.1
atrue = 200
dtrue = 20
h = gamma_block(Atrue, ntrue, atrue)
tmax = gamma_tmax(Atrue, ntrue, atrue)
plt.plot(h)
plt.xlabel('Time (days)')
plt.ylabel('Head response (m) due to 1 mm of rain in day 1')
plt.title('Gamma block response with tmax=' + str(int(tmax)));
step = gamma_block(Atrue, ntrue, atrue)[1:]
lenstep = len(step)
h = dtrue * np.ones(len(rain) + lenstep)
for i in range(len(rain)):
    h[i:i + lenstep] += rain[i] * step
head = pd.DataFrame(index=rain.index, data=h[:len(rain)],)
head = head['1990':'1999']
plt.figure(figsize=(12,5))
plt.plot(head,'k.', label='head')
plt.legend(loc=0)
plt.ylabel('Head (m)')
plt.xlabel('Time (years)');
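The superposition loop above is mathematically a discrete convolution; a self-contained sketch (toy arrays with hypothetical values) showing that np.convolve reproduces it:

```python
import numpy as np

rain = np.array([1.0, 0.0, 2.0, 0.5])   # toy forcing series
step = np.array([0.3, 0.2, 0.1])        # stand-in block response
d = 20.0                                 # base level

# Explicit superposition, as in the loop above.
h_loop = d * np.ones(len(rain) + len(step))
for i in range(len(rain)):
    h_loop[i:i + len(step)] += rain[i] * step

# Same result via convolution.
h_conv = d + np.convolve(rain, step)
print(np.allclose(h_loop[:len(h_conv)], h_conv))  # True
```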
ml = ps.Model(head)
sm = ps.StressModel(rain, ps.Gamma, name='recharge', settings='prec')
ml.add_stressmodel(sm)
ml.solve(noise=False, ftol=1e-8)
ml.plots.results();
plt.plot(gamma_block(Atrue, ntrue, atrue), label='Synthetic response')
plt.plot(ml.get_block_response('recharge'), '-.', label='Pastas response')
plt.legend(loc=0)
plt.ylabel('Head response (m) due to 1 m of rain in day 1')
plt.xlabel('Time (days)');
random_seed = np.random.RandomState(15892)
noise = random_seed.normal(0,1,len(head)) * np.std(head.values) * 0.5
head_noise = head[0] + noise
ml2 = ps.Model(head_noise)
sm2 = ps.StressModel(rain, ps.Gamma, name='recharge', settings='prec')
ml2.add_stressmodel(sm2)
ml2.solve(noise=True)
ml2.plots.results();
plt.figure(figsize=(12,5))
plt.plot(head_noise, '.k', alpha=0.1, label='Head with noise')
plt.plot(head, '.k', label='Head true')
plt.plot(ml2.simulate(), label='Pastas simulation')
plt.title('Simulated Pastas head compared with synthetic head')
plt.legend(loc=0)
plt.ylabel('Head (m)')
plt.xlabel('Date (years)');
noise_corr = np.zeros(len(noise))
noise_corr[0] = noise[0]
alphatrue = 2
for i in range(1, len(noise_corr)):
    noise_corr[i] = np.exp(-1/alphatrue) * noise_corr[i - 1] + noise[i]
head_noise_corr = head[0] + noise_corr
ml3 = ps.Model(head_noise_corr)
sm3 = ps.StressModel(rain, ps.Gamma, name='recharge', settings='prec')
ml3.add_stressmodel(sm3)
ml3.solve(noise=True)
ml3.plots.results();
plt.figure(figsize=(12,5))
plt.plot(head_noise_corr, '.k', alpha=0.1, label='Head with correlated noise')
plt.plot(head, '.k', label='Head true')
plt.plot(ml3.simulate(), label='Pastas simulation')
plt.title('Simulated Pastas head compared with synthetic head')
plt.legend(loc=0)
plt.ylabel('Head (m)')
plt.xlabel('Date (years)');
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load data and define functions
Step2: The Gamma response function requires 3 input arguments: A, n, and a. The values for these parameters are defined along with the parameter d, the base groundwater level. The response function is created using the functions defined above.
Step3: Create synthetic observations
Step4: Create Pastas model
Step5: The results of the Pastas model show the calibrated parameters for the Gamma response function. The parameters calibrated using Pastas are equal to the Atrue, ntrue, atrue and dtrue parameters defined above. The Explained Variance Percentage for this example model is 100%.
Step6: Test 1
Step7: Create Pastas model
Step8: The results of the simulation show that Pastas is able to filter the noise from the observed groundwater head. The simulated groundwater head and the generated synthetic head are plotted below. The parameters found with the Pastas optimization are similair to the original parameters of the Gamma response function.
Step9: Test 2
Step10: Create Pastas model
Step11: The Pastas model is able to calibrate the model parameters fairly well. The calibrated parameters are close to the true values defined above. The noise_alpha parameter calibrated by Pastas is close to the alphatrue parameter defined for the correlated noise series.
|
8,992
|
<ASSISTANT_TASK:>
Python Code:
# you would normaly install eppy by doing #
# python setup.py install
# or
# pip install eppy
# or
# easy_install eppy
# if you have not done so, uncomment the following three lines
import sys
# pathnameto_eppy = 'c:/eppy'
pathnameto_eppy = '../'
sys.path.append(pathnameto_eppy)
from eppy import modeleditor
from eppy.modeleditor import IDF
fname1 = "../eppy/resources/idffiles/V_7_2/smallfile.idf"
try:
    idf1 = IDF(fname1)
except Exception as e:
    raise e
iddfile = "../eppy/resources/iddfiles/Energy+V7_2_0.idd"
IDF.setiddname(iddfile)
idf1 = IDF(fname1)
try:
    IDF.setiddname("anotheridd.idd")
except Exception as e:
    raise e
from eppy import modeleditor
from eppy.modeleditor import IDF
iddfile = "../eppy/resources/iddfiles/Energy+V7_2_0.idd"
fname1 = "../eppy/resources/idffiles/V_7_2/smallfile.idf"
# IDF.setiddname(iddfile) # idd was set further up in this page
idf1 = IDF(fname1)
building = idf1.idfobjects['building'][0]
print(building)
print(building.getrange("Loads_Convergence_Tolerance_Value"))
print(building.checkrange("Loads_Convergence_Tolerance_Value"))
building.Loads_Convergence_Tolerance_Value = 0.6
from eppy.bunch_subclass import RangeError
try:
    print(building.checkrange("Loads_Convergence_Tolerance_Value"))
except RangeError as e:
    raise e
print(building.fieldnames)
for fieldname in building.fieldnames:
    print("%s = %s" % (fieldname, building[fieldname]))
from eppy.bunch_subclass import RangeError
for fieldname in building.fieldnames:
    try:
        building.checkrange(fieldname)
        print("%s = %s #-in range" % (fieldname, building[fieldname],))
    except RangeError as e:
        print("%s = %s #-****OUT OF RANGE****" % (fieldname, building[fieldname],))
# some initial steps
from eppy.modeleditor import IDF
iddfile = "../eppy/resources/iddfiles/Energy+V7_2_0.idd"
# IDF.setiddname(iddfile) # Has already been set
# - Let us first open a file from the disk
fname1 = "../eppy/resources/idffiles/V_7_2/smallfile.idf"
idf_fromfilename = IDF(fname1) # initialize the IDF object with the file name
idf_fromfilename.printidf()
# - now let us open a file from the disk differently
fname1 = "../eppy/resources/idffiles/V_7_2/smallfile.idf"
fhandle = open(fname1, 'r') # open the file for reading and assign it a file handle
idf_fromfilehandle = IDF(fhandle) # initialize the IDF object with the file handle
idf_fromfilehandle.printidf()
# So IDF object can be initialized with either a file name or a file handle
# - How do I create a blank new idf file
idftxt = "" # empty string
from io import StringIO
fhandle = StringIO(idftxt) # we can make a file handle of a string
idf_emptyfile = IDF(fhandle) # initialize the IDF object with the file handle
idf_emptyfile.printidf()
# - The string does not have to be blank
idftxt = "VERSION, 7.3;" # Not an empty string; it has just the version number
fhandle = StringIO(idftxt) # we can make a file handle of a string
idf_notemptyfile = IDF(fhandle) # initialize the IDF object with the file handle
idf_notemptyfile.printidf()
# - give it a file name
idf_notemptyfile.idfname = "notemptyfile.idf"
# - Save it to the disk
idf_notemptyfile.save()
txt = open("notemptyfile.idf", 'r').read()# read the file from the disk
print(txt)
import os
os.remove("notemptyfile.idf")
# making a blank idf object
blankstr = ""
from io import StringIO
idf = IDF(StringIO(blankstr))
newobject = idf.newidfobject("material")
print(newobject)
newobject.Name = "Shiny new material object"
print(newobject)
anothermaterial = idf.newidfobject("material")
anothermaterial.Name = "Lousy material"
thirdmaterial = idf.newidfobject("material")
thirdmaterial.Name = "third material"
print(thirdmaterial)
print(idf.idfobjects["MATERIAL"])
idf.popidfobject('MATERIAL', 1) # first material is '0', second is '1'
print(idf.idfobjects['MATERIAL'])
firstmaterial = idf.idfobjects['MATERIAL'][-1]
idf.removeidfobject(firstmaterial)
print(idf.idfobjects['MATERIAL'])
onlymaterial = idf.idfobjects["MATERIAL"][0]
idf.copyidfobject(onlymaterial)
print(idf.idfobjects["MATERIAL"])
gypboard = idf.newidfobject('MATERIAL', Name="G01a 19mm gypsum board",
Roughness="MediumSmooth",
Thickness=0.019,
Conductivity=0.16,
Density=800,
Specific_Heat=1090)
print(gypboard)
print(idf.idfobjects["MATERIAL"])
interiorwall = idf.newidfobject("CONSTRUCTION", Name="Interior Wall",
Outside_Layer="G01a 19mm gypsum board",
Layer_2="Shiny new material object",
Layer_3="G01a 19mm gypsum board")
print(interiorwall)
modeleditor.rename(idf, "MATERIAL", "G01a 19mm gypsum board", "peanut butter")
print(interiorwall)
idf.printidf()
defaultmaterial = idf.newidfobject("MATERIAL",
Name='with default')
print(defaultmaterial)
nodefaultmaterial = idf.newidfobject("MATERIAL",
Name='Without default',
defaultvalues=False)
print(nodefaultmaterial)
from eppy import modeleditor
from eppy.modeleditor import IDF
iddfile = "../eppy/resources/iddfiles/Energy+V7_2_0.idd"
fname1 = "../eppy/resources/idffiles/V_7_2/box.idf"
# IDF.setiddname(iddfile)
idf = IDF(fname1)
surfaces = idf.idfobjects["BuildingSurface:Detailed"]
surface = surfaces[0]
print("area = %s" % (surface.area, ))
print("tilt = %s" % (surface.tilt, ))
print( "azimuth = %s" % (surface.azimuth, ))
zones = idf.idfobjects["ZONE"]
zone = zones[0]
area = modeleditor.zonearea(idf, zone.Name)
volume = modeleditor.zonevolume(idf, zone.Name)
print("zone area = %s" % (area, ))
print("zone volume = %s" % (volume, ))
idf1.printidf()
import eppy.json_functions as json_functions
json_str = {"idf.VERSION..Version_Identifier":8.5,
"idf.SIMULATIONCONTROL..Do_Zone_Sizing_Calculation": "No",
"idf.SIMULATIONCONTROL..Do_System_Sizing_Calculation": "No",
"idf.SIMULATIONCONTROL..Do_Plant_Sizing_Calculation": "No",
"idf.BUILDING.Empire State Building.North_Axis": 52,
"idf.BUILDING.Empire State Building.Terrain": "Rural",
}
json_functions.updateidf(idf1, json_str)
idf1.printidf()
json_str = {"idf.BUILDING.Taj.Terrain": "Rural",}
json_functions.updateidf(idf1, json_str)
idf1.idfobjects['building']
# of course, you are creating an invalid E+ file. But we are just playing here.
# first way
json_str = {"idf.BUILDING.Taj.with.dot.Terrain": "Rural",}
json_functions.updateidf(idf1, json_str)
# second way (put the name in single quotes)
json_str = {"idf.BUILDING.'Another.Taj.with.dot'.Terrain": "Rural",}
json_functions.updateidf(idf1, json_str)
idf1.idfobjects['building']
from eppy import modeleditor
from eppy.modeleditor import IDF
iddfile = "../eppy/resources/iddfiles/Energy+V7_2_0.idd"
fname = "../eppy/resources/idffiles/V_7_2/smallfile.idf"
IDF.setiddname(iddfile)
idf = IDF(fname)
from importlib import reload
import eppy
reload(eppy.modeleditor)
from eppy.easyopen import easyopen
fname = '../eppy/resources/idffiles/V8_8/smallfile.idf'
idf = easyopen(fname)
from eppy.results import readhtml # the eppy module with functions to read the html
import pprint
pp = pprint.PrettyPrinter()
fname = "../eppy/resources/outputfiles/V_7_2/5ZoneCAVtoVAVWarmestTempFlowTable_ABUPS.html" # the html file you want to read
html_doc = open(fname, 'r').read()
htables = readhtml.titletable(html_doc) # reads the tables with their titles
firstitem = htables[0]
pp.pprint(firstitem)
from eppy.results import fasthtml
fname = "../eppy/resources/outputfiles/V_7_2/5ZoneCAVtoVAVWarmestTempFlowTable_ABUPS.html" # the html file you want to read
filehandle = open(fname, 'r') # get a file handle to the html file
firsttable = fasthtml.tablebyindex(filehandle, 0)
pp.pprint(firsttable)
filehandle = open(fname, 'r') # get a file handle to the html file
namedtable = fasthtml.tablebyname(filehandle, "Site and Source Energy")
pp.pprint(namedtable)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: When things go wrong in your eppy script, you get "Errors and Exceptions".
Step2: Now let us open file fname1 without setting the idd file
Step3: OK. It does not let you do that and it raises an exception
Step4: That worked without raising an exception
Step5: Excellent!! It raised the exception we were expecting.
Step6: Let us set these values outside the range and see what happens
Step7: So the Range Check works
Step8: So let us use this
Step9: Now let us test if the values are in the legal range. We know that "Loads_Convergence_Tolerance_Value" is out of range
Step10: You see, we caught the out of range value
Step11: It did not print anything. Why should it? It was empty.
Step12: Aha !
Step13: Let us confirm that the file was saved to disk
Step14: Yup ! that file was saved. Let us delete it since we were just playing
Step15: Deleting, copying/adding and making new idfobjects
Step16: To make and add a new idfobject object, we use the function IDF.newidfobject(). We want to make an object of type "MATERIAL"
Step17: Let us give this a name, say "Shiny new material object"
Step18: Let us look at all the "MATERIAL" objects
Step19: As we can see there are three MATERIAL idfobjects. They are
Step20: You can see that the second material is gone ! Now let us remove the first material, but do it using a different function
Step21: So we have two ways of deleting an idf object
Step22: So now we have a copy of the material. You can use this method to copy idf objects from other idf files too.
Step23: newidfobject() also fills in the default values like "Thermal Absorptance", "Solar Absorptance", etc.
Step24: Renaming an idf object
Step25: to rename gypboard and have that name change in all the places we call modeleditor.rename(idf, key, oldname, newname)
Step26: Now we have "peanut butter" everywhere. At least where we need it. Let us look at the entire idf file, just to be sure
Step27: Turn off default values
Step28: But why would you want to turn it off.
Step29: Can we do the same for zones ?
Step30: Not as slick, but still pretty easy
Step31: Compare the first printidf() and the second printidf().
Step32: What if you object name had a dot . in it? Will the json_function get confused?
Step33: Note When you us the json update function
Step34: You have to find the IDD file on your hard disk.
Step35: For this to work,
Step36: titletable reads all the tables in the HTML file. With large E+ models, this file can be extremely large and titletable will load all the tables into memory. This can take several minutes. If you are trying to get one table or one value from a table, waiting several minutes for your result can be excessive.
Step37: You can also get the table if you know the title of the table. This is the bold text just before the table in the HTML file. The title of our table is Site and Source Energy. The function tablebyname will get us the table.
|
8,993
|
<ASSISTANT_TASK:>
Python Code:
# Serialising.
with open(path, 'wb') as proto_file:
proto_file.write(proto.SerializeToString())
# Deserialising. (from acton.proto.io)
proto = Proto()
with open(path, 'rb') as proto_file:
proto.ParseFromString(proto_file.read())
for proto in protos:
proto = proto.SerializeToString()
length = struct.pack('<Q', len(proto))
proto_file.write(length)
proto_file.write(proto)
length = proto_file.read(8) # 8 = long long
while length:
length, = struct.unpack('<Q', length)
proto = Proto()
proto.ParseFromString(proto_file.read(length))
length = proto_file.read(8)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To serialise multiple protobufs into one file, we serialise each to a string, write the length of this string to a file, then write the string to the file. The length is needed because protobufs are not self-delimiting. We use an unsigned long long with the struct library to store the length.
Step2: We also want to store metadata in the resulting file. This is achieved by encoding the metadata as a bytestring and writing it before we write any protobufs. As with protobufs, we must store the length of the metadata before the metadata itself, and we again use an unsigned long long.
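The same length-prefix framing works for the metadata bytestring as well as for the serialized protobufs. Below is a small self-contained sketch of the pattern; plain bytestrings stand in for the real Proto payloads, and `write_record`/`read_records` are illustrative helper names, not part of acton's API:

```python
import io
import struct

def write_record(stream, payload: bytes) -> None:
    # Prefix each payload with its length as a little-endian unsigned long long.
    stream.write(struct.pack('<Q', len(payload)))
    stream.write(payload)

def read_records(stream):
    # Read back length-prefixed payloads until the stream is exhausted.
    while True:
        header = stream.read(8)  # 8 bytes = unsigned long long
        if not header:
            return
        (length,) = struct.unpack('<Q', header)
        yield stream.read(length)

buf = io.BytesIO()
write_record(buf, b'{"metadata": true}')   # the metadata header is written first
write_record(buf, b'first proto bytes')
write_record(buf, b'second proto bytes')
buf.seek(0)
print(list(read_records(buf)))
```

Because the length prefix is always 8 bytes, the reader never needs a delimiter inside the payload itself.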
|
8,994
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import matplotlib
from matplotlib import pyplot as plt
matplotlib.style.use('ggplot')
%matplotlib inline
np.__version__
data = pd.read_csv('data.csv')
data.shape
X = data.drop('Grant.Status', 1)
y = data['Grant.Status']
data.head()
numeric_cols = ['RFCD.Percentage.1', 'RFCD.Percentage.2', 'RFCD.Percentage.3',
'RFCD.Percentage.4', 'RFCD.Percentage.5',
'SEO.Percentage.1', 'SEO.Percentage.2', 'SEO.Percentage.3',
'SEO.Percentage.4', 'SEO.Percentage.5',
'Year.of.Birth.1', 'Number.of.Successful.Grant.1', 'Number.of.Unsuccessful.Grant.1']
categorical_cols = list(set(X.columns.values.tolist()) - set(numeric_cols))
data.dropna().shape
# place your code here
X_real_zeros = X[numeric_cols].fillna(0)
X_real_mean = X[numeric_cols].fillna(X.mean())
X_cat = X[categorical_cols].fillna('NA').astype(str)
# note: fillna() must come before astype(str); converting first turns NaN into the string 'nan', leaving nothing to fill
# X_cat.shape == X_real_zeros.shape
X_cat.head()
from sklearn.linear_model import LogisticRegression as LR
from sklearn.feature_extraction import DictVectorizer as DV
categorial_data = pd.DataFrame({'sex': ['male', 'female', 'male', 'female'],
'nationality': ['American', 'European', 'Asian', 'European']})
print('Original data:\n')
print(categorial_data)
encoder = DV(sparse = False)
encoded_data = encoder.fit_transform(categorial_data.T.to_dict().values())
print('\nEncoded data:\n')
print(encoded_data)
encoder = DV(sparse = False)
X_cat_oh = encoder.fit_transform(X_cat.T.to_dict().values())
from sklearn.cross_validation import train_test_split
(X_train_real_zeros,
X_test_real_zeros,
y_train, y_test) = train_test_split(X_real_zeros, y,
test_size=0.3,
random_state=0)
(X_train_real_mean,
X_test_real_mean) = train_test_split(X_real_mean,
test_size=0.3,
random_state=0)
(X_train_cat_oh,
X_test_cat_oh) = train_test_split(X_cat_oh,
test_size=0.3,
random_state=0)
from sklearn.linear_model import LogisticRegression
from sklearn.grid_search import GridSearchCV
from sklearn.metrics import roc_auc_score
def plot_scores(optimizer):
scores = [[item[0]['C'],
item[1],
(np.sum((item[2]-item[1])**2)/(item[2].size-1))**0.5] for item in optimizer.grid_scores_]
scores = np.array(scores)
plt.semilogx(scores[:,0], scores[:,1])
plt.fill_between(scores[:,0], scores[:,1]-scores[:,2],
scores[:,1]+scores[:,2], alpha=0.3)
plt.show()
def write_answer_1(auc_1, auc_2):
answers = [auc_1, auc_2]
with open("preprocessing_lr_answer1.txt", "w") as fout:
fout.write(" ".join([str(num) for num in answers]))
param_grid = {'C': [0.01, 0.05, 0.1, 0.5, 1, 5, 10]}
cv = 3
# place your code here
clf = LogisticRegression()
gridCVZeros = GridSearchCV(clf, param_grid, scoring = 'accuracy', cv = cv)
gridCVMeans = GridSearchCV(clf, param_grid, scoring = 'accuracy', cv = cv)
X_final_zeros = np.hstack((X_train_real_zeros, X_train_cat_oh))
X_final_means = np.hstack((X_train_real_mean, X_train_cat_oh))
gridCVZeros.fit(X_final_zeros, y_train)
gridCVMeans.fit(X_final_means, y_train)
# print X.shape
# print X_train_real_zeros.shape
# X_train_cat_oh.shape
# plot_scores(gridCVZeros)
pred_zeros = gridCVZeros.predict(np.hstack((X_test_real_zeros, X_test_cat_oh)))
pred_means = gridCVMeans.predict(np.hstack((X_test_real_mean, X_test_cat_oh)))
print(roc_auc_score(y_test, pred_means), roc_auc_score(y_test, pred_zeros))
write_answer_1(roc_auc_score(y_test, pred_means), roc_auc_score(y_test, pred_zeros))
from pandas.tools.plotting import scatter_matrix
data_numeric = pd.DataFrame(X_train_real_zeros, columns=numeric_cols)
list_cols = ['Number.of.Successful.Grant.1', 'SEO.Percentage.2', 'Year.of.Birth.1']
scatter_matrix(data_numeric[list_cols], alpha=0.5, figsize=(10, 10))
plt.show()
from sklearn.preprocessing import StandardScaler
# place your code here
scaler = StandardScaler()
X_train_real_scaled = scaler.fit_transform(X_train_real_zeros)
X_test_real_scaled = scaler.transform(X_test_real_zeros)
data_numeric_scaled = pd.DataFrame(X_train_real_scaled, columns=numeric_cols)
list_cols = ['Number.of.Successful.Grant.1', 'SEO.Percentage.2', 'Year.of.Birth.1']
scatter_matrix(data_numeric_scaled[list_cols], alpha=0.5, figsize=(10, 10))
plt.show()
def write_answer_2(auc):
with open("preprocessing_lr_answer2.txt", "w") as fout:
fout.write(str(auc))
# place your code here
gridCVScaled = GridSearchCV(clf, param_grid, scoring = 'accuracy', cv = cv)
X_final_scaled = np.hstack((X_train_real_scaled, X_train_cat_oh))
gridCVScaled.fit(X_final_scaled, y_train)
predScaled = gridCVScaled.predict(np.hstack((X_test_real_scaled, X_test_cat_oh)))
print(roc_auc_score(y_test, predScaled))
write_answer_2(roc_auc_score(y_test, predScaled))
# print roc_auc_score(predScaled, y_test)
np.random.seed(0)
# Sample data from the first Gaussian
data_0 = np.random.multivariate_normal([0,0], [[0.5,0],[0,0.5]], size=40)
# And from the second
data_1 = np.random.multivariate_normal([0,1], [[0.5,0],[0,0.5]], size=40)
# For training, take 20 objects from the first class and 10 from the second
example_data_train = np.vstack([data_0[:20,:], data_1[:10,:]])
example_labels_train = np.concatenate([np.zeros((20)), np.ones((10))])
# For testing, take 20 from the first and 30 from the second
example_data_test = np.vstack([data_0[20:,:], data_1[10:,:]])
example_labels_test = np.concatenate([np.zeros((20)), np.ones((30))])
# Define the coordinate grid on which the classification region will be computed
xx, yy = np.meshgrid(np.arange(-3, 3, 0.02), np.arange(-3, 3, 0.02))
# Train the regression without class balancing
optimizer = GridSearchCV(LogisticRegression(), param_grid, cv=cv, n_jobs=-1)
optimizer.fit(example_data_train, example_labels_train)
# Build the regression predictions for the grid
Z = optimizer.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
plt.pcolormesh(xx, yy, Z, cmap=plt.cm.Pastel2)
plt.scatter(data_0[:,0], data_0[:,1], color='red')
plt.scatter(data_1[:,0], data_1[:,1], color='blue')
# Compute the AUC
auc_wo_class_weights = roc_auc_score(example_labels_test, optimizer.predict(example_data_test))
plt.title('Without class weights')
plt.show()
print('AUC: %f'%auc_wo_class_weights)
# For the second regression, pass the class_weight='balanced' parameter to LogisticRegression
optimizer = GridSearchCV(LogisticRegression(class_weight='balanced'), param_grid, cv=cv, n_jobs=-1)
optimizer.fit(example_data_train, example_labels_train)
Z = optimizer.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
plt.pcolormesh(xx, yy, Z, cmap=plt.cm.Pastel2)
plt.scatter(data_0[:,0], data_0[:,1], color='red')
plt.scatter(data_1[:,0], data_1[:,1], color='blue')
auc_w_class_weights = roc_auc_score(example_labels_test, optimizer.predict(example_data_test))
plt.title('With class weights')
plt.show()
print('AUC: %f'%auc_w_class_weights)
print(np.sum(y_train==0))
print(np.sum(y_train==1))
X_final_scaled = np.hstack((X_train_real_scaled, X_train_cat_oh))
X_test_final = np.hstack((X_test_real_scaled, X_test_cat_oh))
def write_answer_3(auc_1, auc_2):
answers = [auc_1, auc_2]
with open("preprocessing_lr_answer3.txt", "w") as fout:
fout.write(" ".join([str(num) for num in answers]))
# place your code here
clf = LogisticRegression(class_weight='balanced')
param_grid = {'C': [0.01, 0.05, 0.1, 0.5, 1, 5, 10]}
cv = 3
### try plugging X_real_zeros in here instead
# X_final_scaled = np.hstack((X_train_real_zeros, X_train_cat_oh))
# X_test_final = np.hstack((X_test_real_zeros, X_test_cat_oh))
# X_final_scaled = np.hstack((X_train_real_scaled, X_train_cat_oh))
# X_test_final = np.hstack((X_test_real_scaled, X_test_cat_oh))
gridCV = GridSearchCV(clf, param_grid, scoring = 'accuracy', cv = cv)
gridCV.fit(X_final_scaled, y_train)
auc1 = roc_auc_score(y_test, gridCV.predict(X_test_final))
auc1
# np.random.seed(0)
# ind = y_train[y_train==1].index
# toadd = np.random.randint(len(ind),size=432)
# new_y = y_train[y_train==1].iloc[toadd]
# new_X = X_final_scaled[np.array(y_train == 1)][toadd]
np.random.seed(0)
y_train_0 = np.sum(y_train==0)
y_train_1 = np.sum(y_train==1)
X_man_sampled = X_final_scaled
y_man_sampled = y_train
while y_train_1 < y_train_0:
    idx = np.random.randint(len(y_train))  # sample over the whole training set, not just its first y_train_1 rows
    if y_train.iloc[idx] == 1:
        new_X = X_final_scaled[idx]
        new_y = y_train.iloc[idx]
        X_man_sampled = np.vstack((X_man_sampled, new_X))
        y_man_sampled = np.hstack((y_man_sampled, new_y))
        y_train_1 += 1
X_man_sampled.shape
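The while-loop above draws random rows one at a time until the classes balance out. The same minority-class oversampling idea can be sketched more compactly with the standard library; the toy arrays `X` and `y` below are illustrative, not the notebook's variables:

```python
import random

random.seed(0)

# Toy imbalanced dataset: label 0 is the majority class.
X = [[0.1], [0.2], [0.3], [0.4], [0.9], [1.0]]
y = [0, 0, 0, 0, 1, 1]

minority = [i for i, label in enumerate(y) if label == 1]
deficit = y.count(0) - y.count(1)

# Draw minority indices with replacement until the classes are balanced.
extra = random.choices(minority, k=deficit)
X_balanced = X + [X[i] for i in extra]
y_balanced = y + [1] * deficit

print(y_balanced.count(0), y_balanced.count(1))
```

Sampling indices with replacement in one call avoids the rejection step of the loop above, but the resulting class counts are the same.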
##Add new lines to X and y:
# X_man_sampled = np.vstack((X_final_scaled, new_X))
# y_train_sampled = np.hstack((y_train, new_y))
# toadd2
# print(np.sum(y_train_sampled==0))
# print(np.sum(y_train_sampled==1))
clf1 = LogisticRegression()
# param_grid = {'C': [0.01, 0.05, 0.1, 0.5, 1, 5, 10]}
# cv = 3
# X_test_final = np.hstack((X_test_real_scaled, X_test_cat_oh))
gridCV_w = GridSearchCV(clf1, param_grid, scoring = 'accuracy', cv = cv)
gridCV_w.fit(X_man_sampled, y_man_sampled)
pred_w2 = gridCV_w.predict(X_test_final)
auc2 = roc_auc_score(y_test, pred_w2)
auc2
write_answer_3(auc1, auc2)
# np.sum(y_train_sampled==1) == np.sum(y_train_sampled==0)
print('AUC ROC for classifier without weighted classes', auc_wo_class_weights)
print('AUC ROC for classifier with weighted classes: ', auc_w_class_weights)
# Split the data evenly by class between the training and test sets
example_data_train = np.vstack([data_0[:20,:], data_1[:20,:]])
example_labels_train = np.concatenate([np.zeros((20)), np.ones((20))])
example_data_test = np.vstack([data_0[20:,:], data_1[20:,:]])
example_labels_test = np.concatenate([np.zeros((20)), np.ones((20))])
# Train the classifier
optimizer = GridSearchCV(LogisticRegression(class_weight='balanced'), param_grid, cv=cv, n_jobs=-1)
optimizer.fit(example_data_train, example_labels_train)
Z = optimizer.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
plt.pcolormesh(xx, yy, Z, cmap=plt.cm.Pastel2)
plt.scatter(data_0[:,0], data_0[:,1], color='red')
plt.scatter(data_1[:,0], data_1[:,1], color='blue')
auc_stratified = roc_auc_score(example_labels_test, optimizer.predict(example_data_test))
plt.title('With class weights')
plt.show()
print('AUC ROC for stratified samples: ', auc_stratified)
def write_answer_4(auc):
with open("preprocessing_lr_answer4.txt", "w") as fout:
fout.write(str(auc))
# place your code here
(X_train_real_zeros,
X_test_real_zeros,
y_train, y_test) = train_test_split(X_real_zeros, y,
test_size=0.3,
random_state=0, stratify=y)
(X_train_cat_oh,
X_test_cat_oh) = train_test_split(X_cat_oh,
test_size=0.3,
random_state=0, stratify=y)
scaler = StandardScaler()
X_train_strat = np.hstack((scaler.fit_transform(X_train_real_zeros), X_train_cat_oh))
X_test_strat = np.hstack((scaler.transform(X_test_real_zeros), X_test_cat_oh))
gridCVstrat = GridSearchCV(LogisticRegression(class_weight='balanced'), param_grid, cv=cv, n_jobs=-1)
gridCVstrat.fit(X_train_strat, y_train)
auc_strat = roc_auc_score(y_test, gridCVstrat.predict(X_test_strat))
print(auc_strat)
write_answer_4(auc_strat)
from sklearn.preprocessing import PolynomialFeatures
# Initialize the class that performs the transformation
transform = PolynomialFeatures(2)
# Fit the transformation on the training set and apply it to the test set
example_data_train_poly = transform.fit_transform(example_data_train)
example_data_test_poly = transform.transform(example_data_test)
# Note the fit_intercept=False parameter
optimizer = GridSearchCV(LogisticRegression(class_weight='balanced', fit_intercept=False), param_grid, cv=cv, n_jobs=-1)
optimizer.fit(example_data_train_poly, example_labels_train)
Z = optimizer.predict(transform.transform(np.c_[xx.ravel(), yy.ravel()])).reshape(xx.shape)
plt.pcolormesh(xx, yy, Z, cmap=plt.cm.Pastel2)
plt.scatter(data_0[:,0], data_0[:,1], color='red')
plt.scatter(data_1[:,0], data_1[:,1], color='blue')
plt.title('With class weights')
plt.show()
print(example_data_train_poly.shape)
transform = PolynomialFeatures(11)
example_data_train_poly = transform.fit_transform(example_data_train)
example_data_test_poly = transform.transform(example_data_test)
optimizer = GridSearchCV(LogisticRegression(class_weight='balanced', fit_intercept=False), param_grid, cv=cv, n_jobs=-1)
optimizer.fit(example_data_train_poly, example_labels_train)
Z = optimizer.predict(transform.transform(np.c_[xx.ravel(), yy.ravel()])).reshape(xx.shape)
plt.pcolormesh(xx, yy, Z, cmap=plt.cm.Pastel2)
plt.scatter(data_0[:,0], data_0[:,1], color='red')
plt.scatter(data_1[:,0], data_1[:,1], color='blue')
plt.title('Corrected class weights')
plt.show()
print(example_data_train_poly.shape)
def write_answer_5(auc):
with open("preprocessing_lr_answer5.txt", "w") as fout:
fout.write(str(auc))
# place your code here
(X_train_real_zeros,
X_test_real_zeros,
y_train, y_test) = train_test_split(X_real_zeros, y,
test_size=0.3,
random_state=0, stratify=y)
(X_train_cat_oh,
X_test_cat_oh) = train_test_split(X_cat_oh,
test_size=0.3,
random_state=0, stratify=y)
transform = PolynomialFeatures(2)
X_train_poly = transform.fit_transform(X_train_real_zeros)
X_test_poly = transform.transform(X_test_real_zeros)
scaler = StandardScaler()
X_train_strat = np.hstack((scaler.fit_transform(X_train_poly), X_train_cat_oh))
X_test_strat = np.hstack((scaler.transform(X_test_poly), X_test_cat_oh))
# Note the fit_intercept=False parameter
polySearch = GridSearchCV(LogisticRegression(class_weight='balanced', fit_intercept=False), param_grid, cv=cv, n_jobs=-1)
polySearch.fit(X_train_strat, y_train)
auc_p = roc_auc_score(y_test, polySearch.predict(X_test_strat))
write_answer_5(auc_p)
auc_p
def write_answer_6(features):
with open("preprocessing_lr_answer6.txt", "w") as fout:
fout.write(" ".join([str(num) for num in features]))
# place your code here
(X_train_real_zeros,
X_test_real_zeros,
y_train, y_test) = train_test_split(X_real_zeros, y,
test_size=0.3,
random_state=0, stratify=y)
(X_train_cat_oh,
X_test_cat_oh) = train_test_split(X_cat_oh,
test_size=0.3,
random_state=0, stratify=y)
scaler = StandardScaler()
X_train_strat = np.hstack((scaler.fit_transform(X_train_real_zeros), X_train_cat_oh))
X_test_strat = np.hstack((scaler.transform(X_test_real_zeros), X_test_cat_oh))
# clf = LogisticRegression(penalty='l1')
lasso = GridSearchCV(LogisticRegression(penalty='l1', class_weight='balanced'), param_grid, cv=cv, scoring = 'accuracy',n_jobs=-1)#
lasso.fit(X_train_strat, y_train)
# clf.fit(X_train_strat, y_train)
# auc_lasso = roc_auc_score(y_test, lasso.predict(X_test_strat))
clf = lasso.best_estimator_
# len(cl.coef_[0])
zero_coef = np.where(clf.coef_[0] == 0)[0]
# print X_train_real_zeros.shape
zero_coef[:3]
# len(clf.coef_[0])
write_answer_6(zero_coef[:3])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Dataset description
Step2: Extract the target variable Grant.Status from the dataset and denote it by y
Step3: Logistic regression theory
Step4: The dataset clearly contains both numeric and categorical features. Let us get the lists of their names
Step5: It also contains missing values. An obvious solution would be to discard all rows that have at least one missing value. Let us do that
Step6: Clearly we would then throw away almost all of the data, so this approach will not work in this case.
Step7: Transforming the categorical features.
Step8: As you can see, the first three columns encode the nationality and the next two encode the sex. For identical sample elements the rows match completely. The example also shows that encoding the features greatly increases their number but fully preserves the information, including the presence of missing values (a missing value simply becomes one of the binary features in the transformed data).
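The encoding described above can be reproduced by hand. This is a minimal sketch using plain dicts instead of DictVectorizer, on the same toy nationality/sex rows, to show how each (feature, value) pair becomes one binary column:

```python
# Build one-hot columns by hand for the nationality/sex example above.
rows = [
    {'sex': 'male', 'nationality': 'American'},
    {'sex': 'female', 'nationality': 'European'},
    {'sex': 'male', 'nationality': 'Asian'},
    {'sex': 'female', 'nationality': 'European'},
]

# Collect every (feature, value) pair seen in the data; each becomes a column.
columns = sorted({(k, v) for row in rows for k, v in row.items()})

encoded = [[1.0 if row.get(k) == v else 0.0 for (k, v) in columns] for row in rows]
print([f'{k}={v}' for k, v in columns])
for r in encoded:
    print(r)
```

Sorting the columns reproduces DictVectorizer's alphabetical ordering, so identical input rows map to identical binary rows.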
Step9: To build a quality metric from the training result, the original dataset has to be split into training and test sets.
Step10: Class descriptions
Step11: Scaling the real-valued features.
Step12: As the plots show, the features differ greatly from one another in the magnitude of their values (note the ranges of the x and y axes). For plain regression this does not affect the quality of the trained model at all, since features with smaller magnitudes simply get larger weights, but with regularization, which penalizes the model for large weights, the regression usually starts to perform worse.
Step13: Comparing the feature spaces.
Step14: As the plots show, we did not change the properties of the feature space
Step24: Class balancing.
Step25: As you can see, in the second case the classifier finds a separating surface that is closer to the true one, i.e. it overfits less. So always pay attention to how balanced the classes are in the training set.
Step26: Clearly, it is not.
Step27: Stratifying the samples.
Step30: How well do these numbers really reflect the algorithm's quality, given that the test set is just as imbalanced as the training set? We already know that logistic regression is sensitive to the class balance of the training set, which means that here its test results will be understated. The classifier's test metric would make much more sense if the objects were split evenly between the samples
Step31: As you can see, after this procedure the classifier's answer changed only slightly, while the quality improved. Depending on how you originally split the data into training and test sets, the final test metric after a balanced split may either increase or decrease, but it can be trusted much more, since it is built with the classifier's behavior in mind. This approach is a special case of the so-called stratification method.
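A stratified split simply preserves the class proportions in both parts. This is a small stdlib-only sketch of the idea, on illustrative toy labels rather than the grant data:

```python
import random
from collections import Counter

random.seed(0)

labels = [0] * 80 + [1] * 20  # imbalanced toy labels

def stratified_indices(labels, test_fraction=0.3):
    # Split the indices of each class separately, so both parts keep
    # the original class proportions.
    train, test = [], []
    by_class = {}
    for i, label in enumerate(labels):
        by_class.setdefault(label, []).append(i)
    for idx in by_class.values():
        random.shuffle(idx)
        cut = int(len(idx) * test_fraction)
        test.extend(idx[:cut])
        train.extend(idx[cut:])
    return train, test

train, test = stratified_indices(labels)
print(Counter(labels[i] for i in train))  # 80/20 ratio preserved
print(Counter(labels[i] for i in test))   # same ratio in the test part
```

This is what passing `stratify=y` to sklearn's train_test_split does for you.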
Step35: You have now worked through the main stages of data preprocessing for linear classifiers.
Step36: As you can see, this data transformation already makes it possible to build nonlinear separating surfaces that can adapt more finely to the data and capture more complex dependencies. The number of features in the new model
Step37: At the same time, this method makes the model much more prone to overfitting because of the rapid growth in the number of features as the degree $p$ increases. Consider an example with $p=11$
Step38: The number of features in this model
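The printed shapes come from a simple combinatorial fact: with n input features and degree p, PolynomialFeatures produces C(n + p, p) columns, counting the bias term that it includes by default. A quick stdlib check:

```python
from math import comb

def n_poly_features(n_features: int, degree: int) -> int:
    # Number of monomials of total degree <= degree in n_features variables,
    # including the constant term (matching PolynomialFeatures defaults).
    return comb(n_features + degree, degree)

print(n_poly_features(2, 2))   # degree-2 expansion of 2 features -> 6 columns
print(n_poly_features(2, 11))  # degree-11 expansion of 2 features -> 78 columns
```

For the two-feature toy example this grows from 6 columns at p=2 to 78 at p=11, which is why the high-degree model overfits so easily.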
Step40: Task 5. Transforming the real-valued features.
Step41: Lasso regression.
|
8,995
|
<ASSISTANT_TASK:>
Python Code:
# Single-line comment
# Function:
print('Hello World!')
help(print)
3 + 3
# Basic operations:
print('Addition: ', '3 + 3 = ', 3 + 3)
print('Subtraction: ', '3 - 3 = ', 3 - 3)
print('Multiplication: ', '3 * 3 = ', 3 * 3)
print('Division: ', '3 / 3 = ', 3 / 3)
print('\n', '-'*30, '\n')
print('Quotient (integer): ', '3 // 3 = ', 3 // 3)
print('Remainder: ', '3 % 3 = ', 3 % 3)
print('Exponentiation: ', '3 ** 3 = ', 3 ** 3)
print('\n', '-'*30, '\n')
print('''
The evaluation order of the operations is: **; then *, /, //, and %; finally + and -
\n note 1: always from left to right
\n note 2: parentheses can be used to change the precedence
''')
print('Integers (int): -2, -1, 0, 1, 2, 3, 4, 5')
print('Floats (float): -1.25, -1.0, -0.5, 0.0, 0.5, 1.0, 1.25')
print('Strings (str): ', 'Hello World!', 'Spam spam spam', 'spam and eggs')
print('Corresponding functions: int(), float() and str()!')
# String concatenation:
'Jayme ' + "Anchante"
'Jayme' + 42
'Jayme' + str(42)
'Jayme' * 5
'Jayme' * 'Anchante'
'Jayme' * 5.0
spam = 7 # assigning the value 7 to the object 'spam'
spam # 'calling' (evaluating) the object 'spam'
eggs = 3 # assigning the value 3 to the object 'eggs'
spam + eggs # adding 'spam' and 'eggs'
spam = spam + 7 # overwriting the object 'spam' with a new value
spam # 'calling' the object 'spam'
True
False
True == True
True != True
42 == '42'
42 == 42.0
42.0 == 0042.000
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Another possibility is to write the function followed by a question mark
Step2: 4. Data types
Step3: 5. Assigning values to objects
Step4: Variable names have to obey the following
|
8,996
|
<ASSISTANT_TASK:>
Python Code:
# Testing that the library is installed correctly and can be imported
import pandas as pd
# Load the 'datasets/boston.csv' file using pandas
boston_housing_data = pd.read_csv('../datasets/boston.csv')
# Use the head() method to display the first five rows of the dataset
boston_housing_data.head()
# Use the info() method to display some information about the dataset
boston_housing_data.info()
# Use the describe() method to display some statistics of the dataset
boston_housing_data.describe()
datasets = pd.read_csv('../datasets/anscombe.csv')
for i in range(1, 5):
    dataset = datasets[datasets.Source == i]  # use the loop variable, not a hard-coded 1
    print('Dataset {} (X, Y) mean: {}'.format(i, (dataset.x.mean(), dataset.y.mean())))
print('\n')
for i in range(1, 5):
    dataset = datasets[datasets.Source == i]
    print('Dataset {} (X, Y) std deviation: {}'.format(i, (dataset.x.std(), dataset.y.std())))
print('\n')
for i in range(1, 5):
    dataset = datasets[datasets.Source == i]
    print('Dataset {} correlation between X and Y: {}'.format(i, dataset.x.corr(dataset.y)))
# The first time matplotlib is imported, some kind of warning related to
# the system fonts may be shown, depending on your configuration
import matplotlib.pyplot as plt
# This line makes the generated plots appear directly in the notebook
# instead of being opened in a separate window or file.
%matplotlib inline
# Extract the house prices and the average number of rooms into two separate variables
prices = boston_housing_data.medv
rooms = boston_housing_data.rm
# Create a scatterplot of these two features using plt.scatter()
plt.scatter(rooms, prices)
# Specify labels for the X and Y axes
plt.xlabel('Number of rooms')
plt.ylabel('House price')
# Display the plot
plt.show()
# Extract the house prices and the neighborhood pollution index into two separate variables
prices = boston_housing_data.medv
nox = boston_housing_data.nox
# Create a scatterplot of these two features using plt.scatter()
plt.scatter(nox, prices)
# Specify labels for the X and Y axes
plt.xlabel('Nitric oxide concentration')
plt.ylabel('House price')
# Display the plot
plt.show()
# First, extract the predictors (the features that will be used to
# predict the house prices) and the output (the house prices) into
# separate variables.
x = rooms.values.reshape(-1, 1) # Extract the values of the 'rm' column here
y = prices.values.reshape(-1, 1) # Extract the values of the 'medv' column here
print('x: {}'.format(x[0:3, :]))
print('y: {}'.format(y[0:3]))
# Use scikit-learn's train_test_split() function to split the data into two sets.
# http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html
from sklearn.model_selection import train_test_split
RANDOM_STATE = 4321
xtr, xts, ytr, yts = train_test_split(x, y, random_state=RANDOM_STATE) # Call the train_test_split function here
from sklearn.linear_model import LinearRegression
lr = LinearRegression().fit(xtr, ytr) # Train a LinearRegression model here using the training set
lr.predict([[6]])  # predict() expects a 2D array, even for a single sample
# Compute the prices predicted by the trained model
predicted_prices = lr.predict(x)
# Create a scatterplot of these two properties using plt.scatter()
plt.scatter(rooms, prices)
# Create a line plot showing the predicted values in red
plt.plot(rooms, predicted_prices, 'r')
# Create labels for the X and Y axes
plt.xlabel('Number of rooms')
plt.ylabel('House price')
# Display the plot
plt.show()
# Use the test set to evaluate the model's performance
from sklearn.metrics import mean_squared_error
# Compute the model's mean_squared_error here
mean_squared_error(yts, lr.predict(xts))
X = boston_housing_data.drop('medv', axis=1) # Use the drop() method here to discard the 'medv' column and keep the rest.
y = boston_housing_data.medv # Extract the house prices here from the 'medv' column.
X.head()
from sklearn.model_selection import train_test_split
ANOTHER_RANDOM_STATE=1234
Xtr, Xts, ytr, yts = train_test_split(X, y, random_state=ANOTHER_RANDOM_STATE) # Split the dataset into training and test sets
# Use the training set to train a LinearRegression model
lr = LinearRegression().fit(Xtr, ytr)
# Compute the model's mean_squared_error on the test set here
mean_squared_error(yts, lr.predict(Xts))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Neste exercício, usaremos o dataset [Boston Housinh]((http
Step2: Pandas permite a leitura de nossos dados a partir de diferentes formatos. Veja esse link para uma lista de formatos suportados e as respectivas funções usadas para lê-los.
Step3: The head() method prints the first five rows by default. It can optionally take an argument specifying how many rows to show, such as boston.head(n=10).
Step4: The info() method shows several details about the dataset, such as its number of rows, which features are present, the type of each feature, and whether any values are missing.
Step5: The describe() method only shows statistics for numeric features. If a feature contains strings, for example, it will not be able to show information about it.
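This behavior can be checked with a small made-up DataFrame: with mixed dtypes, only the numeric columns appear in the summary.

```python
import pandas as pd

# Made-up DataFrame with one numeric column and one string column.
df = pd.DataFrame({'age': [25, 32, 47], 'name': ['Ana', 'Bia', 'Caio']})
stats = df.describe()  # with mixed dtypes, only numeric columns are summarized
print(list(stats.columns))  # ['age'] ('name' is skipped)
```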
Step6: They all have approximately the same mean, standard deviation, and correlation. How similar should these datasets be?
Step7: Price prediction
Step8: The values.reshape(-1, 1) call is needed in this case because scikit-learn expects the predictors to be in matrix form, that is, a two-dimensional array. Since we are using only one predictor, pandas ends up representing it as a one-dimensional array, so we need to reshape it into a "matrix with a single column". This step is unnecessary when using more than one predictor to train a scikit-learn model, as seen in the next example.
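The reshape described above can be seen in isolation with a minimal NumPy sketch (array values are made up):

```python
import numpy as np

rooms = np.array([2, 3, 4, 5])   # 1-D array, shape (4,)
X = rooms.reshape(-1, 1)         # single-column matrix, shape (4, 1)
print(rooms.shape, X.shape)      # (4,) (4, 1)
```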
Step9: If we try to estimate the model's performance on the same dataset used to train it, we get a biased estimate, since the model was trained to minimize its error rate precisely on the examples in the training set. To properly estimate how well the model will perform in practice, it must be tested on a dataset it has never seen.
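The bias described above can be demonstrated with a minimal sketch on synthetic data: a polynomial flexible enough to interpolate all training points has a training error near zero, while its error on fresh points stays large.

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0.0, 1.0, 10)
y_train = 2.0 * x_train + rng.normal(0.0, 0.5, size=10)  # noisy linear data

# A degree-9 polynomial interpolates all 10 training points exactly.
coeffs = np.polyfit(x_train, y_train, deg=9)

x_test = np.linspace(0.05, 0.95, 10)
y_test = 2.0 * x_test + rng.normal(0.0, 0.5, size=10)

train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
print(train_mse, test_mse)  # training error is (numerically) near zero
```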
Step10: We can now use Scikit-Learn's mean_squared_error function to compute the model's mean total error on the test set.
Step11: The error here will probably be quite high. We will then use all of the dataset's features as predictors to try to predict house prices, and check how much this improves the model's performance.
Step12: By default, the drop() method removes rows rather than columns. To drop columns, an extra argument axis=1 must be passed.
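The rows-versus-columns behavior of drop() can be shown with a tiny made-up DataFrame:

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})
rows_dropped = df.drop(0)            # default axis=0 removes the row with label 0
cols_dropped = df.drop('b', axis=1)  # axis=1 removes the column 'b'
print(rows_dropped.shape, cols_dropped.shape)  # (1, 2) (2, 1)
```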
|
8,997
|
<ASSISTANT_TASK:>
Python Code:
import re
str_pat = re.compile(r"\"(.*)\"")
text1 = 'Computer says "no."'
str_pat.findall(text1)
text2 = 'Computer says "no." Phone says "yes."'
str_pat.findall(text2)
str_pat = re.compile(r"\"(.*?)\"")
str_pat.findall(text2)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In this example, the pattern r'\"(.*)\"' is intended to match text enclosed in double quotes. But the * operator in regular expressions is greedy, so the match finds the longest possible match. As a result, searching text2 in the second example returns a result that is not what we want.
|
8,998
|
<ASSISTANT_TASK:>
Python Code:
import graphviz as gv
class CodingTree:
sNodeCount = 0
def __init__(self):
CodingTree.sNodeCount += 1
self.mID = CodingTree.sNodeCount
def count(self):
"compute the number of characters"
pass
def cost(self):
"compute the number of bits used by this coding tree"
pass
def getID(self):
return self.mID # used only by graphviz
def _make_string(self, Attributes):
# map the function __str__ to all attributes and join them with a comma
name = self.__class__.__name__
return f"{name}({', '.join(map(str, [getattr(self, at) for at in Attributes]))})"
CodingTree._make_string = _make_string
def toDot(self):
dot = gv.Digraph(node_attr={'shape': 'record', 'style': 'rounded'})
nodeDict = {}
self._collectIDs(nodeDict)
for n, t in nodeDict.items():
if isinstance(t, Leaf):
if t.mCharacter == ' ':
dot.node(str(n), label='{ \' \' |' + "{:,}".format(t.mFrequency) + '}')
elif t.mCharacter == '\t':
dot.node(str(n), label='{ \'\\\\t\' |' + "{:,}".format(t.mFrequency) + '}')
elif t.mCharacter == '\n':
dot.node(str(n), label='{ \'\\\\n\' |' + "{:,}".format(t.mFrequency) + '}')
elif t.mCharacter == '\v':
dot.node(str(n), label='{ \'\\\\v\' |' + "{:,}".format(t.mFrequency) + '}')
else:
dot.node(str(n), label='{' + str(t.mCharacter) + '|' + "{:,}".format(t.mFrequency) + '}')
elif isinstance(t, Node):
dot.node(str(n), label="{:,}".format(t.count()))
else:
assert False, f'Unknown node {t}'
for n, t in nodeDict.items():
if isinstance(t, Node):
dot.edge(str(n), str(t.mLeft .getID()), label='0')
dot.edge(str(n), str(t.mRight.getID()), label='1')
return dot
CodingTree.toDot = toDot
def _collectIDs(self, nodeDict):
nodeDict[self.getID()] = self
if isinstance(self, Node):
self.mLeft ._collectIDs(nodeDict)
self.mRight._collectIDs(nodeDict)
CodingTree._collectIDs = _collectIDs
class Leaf(CodingTree):
def __init__(self, c, f):
CodingTree.__init__(self)
self.mCharacter = c
self.mFrequency = f
def count(self):
return self.mFrequency
def cost(self):
return 0
def __str__(self):
return _make_string(self, ['mCharacter', 'mFrequency'])
def __lt__(self, other):
if isinstance(other, Node):
return True
return self.mCharacter < other.mCharacter
class Node(CodingTree):
def __init__(self, l, r):
CodingTree.__init__(self)
self.mLeft = l
self.mRight = r
def count(self):
return self.mLeft.count() + self.mRight.count()
def cost(self):
return self.mLeft.cost() + self.mRight.cost() + self.count()
def __str__(self):
return _make_string(self, ['mLeft', 'mRight'])
def __lt__(self, other):
if isinstance(other, Leaf):
return False
return self.mLeft < other.mLeft
import heapq
H = []
heapq.heappush(H, 7)
heapq.heappush(H, 1)
heapq.heappush(H, 0)
heapq.heappush(H, 6)
H
a = heapq.heappop(H)
print('a = ', a)
H
def coding_tree(M):
H = [] # empty priority queue
for c, f in M:
heapq.heappush(H, (f, Leaf(c, f)))
while len(H) > 1:
a = heapq.heappop(H)
b = heapq.heappop(H)
heapq.heappush(H, (a[0] + b[0], Node(a[1], b[1])))
return H[0][1]
import math
def log2(n):
return math.log(n) / math.log(2)
log2(8)
def demo(M):
K = coding_tree(M)
display(K.toDot())
n = math.ceil(log2(len(M)))
cost_huffman = K.cost()
cost_constant = n * K.count()
savings = (cost_constant - cost_huffman) / cost_constant
print(f'cost of encoding with Huffman coding tree : {"{:,}".format(cost_huffman)} bits')
print(f'cost of encoding with {n} bits : {"{:,}".format(cost_constant)} bits')
print(f'savings: {100 * savings}%')
return savings
demo({ ('a', 990), ('b', 8), ('c', 1), ('d', 1) })
demo({ ('a', 4), ('b', 9), ('c', 16), ('d', 25), ('e', 36), ('f', 49), ('g', 64) })
demo({ ('a', 1), ('b', 2), ('c', 3), ('d', 4), ('e', 5), ('f', 6), ('g', 7), ('h', 8), ('i', 9), ('j', 10) })
demo({ ('a', 1), ('b', 1), ('c', 2), ('d', 3), ('e', 5), ('f', 8), ('g', 13) })
def demo_file(fn):
with open(fn, 'r') as file:
s = file.read() # read file as string s
Frequencies = {}
for c in s:
f = Frequencies.get(c, 0)
if f != 0:
Frequencies[c] += 1
else:
Frequencies[c] = 1
M = { (c, f) for (c, f) in Frequencies.items() }
print(M)
return demo(M)
!type alice.txt
demo_file('alice.txt')
demo_file('moby-dick.txt')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This notebook presents <em style="color
Step2: The function make_string is a helper function that is used to simplify the implementation of __str__.
Step3: The method $t.\texttt{toDot}()$ takes a coding tree $t$ and returns a graph that depicts the tree $t$.
Step4: The method $t.\texttt{\_collectIDs}(d)$ takes a coding tree $t$ and a dictionary $d$ and updates the dictionary so that the following holds
Step5: The class Leaf represents a leaf of the form $\texttt{Leaf}(c, f)$. It maintains two member variables.
Step6: The class Node represents an inner node of the form $\texttt{Node}(l, r)$. It maintains two member variables
Step7: Building a Coding Tree
Step8: The function coding_tree implements Huffman's algorithm for data compression.
Step9: Let us test this with a trivial example.
Step10: The function log2(n) computes $\log_2(n)$.
Step11: The function demo_file(fn) reads the file with name fn and calculates the frequency of all characters occurring in fn. Using these frequencies it computes the Huffman coding tree.
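The frequency bookkeeping inside demo_file can also be written with collections.Counter, which does the same counting in one call (shown here on a short made-up string):

```python
from collections import Counter

# Same frequency counting as in demo_file, on a short made-up string.
s = "abracadabra"
Frequencies = Counter(s)
M = set(Frequencies.items())
print(sorted(M))  # [('a', 5), ('b', 2), ('c', 1), ('d', 1), ('r', 2)]
```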
|
8,999
|
<ASSISTANT_TASK:>
Python Code:
config_params.py -m peakfilter
config_params.py -m peakfilter -p my_parameters.json
from LipidFinder.Configuration.LFParametersGUI import LFParametersGUI
LFParametersGUI(module='amalgamator');
run_peakfilter.py -i tests/XCMS/negative.csv -o results -p tests/XCMS/params_peakfilter_negative.json
run_peakfilter.py -i tests/XCMS/positive.csv -o results -p tests/XCMS/params_peakfilter_positive.json
run_amalgamator.py -neg results/peakfilter_negative_summary.csv \
-pos results/peakfilter_positive_summary.csv \
-p tests/XCMS/params_amalgamator.json -o results
run_mssearch.py -i results/amalgamated.csv -o results \
-p tests/XCMS/params_mssearch.json
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Additionally, if you already have a parameters JSON file, you can load its values instead of LipidFinder's defaults (see example below). Once launched, the process will guide you through a question-answering system to configure each parameter. At the end, the program will ask for the path and file name in which you want to save the new set of parameters
Step2: The second option is through a Jupyter notebook (like this one). The Configuration module includes a graphical user interface (GUI) class to set up each parameter of the selected module interactively based on Jupyter's widgets. The following code shows an example of how to launch the GUI to set Amalgamator's parameters based on default values
Step3: To use an existing parameters JSON file instead of the default values, you need to add the argument src=x, where x is the path to the JSON file, to the LFParametersGUI() call.
Step4: And then the positive one
Step5: By default, PeakFilter will generate the complete filtered file and a summary output CSV file with the relevant information of each remaining frame.
Step6: Duplicates are identified by comparing the negative file with the positive file within a small retention time tolerance and a corrected m/z tolerance (negative m/z + 2H<sup>+</sup>, followed by negative m/z + H<sup>+</sup> + CH3<sup>+</sup> for phosphotidylcholine and sphingomyelins with phosphocholine head group). Any hits are classed as a match.
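The matching rule described above can be sketched as follows. This is a hypothetical illustration, not LipidFinder's actual implementation; the function name is_duplicate, the tolerance defaults, and the mass constants are all assumptions made for the sketch.

```python
# Hypothetical sketch of the duplicate-matching rule; not LipidFinder's code.
# The mass constants below are approximate and assumed for illustration.
PROTON = 1.007276   # mass of H+ (assumed)
CH3 = 15.023475     # mass of a CH3 group (assumed)

def is_duplicate(neg_mz, pos_mz, neg_rt, pos_rt, mz_tol=0.01, rt_tol=0.1):
    """Match a negative-mode frame against a positive-mode frame.

    Retention times must agree within rt_tol, and the positive m/z must
    equal either neg_mz + 2*H+ or neg_mz + H+ + CH3 (the phosphocholine
    head-group case), within mz_tol.
    """
    if abs(neg_rt - pos_rt) > rt_tol:
        return False
    candidates = (neg_mz + 2 * PROTON, neg_mz + PROTON + CH3)
    return any(abs(pos_mz - c) <= mz_tol for c in candidates)
```

Any negative/positive pair for which is_duplicate returns True would be classed as a match.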
|