<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Object Creation
Step2: Creating a DataFrame by passing a numpy array, with a datetime index and labeled columns
Step3: Having specific dtypes
Step4: If you’re using IPython, tab completion for column names (as well as public attributes) is automatically enabled. Here’s a subset of the attributes that will be completed
Step5: Transposing your data
Step6: Sorting by an axis
Step7: Sorting by values
Step8: Selection
Step9: Selecting via [], which slices the rows.
Step10: Selection by Label
Step11: Selecting on a multi-axis by label
Step12: Showing label slicing, both endpoints are included
Step13: Reduction in the dimensions of the returned object
Step14: For getting a scalar value
Step15: For getting fast access to a scalar (equiv to the prior method)
Step16: Selection by Position
Step17: By integer slices, acting similar to numpy/python
Step18: By lists of integer position locations, similar to the numpy/python style
Step19: For slicing rows explicitly
Step20: For slicing columns explicitly
Step21: For getting a value explicitly
Step22: Boolean Indexing
Step23: A where operation for getting.
Step24: Using the isin() method for filtering
Step25: Setting
Step26: Setting values by label
Step27: Setting values by position
Step28: Setting by assigning with a numpy array
Step29: The result of the prior setting operations
Step30: A where operation with setting.
Step31: Missing Data
Step32: To drop any rows that have missing data.
Step33: Filling missing data
Step34: To get the boolean mask where values are nan
Step35: Operations
Step36: Same operation on the other axis
Step37: Operating with objects that have different dimensionality and need alignment. In addition, pandas automatically broadcasts along the specified dimension.
Step38: Apply
Step39: Histogramming
Step40: Series is equipped with a set of string processing methods in the str attribute that make it easy to operate on each element of the array, as in the code snippet below. Note that pattern-matching in str generally uses regular expressions by default (and in some cases always uses them). See more at Vectorized String Methods.
Step41: Merge
Step42: Join
Step43: Append
Step44: Grouping
Step45: Grouping and then applying a function sum to the resulting groups.
Step46: Grouping by multiple columns forms a hierarchical index, to which we then apply the function.
Step47: Reshaping
Step48: The stack() method “compresses” a level in the DataFrame’s columns.
Step49: With a “stacked” DataFrame or Series (having a MultiIndex as the index), the inverse operation of stack() is unstack(), which by default unstacks the last level
Step50: Pivot Tables
Step51: We can produce pivot tables from this data very easily
Step52: Time Series
Step53: Time zone representation
Step54: Convert to another time zone
Step55: Converting between time span representations
Step56: Converting between period and timestamp enables some convenient arithmetic functions to be used. In the following example, we convert a quarterly frequency with year ending in November to 9am of the end of the month following the quarter end
Step57: Categoricals
Step58: Convert the raw grades to a categorical data type.
Step59: Rename the categories to more meaningful names (assigning to Series.cat.categories is inplace!)
Step60: Reorder the categories and simultaneously add the missing categories (methods under Series .cat return a new Series per default).
Step61: Sorting is per order in the categories, not lexical order.
Step62: Grouping by a categorical column also shows the empty categories.
Step63: Plotting
Step64: On DataFrame, plot() is a convenience to plot all of the columns with labels
Step65: Getting Data In/Out
Step66: Reading from a csv file
Step67: HDF5
Step68: Reading from a HDF5 Store
Step69: Excel
Step70: Reading from an excel file
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
s = pd.Series([1,3,5,np.nan,6,8])
s
dates = pd.date_range('20130101', periods=6)
dates
df = pd.DataFrame(np.random.randn(6,4), index=dates, columns=['Ann', 'Bob', 'Charly', 'Don'])
## the original tutorial used columns=list('ABCD')
df
df.Charly+df.Don
df2 = pd.DataFrame({'A': 1.,
                    'B': pd.Timestamp('20130102'),
                    'C': pd.Series(1, index=list(range(4)), dtype='float32'),
                    'D': np.array([3] * 4, dtype='int32'),
                    'E': pd.Categorical(["test", "train", "test", "train"]),
                    'F': 'foo'})
df2
df2.dtypes
df.head()
df.tail(3)
df.index
df.columns
df.values
df.describe()
df2.columns
df2.dtypes
df2.describe()
df.T
df.sort_index(axis=1, ascending=False)
df.sort_values(by='Bob', ascending=True)
df['Ann']
df[1:3]
df['20130102':'20130104']
dates[0]
df.loc[dates[0]]
df.loc[:,['Ann','Bob']]
df.loc['20130102':'20130104',['Ann','Bob']]
df.loc['20130102',['Ann','Bob']]
df.loc[dates[0],'Ann']
df.at[dates[0],'Ann']
df.iloc[3]
df.iloc[3:5,0:2]
df.iloc[[1,2,4],[0,2]]
df.iloc[1:3,:]
df.iloc[:,1:3]
df.iloc[1,1]
# For getting fast access to a scalar (equiv to the prior method)
df.iat[1,1]
flt = (df.Ann >= 0.5) & (df.Ann < 1.5)
df[flt]
df[(df.Ann >= 0.5) & (df.Ann < 1.5)]
df[df > 0]
df2 = df.copy()
df2['E'] = ['one', 'one','two','three','four','three']
df2
df2[df2['E'].isin(['two','four'])]
s1 = pd.Series([1,2,3,4,5,6], index=pd.date_range('20130102', periods=6))
s1
df['F'] = s1
df['G'] = df['Ann']-df['Bob']
df
df.G
df.at[dates[0],'Ann'] = 17.6
df
df.iat[5,2] = 349
df
df.loc[:,'Don'] = np.array([5] * len(df))  # column 'D' was renamed to 'Don' when df was built
df
df2 = df.copy()
df2[df2 > 0] = -df2
df2
df1 = df.reindex(index=dates[0:4], columns=list(df.columns) + ['E'])
df1.loc[dates[0]:dates[1],'E'] = 1
df1
df1.dropna(how='any')
df1
df1.fillna(value=5)
pd.isnull(df1)
df.mean(0)
df.mean(1)
s = pd.Series([1,3,5,np.nan,6,8], index=dates).shift(2)
s
df.sub(s, axis='index')
df
df.apply(np.cumsum)
df
df.max() ##df.apply(max)
df.Ann.max()-df.Ann.min()
df.apply(lambda x: x.max() - x.min())
s = pd.Series(np.random.randint(0, 7, size=10))
s
s.value_counts()
s = pd.Series(['A', 'B', 'C', 'Aaba', 'Baca', np.nan, 'CABA', 'dog', 'cat'])
s.str.lower()
df = pd.DataFrame(np.random.randn(10, 4))
df
# break it into pieces
pieces = [df[:3], df[3:7], df[7:]]
pd.concat(pieces)
left = pd.DataFrame({'key': ['foo', 'boo', 'foo'], 'lval': [1, 2, 3]})
#right = pd.DataFrame({'key': ['boo', 'foo', 'foo'], 'rval': [4, 5, 6]})
right = pd.DataFrame({'key': ['foo', 'foo'], 'rval': [5, 6]})
left
right
pd.merge(left, right, on='key', how='left')
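# Aside (added for illustration): the 'how' argument controls which keys survive.
# how='inner' keeps only keys present in both frames; how='outer' keeps every
# key and fills the gaps with NaN.
pd.merge(left, right, on='key', how='inner')
pd.merge(left, right, on='key', how='outer')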
df = pd.DataFrame(np.random.randn(8, 4), columns=['A','B','C','D'])
df
s = df.iloc[3]
df.append(s, ignore_index=True)
df = pd.DataFrame({'A': ['foo', 'bar', 'foo', 'bar',
                         'foo', 'bar', 'foo', 'foo'],
                   'B': ['one', 'one', 'two', 'three',
                         'two', 'two', 'one', 'three'],
                   'C': np.random.randn(8),
                   'D': np.random.randn(8)})
df
df.groupby('A').sum()
df.groupby(['A','B']).sum()
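# Aside (added for illustration): agg() can apply several reductions in one
# pass, producing a column per (original column, function) pair.
df.groupby(['A','B']).agg(['sum', 'mean'])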
tuples = list(zip(*[['bar', 'bar', 'baz', 'baz',
                     'foo', 'foo', 'qux', 'qux'],
                    ['one', 'two', 'one', 'two',
                     'one', 'two', 'one', 'two']]))
index = pd.MultiIndex.from_tuples(tuples, names=['first', 'second'])
df = pd.DataFrame(np.random.randn(8, 2), index=index, columns=['A', 'B'])
df2 = df[:4]
df2
stacked = df2.stack()
stacked
stacked.unstack()
stacked.unstack(1)
stacked.unstack(0)
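# Sanity check (added): stack() and unstack() are inverses, so a round trip
# reproduces the stacked Series.
stacked.unstack().stack().equals(stacked)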
df = pd.DataFrame({'A': ['one', 'one', 'two', 'three'] * 3,
                   'B': ['A', 'B', 'C'] * 4,
                   'C': ['foo', 'foo', 'foo', 'bar', 'bar', 'bar'] * 2,
                   'D': np.random.randn(12),
                   'E': np.random.randn(12)})
df
pd.pivot_table(df, values='D', index=['A', 'B'], columns=['C'])
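# Variation (added for illustration): pivot_table aggregates with the mean by
# default; aggfunc picks another reduction and margins=True appends 'All' totals.
pd.pivot_table(df, values='D', index=['A', 'B'], columns=['C'],
               aggfunc=np.sum, margins=True)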
rng = pd.date_range('1/1/2012', periods=100, freq='S')
ts = pd.Series(np.random.randint(0, 500, len(rng)), index=rng)
ts.resample('5Min').sum()
rng = pd.date_range('3/6/2012 00:00', periods=5, freq='D')
ts = pd.Series(np.random.randn(len(rng)), rng)
ts
ts_utc = ts.tz_localize('UTC')
ts_utc
ts_utc.tz_convert('US/Eastern')
rng = pd.date_range('1/1/2012', periods=5, freq='M')
ts = pd.Series(np.random.randn(len(rng)), index=rng)
ts
ps = ts.to_period()
ps
ps.to_timestamp()
prng = pd.period_range('1990Q1', '2000Q4', freq='Q-NOV')
ts = pd.Series(np.random.randn(len(prng)), prng)
ts.index = (prng.asfreq('M', 'e') + 1).asfreq('H', 's') + 9
ts.head()
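# Reading the index conversion above step by step (comments added for clarity):
# prng.asfreq('M', 'e') -> last month of each quarter (quarters end in November)
# + 1                   -> the month following the quarter end
# .asfreq('H', 's')     -> 00:00 at the start of that month
# + 9                   -> shift to 09:00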
df = pd.DataFrame({"id":[1,2,3,4,5,6], "raw_grade":['a', 'b', 'b', 'a', 'a', 'e']})
df["grade"] = df["raw_grade"].astype("category")
df["grade"]
Out[124]:
0 a
1 b
2 b
3 a
4 a
5 e
Name: grade, dtype: category
Categories (3, object): [a, b, e]
df["grade"].cat.categories = ["very good", "good", "very bad"]
df["grade"] = df["grade"].cat.set_categories(["very bad", "bad", "medium", "good", "very good"])
df["grade"]
df.sort_values(by="grade")
Out[128]:
id raw_grade grade
5 6 e very bad
1 2 b good
2 3 b good
0 1 a very good
3 4 a very good
4 5 a very good
df.groupby("grade").size()
ts = pd.Series(np.random.randn(1000), index=pd.date_range('1/1/2000', periods=1000))
ts = ts.cumsum()
%matplotlib inline
ts.plot()
df = pd.DataFrame(np.random.randn(1000, 4), index=ts.index,
                  columns=['A', 'B', 'C', 'D'])
df = df.cumsum()
%matplotlib inline
plt.figure(); df.plot(); plt.legend(loc='best')
df.to_csv('foo.csv')
pd.read_csv('foo.csv')
## df.to_hdf('foo.h5','df')
## pd.read_hdf('foo.h5','df')
df.to_excel('foo.xlsx', sheet_name='Sheet1')
pd.read_excel('foo.xlsx', 'Sheet1', index_col=None, na_values=['NA'])
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Below are three options for bandpasses.
<ASSISTANT_TASK:>
Python Code:
# Note: this cell assumes `data` is a gwpy TimeSeries loaded earlier and that
# `fir_filter` is a helper (defined elsewhere) that applies an FIR filter to it.
import scipy.signal as sig

srate = data.sample_rate.value   # moved up: srate/nyquist are used just below
nyquist = srate/2.
data_dt = 1.e20*data.astype('float64').detrend()
filt = sig.firwin(int(8*srate)-1, 9./nyquist, pass_zero=False, window='hann')
data_hp = fir_filter(data_dt, filt)
freqs = [52, 59.8, 60.2, 64, 112, 124, 171, 179.5, 180.5, 183, 230, 242]
#freqs = [110, 124, 171, 179.5, 180.5, 183, 230, 242]
#freqs = [480, 530, 980, 1040, 1460, 1530]
filt = sig.firwin(int(32*srate), [ff/nyquist for ff in freqs], pass_zero=False, window='hann')
data_bp = fir_filter(data_hp, filt)
from scipy.io import wavfile
output=data_bp.value[int(16*srate):int(46*srate)]
output=output/max(abs(output))
wavfile.write('fir_bp_caduceus.wav', rate=int(srate), data=output)  # wavfile needs an integer rate
p1=data_bp.asd(16,12).plot()
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set parameters
Step2: Factor to downsample the temporal dimension of the TFR computed by
Step3: Compute statistic
Step4: View time-frequency plots
<ASSISTANT_TASK:>
Python Code:
# Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>
#
# License: BSD-3-Clause
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.time_frequency import tfr_morlet
from mne.stats import permutation_cluster_test
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
meg_path = data_path / 'MEG' / 'sample'
raw_fname = meg_path / 'sample_audvis_raw.fif'
event_fname = meg_path / 'sample_audvis_raw-eve.fif'
tmin, tmax = -0.2, 0.5
# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)
include = []
raw.info['bads'] += ['MEG 2443', 'EEG 053'] # bads + 2 more
# picks MEG gradiometers
picks = mne.pick_types(raw.info, meg='grad', eeg=False, eog=True,
stim=False, include=include, exclude='bads')
ch_name = 'MEG 1332' # restrict example to one channel
# Load condition 1
reject = dict(grad=4000e-13, eog=150e-6)
event_id = 1
epochs_condition_1 = mne.Epochs(raw, events, event_id, tmin, tmax,
picks=picks, baseline=(None, 0),
reject=reject, preload=True)
epochs_condition_1.pick_channels([ch_name])
# Load condition 2
event_id = 2
epochs_condition_2 = mne.Epochs(raw, events, event_id, tmin, tmax,
picks=picks, baseline=(None, 0),
reject=reject, preload=True)
epochs_condition_2.pick_channels([ch_name])
decim = 2
freqs = np.arange(7, 30, 3) # define frequencies of interest
n_cycles = 1.5
tfr_epochs_1 = tfr_morlet(epochs_condition_1, freqs,
n_cycles=n_cycles, decim=decim,
return_itc=False, average=False)
tfr_epochs_2 = tfr_morlet(epochs_condition_2, freqs,
n_cycles=n_cycles, decim=decim,
return_itc=False, average=False)
tfr_epochs_1.apply_baseline(mode='ratio', baseline=(None, 0))
tfr_epochs_2.apply_baseline(mode='ratio', baseline=(None, 0))
epochs_power_1 = tfr_epochs_1.data[:, 0, :, :] # only 1 channel as 3D matrix
epochs_power_2 = tfr_epochs_2.data[:, 0, :, :] # only 1 channel as 3D matrix
threshold = 6.0
F_obs, clusters, cluster_p_values, H0 = \
permutation_cluster_test([epochs_power_1, epochs_power_2], out_type='mask',
n_permutations=100, threshold=threshold, tail=0)
times = 1e3 * epochs_condition_1.times # change unit to ms
fig, (ax, ax2) = plt.subplots(2, 1, figsize=(6, 4))
fig.subplots_adjust(0.12, 0.08, 0.96, 0.94, 0.2, 0.43)
# Compute the difference in evoked to determine which was greater since
# we used a 1-way ANOVA which tested for a difference in population means
evoked_power_1 = epochs_power_1.mean(axis=0)
evoked_power_2 = epochs_power_2.mean(axis=0)
evoked_power_contrast = evoked_power_1 - evoked_power_2
signs = np.sign(evoked_power_contrast)
# Create new stats image with only significant clusters
F_obs_plot = np.nan * np.ones_like(F_obs)
for c, p_val in zip(clusters, cluster_p_values):
if p_val <= 0.05:
F_obs_plot[c] = F_obs[c] * signs[c]
ax.imshow(F_obs,
extent=[times[0], times[-1], freqs[0], freqs[-1]],
aspect='auto', origin='lower', cmap='gray')
max_F = np.nanmax(abs(F_obs_plot))
ax.imshow(F_obs_plot,
extent=[times[0], times[-1], freqs[0], freqs[-1]],
aspect='auto', origin='lower', cmap='RdBu_r',
vmin=-max_F, vmax=max_F)
ax.set_xlabel('Time (ms)')
ax.set_ylabel('Frequency (Hz)')
ax.set_title(f'Induced power ({ch_name})')
# plot evoked
evoked_condition_1 = epochs_condition_1.average()
evoked_condition_2 = epochs_condition_2.average()
evoked_contrast = mne.combine_evoked([evoked_condition_1, evoked_condition_2],
weights=[1, -1])
evoked_contrast.plot(axes=ax2, time_unit='s')
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create random values for x in interval [0,1)
Step2: Compute y
Step3: Add random Gaussian noise to y
Step4: Put data into an SFrame to manipulate later
Step5: Create a function to plot the data, since we'll do it many times
Step6: Define some useful polynomial regression functions
Step7: Define a function to fit a polynomial linear regression model of degree "deg" to the data in "data"
Step8: Define function to plot data and predictions made, since we are going to use it many times.
Step9: Create a function that prints the polynomial coefficients in a pretty way
Step10: Fit a degree-2 polynomial
Step11: Inspect learned parameters
Step12: Form and plot our predictions along a grid of x values
Step13: Fit a degree-4 polynomial
Step14: Fit a degree-16 polynomial
Step15: Woah!!!! Those coefficients are crazy! On the order of 10^6.
Step16: Above
Step17: Perform a ridge fit of a degree-16 polynomial using a very small penalty strength
Step18: Perform a ridge fit of a degree-16 polynomial using a "good" penalty strength
Step19: Run LOO cross validation for "num" values of lambda, on a log scale
Step20: Plot results of estimating LOO for each value of lambda
Step21: Find the value of lambda, $\lambda_{\mathrm{CV}}$, that minimizes the LOO cross validation error, and plot resulting fit
Step22: Perform a ridge fit of a degree-16 polynomial using a very large penalty strength
Step23: Let's look at fits for a sequence of increasing lambda values
Step24: Lasso Regression
Step25: Explore the lasso solution as a function of a few different penalty strengths
<ASSISTANT_TASK:>
Python Code:
import graphlab
import math
import random
import numpy
from matplotlib import pyplot as plt
%matplotlib inline
random.seed(1)
n = 30
x = graphlab.SArray([random.random() for i in range(n)]).sort()
y = x.apply(lambda x: math.sin(4*x))
e = graphlab.SArray([random.gauss(0,1.0/3.0) for i in range(n)])
y = y + e
data = graphlab.SFrame({'X1':x,'Y':y})
data
def plot_data(data):
plt.plot(data['X1'],data['Y'],'k.')
plt.xlabel('x')
plt.ylabel('y')
plot_data(data)
def polynomial_features(data, deg):
data_copy=data.copy()
for i in range(1,deg):
data_copy['X'+str(i+1)]=data_copy['X'+str(i)]*data_copy['X1']
return data_copy
def polynomial_regression(data, deg):
model = graphlab.linear_regression.create(polynomial_features(data,deg),
target='Y', l2_penalty=0.,l1_penalty=0.,
validation_set=None,verbose=False)
return model
def plot_poly_predictions(data, model):
plot_data(data)
# Get the degree of the polynomial
deg = len(model.coefficients['value'])-1
# Create 200 points in the x axis and compute the predicted value for each point
xs = graphlab.SFrame({'X1':[i/200.0 for i in range(200)]})
ys = model.predict(polynomial_features(xs,deg))
# plot predictions
plt.plot(xs['X1'], ys, 'g-', label='degree ' + str(deg) + ' fit')
plt.legend(loc='upper left')
plt.axis([0,1,-1.5,2])
def print_coefficients(model):
# Get the degree of the polynomial
deg = len(model.coefficients['value'])-1
# Get learned parameters as a list
w = list(model.coefficients['value'])
# Numpy has a nifty function to print out polynomials in a pretty way
# (We'll use it, but it needs the parameters in the reverse order)
print 'Learned polynomial for degree ' + str(deg) + ':'
w.reverse()
print numpy.poly1d(w)
model = polynomial_regression(data, deg=2)
print_coefficients(model)
plot_poly_predictions(data,model)
model = polynomial_regression(data, deg=4)
print_coefficients(model)
plot_poly_predictions(data,model)
model = polynomial_regression(data, deg=16)
print_coefficients(model)
plot_poly_predictions(data,model)
def polynomial_ridge_regression(data, deg, l2_penalty):
model = graphlab.linear_regression.create(polynomial_features(data,deg),
target='Y', l2_penalty=l2_penalty,
validation_set=None,verbose=False)
return model
model = polynomial_ridge_regression(data, deg=16, l2_penalty=1e-25)
print_coefficients(model)
plot_poly_predictions(data,model)
# LOO cross validation -- return the average MSE
def loo(data, deg, l2_penalty_values):
# Create polynomial features
    data = polynomial_features(data, deg)  # keep the result -- this is what we train on
# Create as many folds for cross validatation as number of data points
num_folds = len(data)
folds = graphlab.cross_validation.KFold(data,num_folds)
# for each value of l2_penalty, fit a model for each fold and compute average MSE
l2_penalty_mse = []
min_mse = None
best_l2_penalty = None
for l2_penalty in l2_penalty_values:
next_mse = 0.0
for train_set, validation_set in folds:
# train model
model = graphlab.linear_regression.create(train_set,target='Y',
l2_penalty=l2_penalty,
validation_set=None,verbose=False)
# predict on validation set
y_test_predicted = model.predict(validation_set)
# compute squared error
next_mse += ((y_test_predicted-validation_set['Y'])**2).sum()
# save squared error in list of MSE for each l2_penalty
next_mse = next_mse/num_folds
l2_penalty_mse.append(next_mse)
if min_mse is None or next_mse < min_mse:
min_mse = next_mse
best_l2_penalty = l2_penalty
return l2_penalty_mse,best_l2_penalty
l2_penalty_values = numpy.logspace(-4, 10, num=10)
l2_penalty_mse,best_l2_penalty = loo(data, 16, l2_penalty_values)
plt.plot(l2_penalty_values,l2_penalty_mse,'k-')
plt.xlabel('$\lambda$ (L2 penalty)')
plt.ylabel('LOO cross validation error')
plt.xscale('log')
plt.yscale('log')
best_l2_penalty
model = polynomial_ridge_regression(data, deg=16, l2_penalty=best_l2_penalty)
print_coefficients(model)
plot_poly_predictions(data,model)
model = polynomial_ridge_regression(data, deg=16, l2_penalty=100)
print_coefficients(model)
plot_poly_predictions(data,model)
for l2_penalty in [1e-25, 1e-20, 1e-8, 1e-3, 1e2]:
model = polynomial_ridge_regression(data, deg=16, l2_penalty=l2_penalty)
print_coefficients(model)
print '\n'
plt.figure()
plot_poly_predictions(data,model)
def polynomial_lasso_regression(data, deg, l1_penalty):
model = graphlab.linear_regression.create(polynomial_features(data,deg),
target='Y', l2_penalty=0.,
l1_penalty=l1_penalty,
validation_set=None,
solver='fista', verbose=False,
max_iterations=2000, convergence_threshold=1e-10)
return model
for l1_penalty in [1e-10, 1e-2, 1e-01, 1, 1e1]:
model = polynomial_lasso_regression(data, deg=16, l1_penalty=l1_penalty)
print 'l1_penalty = %e' % l1_penalty
print 'number of nonzeros = %d' % (model.coefficients['value']).nnz()
print_coefficients(model)
print '\n'
plt.figure()
plot_poly_predictions(data,model)
plt.title('LASSO, lambda = %.2e, # nonzeros = %d' % (l1_penalty, (model.coefficients['value']).nnz()))
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: How do we test "contains"
Step2: Without 'in', how do we do this?
Step3: Is this fast? What does fast mean?
Step4: In the worst case, we have to check all words.
Step5: What if each comparison took 1 second?
Step6: This is less than a MB of text! What if we wanted to search through every book in the library of congress?
Step7: We can still use our function to check contains, because python is forgiving that way.
Step8: Use case
Step9: If we want to say the integer 2 is in our word set, we set words_map[2] = True
Step10: Starting from our empty set, let's load our wordslist in
Step11: We did that look up only checking one entry!
Step12: Summary
Step13: The modulus
Step14: If we take $N \bmod k$, the result will always be between $0$ and $k - 1$
Step15: Populate our word_map
Step16: This is called a collision. In a basic hashing bitmap, we can only say with certainty whether a number is NOT in the map
Step17: How can we get yes/no?
Step18: How do we add a number to words_map?
Step19: Query, is number in word_map?
Step20: We just made a hash table!
Step21: We set k equal to the length of our array. We can use any size of array by changing the mod k.
Step22: Back to strings
Step23: Hash table Load Factor
<ASSISTANT_TASK:>
Python Code:
def print_lines(filename, num_lines):
    count = 0
    with open(filename, 'r') as f:
        for line in f:
            for word in line.split():
                print word
                count += 1
                if count >= num_lines:
                    return  # a bare break here would only exit the inner loop
print_lines('leaves-of-grass.txt', 4)
import re
with open('leaves-of-grass.txt','r') as f:
words = [word.lower() for line in f for word in re.findall(r"[\w']+|[.,!?;]", line)]
print "number of words =", len(words)
print words[0]
print words[12305]
print words[148185]
print words[148186]
print 'leaves' in words
print 'leaVes' in words
print 'captain' in words
print 'internet' in words
def contains(wordlist, word):
if len(wordlist) == 0:
# Wordlist is empty
return False
for i in xrange(len(wordlist)):
element = wordlist[i]
if word == element:
return True
# No match in wordlist
return False
print contains(words, 'captain')
print contains(words, 'internet')
def contains(wordlist, word):
if len(wordlist) == 0:
# Wordlist is empty
print "Checked 1 word(s)"
return False
for i in xrange(len(wordlist)):
element = wordlist[i]
if word == element:
print "Checked", i+1, "word(s)"
return True
# No match in wordlist
print "Checked", i+1, "word(s)"
return False
print contains(words, 'leaves')
print contains(words, 'captain')
print contains(words, 'internet')
print "Worst case is %.1f days to query 'words'" % (len(words)/60./60/24)
import random
print random.randint(0, 10**6)
print random.randint(0, 10**6)
print random.randint(0, 10**6)
print random.randint(0, 10**6)
n = len(words)
print n
words = [random.randint(0, 10**6 - 1) for i in range(n)]
words[0:10]
contains(words, 1)
contains(words, 9999)
contains(words, 100000)
contains(words, 413351)
# Start with the empty set of ints from 0 to 999,999
words_map = [False for i in range(1000000)]
words_map[0:10]
words_map[2] = True
words_map[0:10]
words_map = [False for i in range(1000000)]
for i in words:
words_map[i] = True
contains(words, 219943)
words_map[219943]
def contains_map(word_bitmap, word):
# Caution, only works on integers up to len(word_bitmap)!
print "Checked 1 word(s)"
return word_bitmap[word]
contains(words, 219943)
contains_map(words_map, 219943)
contains(words, 99149)
contains_map(words_map, 129343)
for i in range(2,10):
print "Word of length %d has %d possible combinations" % (i, 26**i)
4/2
4%2
5/2
5%2
2000/1000
2999/1000
2999%1000
3000/1000
k = 100000
words_map = [False for i in range(k)]
words_map
len(words_map)
for i in words:
words_map[i%k] = True
words_map
words_map[5]
5 % k
100005 % k
def contains_modmap(word_modmap, k, word):
print "Checked 1 word(s)"
if word_modmap[word % k]:
return "Maybe"
else:
return False
contains_modmap(words_map, 100000, 219943)
k = 100000
words_map = [list() for i in range(k)]
words_map
number = 3
word_list = words_map[3 % k]
word_list.append(3)
words_map
number = 100003
word_list = words_map[number % k]
word_list.append(number)
words_map
def contains_modmap(word_modmap, k, word):
word_list = word_modmap[word % k]
print contains(word_list, word)
contains_modmap(words_map, 100000, 100003)
words_map[3 % 100000]
def our_hash(n):
k = 100000
return n % k
hash(0)
hash(1)
hash("Ryan")
hash("Ryan") % 10
# Load the book in again
with open('leaves-of-grass.txt','r') as f:
words = [word.lower() for line in f for word in re.findall(r"[\w']+|[.,!?;]", line)]
k = 100000
words_hashtable = [list() for i in range(k)]
for word in words:
word_list = words_hashtable[hash(word) % k]
word_list.append(word)
words_hashtable
def contains_hashtable(words_hashtable, k, word):
word_list = words_hashtable[hash(word) % k]
print contains(word_list, word)
print contains(words, 'captain')
contains_hashtable(words_hashtable, 100000, 'lincoln')
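# Aside (added): the load factor is the average number of entries per bucket.
# Keeping it low is what keeps the linear scan inside each bucket short.
num_entries = sum(len(bucket) for bucket in words_hashtable)
print "load factor =", float(num_entries) / len(words_hashtable)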
print "leaves" < "zebra"
print "leaves" < "abc"
words.sort()
print words[0]
print words[30000]
print words[90000]
print words[148186]
book.__contains__("captain")
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Install the latest GA version of google-cloud-storage library as well.
Step2: Install the latest GA version of google-cloud-pipeline-components library as well.
Step3: Restart the kernel
Step4: Check the versions of the packages you installed. The KFP SDK version should be >=1.6.
Step5: Before you begin
Step6: Region
Step7: Timestamp
Step8: Authenticate your Google Cloud account
Step9: Create a Cloud Storage bucket
Step10: Only if your bucket doesn't already exist
Step11: Finally, validate access to your Cloud Storage bucket by examining its contents
Step12: Service Account
Step13: Set service account access for Vertex AI Pipelines
Step14: Set up variables
Step15: Vertex AI Pipelines constants
Step16: Additional imports.
Step17: Initialize Vertex AI SDK for Python
Step18: Define custom model pipeline that uses components from google_cloud_pipeline_components
Step19: Compile the pipeline
Step20: Run the pipeline
Step21: Click on the generated link to see your run in the Cloud Console.
<ASSISTANT_TASK:>
Python Code:
import os
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
! pip3 install -U google-cloud-storage $USER_FLAG
! pip3 install $USER_FLAG kfp google-cloud-pipeline-components --upgrade
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
! python3 -c "import kfp; print('KFP SDK version: {}'.format(kfp.__version__))"
! python3 -c "import google_cloud_pipeline_components; print('google_cloud_pipeline_components version: {}'.format(google_cloud_pipeline_components.__version__))"
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
REGION = "us-central1" # @param {type: "string"}
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
! gsutil mb -l $REGION $BUCKET_NAME
! gsutil ls -al $BUCKET_NAME
SERVICE_ACCOUNT = "[your-service-account]" # @param {type:"string"}
if (
SERVICE_ACCOUNT == ""
or SERVICE_ACCOUNT is None
or SERVICE_ACCOUNT == "[your-service-account]"
):
# Get your GCP project id from gcloud
shell_output = !gcloud auth list 2>/dev/null
SERVICE_ACCOUNT = shell_output[2].strip()
print("Service Account:", SERVICE_ACCOUNT)
! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectCreator $BUCKET_NAME
! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectViewer $BUCKET_NAME
import google.cloud.aiplatform as aip
PIPELINE_ROOT = "{}/pipeline_root/bikes_weather".format(BUCKET_NAME)
import kfp
from kfp.v2.dsl import component
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
hp_dict: str = '{"num_hidden_layers": 3, "hidden_size": 32, "learning_rate": 0.01, "epochs": 1, "steps_per_epoch": -1}'
data_dir: str = "gs://aju-dev-demos-codelabs/bikes_weather/"
TRAINER_ARGS = ["--data-dir", data_dir, "--hptune-dict", hp_dict]
# create working dir to pass to job spec
WORKING_DIR = f"{PIPELINE_ROOT}/{TIMESTAMP}"
MODEL_DISPLAY_NAME = f"train_deploy{TIMESTAMP}"
print(TRAINER_ARGS, WORKING_DIR, MODEL_DISPLAY_NAME)
@kfp.dsl.pipeline(name="train-endpoint-deploy" + TIMESTAMP)
def pipeline(
project: str = PROJECT_ID,
model_display_name: str = MODEL_DISPLAY_NAME,
serving_container_image_uri: str = "us-docker.pkg.dev/cloud-aiplatform/prediction/tf2-cpu.2-3:latest",
):
from google_cloud_pipeline_components.types import artifact_types
from google_cloud_pipeline_components.v1.custom_job import \
CustomTrainingJobOp
from google_cloud_pipeline_components.v1.endpoint import (EndpointCreateOp,
ModelDeployOp)
from google_cloud_pipeline_components.v1.model import ModelUploadOp
from kfp.v2.components import importer_node
custom_job_task = CustomTrainingJobOp(
project=project,
display_name="model-training",
worker_pool_specs=[
{
"containerSpec": {
"args": TRAINER_ARGS,
"env": [{"name": "AIP_MODEL_DIR", "value": WORKING_DIR}],
"imageUri": "gcr.io/google-samples/bw-cc-train:latest",
},
"replicaCount": "1",
"machineSpec": {
"machineType": "n1-standard-16",
"accelerator_type": aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,
"accelerator_count": 2,
},
}
],
)
import_unmanaged_model_task = importer_node.importer(
artifact_uri=WORKING_DIR,
artifact_class=artifact_types.UnmanagedContainerModel,
metadata={
"containerSpec": {
"imageUri": "us-docker.pkg.dev/cloud-aiplatform/prediction/tf2-cpu.2-3:latest",
},
},
).after(custom_job_task)
model_upload_op = ModelUploadOp(
project=project,
display_name=model_display_name,
unmanaged_container_model=import_unmanaged_model_task.outputs["artifact"],
)
model_upload_op.after(import_unmanaged_model_task)
endpoint_create_op = EndpointCreateOp(
project=project,
display_name="pipelines-created-endpoint",
)
ModelDeployOp(
endpoint=endpoint_create_op.outputs["endpoint"],
model=model_upload_op.outputs["model"],
deployed_model_display_name=model_display_name,
dedicated_resources_machine_type="n1-standard-16",
dedicated_resources_min_replica_count=1,
dedicated_resources_max_replica_count=1,
)
from kfp.v2 import compiler # noqa: F811
compiler.Compiler().compile(
pipeline_func=pipeline,
package_path="tabular regression_pipeline.json".replace(" ", "_"),
)
DISPLAY_NAME = "bikes_weather_" + TIMESTAMP
job = aip.PipelineJob(
display_name=DISPLAY_NAME,
template_path="tabular regression_pipeline.json".replace(" ", "_"),
pipeline_root=PIPELINE_ROOT,
enable_caching=False,
)
job.run()
! rm tabular_regression_pipeline.json
delete_dataset = True
delete_pipeline = True
delete_model = True
delete_endpoint = True
delete_batchjob = True
delete_customjob = True
delete_hptjob = True
delete_bucket = True
try:
if delete_model and "DISPLAY_NAME" in globals():
models = aip.Model.list(
filter=f"display_name={DISPLAY_NAME}", order_by="create_time"
)
model = models[0]
aip.Model.delete(model)
print("Deleted model:", model)
except Exception as e:
print(e)
try:
if delete_endpoint and "DISPLAY_NAME" in globals():
endpoints = aip.Endpoint.list(
filter=f"display_name={DISPLAY_NAME}_endpoint", order_by="create_time"
)
endpoint = endpoints[0]
endpoint.undeploy_all()
aip.Endpoint.delete(endpoint.resource_name)
print("Deleted endpoint:", endpoint)
except Exception as e:
print(e)
if delete_dataset and "DISPLAY_NAME" in globals():
if "tabular" == "tabular":
try:
datasets = aip.TabularDataset.list(
filter=f"display_name={DISPLAY_NAME}", order_by="create_time"
)
dataset = datasets[0]
aip.TabularDataset.delete(dataset.resource_name)
print("Deleted dataset:", dataset)
except Exception as e:
print(e)
if "tabular" == "image":
try:
datasets = aip.ImageDataset.list(
filter=f"display_name={DISPLAY_NAME}", order_by="create_time"
)
dataset = datasets[0]
aip.ImageDataset.delete(dataset.resource_name)
print("Deleted dataset:", dataset)
except Exception as e:
print(e)
if "tabular" == "text":
try:
datasets = aip.TextDataset.list(
filter=f"display_name={DISPLAY_NAME}", order_by="create_time"
)
dataset = datasets[0]
aip.TextDataset.delete(dataset.resource_name)
print("Deleted dataset:", dataset)
except Exception as e:
print(e)
if "tabular" == "video":
try:
datasets = aip.VideoDataset.list(
filter=f"display_name={DISPLAY_NAME}", order_by="create_time"
)
dataset = datasets[0]
aip.VideoDataset.delete(dataset.resource_name)
print("Deleted dataset:", dataset)
except Exception as e:
print(e)
try:
if delete_pipeline and "DISPLAY_NAME" in globals():
pipelines = aip.PipelineJob.list(
filter=f"display_name={DISPLAY_NAME}", order_by="create_time"
)
pipeline = pipelines[0]
aip.PipelineJob.delete(pipeline.resource_name)
print("Deleted pipeline:", pipeline)
except Exception as e:
print(e)
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: These two lines of code will download and read in the handwritten digits data automatically.
Step2: We're going to look at only 100 examples at a time.
Step3: Here is the first example of data. It's a representation of a picture as a bunch of numbers.
Step4: We use the convenient InteractiveSession for checking intermediate results along the way. You can then call Tensor.eval() and Operation.run() without specifying a session explicitly.
Step5: We start building the computation graph by creating placeholders for the input images(X) and target output labels(t).
Step6: Below is an overview of the model we will build. It starts with a convolutional layer, passes the result through a ReLU, then max pooling, an affine layer, another ReLU, a second affine layer, and finally a softmax. Keep this architecture in mind while you're following the notebook.
Step7: To compute convolution easily, we do a simple trick called flattening. After flattening, input data will be transformed into a 2D matrix, which allows for matrix multiplication with a filter (which is also flattened into 2D).
Step8: ReLU
Step9: Max pooling
Step10: Affine layer 1
Step11: The above result shows the representation of the first example as a 100-dimensional vector in the hidden layer.
Step12: This is after applying ReLU to the above representation. You can see that we set all the negative numbers to 0.
Step13: Softmax
Step14: We got somewhat evenly distributed probabilities over 10 digits. This is as expected because we haven't trained our model yet.
Step15: We got an accuracy of 98%. Awesome!
<ASSISTANT_TASK:>
Python Code:
# GPUs or CPU
import tensorflow as tf
# Check TensorFlow Version
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/home/arasdar/datasets/MNIST_data/", one_hot=True, reshape=False)
batch_size = 100
example_X, example_ys = mnist.train.next_batch(batch_size)
example_X[0].shape
session = tf.InteractiveSession()
X = tf.placeholder('float', [batch_size, 28, 28, 1])
t = tf.placeholder('float', [batch_size, 10])
filter_h, filter_w, filter_c, filter_n = 5, 5, 1, 30
W1 = tf.Variable(tf.random_normal([filter_h, filter_w, filter_c, filter_n], stddev=0.01))
b1 = tf.Variable(tf.zeros([filter_n]))
def convolution(X, W, b, padding, stride):
n, h, w, c = map(lambda d: d.value, X.get_shape())
filter_h, filter_w, filter_c, filter_n = [d.value for d in W.get_shape()]
out_h = (h + 2*padding - filter_h)//stride + 1
out_w = (w + 2*padding - filter_w)//stride + 1
X_flat = flatten(X, filter_h, filter_w, filter_c, out_h, out_w, stride, padding)
W_flat = tf.reshape(W, [filter_h*filter_w*filter_c, filter_n])
z = tf.matmul(X_flat, W_flat) + b # b: 1 X filter_n
return tf.transpose(tf.reshape(z, [out_h, out_w, n, filter_n]), [2, 0, 1, 3])
def flatten(X, window_h, window_w, window_c, out_h, out_w, stride=1, padding=0):
X_padded = tf.pad(X, [[0,0], [padding, padding], [padding, padding], [0,0]])
windows = []
for y in range(out_h):
for x in range(out_w):
window = tf.slice(X_padded, [0, y*stride, x*stride, 0], [-1, window_h, window_w, -1])
windows.append(window)
stacked = tf.stack(windows) # shape : [out_h, out_w, n, filter_h, filter_w, c]
return tf.reshape(stacked, [-1, window_c*window_w*window_h])
print(X.shape, X.dtype, W1.shape, W1.dtype, b1.shape, b1.dtype)
conv_layer = convolution(X, W1, b1, padding=2, stride=1)
conv_layer
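# Shape check (added for illustration): with a 5x5 filter, padding=2 and
# stride=1, the 28x28 spatial size is preserved and filter_n=30 maps come out.
print(conv_layer.get_shape())  # expected: (100, 28, 28, 30)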
def relu(X):
return tf.maximum(X, tf.zeros_like(X))
conv_activation_layer = relu(conv_layer)
conv_activation_layer
def max_pool(X, pool_h, pool_w, padding, stride):
n, h, w, c = [d.value for d in X.get_shape()]
out_h = (h + 2*padding - pool_h)//stride + 1
out_w = (w + 2*padding - pool_w)//stride + 1
X_flat = flatten(X, pool_h, pool_w, c, out_h, out_w, stride, padding)
pool = tf.reduce_max(tf.reshape(X_flat, [out_h, out_w, n, pool_h*pool_w, c]), axis=3)
return tf.transpose(pool, [2, 0, 1, 3])
pooling_layer = max_pool(conv_activation_layer, pool_h=2, pool_w=2, padding=0, stride=2)
pooling_layer
batch_size, pool_output_h, pool_output_w, filter_n = [d.value for d in pooling_layer.get_shape()]
# number of nodes in the hidden layer
hidden_size = 100
W2 = tf.Variable(tf.random_normal([pool_output_h*pool_output_w*filter_n, hidden_size], stddev=0.01))
b2 = tf.Variable(tf.zeros([hidden_size]))
def affine(X, W, b):
n = X.get_shape()[0].value # number of samples
X_flat = tf.reshape(X, [n, -1])
return tf.matmul(X_flat, W) + b
affine_layer1 = affine(pooling_layer, W2, b2)
affine_layer1
init = tf.global_variables_initializer()
init.run()
affine_layer1.eval({X:example_X, t:example_ys})[0]
affine_activation_layer1 = relu(affine_layer1)
affine_activation_layer1
affine_activation_layer1.eval({X:example_X, t:example_ys})[0]
output_size = 10
W3 = tf.Variable(tf.random_normal([hidden_size, output_size], stddev=0.01))
b3 = tf.Variable(tf.zeros([output_size]))
W3, b3
affine_layer2 = affine(affine_activation_layer1, W3, b3)
# because you have new variables, you need to initialize them.
init = tf.global_variables_initializer()
init.run()
affine_layer2.eval({X:example_X, t:example_ys})[0]
def softmax(X):
X_centered = X - tf.reduce_max(X) # to avoid overflow
X_exp = tf.exp(X_centered)
exp_sum = tf.reduce_sum(X_exp, axis=1)
return tf.transpose(tf.transpose(X_exp) / exp_sum)
softmax_layer = softmax(affine_layer2)
softmax_layer
softmax_layer.eval({X:example_X, t:example_ys})[0]
def cross_entropy_error(y, t):
return -tf.reduce_mean(tf.log(tf.reduce_sum(y * t, axis=1)))
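# Note (added): if the network ever assigns probability 0 to the true class,
# tf.log returns -inf. A common guard is a small epsilon inside the log -- a sketch:
def cross_entropy_error_stable(y, t, eps=1e-8):
    return -tf.reduce_mean(tf.log(tf.reduce_sum(y * t, axis=1) + eps))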
loss = cross_entropy_error(softmax_layer, t)
loss.eval({X:example_X, t:example_ys})
learning_rate = 0.1
trainer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
# number of times to iterate over training data
training_epochs = 2
# number of batches
num_batch = int(mnist.train.num_examples/batch_size)
num_batch
501%10
sess = tf.Session()
sess.run(tf.global_variables_initializer())
for i in range(500):
X_mb, y_mb = mnist.train.next_batch(batch_size)
_, loss_val = sess.run([trainer, loss], feed_dict={X: X_mb, t: y_mb})
    avg_cost = loss_val / num_batch
    # Print the running cost every 100 iterations (i.e. when i % 100 == 0)
    if i % 100 == 0:
        print(avg_cost)
# if i % 100 == 0:
# y_pred = sess.run(forward_step, feed_dict={X: X_val})
# acc = accuracy(y_val, y_pred)
# print('Iter: {} Loss: {:.4f} Validation: {}'.format(i, loss_val, acc))
avg_cost = 0
for i in range(50):
train_X, train_ys = mnist.train.next_batch(100)
trainer.run(feed_dict={X:train_X, t:train_ys})
avg_cost = loss.eval(feed_dict={X:train_X, t:train_ys}) / num_batch
print(avg_cost)
# # if net_type == 'cnn':
# X_mb = X_mb.reshape([-1, 28, 28, 1])
# _, loss_val = sess.run([train_step, loss], feed_dict={X: X_mb, y: y_mb})
# if i % 100 == 0:
# y_pred = sess.run(forward_step, feed_dict={X: X_val})
# acc = accuracy(y_val, y_pred)
# print('Iter: {} Loss: {:.4f} Validation: {}'.format(i, loss_val, acc))
from tqdm import tqdm_notebook
for epoch in range(training_epochs):
avg_cost = 0
for _ in tqdm_notebook(range(num_batch)):
train_X, train_ys = mnist.train.next_batch(batch_size)
trainer.run(feed_dict={X:train_X, t:train_ys})
avg_cost += loss.eval(feed_dict={X:train_X, t:train_ys}) / num_batch
print("Epoch:", '%04d' % (epoch+1), "cost=", "{:.9f}".format(avg_cost), flush=True)
test_x = mnist.test.images[:batch_size]
test_t = mnist.test.labels[:batch_size]
def accuracy(network, t):
t_predict = tf.argmax(network, axis=1)
t_actual = tf.argmax(t, axis=1)
return tf.reduce_mean(tf.cast(tf.equal(t_predict, t_actual), tf.float32))
accuracy(softmax_layer, t).eval(feed_dict={X:test_x, t:test_t})
session.close()
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: read
Step2: features and targets
Step3: accuracy
<ASSISTANT_TASK:>
Python Code:
import datetime
import graphviz
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
plt.rcParams["figure.figsize"] = (17, 10)
import pandas as pd
import seaborn as sns
sns.set(context = "paper", font = "monospace")
import sklearn.datasets
from sklearn.preprocessing import MinMaxScaler
import sklearn.tree
import sqlite3
import warnings
warnings.filterwarnings("ignore")
pd.set_option("display.max_rows", 500)
pd.set_option("display.max_columns", 500)
df = pd.read_csv("ttHbb_data.csv")
df.head()
features = list(df.columns[:-1])
X = df[features]
y = df["target"]
classifier = sklearn.tree.DecisionTreeClassifier(min_samples_split = 20, random_state = 99, max_depth = 5)
classifier.fit(X, y)
graph = graphviz.Source(
sklearn.tree.export_graphviz(
classifier,
out_file = None,
feature_names = list(df[features].columns.values),
filled = True,
rounded = True,
special_characters = True
)
)
graph
y_predictions = classifier.predict(X)
y_predictions
sklearn.metrics.accuracy_score(y, y_predictions)
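# Caveat (added): the score above is computed on the training data itself. A
# sketch of a held-out estimate (assumes a scikit-learn version that provides
# sklearn.model_selection):
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=99)
held_out_clf = sklearn.tree.DecisionTreeClassifier(min_samples_split=20, random_state=99, max_depth=5)
held_out_clf.fit(X_train, y_train)
sklearn.metrics.accuracy_score(y_test, held_out_clf.predict(X_test))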
_df = pd.DataFrame()
_df["variable"] = X.columns.values
_df["importance"] = classifier.feature_importances_
_df.index = _df["variable"].values
del _df["variable"]
_df = _df.sort_values(by = "importance", ascending = False)
_df
plt.rcParams["figure.figsize"] = (17, 10)
_df.sort_values(by = "importance", ascending = True).plot(kind = "barh", legend = "False");
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Model Inputs
Step2: Generator network
Step3: Discriminator
Step4: Hyperparameters
Step5: Build network
Step6: Discriminator and Generator Losses
Step7: Optimizers
Step8: Training
Step9: Training loss
Step10: Generator samples from training
Step11: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.
Step12: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
Step13: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pickle as pkl
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data')
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, shape=(None,real_dim), name='inputs_real')
inputs_z = tf.placeholder(tf.float32, shape=(None,z_dim), name='inputs_z')
return inputs_real, inputs_z
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
with tf.variable_scope('generator', reuse=reuse):
# Hidden layer
h1 = tf.layers.dense(z, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha * h1, h1)
# Logits and tanh output
logits = tf.layers.dense(h1, out_dim, activation=None)
out = tf.tanh(logits)
return out
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
''' Build the discriminator network.
Arguments
---------
x : Input tensor for the discriminator
n_units: Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
    out, logits: the sigmoid output (probability that x is real) and the raw logits
'''
with tf.variable_scope('discriminator', reuse=reuse): # finish this
# Hidden layer
h1 = tf.layers.dense(x, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha*h1,h1)
logits = tf.layers.dense(h1, 1, activation=None)
out = tf.sigmoid(logits)
return out, logits
# Size of input image to discriminator
input_size = 784 # 28x28 MNIST images flattened
# Size of latent vector to generator
z_size = 100
# Sizes of hidden layers in generator and discriminator
g_hidden_size = 128
d_hidden_size = 128
# Leak factor for leaky ReLU
alpha = 0.01
# Label smoothing
smooth = 0.1
tf.reset_default_graph()
# Create our input placeholders
input_real, input_z = model_inputs(input_size, z_size)
# Generator network here
g_model = generator(input_z, input_size, n_units=g_hidden_size, alpha=alpha)
# g_model is the generator output
# Disriminator network here
d_model_real, d_logits_real = discriminator(input_real, n_units=d_hidden_size, reuse=False, alpha=alpha)
d_model_fake, d_logits_fake = discriminator(g_model, n_units=d_hidden_size, reuse=True, alpha=alpha)
# Calculate losses
d_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=(tf.ones_like(d_logits_real) * (1 - smooth))))
d_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=(tf.zeros_like(d_logits_fake))))
d_loss = d_loss_real + d_loss_fake
g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=(tf.ones_like(d_logits_fake))))
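# Aside (added): multiplying the real labels by (1 - smooth) above is one-sided
# label smoothing -- the discriminator targets 0.9 instead of 1.0 for real
# images, which keeps it from becoming overconfident and starving G of gradient.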
# Optimizers
learning_rate = 0.002
# Get the trainable_variables, split into G and D parts
t_vars = tf.trainable_variables()
g_vars = [var for var in t_vars if var.name.startswith('generator')]
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)
batch_size = 100
epochs = 100
samples = []
losses = []
saver = tf.train.Saver(var_list = g_vars)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images, reshape and rescale to pass to D
batch_images = batch[0].reshape((batch_size, 784))
batch_images = batch_images*2 - 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})
_ = sess.run(g_train_opt, feed_dict={input_z: batch_z})
# At the end of each epoch, get the losses and print them out
train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})
train_loss_g = g_loss.eval({input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
# Sample from generator as we're training for viewing afterwards
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
samples.append(gen_samples)
saver.save(sess, './checkpoints/generator.ckpt')
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator')
plt.plot(losses.T[1], label='Generator')
plt.title("Training Losses")
plt.legend()
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')
return fig, axes
# Load samples from generator taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
_ = view_samples(-1, samples)
rows, cols = 10, 6
fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)
for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):
ax.imshow(img.reshape((28,28)), cmap='Greys_r')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
view_samples(0, [gen_samples])
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: If you get something like "Pandas version
Step2: Remind yourself
Step3: Question. Can you see consolidation here?
Step4: Answer these questions below. Code is sufficient, but it's often helpful to add comments to remind yourself what you did, and why.
Step5: Comment. Now we have the number of people in any five-year age group running down columns. The column labels are the years.
<ASSISTANT_TASK:>
Python Code:
# to make sure things are working, run this
import pandas as pd
print('Pandas version: ', pd.__version__)
import pandas as pd
import matplotlib.pyplot as plt
import datetime as dt
%matplotlib inline
url = 'http://pages.stern.nyu.edu/~dbackus/Data/beer_production_1947-2004.xlsx'
beer = pd.read_excel(url, skiprows=12, index_col=0)
print('Dimensions:', beer.shape)
beer[list(range(1,11))].head(3)
vars = list(range(1,101)) # extract top 100 firms
pdf = beer[vars].T # transpose (flip rows and columns)
pdf[[1947, 1967, 1987, 2004]].head()
# a basic plot
fig, ax = plt.subplots()
pdf[1947].plot(ax=ax, logy=True)
pdf[1967].plot(ax=ax, logy=True)
pdf[1987].plot(ax=ax, logy=True)
pdf[2004].plot(ax=ax, logy=True)
ax.legend()
# data input (takes about 20 seconds on a wireless network)
url1 = 'http://esa.un.org/unpd/wpp/DVD/Files/'
url2 = '1_Indicators%20(Standard)/EXCEL_FILES/1_Population/'
url3 = 'WPP2017_POP_F07_1_POPULATION_BY_AGE_BOTH_SEXES.XLSX'
url = url1 + url2 + url3
cols = [2, 4, 5] + list(range(6,28))
prj = pd.read_excel(url, sheetname=1, skiprows=16, parse_cols=cols, na_values=['…'])
print('Dimensions: ', prj.shape)
print('Column labels: ', prj.columns)
# rename some variables
pop = prj
pop = pop.rename(columns={'Reference date (as of 1 July)': 'Year',
'Region, subregion, country or area *': 'Country',
'Country code': 'Code'})
# select Japan and years
countries = ['Japan']
years = [2015, 2035, 2055, 2075, 2095]
pop = pop[pop['Country'].isin(countries) & pop['Year'].isin(years)]
pop = pop.drop(['Country', 'Code'], axis=1)
pop = pop.set_index('Year').T
pop = pop/1000 # convert population from thousands to millions
pop.head()
pop.tail()
# data input (takes about 20 seconds on a wireless network)
url = 'http://pages.stern.nyu.edu/~dbackus/Data/feds200628.csv'
gsw = pd.read_csv(url, skiprows=9, index_col=0, usecols=list(range(11)), parse_dates=True)
print('Dimensions: ', gsw.shape)
print('Column labels: ', gsw.columns)
print('Row labels: ', gsw.index)
# grab recent data
df = gsw[gsw.index >= dt.datetime(2010,1,1)]
# convert to annual, last day of year
df = df.resample('A', how='last').sort_index()
df.head()
df.columns = list(range(1,11))
ylds = df.T
ylds.head(3)
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Run examples
Step2: Normalized Cuts
<ASSISTANT_TASK:>
Python Code:
# Arg: quality and num_per_dim -> tradeoffs between quality and time spent running
# quality affects dense=False, and num_per_dim affects dense=True
ckpt_path = './ckpt/exif_final/exif_final.ckpt'
exif_demo = demo.Demo(ckpt_path=ckpt_path, use_gpu=0, quality=3.0, num_per_dim=30)
# MeanShift + dense affinities (as described in our paper)
ms_st = time.time()
im1, res1 = exif_demo('./images/demo.png', dense=True) # Upsampled via bilinear upsampling
print('MeanShift run time: %.3f' % (time.time() - ms_st))
# DBSCAN + sparse anchors
db_st = time.time()
im2, res2 = exif_demo('./images/demo.png', dense=False) # No upsampling
print('DBSCAN run time: %.3f' % (time.time() - db_st))
% matplotlib inline
plt.subplots(figsize=(16, 8))
plt.subplot(1, 3, 1)
plt.title('Input Image')
plt.imshow(im1)
plt.axis('off')
plt.subplot(1, 3, 2)
plt.title('Cluster w/ MeanShift')
plt.axis('off')
plt.imshow(1.0 - res1, cmap='jet', vmin=0.0, vmax=1.0)
plt.subplot(1, 3, 3)
plt.title('Cluster w/ DBSCAN')
plt.axis('off')
plt.imshow(res2, cmap='jet', vmin=0.0, vmax=1.0)
plt.show()
res = exif_demo.run(im1, use_ncuts=True, blue_high=True)
% matplotlib inline
plt.subplots(figsize=(16, 8))
plt.subplot(1, 3, 1)
plt.title('Input Image')
plt.imshow(im1)
plt.axis('off')
plt.subplot(1, 3, 2)
plt.title('Cluster w/ MeanShift')
plt.axis('off')
plt.imshow(res[0], cmap='jet', vmin=0.0, vmax=1.0)
plt.subplot(1, 3, 3)
plt.title('Segment with NCuts')
plt.axis('off')
plt.imshow(res[1], vmin=0.0, vmax=1.0)
plt.show()
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The model should be able to handle a noise-corrupted input signal.
Step2: This time the model should also be able to handle a phase-shifted signal, since it was trained on shifted inputs.
<ASSISTANT_TASK:>
Python Code:
%pylab inline
import keras
import numpy as np
import keras
N = 50
# phase_step = 1 / (2 * np.pi)
t = np.arange(50)
phases = np.linspace(0, 1, N) * 2 * np.pi
x = np.array([np.sin(2 * np.pi / N * t + phi) for phi in phases])
print(x.shape)
imshow(x);
plot(x[0]);
plot(x[1]);
plot(x[2]);
from keras.models import Sequential
from keras.layers import containers
from keras.layers.core import Dense, AutoEncoder
encoder = containers.Sequential([
Dense(25, input_dim=50),
Dense(12)
])
decoder = containers.Sequential([
Dense(25, input_dim=12),
Dense(50)
])
model = Sequential()
model.add(AutoEncoder(encoder=encoder, decoder=decoder, output_reconstruction=True))
model.compile(loss='mean_squared_error', optimizer='sgd')
plot(model.predict(x)[0]);
from loss_history import LossHistory
loss_history = LossHistory()
model.fit(x, x, nb_epoch=1000, batch_size=50, callbacks=[loss_history])
plot(model.predict(x)[0])
plot(x[0])
plot(model.predict(x)[10])
plot(x[10])
print('last loss:', loss_history.losses[-1])
plot(loss_history.losses);
imshow(model.get_weights()[0], interpolation='nearest', cmap='gray');
imshow(model.get_weights()[2], interpolation='nearest', cmap='gray');
x_noised = x + 0.2 * np.random.random(len(x[0]))
plot(x_noised[0], label='input')
plot(model.predict(x_noised)[0], label='predicted')
legend();
x_shifted = np.cos(2*np.pi/N * t.reshape(1, -1))
plot(x_shifted[0], label='input')
plot(model.predict(x_shifted)[0], label='predicted')
legend();
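# Quick metric (not in the original): mean squared reconstruction error over
# the whole training set, using `model` and `x` defined above.
mse = np.mean((model.predict(x) - x) ** 2)
print('reconstruction MSE:', mse)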
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Other Useful Packages (not used today)
Step2: Look at those zip codes!
Step3: Rearrange The Data
Step4: Lost Columns! Fips summed!
Step5: Aside on Copying
Step6: Rearrange The Data
Step7: Rename Columns, Subset Data
Step8: Save Your Data
Step9: Scraping
Step10: Requests from Python
Step11: View Returned Data
Step12: Ethics
Step13: Collect Our Data
Step14: Scraping
Step15: Use Find Feature to Narrow Your Search
Step16: Back To Our Data
Step17: Subset down to the data we have
Step18: Create a dataframe from the new AQI data
Step19: Combine The Data
Step20: Look At The Data
Step21: Look At The Data
Step22: Look at particulates
Step23: Scatter Plot
Step24: Run a regression
Step25: Clustering Algorithm
Step26: Look At Clusters
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import pickle
import statsmodels.api as sm
from sklearn import cluster
import matplotlib.pyplot as plt
%matplotlib inline
from bs4 import BeautifulSoup as bs
import requests
import time
# from ggplot import *
asthma_data = pd.read_csv('asthma-emergency-department-visit-rates-by-zip-code.csv')
asthma_data.head()
asthma_data[['zip','coordinates']] = asthma_data.loc[:,'ZIP code'].str.split(
pat='\n',expand=True)
asthma_data.drop('ZIP code', axis=1,inplace=True)
asthma_data.head(2)
asthma_grouped = asthma_data.groupby(by=['Year','zip']).sum()
asthma_grouped.head(4)
asthma_grouped.drop('County Fips code',axis=1,inplace=True)
temp_grp = asthma_data.groupby(by=['Year','zip']).first()
asthma_grouped[['fips','county','coordinates']]=temp_grp.loc[:,['County Fips code',
'County',
'coordinates']]
asthma_grouped.loc[:,'Number of Visits']=asthma_grouped.loc[:,'Number of Visits']/2
asthma_grouped.head(2)
A = [5]
B = A
A.append(6)
print(B)
import copy
A = [5]
B = A.copy()
A.append(6)
print(B)
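# Extra aside (not in the original): a shallow copy still shares nested
# containers; copy.deepcopy copies the inner lists too.
A = [[5]]
B = copy.deepcopy(A)
A[0].append(6)
print(B)  # [[5]] -- the inner list was not shared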
asthma_grouped[['fips','county','coordinates']]=temp_grp.loc[:,['County Fips code',
'County',
'coordinates']].copy()
asthma_unstacked = asthma_data.pivot_table(index = ['Year',
'zip',
'County',
'coordinates',
'County Fips code'],
columns = 'Age Group',
values = 'Number of Visits')
asthma_unstacked.reset_index(drop=False,inplace=True)
asthma_unstacked.head(2)
asthma_unstacked.rename(columns={
'zip':'Zip',
'coordinates':'Coordinates',
'County Fips code':'Fips',
'Adults (18+)':'Adults',
'All Ages':'Incidents',
'Children (0-17)': 'Children'
},
inplace=True)
asthma_2015 = asthma_unstacked.loc[asthma_unstacked.Year==2015,:]
asthma_2015.head(2)
pickle.dump(asthma_unstacked,open('asthma_unstacked.p','wb'))
asthma_unstacked.to_csv('asthma_unstacked.csv')
asthma_unstacked = pickle.load(open('asthma_unstacked.p','rb'))
base_url = "http://www.airnowapi.org/aq/observation/zipCode/historical/"
attributes = ["format=application/json",
"zipCode=20007",
"date=2017-09-05T00-0000",
"distance=25",
"API_KEY=39DC3727-09BD-48C4-BBD8-XXXXXXXXXXXX"
]
post_url = '&'.join(attributes)
print(post_url)
ingredients=requests.get(base_url, post_url)
ingredients = ingredients.json()
print(ingredients[0])
for item in ingredients:
AQIType = item['ParameterName']
City=item['ReportingArea']
AQIValue=item['AQI']
print("For Location ", City, " the AQI for ", AQIType, "is ", AQIValue)
time.sleep(1)
base_url = "http://www.airnowapi.org/aq/observation/zipCode/historical/"
zips = asthma_2015.Zip.unique()
zips = zips[:450]
date ="date=2015-09-01T00-0000"
api_key = "API_KEY=39DC3727-09BD-48C4-BBD8-XXXXXXXXXXXX"
return_format = "format=application/json"
zip_str = "zipCode="
post_url = "&".join([date,api_key,return_format,zip_str])
data_dict = {}
for zipcode in zips:
time.sleep(1)
zip_post = post_url + str(zipcode)
ingredients = requests.get(base_url, zip_post)
ingredients = ingredients.json()
zip_data = {}
for data_point in ingredients:
AQIType = data_point['ParameterName']
AQIVal = data_point['AQI']
zip_data[AQIType] = AQIVal
data_dict[zipcode]= zip_data
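# More defensive variant (an assumption, not the original code): skip zip
# codes whose response is not valid JSON instead of crashing the whole loop.
def safe_fetch(zipcode):
    try:
        return requests.get(base_url, post_url + str(zipcode)).json()
    except ValueError:  # response body was not JSON
        return []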
ingredients = requests.get("https://en.wikipedia.org/wiki/Data_science")
soup = bs(ingredients.text, 'html.parser')  # explicit parser avoids a BeautifulSoup warning
print(soup.body.p)
parser_div = soup.find("div", class_="mw-parser-output")
wiki_content = parser_div.find_all('p')
print(wiki_content[0])
print('*****************************************')
print(wiki_content[0].text)
pickle.dump(data_dict,open('AQI_data_raw.p','wb'))
collected = list(data_dict.keys())
asthma_2015_sub = asthma_2015.loc[asthma_2015.Zip.isin(collected),:]
aqi_data = pd.DataFrame.from_dict(data_dict, orient='index')
aqi_data.reset_index(drop=False,inplace=True)
aqi_data.rename(columns={'index':'Zip'},inplace=True)
aqi_data.head()
asthma_aqi = asthma_2015_sub.merge(aqi_data,how='outer',on='Zip')
asthma_aqi.head(2)
asthma_aqi.Incidents.plot.hist(20)
asthma_aqi.loc[:,['Incidents','OZONE']].plot.density()
asthma_aqi.loc[:,['PM2.5','PM10']].plot.hist()
asthma_aqi.plot.scatter('OZONE','PM2.5')
y =asthma_aqi.loc[:,'Incidents']
x =asthma_aqi.loc[:,['OZONE','PM2.5']]
x['c'] = 1
ols_model1 = sm.OLS(y,x,missing='drop')
results = ols_model1.fit()
print(results.summary())
pickle.dump([results,ols_model1],open('ols_model_results.p','wb'))
model_df = asthma_aqi.loc[:,['OZONE','PM2.5','Incidents',]]
model_df.dropna(axis=0,inplace=True)
model_df = (model_df - model_df.mean()) / (model_df.max() - model_df.min())
asthma_air_clusters=cluster.KMeans(n_clusters = 3)
asthma_air_clusters.fit(model_df)
model_df['clusters3']=asthma_air_clusters.labels_
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure(figsize=(4, 3))
ax = Axes3D(fig, rect=[0, 0, .95, 1], elev=48, azim=134)
labels = asthma_air_clusters.labels_
ax.scatter(model_df.loc[:, 'PM2.5'], model_df.loc[:, 'OZONE'], model_df.loc[:, 'Incidents'],
c=labels.astype(np.float), edgecolor='k')
ax.set_xlabel('Particulates')
ax.set_ylabel('Ozone')
ax.set_zlabel('Incidents')
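# Optional check (not in the original): silhouette score quantifies how well
# separated the three clusters are on the standardized features.
from sklearn.metrics import silhouette_score
print('silhouette:', silhouette_score(model_df[['OZONE', 'PM2.5', 'Incidents']],
                                      model_df['clusters3']))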
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Some preliminaries. Import code, and configure chart sizes to be larger than the default.
Step2: Load the .csv into a pandas DataFrame, adding column names.
Step3: A quick plot to get a rough idea of how the sensors differ.
Step4: Aside from a wider spread in sensor values than I'd like, and higher temperatures (the room wasn't that hot!), this is roughly what I expected for the temperature pattern. It was a hot, humid day. The bedroom starts off warm, cools when I turned on A/C at 9pm, then oscillates during the night as the A/C kicks in on a scheduled setting.
Step5: This is roughly what's needed, except for the NaN (missing) values. Resampling the data into 2 minute buckets deals with those.
Step6: The first thing that jumps out is that one of the sensors reads ~5 degrees lower than the others. The SHT30 sensors are inexpensive; it might be a manufacturing problem, or I might have damaged one while soldering on the headers. (Or maybe it's the sane one, and the other four are measuring hot.)
Step7: That looks promising.
Step8: That same sensor is the outlier. Eyeballing the graph, that sensor's humidity reading looks high by about 9 units.
|
<ASSISTANT_TASK:>
Python Code:
!head -5 temps.csv
%matplotlib inline
import matplotlib
matplotlib.rcParams['figure.figsize'] = (12, 5)
import pandas as pd
df = pd.read_csv('temps.csv', header=None, names=['time', 'mac', 'f', 'h'], parse_dates=[0])
df.head()
df.plot();
per_sensor_f = df.pivot(index='time', columns='mac', values='f')
per_sensor_f.head()
downsampled_f = per_sensor_f.resample('2T').mean()
downsampled_f.head()
downsampled_f.plot();
downsampled_f['5C:CF:7F:33:F7:F8'] += 5.0
downsampled_f.plot();
per_sensor_h = df.pivot(index='time', columns='mac', values='h')
downsampled_h = per_sensor_h.resample('2T').mean()
downsampled_h.plot();
downsampled_h['5C:CF:7F:33:F7:F8'] -= 9.0
downsampled_h.plot();
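# Sanity check (not in the original): after the manual -9 correction, each
# sensor's mean deviation from the cross-sensor average should be near zero.
offsets = downsampled_h.sub(downsampled_h.mean(axis=1), axis=0).mean()
print(offsets)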
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: filter(func)
Step2: flatMap(func)
Step3: intersection(otherRDD)
Step4: groupByKey()
Step5: reduceByKey(func)
Step6: sortByKey([asceding])
|
<ASSISTANT_TASK:>
Python Code:
data = sc.parallelize(range(1, 11))
def duplicar(x): return x*x
# data é um rdd
res = data.map( duplicar )
print (res.collect())
data = sc.parallelize(range(1, 11))
res = data.filter(lambda x: x%2 ==1)
print(res.collect())
data = sc.parallelize(["Linha 1", "Linha 2"])
def partir(l): return l.split(" ")
print ('map:', data.map(partir).collect())
print ('flatMap:', data.flatMap(partir).collect())
two_multiples = sc.parallelize(range(0, 20, 2))
three_multiples = sc.parallelize(range(0, 20, 3))
print (two_multiples.intersection(three_multiples).collect())
data = sc.parallelize([ ('a', 1), ('b', 2), ('c', 3) , ('a', 2), ('b', 5), ('a', 3)])
for pair in data.groupByKey().collect():
print (pair[0], list(pair[1]))
data = sc.parallelize([ ('a', 1), ('b', 2), ('c', 3) , ('a', 2), ('b', 5), ('a', 3)])
res = data.reduceByKey( lambda x,y: x+y )
print (res.collect())
data = sc.parallelize([ ('a', 1), ('b', 2), ('c', 3) , ('a', 2), ('b', 5), ('a', 3)])
print(data.sortByKey(ascending=False).collect())
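# Combined example (not in the original): word count chains flatMap, map and
# reduceByKey in one pipeline.
lines = sc.parallelize(["a b a", "b c"])
counts = (lines.flatMap(lambda l: l.split(" "))
               .map(lambda w: (w, 1))
               .reduceByKey(lambda x, y: x + y))
print(counts.collect())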
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Two flips
Step2: Three Flips
Step3: Flip Two Coins
Step4: Flip One Of Two
Step5: Answer
Step6: Cancer Example 2
Step7: Program Bayes Rule 2
|
<ASSISTANT_TASK:>
Python Code:
def f(p):
return 1-p
print f(0.3)
def f(p):
return p*p
print f(0.3)
def f(p):
return 3 * p * (1-p) * (1-p)
print f(0.5)
print f(0.8)
def f(p1,p2):
return p1 * p2
print f(0.5,0.8)
def f(p0,p1,p2):
return p0 * p1 +(1-p0) * p2
print f(0.3,0.5,0.9)
#Calculate the probability of a positive result given that
#p0=P(C)
#p1=P(Positive|C)
#p2=P(Negative|Not C)
def f(p0,p1,p2):
return p0 * p1 + (1-p0) * (1-p2)
print f(0.1, 0.9, 0.8)
#Return the probability of A conditioned on B given that
#P(A)=p0, P(B|A)=p1, and P(Not B|Not A)=p2
def f(p0,p1,p2):
return p0 * p1 / (p0 * p1 + (1-p0) * (1-p2))
print f(0.1, 0.9, 0.8)
print f(0.01, 0.7, 0.9)
#Return the probability of A conditioned on Not B given that
#P(A)=p0, P(B|A)=p1, and P(Not B|Not A)=p2
def f(p0,p1,p2):
return p0 * (1-p1) / (p0 * (1-p1) + (1-p0) * p2)
print f(0.1, 0.9, 0.8)
print f(0.01, 0.7, 0.9)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Main functions
Step2: Run
|
<ASSISTANT_TASK:>
Python Code:
import os
import numpy as np
from subprocess import Popen, PIPE
from bisect import bisect_left
tmp_script = 'tmp.praat'
def gen_script():
# This generates temporary praat script file
global tmp_script
with open(tmp_script, 'w') as f:
f.write('''
form extract_formant
text FILENAME
positive MAXFORMANT 5500
real WINLEN 0.025
positive PREEMPH 50
endform
Read from file... 'FILENAME$'
To Formant (burg)... 0.01 5 'MAXFORMANT' 'WINLEN' 'PREEMPH'
List... no yes 6 no 3 no 3 no
exit
''')
return tmp_script
def run_praat_cmd(*args):
o = Popen(['praat'] + [str(i) for i in args],
shell=False, stdout=PIPE, stderr=PIPE)
stdout, stderr = o.communicate()
if os.path.exists(tmp_script):
os.remove(tmp_script)
if o.returncode:
raise Exception(stderr.decode('utf-8'))
else:
return stdout
def get_formant(FNAME, time, MAXFORMANT=5500, WINLEN=0.025, PREEMPH=50):
fmt_out = {}
def _float(s):
# Retrieved from https://github.com/mwv/praat_formants_python
try:
return float(s)
except ValueError:
return np.nan
key = (FNAME, MAXFORMANT, WINLEN, PREEMPH)
run_out = run_praat_cmd(gen_script(), FNAME, MAXFORMANT, WINLEN, PREEMPH)
fmt_out[key] = np.array(list(map(lambda x: list(map(_float, x.rstrip().split('\t')[:4])),
run_out.decode('utf-8').split('\n')[1:-1])))
out = fmt_out[key]
val = out[bisect_left(out[:,0], time), 1:]
if np.any(np.isnan(val)):
val = 0.0 # pad nan as 0
return val
time = 0.5 # sec
get_formant('da_ta.wav', time) # output: F1, F2, F3
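# Usage sketch (assumption, not in the original): sample formant tracks every
# 10 ms between 0.1 s and 0.6 s of the same file.
times = np.arange(0.1, 0.6, 0.01)
tracks = np.array([get_formant('da_ta.wav', t) for t in times])
print(tracks.shape)  # (n_frames, 3): F1, F2, F3 per frame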
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Featurization
Step2: Contact Featurizer
Step3: Intermediate kinetic model
Step4: tICA Heatmap
Step5: Clustering
Step6: MSM
Step7: Macrostate model
|
<ASSISTANT_TASK:>
Python Code:
from msmbuilder.dataset import dataset
import numpy as np
import os
from mdtraj.utils import timing
from msmbuilder.featurizer import DihedralFeaturizer
import seaborn as sns; sns.set_style("white"); sns.set_palette("Blues")
import matplotlib.pyplot as plt  # missing in the original; needed for the plt.* calls below
with timing("Loading data as dataset object"):
wt_xyz = dataset("/Users/je714/wt_data/*/05*nc", topology="/Users/je714/wt_data/test.pdb")
with timing("Loading data as dataset object"):
S1P_xyz = dataset("/Users/je714/p_data/run*/S1P/05*nc", topology="/Users/je714/p_data/S1P_ff14SB_newclean.prmtop")
with timing("Loading data as dataset object"):
SEP_xyz = dataset("/Users/je714/p_data/run*/SEP/05*nc", topology="/Users/je714/p_data/SEP_ff14SB_newclean.prmtop")
wt_featurizer = DihedralFeaturizer(types=['phi', 'psi'])
if os.path.isfile('/Users/je714/wt_data/wt_diheds_phi-psi.tgz'):
with timing("Loading dihedrals from file..."):
wt_diheds = np.loadtxt('/Users/je714/wt_data/wt_diheds_phi-psi.tgz')
else:
with timing("Featurizing trajectory into dihedrals..."):
wt_diheds = wt_featurizer.fit_transform(wt_xyz)
np.savetxt('/Users/je714/wt_data/wt_diheds_phi-psi.tgz', np.concatenate(wt_diheds))
S1P_featurizer = DihedralFeaturizer(types=['phi', 'psi'])
if os.path.isfile('/Users/je714/p_data/S1P_diheds_phi-psi.tgz'):
with timing("Loading dihedrals from file..."):
S1P_diheds = np.loadtxt('/Users/je714/p_data/S1P_diheds_phi-psi.tgz')
else:
with timing("Featurizing trajectory into dihedrals..."):
S1P_diheds = S1P_featurizer.fit_transform(S1P_xyz)
np.savetxt('/Users/je714/p_data/S1P_diheds_phi-psi.tgz', np.concatenate(S1P_diheds))
SEP_featurizer = DihedralFeaturizer(types=['phi', 'psi'])
if os.path.isfile('/Users/je714/p_data/SEP_diheds_phi-psi.tgz'):
with timing("Loading dihedrals from file..."):
SEP_diheds = np.loadtxt('/Users/je714/p_data/SEP_diheds_phi-psi.tgz')
else:
with timing("Featurizing trajectory into dihedrals..."):
SEP_diheds = SEP_featurizer.fit_transform(SEP_xyz)
np.savetxt('/Users/je714/p_data/SEP_diheds_phi-psi.tgz', np.concatenate(SEP_diheds))
# from msmbuilder.featurizer import ContactFeaturizer
# featurizer_contact = ContactFeaturizer("all", scheme="ca")
# contacts = featurizer_contact.fit_transform(xyz)
# print(xyz[0].xyz.shape)
# print(contacts[0].shape)
wt_xyz[0][0].topology
wt_diheds.shape
from sklearn.pipeline import Pipeline
from msmbuilder.decomposition import tICA
from msmbuilder.cluster import MiniBatchKMeans
from msmbuilder.msm import MarkovStateModel
DihedralFeaturizer?
model = Pipeline([
('featurizer', DihedralFeaturizer(types=['phi', 'psi'])),
('tica', tICA(n_components=10, lag_time=20)),
('cluster', MiniBatchKMeans(n_clusters=1000)),
('msm', MarkovStateModel(lag_time=50))
])
model.fit(wt_xyz)
for step in model.steps:
print(step[0])
diheds = model.steps[0][1]
tica_obj = model.steps[1][1]
clusterer = model.steps[2][1]
msm = model.steps[3][1]
tica_trajs = tica_obj.transform(diheds.transform(wt_xyz))
np.concatenate(tica_trajs).shape
plt.plot(np.concatenate(tica_trajs)[:, 0])
def plot_ticaTrajs(tica_trajs):
txx = np.concatenate(tica_trajs)
plt.figure(figsize=(10.5,5))
cmap=sns.cubehelix_palette(8, start=.5, rot=-.75, as_cmap=True)
plt.subplot(1, 2, 1)
plt.hexbin(txx[:,0], txx[:,1], bins='log', mincnt=1, cmap=cmap)
plt.xlabel('tIC 1')
plt.ylabel('tIC 2')
cb = plt.colorbar()
cb.set_label('log10(N)')
plt.subplot(1, 2, 2)
plt.hexbin(txx[:,2], txx[:,3], bins='log', mincnt=1, cmap=cmap)
plt.xlabel('tIC 3')
plt.ylabel('tIC 4')
cb = plt.colorbar()
cb.set_label('log10(N)')
plot_ticaTrajs(tica_trajs)
clusterer.cluster_centers_.shape
def plot_clusterCenters(clusterer_object, tica_trajs):
txx = np.concatenate(tica_trajs)
plt.figure(figsize=(10.5,5))
plt.subplot(1, 2, 1)
cmap=sns.cubehelix_palette(8, start=.5, rot=-.75, as_cmap=True)
plt.hexbin(txx[:,0], txx[:,1], bins='log', mincnt=1, cmap=cmap)
cb = plt.colorbar()
cb.set_label('log10(N)')
plt.scatter(clusterer.cluster_centers_[:,0],
clusterer.cluster_centers_[:,1],
s=4, c='black')
plt.xlabel('tIC 1')
plt.ylabel('tIC 2')
plt.subplot(1,2,2)
plt.hexbin(txx[:,2], txx[:,3], bins='log', mincnt=1, cmap=cmap)
cb = plt.colorbar()
cb.set_label('log10(N)')
plt.scatter(clusterer.cluster_centers_[:,2],
clusterer.cluster_centers_[:,3],
s=4, c='black')
plt.xlabel('tIC 3')
plt.ylabel('tIC 4')
plt.tight_layout()
plot_clusterCenters(clusterer, tica_trajs)
plt.savefig("/Users/je714/Dropbox (Imperial)/ESAreport/Figures/tica_clusters.png", format='png', dpi=300)
np.asarray(range(0,10))
plt.hexbin(np.asarray(range(0, np.hstack(clusterer.labels_).shape[0]))*0.00002,
np.hstack(clusterer.labels_),
mincnt=1,
cmap=sns.cubehelix_palette(8, start=.5, rot=-.75, as_cmap=True))
plt.ylabel("Cluster ID")
plt.xlabel("Aggregated time ($\mu$s)")
plt.savefig("/Users/je714/Dropbox (Imperial)/ESAReport/Figures/labeled_Trajs",
format='png', dpi=300)
msm_lagtimes = [x for x in range(1,201) if (x%10==0) or (x==1)]
msm_lagtimes
msm_test = MarkovStateModel(lag_time=1)
msm_test.fit(np.hstack(clusterer.labels_))
msm_objects = []
for lagtime in msm_lagtimes:
msm = MarkovStateModel(lag_time=lagtime)
msm.fit(np.hstack(clusterer.labels_))
msm_objects.append(msm)
msm_timescales = []
for msm in msm_objects:
msm_timescales.append(msm.timescales_)
first_timescale = []
for lag_time, timescale in zip(msm_lagtimes, msm_timescales):
print(lag_time, timescale[0])
first_timescale.append(timescale[0])
time_asNParray = np.array(first_timescale)
lag_asNParray = np.array(msm_lagtimes)  # use all lag times so the length matches time_asNParray
plt.scatter(lag_asNParray, time_asNParray)
plt.semilogy()
txx = np.concatenate(tica_trajs)
plt.hexbin(txx[:,0], txx[:,1], bins='log', mincnt=1, cmap="Greys")
plt.scatter(clusterer.cluster_centers_[:,0],
clusterer.cluster_centers_[:,1],
s=1e4 * msm.populations_, # size by population
c=msm.left_eigenvectors_[:,1], # color by eigenvector
cmap="RdBu")
plt.colorbar(label='First dynamical eigenvector')
plt.xlabel('tIC 1')
plt.ylabel('tIC 2')
plt.tight_layout()
from msmbuilder.lumping import PCCAPlus
pcca = PCCAPlus.from_msm(msm, n_macrostates=5)
macro_trajs = pcca.transform(np.concatenate(clusterer.labels_))
print(msm.left_eigenvectors_[:,1].shape)
print(clusterer.cluster_centers_[:,0].shape)
plt.hexbin(txx[:,0], txx[:,1], bins='log', mincnt=1, cmap="Greys")
plt.scatter(clusterer.cluster_centers_[:,0],
clusterer.cluster_centers_[:,1],
s=100,
c=pcca.microstate_mapping_,
)
plt.xlabel('tIC 1')
plt.ylabel('tIC 2')
plt.plot(msm.eigenvalues_, 'bo')
plt.xlabel("MSM state")
plt.ylim(0,1)
plt.plot(msm.sample_discrete(n_steps=1000), 'bo')
from msmbuilder import utils
plt.plot(msm.populations_)
plt.plot(msm.timescales_)
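# Persistence sketch (assumption: msmbuilder.utils.dump wraps pickle): save
# the fitted pipeline pieces so they need not be re-fit next session.
utils.dump([tica_obj, clusterer, msm, pcca], 'msm_pipeline.pkl')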
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The lines beginning with # are called comments. These are ignored by Python and are often used to provide explanations of your code. Note that we can combine strings and numbers in our print statements to give more meaningful output. We denote strings, or collections of characters, using double quotes "" or single quotes ''.
Step2: This is a nice example, but if we look at the last line, $4/3=1$, we notice that is in fact incorrect. In fact, $4/3=1~\frac{1}{3}=1.\overline{33}$. So what's going on with division in Python? In turns out that this is a very common mistake that programmers make and can lead to some very serious and hard-to-find problems.
Step3: Variables
Step4: Now let's perform some operations on a and b and see what happens.
Step5: Variables can also contain strings and we can even perform operations on these strings, within reason of course.
Step6: We shouldn't get too carried away though. For example, it's nonsensical to multiply, divide, or subtract two words. If we try to do this, Python will give us an error.
Step7: Containers
Step8: However, what if we have 30 students? Defining a variable for each one seems a little unwieldy and offers no information about how these variables are connected (i.e. they are in the same class). It's easier and better practice to define the relationship between these students using a list.
Step9: But what if we wanted to access the individual parts of the list? We will use what is called the index. One important thing to note about lists (and counting in general) in Python is that numbers start at 0. Thus, for the above list, we can use 0-4 to access the parts of our list.
Step10: But what if we try to access an element beyond the last element in our list?
Step11: We can also use negative numbers to access the elements of our list, unintuitive as this may seem. In this case, -1 corresponds to the last element of our list, -2 the second-to-last element and so on. This is especially useful when we have very long lists or we have lists where the length is unknown and we want to access elements starting from the back.
Step12: What if another student joins the class? We would like to be able to add elements to our list as well. This can be done through the append command, shown below.
Step13: Alternatively, we can remove elements of a list using the pop command in a similar way.
Step14: There are many more ways of manipulating lists and we won't cover all of them here. Consult the Python documentation for (many more) additional details.
Step15: However, this doesn't give us any information about what each of the individual entries mean. To preserve the context of the information in our list, we have to know the correspondence of the index (0-3) to the property it specifies. However, by using a dictionary, the key we use to access the value tells us what the value means.
Step16: The kind of container we choose to use will depend on the problem at hand. Throughout our tutorial, we will show the advantages of using both types. Unsurprisingly, there are several more types of containers available in Python. We have only provided the two most used types here.
Step17: And now I want to assign a letter grade to this average.
Step18: This is nice, but if we got anything below a 90, this snippet of code doesn't give us much information. Say my class average is an 89.
Step19: Well now I know I didn't get an A, but for all I know I got an F when in reality I got a B. To solve this problem, let's test the B condition. Looking at the table, we can see that to get a B, our average must satisfy two conditions
Step20: Now, let's combine our A and B conditions (along with the C, D, and F conditions) using the "if", "else-if" (denoted in Python using elif) and "else" statements. Now, we'll change our average to a 72.
Step21: We can also evaluate more strict conditions, like if two things are exactly equal. Say we reverse the above situation
Step22: These symbols that we've been using to determine relationships between objects are called relational operators
Step23: Collectively, what we've been using are known as conditional statements
Step24: Here it should be obvious that the is keyword is not an assignment operator. Rather, we are testing whether something is true. Similarly, we can assign the result of an is statement to a variable.
Step25: Note that conditional statements can also be used to directly evaluate whether something is true or false. Really this is what has been going on all along; we've just been hiding it in a way. Additionally, the default for a conditional statement is to test whether a statement is True. Note the equivalence of the following two statements.
Step26: This may just seem like we are saying the same thing over and over again and the use of relational operators and boolean logic may not seem immediately obvious. However, these decision making tools are some of the most useful when writing code, scientific or otherwise, and their usefulness will become more apparent the more examples we work through.
Step27: We could of course look at each list entry individually, writing a block of code to evaluate the 0th entry, then the 1st entry, then the 2nd entry and so on. However, this would mean writing as many if statements as there are entries in our class grades list. Instead, we will use what is called a loop, in this case a for loop, to iterate over the list, applying the same block of code to each successive entry.
Step28: Let's unpack this code snippet. The len() command gets the length of the list, in this case 11. The range(n) command creates a list with entries 0 through n-1, separated by 1; thus, this is a list of all the indices of our class_grades list. The for i in ... line tells Python to execute the indented block of code 11 times, incrementing i by 1 each time.
Step29: Here, i doesn't represent the index of the list, but rather the list entry itself. When iterating over a single list, this is often the best and most concise way to construct your for loop. However, when iterating over two lists where there is a correspondence between the entries, it is often useful to iterate over the list of indices. A for loop is possibly the most useful tool in any programming language, especially in scientific computing. We will make frequent use of both for and while loops in this tutorial, showing their usefulness in a variety of contexts.
Step30: As long as the number of C grades we've found is less than 2, we will continue searching the list. Once we've found 2, we stop searching. Note that if we didn't increase our counter and the found_c_grades variable, our while loop will continue to execute forever. When using a while loop, special attention should be given to avoiding this problem.
Step31: Every function is denoted using the def keyword (for definition). Then, we give the function a name (my_first_function in this example) followed by the inputs (x in this case) to the function. The return statement then tells the function what result should be output. Below is an example of how we would use the function.
Step32: Notice that we've saved ourselves quite a few lines of code by not having to write $f(x)=x^2+1$ repeatedly. Instead, we can just reference the function definition above. Additionally, if we wanted to change something about our expression $f(x)$, we would only need to make the change in one place, the function definition, rather than having to make the same change in multiple places in our code. Writing our code using functions helps us to avoid simple mistakes that so often occur when writing a program.
|
<ASSISTANT_TASK:>
Python Code:
#addition
print 4+3
#subtraction
print 4-3
#multiplication
print 4*3
#exponentiation
print 4**3
#division
print 4/3
#addition
print "4+3 = ",4+3
#subtraction
print "4-3 = ",4-3
#multiplication
print '4*3 = ',4*3
#exponentiation
print "4^3 = ",4**3
#division
print "4/3 = ",4/3
#division with floats
print "4/3 = ",4.0/3.0
#Give a value of 3.0 to a and 4.0 to b
a = 4.0
b = 3.0
#output the values of these variables to the screen
print "The initial value of a is ",a," and the initial value of b is ",b
#Update a and b
a = a - 1.0
b = b + 5.0
print "The new value of a is ",a," and the new value of b is ",b
#define the two words
word1 = "Hello "
word2 = "World!"
#print the two words to the screen
print "The first word is ",word1
print "The second word is ",word2
#add or concatenate the two strings
expression = word1+word2
#print the new string to the screen
print "The whole sentence is ",expression
word1-word2
word1*word2
word1/word2
student1 = "Jake"
student2 = "Jenny"
student3 = "Lucas"
print "The names of three of the students are ",student1,', ',student2,', and ',student3
#Use a list to define a classroom rather than individual variables
class1 = ["Marissa","Ben","Seth","Rachel","Ryan"]
#Print the list
print "The students in the class are ",class1
print "The first student in our class is ",class1[0]
print "The second student in our class is ",class1[1]
print "The fifth student in our class is ",class1[4]
print class1[5]
print "The last student on our class list is ",class1[-1]
print "The second-to-last student on our class list is ",class1[-2]
#add a student to the class
class1.append('Mischa')
#print the class with the new student
print "The students in the class are ",class1
#remove Ben from the class list; note Ben corresponds to entry 1
class1.pop(1)
#print the class minus Ben
print "The new class roster is ",class1
#make a list for my_car
my_car_list = ['Mercury','Sable','Dark Green',1998]
#print the information
print "The details on my car are ",my_car_list
#make a dictionary for my car
my_car_dict = {'make':'Mercury','model':'Sable','color':'Dark Green','year':1998}
#print the details of my car
print "The make of my car is ",my_car_dict['make']
print "The model of my car is ",my_car_dict['model']
print "The color of my car is ",my_car_dict['color']
print "My car was made in ",my_car_dict['year']
my_average=91
if my_average >= 90:
print "Your letter grade is an A!"
else:
print "You did not get an A."
my_average = 89
if my_average >= 90:
print "Your letter grade is an A!"
else:
print "You did not get an A."
if my_average >= 80 and my_average < 90:
print "Your letter grade is a B!"
my_average=72
if my_average >= 90:
print "Your letter grade is an A!"
elif my_average >= 80 and my_average < 90:
print "Your letter grade is a B!"
elif my_average >= 70 and my_average < 80:
print "Your letter grade is a C"
elif my_average >= 60 and my_average < 70:
print "Your letter grade is a D"
else:
print "Your letter grade is an F"
my_letter_grade="D"
if my_letter_grade == "A":
print "Your grade is greater than a 90"
elif my_letter_grade == "B":
print "Your grade is between 80 and 90"
elif my_letter_grade == "C":
print "Your grade is between a 70 and an 80"
elif my_letter_grade == "D":
print "Your grade is between a 60 and a 70"
else:
print "Your grade is below a 60"
if my_letter_grade == "A" or my_letter_grade == "B":
print "You're doing great!"
elif my_letter_grade == "C" or my_letter_grade == "D":
print "You need to do better..."
else:
print "You are failing."
my_letter_grade="B"
if my_letter_grade == "A" or my_letter_grade == "B":
print "You're doing great!"
elif my_letter_grade == "C" or my_letter_grade == "D":
print "You need to do better..."
else:
print "You are failing."
my_letter_grade is "B"
if my_letter_grade is "A":
print "You got an A!"
else:
print "You did not get an A."
i_got_an_a=(my_letter_grade is "A")
print "The statement 'I got an A' is ",i_got_an_a
if i_got_an_a:
print "Your letter grade is an A!"
else:
print "You did not get an A."
if i_got_an_a is True:
print "Your letter grade is an A!"
else:
print "You did not get an A."
class_grades = [99,78,44,82,56,61,94,78,76,100,85]
for i in range(len(class_grades)):
print "The numerical grade is ",class_grades[i]
for i in class_grades:
print "The numerical grade is ",i
c_grades = [] #declare empty list to save C grades
found_c_grades = 0 #set a counter for the number of C grades found
counter = 0 #set a counter to step through the class grades list
while found_c_grades < 2:
if class_grades[counter] >= 70 and class_grades[counter] < 80:
c_grades.append(class_grades[counter])
found_c_grades = found_c_grades + 1
counter = counter +1
print "The first two C grades are ",c_grades
def my_first_function(x):
return x**2 + 1
#single value of x
x = 1.0
print "The result of f(x) = x^2 + 1 for x = ",x," is ",my_first_function(x)
#list of values
f_result = []
x = [-2.0,-1.0,0.0,1.0,2.0]
for i in x:
f_result.append(my_first_function(i))
print "The result of f(x) = x^2 + 1 for x = ",x," is ",f_result
def numerical_grade_to_letter_grade(num_grade):
if num_grade >= 90:
let_grade = 'A'
elif num_grade >= 80 and num_grade < 90:
let_grade = 'B'
elif num_grade >= 70 and num_grade < 80:
let_grade = 'C'
elif num_grade >= 60 and num_grade < 70:
let_grade = 'D'
else:
let_grade = 'F'
return let_grade
#map numerical grades to letter grades
class_grades_letters = []
for grade in class_grades:
class_grades_letters.append(numerical_grade_to_letter_grade(grade))
#print correspondence between numerical and letter grades
for i in range(len(class_grades)):
print "The numerical grade is ",class_grades[i]," and the letter grade is ",class_grades_letters[i]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: parameter controls the parameter of data_generator, hence influences its values. The former is a parent of the latter. By symmetry, data_generator is a child of parameter.
Step2: Of course a child can have more than one parent, and a parent can have many children.
Step3: PyMC is concerned with two types of programming variables
Step4: The call to random stores a new value into the variable's value attribute. In fact, this new value is stored in the computer's cache for faster recall and efficiency.
Step5: The use of the deterministic wrapper was seen in the previous chapter's text-message example. Recall the model for $\lambda$ looked like
Step6: Clearly, if $\tau, \lambda_1$ and $\lambda_2$ are known, then $\lambda$ is known completely, hence it is a deterministic variable.
Step7: To frame this in the notation of the first chapter, though this is a slight abuse of notation, we have specified $P(A)$. Our next goal is to include data/evidence/observations $X$ into our model.
Step8: This is how we include data into our models
Step9: Finally...
Step10: Modeling approaches
Step11: 2. Draw $\lambda_1$ and $\lambda_2$ from an $\text{Exp}(\alpha)$ distribution
Step12: 3. For days before $\tau$, represent the user's received SMS count by sampling from $\text{Poi}(\lambda_1)$, and sample from $\text{Poi}(\lambda_2)$ for days after $\tau$. For example
Step13: 4. Plot the artificial dataset
Step14: It is okay that our fictional dataset does not look like our observed dataset
Step15: Later we will see how we use this to make predictions and test the appropriateness of our models.
Step16: Had we had stronger beliefs, we could have expressed them in the prior above.
Step17: The observed frequency is
Step18: We combine the observations into the PyMC observed variable, and run our inference algorithm
Step19: We plot the posterior distribution of the unknown $p_A$ below
Step20: Our posterior distribution puts most weight near the true value of $p_A$, but also some weights in the tails. This is a measure of how uncertain we should be, given our observations. Try changing the number of observations, N, and observe how the posterior distribution changes.
Step21: Below we plot the posterior distributions for the three unknowns
Step22: Notice that as a result of N_B < N_A, i.e. we have less data from site B, our posterior distribution of $p_B$ is fatter, implying we are less certain about the true value of $p_B$ than we are of $p_A$.
Step23: If this probability is too high for comfortable decision-making, we can perform more trials on site B (as site B has less samples to begin with, each additional data point for site B contributes more inferential "power" than each additional data point for site A).
Step24: The special case when $N = 1$ corresponds to the Bernoulli distribution. There is another connection between Bernoulli and Binomial random variables. If we have $X_1, X_2, ... , X_N$ Bernoulli random variables with the same $p$, then $Z = X_1 + X_2 + ... + X_N \sim \text{Binomial}(N, p )$.
Step25: Again, thinking of our data-generation model, we assign Bernoulli random variables to the 100 students
Step26: If we carry out the algorithm, the next step that occurs is the first coin-flip each student makes. This can be modeled again by sampling 100 Bernoulli random variables with $p=1/2$
Step27: Although not everyone flips a second time, we can still model the possible realization of second coin-flips
Step28: Using these variables, we can return a possible realization of the observed proportion of "Yes" responses. We do this using a PyMC deterministic variable
Step29: The line fc*t_a + (1-fc)*sc contains the heart of the Privacy algorithm. Elements in this array are 1 if and only if i) the first toss is heads and the student cheated or ii) the first toss is tails, and the second is heads, and are 0 else. Finally, the last line sums this vector and divides by float(N), produces a proportion.
Step30: Next we need a dataset. After performing our coin-flipped interviews the researchers received 35 "Yes" responses. To put this into a relative perspective, if there truly were no cheaters, we should expect to see on average 1/4 of all responses being a "Yes" (half chance of having first coin land Tails, and another half chance of having second coin land Heads), so about 25 responses in a cheat-free world. On the other hand, if all students cheated, we should expect to see approximately 3/4 of all responses be "Yes".
Step31: Below we add all the variables of interest to a Model container and run our black-box algorithm over the model.
Step32: With regards to the above plot, we are still pretty uncertain about what the true frequency of cheaters might be, but we have narrowed it down to a range between 0.05 to 0.35 (marked by the solid lines). This is pretty good, as a priori we had no idea how many students might have cheated (hence the uniform distribution for our prior). On the other hand, it is also pretty bad since there is a .3 length window the true value most likely lives in. Have we even gained anything, or are we still too uncertain about the true frequency?
Step33: I could have typed p_skewed = 0.5*p + 0.25 instead for a one-liner, as the elementary operations of addition and scalar multiplication will implicitly create a deterministic variable, but I wanted to make the deterministic boilerplate explicit for clarity's sake.
Step34: Below we add all the variables of interest to a Model container and run our black-box algorithm over the model.
Step35: More PyMC Tricks
Step36: The remainder of this chapter examines some practical examples of PyMC and PyMC modeling
Step37: It looks clear that the probability of damage incidents occurring increases as the outside temperature decreases. We are interested in modeling the probability here because it does not look like there is a strict cutoff point between temperature and a damage incident occurring. The best we can do is ask "At temperature $t$, what is the probability of a damage incident?". The goal of this example is to answer that question.
Step38: But something is missing. In the plot of the logistic function, the probability changes only near zero, but in our data above the probability changes around 65 to 70. We need to add a bias term to our logistic function
Step39: Adding a constant term $\alpha$ amounts to shifting the curve left or right (hence why it is called a bias).
Step40: A Normal random variable can be take on any real number, but the variable is very likely to be relatively close to $\mu$. In fact, the expected value of a Normal is equal to its $\mu$ parameter
Step41: We have our probabilities, but how do we connect them to our observed data? A Bernoulli random variable with parameter $p$, denoted $\text{Ber}(p)$, is a random variable that takes value 1 with probability $p$, and 0 else. Thus, our model can look like
Step42: We have trained our model on the observed data, now we can sample values from the posterior. Let's look at the posterior distributions for $\alpha$ and $\beta$
Step43: All samples of $\beta$ are greater than 0. If instead the posterior was centered around 0, we may suspect that $\beta = 0$, implying that temperature has no effect on the probability of defect.
Step44: Above we also plotted two possible realizations of what the actual underlying system might be. Both are equally likely as any other draw. The blue line is what occurs when we average all the 20000 possible dotted lines together.
Step45: The 95% credible interval, or 95% CI, painted in purple, represents the interval, for each temperature, that contains 95% of the distribution. For example, at 65 degrees, we can be 95% sure that the probability of defect lies between 0.25 and 0.75.
Step46: Is our model appropriate?
Step47: Note that the above plots are different (if you can think of a cleaner way to present this, please send a pull request and answer here!).
Step48: Next we sort each column by the posterior probabilities
Step49: We can present the above data better in a figure
Step50: The snaking-line is the sorted probabilities, blue bars denote defects, and empty space (or grey bars for the optimistic readers) denote non-defects. As the probability rises, we see more and more defects occur. On the right hand side, the plot suggests that as the posterior probability is large (line close to 1), then more defects are realized. This is good behaviour. Ideally, all the blue bars should be close to the right-hand side, and deviations from this reflect missed predictions.
Step51: In the random model, we can see that as the probability increases there is no clustering of defects to the right-hand side. Similarly for the constant model.
Step52: References
|
<ASSISTANT_TASK:>
Python Code:
import pymc as pm
parameter = pm.Exponential("poisson_param", 1)
data_generator = pm.Poisson("data_generator", parameter)
data_plus_one = data_generator + 1
print("Children of `parameter`: ")
print(parameter.children)
print("\nParents of `data_generator`: ")
print(data_generator.parents)
print("\nChildren of `data_generator`: ")
print(data_generator.children)
print("parameter.value =", parameter.value)
print("data_generator.value =", data_generator.value)
print("data_plus_one.value =", data_plus_one.value)
lambda_1 = pm.Exponential("lambda_1", 1) # prior on first behaviour
lambda_2 = pm.Exponential("lambda_2", 1) # prior on second behaviour
tau = pm.DiscreteUniform("tau", lower=0, upper=10) # prior on behaviour change
print("lambda_1.value = %.3f" % lambda_1.value)
print("lambda_2.value = %.3f" % lambda_2.value)
print("tau.value = %.3f" % tau.value, "\n")
lambda_1.random(), lambda_2.random(), tau.random()
print("After calling random() on the variables...")
print("lambda_1.value = %.3f" % lambda_1.value)
print("lambda_2.value = %.3f" % lambda_2.value)
print("tau.value = %.3f" % tau.value)
type(lambda_1 + lambda_2)
import numpy as np
n_data_points = 5 # in CH1 we had ~70 data points
@pm.deterministic
def lambda_(tau=tau, lambda_1=lambda_1, lambda_2=lambda_2):
out = np.zeros(n_data_points)
out[:tau] = lambda_1 # lambda before tau is lambda1
out[tau:] = lambda_2 # lambda after tau is lambda2
return out
%matplotlib inline
from IPython.core.pylabtools import figsize
from matplotlib import pyplot as plt
figsize(12.5, 4)
samples = [lambda_1.random() for i in range(20000)]
plt.hist(samples, bins=70, density=True, histtype="stepfilled")
plt.title("Prior distribution for $\lambda_1$")
plt.xlim(0, 8);
data = np.array([10, 5])
fixed_variable = pm.Poisson("fxd", 1, value=data, observed=True)
print("value: ", fixed_variable.value)
print("calling .random()")
fixed_variable.random()
print("value: ", fixed_variable.value)
# We're using some fake data here
data = np.array([10, 25, 15, 20, 35])
obs = pm.Poisson("obs", lambda_, value=data, observed=True)
print(obs.value)
model = pm.Model([obs, lambda_, lambda_1, lambda_2, tau])
tau = pm.rdiscrete_uniform(0, 80)
print(tau)
alpha = 1. / 20.
lambda_1, lambda_2 = pm.rexponential(alpha, 2)
print(lambda_1, lambda_2)
data = np.r_[pm.rpoisson(lambda_1, tau), pm.rpoisson(lambda_2, 80 - tau)]
plt.bar(np.arange(80), data, color="#348ABD")
plt.bar(tau - 1, data[tau - 1], color="r", label="user behaviour changed")
plt.xlabel("Time (days)")
plt.ylabel("count of text-msgs received")
plt.title("Artificial dataset")
plt.xlim(0, 80)
plt.legend();
def plot_artificial_sms_dataset():
tau = pm.rdiscrete_uniform(0, 80)
alpha = 1. / 20.
lambda_1, lambda_2 = pm.rexponential(alpha, 2)
data = np.r_[pm.rpoisson(lambda_1, tau), pm.rpoisson(lambda_2, 80 - tau)]
plt.bar(np.arange(80), data, color="#348ABD")
plt.bar(tau - 1, data[tau - 1], color="r", label="user behaviour changed")
plt.xlim(0, 80)
figsize(12.5, 5)
plt.suptitle("More examples of artificial datasets", fontsize=14)
for i in range(1, 5):
plt.subplot(4, 1, i)
plot_artificial_sms_dataset()
import pymc as pm
# The parameters are the bounds of the Uniform.
p = pm.Uniform('p', lower=0, upper=1)
# set constants
p_true = 0.05 # remember, this is unknown.
N = 1500
# sample N Bernoulli random variables from Ber(0.05).
# each random variable has a 0.05 chance of being a 1.
# this is the data-generation step
occurrences = pm.rbernoulli(p_true, N)
print(occurrences) # Remember: Python treats True == 1, and False == 0
print(occurrences.sum())
# Occurrences.mean is equal to n/N.
print("What is the observed frequency in Group A? %.4f" % occurrences.mean())
print("Does this equal the true frequency? %s" % (occurrences.mean() == p_true))
# include the observations, which are Bernoulli
obs = pm.Bernoulli("obs", p, value=occurrences, observed=True)
# To be explained in chapter 3
mcmc = pm.MCMC([p, obs])
mcmc.sample(18000, 1000)
figsize(12.5, 4)
plt.title("Posterior distribution of $p_A$, the true effectiveness of site A")
plt.vlines(p_true, 0, 90, linestyle="--", label="true $p_A$ (unknown)")
plt.hist(mcmc.trace("p")[:], bins=25, histtype="stepfilled", density=True)
plt.legend();
import pymc as pm
figsize(12, 4)
# these two quantities are unknown to us.
true_p_A = 0.05
true_p_B = 0.04
# notice the unequal sample sizes -- no problem in Bayesian analysis.
N_A = 1500
N_B = 750
# generate some observations
observations_A = pm.rbernoulli(true_p_A, N_A)
observations_B = pm.rbernoulli(true_p_B, N_B)
print("Obs from Site A: ", observations_A[:30].astype(int), "...")
print("Obs from Site B: ", observations_B[:30].astype(int), "...")
print(observations_A.mean())
print(observations_B.mean())
# Set up the pymc model. Again assume Uniform priors for p_A and p_B.
p_A = pm.Uniform("p_A", 0, 1)
p_B = pm.Uniform("p_B", 0, 1)
# Define the deterministic delta function. This is our unknown of interest.
@pm.deterministic
def delta(p_A=p_A, p_B=p_B):
return p_A - p_B
# Set of observations, in this case we have two observation datasets.
obs_A = pm.Bernoulli("obs_A", p_A, value=observations_A, observed=True)
obs_B = pm.Bernoulli("obs_B", p_B, value=observations_B, observed=True)
# To be explained in chapter 3.
mcmc = pm.MCMC([p_A, p_B, delta, obs_A, obs_B])
mcmc.sample(20000, 1000)
p_A_samples = mcmc.trace("p_A")[:]
p_B_samples = mcmc.trace("p_B")[:]
delta_samples = mcmc.trace("delta")[:]
figsize(12.5, 10)
# histogram of posteriors
ax = plt.subplot(311)
plt.xlim(0, .1)
plt.hist(p_A_samples, histtype='stepfilled', bins=25, alpha=0.85,
label="posterior of $p_A$", color="#A60628", density=True)
plt.vlines(true_p_A, 0, 80, linestyle="--", label="true $p_A$ (unknown)")
plt.legend(loc="upper right")
plt.title("Posterior distributions of $p_A$, $p_B$, and delta unknowns")
ax = plt.subplot(312)
plt.xlim(0, .1)
plt.hist(p_B_samples, histtype='stepfilled', bins=25, alpha=0.85,
label="posterior of $p_B$", color="#467821", density=True)
plt.vlines(true_p_B, 0, 80, linestyle="--", label="true $p_B$ (unknown)")
plt.legend(loc="upper right")
ax = plt.subplot(313)
plt.hist(delta_samples, histtype='stepfilled', bins=30, alpha=0.85,
label="posterior of delta", color="#7A68A6", density=True)
plt.vlines(true_p_A - true_p_B, 0, 60, linestyle="--",
label="true delta (unknown)")
plt.vlines(0, 0, 60, color="black", alpha=0.2)
plt.legend(loc="upper right");
# Count the number of samples less than 0, i.e. the area under the curve
# before 0, represent the probability that site A is worse than site B.
print("Probability site A is WORSE than site B: %.3f" % \
(delta_samples < 0).mean())
print("Probability site A is BETTER than site B: %.3f" % \
(delta_samples > 0).mean())
figsize(12.5, 4)
import scipy.stats as stats
binomial = stats.binom
parameters = [(10, .4), (10, .9)]
colors = ["#348ABD", "#A60628"]
for i in range(2):
N, p = parameters[i]
_x = np.arange(N + 1)
plt.bar(_x - 0.5, binomial.pmf(_x, N, p), color=colors[i],
edgecolor=colors[i],
alpha=0.6,
label="$N$: %d, $p$: %.1f" % (N, p),
linewidth=3)
plt.legend(loc="upper left")
plt.xlim(0, 10.5)
plt.xlabel("$k$")
plt.ylabel("$P(X = k)$")
plt.title("Probability mass distributions of binomial random variables");
import pymc as pm
N = 100
p = pm.Uniform("freq_cheating", 0, 1)
true_answers = pm.Bernoulli("truths", p, size=N)
first_coin_flips = pm.Bernoulli("first_flips", 0.5, size=N)
print(first_coin_flips.value)
second_coin_flips = pm.Bernoulli("second_flips", 0.5, size=N)
@pm.deterministic
def observed_proportion(t_a=true_answers,
fc=first_coin_flips,
sc=second_coin_flips):
observed = fc * t_a + (1 - fc) * sc
return observed.sum() / float(N)
observed_proportion.value
X = 35
observations = pm.Binomial("obs", N, observed_proportion, observed=True,
value=X)
model = pm.Model([p, true_answers, first_coin_flips,
second_coin_flips, observed_proportion, observations])
# To be explained in Chapter 3!
mcmc = pm.MCMC(model)
mcmc.sample(40000, 15000)
figsize(12.5, 3)
p_trace = mcmc.trace("freq_cheating")[:]
plt.hist(p_trace, histtype="stepfilled", density=True, alpha=0.85, bins=30,
label="posterior distribution", color="#348ABD")
plt.vlines([.05, .35], [0, 0], [5, 5], alpha=0.3)
plt.xlim(0, 1)
plt.legend();
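# Numeric summary (not in the original): posterior mean and a 95% credible
# interval for the cheating frequency.
print(p_trace.mean(), np.percentile(p_trace, [2.5, 97.5]))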
p = pm.Uniform("freq_cheating", 0, 1)
@pm.deterministic
def p_skewed(p=p):
return 0.5 * p + 0.25
yes_responses = pm.Binomial("number_cheaters", 100, p_skewed,
value=35, observed=True)
model = pm.Model([yes_responses, p_skewed, p])
# To Be Explained in Chapter 3!
mcmc = pm.MCMC(model)
mcmc.sample(25000, 2500)
figsize(12.5, 3)
p_trace = mcmc.trace("freq_cheating")[:]
plt.hist(p_trace, histtype="stepfilled", density=True, alpha=0.85, bins=30,
label="posterior distribution", color="#348ABD")
plt.vlines([.05, .35], [0, 0], [5, 5], alpha=0.2)
plt.xlim(0, 1)
plt.legend();
N = 10
x = np.empty(N, dtype=object)
for i in range(0, N):
x[i] = pm.Exponential('x_%i' % i, (i + 1) ** 2)
figsize(12.5, 3.5)
np.set_printoptions(precision=3, suppress=True)
challenger_data = np.genfromtxt("data/challenger_data.csv", skip_header=1,
usecols=[1, 2], missing_values="NA",
delimiter=",")
# drop the NA values
challenger_data = challenger_data[~np.isnan(challenger_data[:, 1])]
# plot it, as a function of temperature (the first column)
print("Temp (F), O-Ring failure?")
print(challenger_data)
plt.scatter(challenger_data[:, 0], challenger_data[:, 1], s=75, color="k",
alpha=0.5)
plt.yticks([0, 1])
plt.ylabel("Damage Incident?")
plt.xlabel("Outside temperature (Fahrenheit)")
plt.title("Defects of the Space Shuttle O-Rings vs temperature");
figsize(12, 3)
def logistic(x, beta):
return 1.0 / (1.0 + np.exp(beta * x))
x = np.linspace(-4, 4, 100)
plt.plot(x, logistic(x, 1), label=r"$\beta = 1$")
plt.plot(x, logistic(x, 3), label=r"$\beta = 3$")
plt.plot(x, logistic(x, -5), label=r"$\beta = -5$")
plt.title("Logistic functon plotted for several value of $\\beta$ parameter", fontsize=14)
plt.legend();
def logistic(x, beta, alpha=0):
return 1.0 / (1.0 + np.exp(np.dot(beta, x) + alpha))
x = np.linspace(-4, 4, 100)
plt.plot(x, logistic(x, 1), label=r"$\beta = 1$", ls="--", lw=1)
plt.plot(x, logistic(x, 3), label=r"$\beta = 3$", ls="--", lw=1)
plt.plot(x, logistic(x, -5), label=r"$\beta = -5$", ls="--", lw=1)
plt.plot(x, logistic(x, 1, 1), label=r"$\beta = 1, \alpha = 1$",
color="#348ABD")
plt.plot(x, logistic(x, 3, -2), label=r"$\beta = 3, \alpha = -2$",
color="#A60628")
plt.plot(x, logistic(x, -5, 7), label=r"$\beta = -5, \alpha = 7$",
color="#7A68A6")
plt.title("Logistic functon with bias, plotted for several value of $\\alpha$ bias parameter", fontsize=14)
plt.legend(loc="lower left");
import scipy.stats as stats
nor = stats.norm
x = np.linspace(-8, 7, 150)
mu = (-2, 0, 3)
tau = (.7, 1, 2.8)
colors = ["#348ABD", "#A60628", "#7A68A6"]
parameters = zip(mu, tau, colors)
for _mu, _tau, _color in parameters:
plt.plot(x, nor.pdf(x, _mu, scale=1. / np.sqrt(_tau)),
label="$\mu = %d,\;\\tau = %.1f$" % (_mu, _tau), color=_color)
plt.fill_between(x, nor.pdf(x, _mu, scale=1. / np.sqrt(_tau)), color=_color,
alpha=.33)
plt.legend(loc="upper right")
plt.xlabel("$x$")
plt.ylabel("density function at $x$")
plt.title("Probability distribution of three different Normal random \
variables");
import pymc as pm
temperature = challenger_data[:, 0]
D = challenger_data[:, 1] # defect or not?
# notice the`value` here. We explain why below.
beta = pm.Normal("beta", 0, 0.001, value=0)
alpha = pm.Normal("alpha", 0, 0.001, value=0)
@pm.deterministic
def p(t=temperature, alpha=alpha, beta=beta):
return 1.0 / (1. + np.exp(beta * t + alpha))
p.value
# connect the probabilities in `p` with our observations through a
# Bernoulli random variable.
observed = pm.Bernoulli("bernoulli_obs", p, value=D, observed=True)
model = pm.Model([observed, beta, alpha])
# Mysterious code to be explained in Chapter 3
map_ = pm.MAP(model)
map_.fit()
mcmc = pm.MCMC(model)
mcmc.sample(120000, 100000, 2)
alpha_samples = mcmc.trace('alpha')[:, None] # best to make them 1d
beta_samples = mcmc.trace('beta')[:, None]
figsize(12.5, 6)
# histogram of the samples:
plt.subplot(211)
plt.title(r"Posterior distributions of the variables $\alpha, \beta$")
plt.hist(beta_samples, histtype='stepfilled', bins=35, alpha=0.85,
label=r"posterior of $\beta$", color="#7A68A6", density=True)
plt.legend()
plt.subplot(212)
plt.hist(alpha_samples, histtype='stepfilled', bins=35, alpha=0.85,
label=r"posterior of $\alpha$", color="#A60628", density=True)
plt.legend();
t = np.linspace(temperature.min() - 5, temperature.max() + 5, 50)[:, None]
p_t = logistic(t.T, beta_samples, alpha_samples)
mean_prob_t = p_t.mean(axis=0)
figsize(12.5, 4)
plt.plot(t, mean_prob_t, lw=3, label="average posterior \nprobability \
of defect")
plt.plot(t, p_t[0, :], ls="--", label="realization from posterior")
plt.plot(t, p_t[-2, :], ls="--", label="realization from posterior")
plt.scatter(temperature, D, color="k", s=50, alpha=0.5)
plt.title("Posterior expected value of probability of defect; \
plus realizations")
plt.legend(loc="lower left")
plt.ylim(-0.1, 1.1)
plt.xlim(t.min(), t.max())
plt.ylabel("probability")
plt.xlabel("temperature");
from scipy.stats.mstats import mquantiles
# vectorized bottom and top 2.5% quantiles for "confidence interval"
qs = mquantiles(p_t, [0.025, 0.975], axis=0)
plt.fill_between(t[:, 0], *qs, alpha=0.7,
color="#7A68A6")
plt.plot(t[:, 0], qs[0], label="95% CI", color="#7A68A6", alpha=0.7)
plt.plot(t, mean_prob_t, lw=1, ls="--", color="k",
label="average posterior \nprobability of defect")
plt.xlim(t.min(), t.max())
plt.ylim(-0.02, 1.02)
plt.legend(loc="lower left")
plt.scatter(temperature, D, color="k", s=50, alpha=0.5)
plt.xlabel("temp, $t$")
plt.ylabel("probability estimate")
plt.title("Posterior probability estimates given temp. $t$");
figsize(12.5, 2.5)
prob_31 = logistic(31, beta_samples, alpha_samples)
plt.xlim(0.995, 1)
plt.hist(prob_31, bins=1000, density=True, histtype='stepfilled')
plt.title("Posterior distribution of probability of defect, given $t = 31$")
plt.xlabel("probability of defect occurring in O-ring");
simulated = pm.Bernoulli("bernoulli_sim", p)
N = 10000
mcmc = pm.MCMC([simulated, alpha, beta, observed])
mcmc.sample(N)
figsize(12.5, 5)
simulations = mcmc.trace("bernoulli_sim")[:]
print(simulations.shape)
plt.title("Simulated dataset using posterior parameters")
figsize(12.5, 6)
for i in range(4):
ax = plt.subplot(4, 1, i + 1)
plt.scatter(temperature, simulations[1000 * i, :], color="k",
s=50, alpha=0.6)
posterior_probability = simulations.mean(axis=0)
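# averaging the simulated 0/1 outcomes over all posterior draws gives the
# posterior predictive probability of defect for each observed flight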
print("posterior prob of defect | realized defect ")
for i in range(len(D)):
print("%.2f | %d" % (posterior_probability[i], D[i]))
ix = np.argsort(posterior_probability)
print("prob  | defect ")
for i in range(len(D)):
print("%.2f | %d" % (posterior_probability[ix[i]], D[ix[i]]))
from separation_plot import separation_plot
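# a separation plot orders the observations by predicted probability and
# marks the actual outcomes; a good model stacks the defects toward the right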
figsize(11., 1.5)
separation_plot(posterior_probability, D)
figsize(11., 1.25)
# Our temperature-dependent model
separation_plot(posterior_probability, D)
plt.title("Temperature-dependent model")
# Perfect model
# i.e. the predicted probability of defect is 1 if a defect occurred and 0 otherwise.
p = D
separation_plot(p, D)
plt.title("Perfect model")
# random predictions
p = np.random.rand(23)
separation_plot(p, D)
plt.title("Random model")
# constant model
constant_prob = 7. / 23 * np.ones(23)
separation_plot(constant_prob, D)
plt.title("Constant-prediction model");
# type your code here.
figsize(12.5, 4)
plt.scatter(alpha_samples, beta_samples, alpha=0.1)
plt.title("Why does the plot look like this?")
plt.xlabel(r"$\alpha$")
plt.ylabel(r"$\beta$");
from IPython.core.display import HTML
def css_styling():
styles = open("../styles/custom.css", "r").read()
return HTML(styles)
css_styling()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Notice that I've updated the assert to include the word "To-Do" instead of "Django". Now our test should fail. Let's check that it fails.
Step2: We got what is called an expected failure, which is exactly what we wanted!
Step3: Some things to notice about our new test file
|
<ASSISTANT_TASK:>
Python Code:
%cd ../examples/superlists/
%ls
%%writefile functional_tests.py
from selenium import webdriver
browser = webdriver.Firefox()
# Edith has heard about a cool new online to-do app. She goes
# to check out its homepage
browser.get('http://localhost:8000')
# She notices the page title and header mention to-do lists
assert 'To-Do' in browser.title
# She is invited to enter a to-do item straight away
# She types "Buy peacock feathers" into a text box (Edith's hobby
# is tying fly-fishing lures)
# When she hits enter, the page updates, and now the page lists
# "1: Buy peacock feathers" as an item in a to-do list
# There is still a text box inviting her to add another item. She
# enters "Use peacock feathers to make a fly" (Edith is very methodical)
# The page updates again, and now shows both items on her list
# Edith wonders whether the site will remember her list. Then she sees
# that the site has generated a unique URL for her -- there is some
# explanatory text to that effect.
# She visits that URL - her to-do list is still there.
# Satisfied, she goes back to sleep
browser.quit()
# First start up the server:
#!python3 manage.py runserver
# Run test
!python3 functional_tests.py
%%writefile functional_tests.py
from selenium import webdriver
import unittest
class NewVisitorTest(unittest.TestCase): #1
def setUp(self): #2
self.browser = webdriver.Firefox()
self.browser.implicitly_wait(3) # Wait three seconds before trying anything.
def tearDown(self): #3
self.browser.quit()
def test_can_start_a_list_and_retrieve_it_later(self): #4
# Edith has heard about a cool new online to-do app. She goes
# to check out its homepage
self.browser.get('http://localhost:8000')
# She notices the page title and header mention to-do lists
self.assertIn('To-Do', self.browser.title) #5
self.fail('Finish the test!') #6
# She is invited to enter a to-do item straight away
# [...rest of comments as before]
if __name__ == '__main__': #7
unittest.main(warnings='ignore') #8
!python3 functional_tests.py
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Matching coordinates
Step2: Plot $W_1-J$ vs $W_1$
Step3: W1-J < -1.7 => galaxy
Step4: Collect relevant data
Step5: Analysis
Step6: DBSCAN
Step7: Plot $W_1$ vs $J$
Step8: t-SNE
|
<ASSISTANT_TASK:>
Python Code:
#obj = ["3C 454.3", 343.49062, 16.14821, 1.0]
obj = ["PKS J0006-0623", 1.55789, -6.39315, 1.0]
#obj = ["M87", 187.705930, 12.391123, 1.0]
#### name, ra, dec, radius of cone
obj_name = obj[0]
obj_ra = obj[1]
obj_dec = obj[2]
cone_radius = obj[3]
obj_coord = coordinates.SkyCoord(ra=obj_ra, dec=obj_dec, unit=(u.deg, u.deg), frame="icrs")
# Query data
data_2mass = Irsa.query_region(obj_coord, catalog="fp_psc", radius=cone_radius * u.deg)
data_wise = Irsa.query_region(obj_coord, catalog="allwise_p3as_psd", radius=cone_radius * u.deg)
__data_galex = Vizier.query_region(obj_coord, catalog='II/335', radius=cone_radius * u.deg)
data_galex = __data_galex[0]
num_2mass = len(data_2mass)
num_wise = len(data_wise)
num_galex = len(data_galex)
print("Number of object in (2MASS, WISE, GALEX): ", num_2mass, num_wise, num_galex)
# use only coordinate columns
ra_2mass = data_2mass['ra']
dec_2mass = data_2mass['dec']
c_2mass = coordinates.SkyCoord(ra=ra_2mass, dec=dec_2mass, unit=(u.deg, u.deg), frame="icrs")
ra_wise = data_wise['ra']
dec_wise = data_wise['dec']
c_wise = coordinates.SkyCoord(ra=ra_wise, dec=dec_wise, unit=(u.deg, u.deg), frame="icrs")
ra_galex = data_galex['RAJ2000']
dec_galex = data_galex['DEJ2000']
c_galex = coordinates.SkyCoord(ra=ra_galex, dec=dec_galex, unit=(u.deg, u.deg), frame="icrs")
####
sep_min = 1.0 * u.arcsec # minimum separation in arcsec
# Only 2MASS and WISE matching
#
idx_2mass, idx_wise, d2d, d3d = c_wise.search_around_sky(c_2mass, sep_min)
# select only the nearest one if there are more in the search region (minimum separation parameter)!
print("Only 2MASS and WISE: ", len(idx_2mass))
# from matching the coordinates of the two catalogues (2MASS and WISE)
data_2mass_matchwith_wise = data_2mass[idx_2mass]
data_wise_matchwith_2mass = data_wise[idx_wise] # WISE dataset
w1 = data_wise_matchwith_2mass['w1mpro']
j = data_2mass_matchwith_wise['j_m']
w1j = w1-j
cutw1j = -1.7 # https://academic.oup.com/mnras/article/448/2/1305/1055284
# WISE galaxy data -> from cut
galaxy = data_wise_matchwith_2mass[w1j < cutw1j]
print("Number of galaxies from the W1-J cut:", len(galaxy))
w1j_galaxy = w1j[w1j<cutw1j]
w1_galaxy = w1[w1j<cutw1j]
plt.scatter(w1j, w1, marker='o', color='blue')
plt.scatter(w1j_galaxy, w1_galaxy, marker='.', color="red")
plt.axvline(x=cutw1j) # https://academic.oup.com/mnras/article/448/2/1305/1055284
# GALEX
###
# coord of object in 2mass which match wise (first objet/nearest in sep_min region)
c_2mass_matchwith_wise = c_2mass[idx_2mass]
c_wise_matchwith_2mass = c_wise[idx_wise]
#Check with 2mass cut
idx_2mass_wise_galex, idx_galex1, d2d, d3d = c_galex.search_around_sky(c_2mass_matchwith_wise, sep_min)
num_galex1 = len(idx_galex1)
#Check with wise cut
idx_wise_2mass_galex, idx_galex2, d2d, d3d = c_galex.search_around_sky(c_wise_matchwith_2mass, sep_min)
num_galex2 = len(idx_galex2)
print("Number of GALEX match in 2MASS cut (with WISE): ", num_galex1)
print("Number of GALEX match in WISE cut (with 2MASS): ", num_galex2)
# diff/average
print("Confusion level: ", abs(num_galex1 - num_galex2)/np.mean([num_galex1, num_galex2])*100, "%")
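# rough stability check: the difference between the two match counts relative
# to their mean -- a small value means the cross-match barely depends on which
# catalogue we start the matching from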
# Choose which one is smaller!
if num_galex1 < num_galex2:
select_from_galex = idx_galex1
match_galex = data_galex[select_from_galex]
c_selected_galex = c_galex[select_from_galex]
# 2MASS from GALEX_selected
_idx_galex1, _idx_2mass, d2d, d3d = c_2mass.search_around_sky(c_selected_galex, sep_min)
match_2mass = data_2mass[_idx_2mass]
# WISE from 2MASS_selected
_ra_match_2mass = match_2mass['ra']
_dec_match_2mass = match_2mass['dec']
_c_match_2mass = coordinates.SkyCoord(ra=_ra_match_2mass, dec=_dec_match_2mass, unit=(u.deg, u.deg), frame="icrs")
_idx, _idx_wise, d2d, d3d = c_wise.search_around_sky(_c_match_2mass, sep_min)
match_wise = data_wise[_idx_wise]
else:
select_from_galex = idx_galex2
match_galex = data_galex[select_from_galex]
c_selected_galex = c_galex[select_from_galex]
# WISE from GALEX_selected
_idx_galex1, _idx_wise, d2d, d3d = c_wise.search_around_sky(c_selected_galex, sep_min)
match_wise = data_wise[_idx_wise]
# 2MASS from WISE_selected
_ra_match_wise = match_wise['ra']
_dec_match_wise = match_wise['dec']
_c_match_wise = coordinates.SkyCoord(ra=_ra_match_wise, dec=_dec_match_wise, unit=(u.deg, u.deg), frame="icrs")
_idx, _idx_2mass, d2d, d3d = c_2mass.search_around_sky(_c_match_wise, sep_min)
match_2mass = data_2mass[_idx_2mass]
print("Number of match in GALEX: ", len(match_galex))
print("Number of match in 2MASS: ", len(match_2mass))
print("Number of match in WISE : ", len(match_wise))
joindata = np.array([match_2mass['j_m'], match_2mass['h_m'], match_2mass['k_m'],
match_wise['w1mpro'], match_wise['w2mpro'], match_wise['w3mpro'], match_wise['w4mpro'],
match_galex['NUVmag']])
joindata = joindata.T
from sklearn import datasets
from sklearn.decomposition import PCA
from sklearn.preprocessing import scale
X = joindata
pca = PCA(n_components=4)
X_r = pca.fit(X).transform(X)
print(pca.components_)
print(pca.explained_variance_)
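# explained_variance_ shows how much scatter each component captures; the
# photometric bands are strongly correlated, so we expect the first component
# to dominate (a general expectation, not verified here)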
# plot PCA result
# Plot data using PC1 vs PC2
plt.scatter(X_r[:,0], X_r[:,1], marker='o', color='blue')
# overplot galaxy selected using cut W1-J
for i, name in enumerate(match_wise['designation']):
for galaxyname in galaxy['designation']:
if name == galaxyname:
plt.scatter(X_r[i,0], X_r[i,1], marker=".", color="red")
# plot PCA result
# Plot data using PC1 vs PC2
plt.scatter(X_r[:,0], X_r[:,2], marker='o', color='blue')
# overplot galaxy selected using cut W1-J
for i, name in enumerate(match_wise['designation']):
for galaxyname in galaxy['designation']:
if name == galaxyname:
plt.scatter(X_r[i,0], X_r[i,2], marker=".", color="red")
# plot PCA result
# Plot data using PC1 vs PC2
plt.scatter(X_r[:,0], X_r[:,3], marker='o', color='blue')
# overplot galaxy selected using cut W1-J
for i, name in enumerate(match_wise['designation']):
for galaxyname in galaxy['designation']:
if name == galaxyname:
plt.scatter(X_r[i,0], X_r[i,3], marker=".", color="red")
# plot PCA result
# Plot data using PC1 vs PC2
plt.scatter(X_r[:,1], X_r[:,2], marker='o', color='blue')
# overplot galaxy selected using cut W1-J
for i, name in enumerate(match_wise['designation']):
for galaxyname in galaxy['designation']:
if name == galaxyname:
plt.scatter(X_r[i,1], X_r[i,2], marker=".", color="red")
# plot PCA result
# Plot data using PC1 vs PC2
plt.scatter(X_r[:,1], X_r[:,3], marker='o', color='blue')
# overplot galaxy selected using cut W1-J
for i, name in enumerate(match_wise['designation']):
for galaxyname in galaxy['designation']:
if name == galaxyname:
plt.scatter(X_r[i,1], X_r[i,3], marker=".", color="red")
# plot PCA result
# Plot data using PC1 vs PC2
plt.scatter(X_r[:,2], X_r[:,3], marker='o', color='blue')
# overplot galaxy selected using cut W1-J
for i, name in enumerate(match_wise['designation']):
for galaxyname in galaxy['designation']:
if name == galaxyname:
plt.scatter(X_r[i,2], X_r[i,3], marker=".", color="red")
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler
X = joindata
db = DBSCAN(eps=1, min_samples=3).fit(X)
core_samples_mask = np.zeros_like(db.labels_, dtype=bool)
core_samples_mask[db.core_sample_indices_] = True
labels = db.labels_
# Number of clusters in labels, ignoring noise if present.
n_clusters_ = len(set(labels)) - (1 if -1 in labels else 0)
print('Estimated number of clusters: %d' % n_clusters_)
#print(labels)
# Black removed and is used for noise instead.
unique_labels = set(labels)
colors = [plt.cm.Spectral(each) for each in np.linspace(0, 1, len(unique_labels))]
for k, col in zip(unique_labels, colors):
if k == -1:
# Black used for noise.
col = [0, 0, 0, 1]
class_member_mask = (labels == k)
## J vs J-W1
xy = X[class_member_mask & core_samples_mask]
plt.plot(xy[:, 3], xy[:, 0], 'o', markerfacecolor=tuple(col), markeredgecolor='k', markersize=14)
xy = X[class_member_mask & ~core_samples_mask]
plt.plot(xy[:, 3], xy[:, 0], 'o', markerfacecolor=tuple(col), markeredgecolor='k', markersize=8)
for i, name in enumerate(match_wise['designation']):
for galaxyname in galaxy['designation']:
if name == galaxyname:
plt.plot(X[i,3], X[i,0], marker="X", markerfacecolor='red', markeredgecolor='none', markersize=8)
plt.title('Estimated number of clusters: %d' % n_clusters_)
plt.show()
from sklearn.manifold import TSNE
X = joindata #scale(joindata)
X_r = TSNE(n_components=2).fit_transform(X)
plt.scatter(X_r[:,0], X_r[:,1], marker='o', color="blue")
for i, name in enumerate(match_wise['designation']):
for galaxyname in galaxy['designation']:
if name == galaxyname:
plt.scatter(X_r[i,0], X_r[i,1], marker='.', color="red")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 🛴 A Comprehensive Example
|
<ASSISTANT_TASK:>
Python Code:
!pip install avalanche-lib==0.2.0
from torch.optim import SGD
from torch.nn import CrossEntropyLoss
from avalanche.benchmarks.classic import SplitMNIST
from avalanche.evaluation.metrics import forgetting_metrics, accuracy_metrics, \
loss_metrics, timing_metrics, cpu_usage_metrics, confusion_matrix_metrics, disk_usage_metrics
from avalanche.models import SimpleMLP
from avalanche.logging import InteractiveLogger, TextLogger, TensorboardLogger
from avalanche.training.plugins import EvaluationPlugin
from avalanche.training.supervised import Naive
scenario = SplitMNIST(n_experiences=5)
# MODEL CREATION
model = SimpleMLP(num_classes=scenario.n_classes)
# DEFINE THE EVALUATION PLUGIN and LOGGERS
# The evaluation plugin manages the metrics computation.
# It takes as argument a list of metrics, collects their results and returns
# them to the strategy it is attached to.
# log to Tensorboard
tb_logger = TensorboardLogger()
# log to text file
text_logger = TextLogger(open('log.txt', 'a'))
# print to stdout
interactive_logger = InteractiveLogger()
eval_plugin = EvaluationPlugin(
accuracy_metrics(minibatch=True, epoch=True, experience=True, stream=True),
loss_metrics(minibatch=True, epoch=True, experience=True, stream=True),
timing_metrics(epoch=True, epoch_running=True),
forgetting_metrics(experience=True, stream=True),
cpu_usage_metrics(experience=True),
confusion_matrix_metrics(num_classes=scenario.n_classes, save_image=False,
stream=True),
disk_usage_metrics(minibatch=True, epoch=True, experience=True, stream=True),
loggers=[interactive_logger, text_logger, tb_logger]
)
# CREATE THE STRATEGY INSTANCE (NAIVE)
cl_strategy = Naive(
model, SGD(model.parameters(), lr=0.001, momentum=0.9),
CrossEntropyLoss(), train_mb_size=500, train_epochs=1, eval_mb_size=100,
evaluator=eval_plugin)
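# 'Naive' is plain sequential fine-tuning with no forgetting mitigation;
# it is typically used as a lower-bound baseline for continual learning.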
# TRAINING LOOP
print('Starting experiment...')
results = []
for experience in scenario.train_stream:
print("Start of experience: ", experience.current_experience)
print("Current Classes: ", experience.classes_in_this_experience)
# train returns a dictionary which contains all the metric values
res = cl_strategy.train(experience)
print('Training completed')
print('Computing accuracy on the whole test set')
# test also returns a dictionary which contains all the metric values
results.append(cl_strategy.eval(scenario.test_stream))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Download data from FRED and manage
Step2: Plots to be produced in Excel
|
<ASSISTANT_TASK:>
Python Code:
import fredpy as fp
import matplotlib.pyplot as plt
import pandas as pd
# Load fredpy API key
fp.api_key = fp.load_api_key('fred_api_key.txt')
# Download labor market data
u = fp.series('unrate').data
u_men = fp.series('LNS14000001').data
u_women = fp.series('LNS14000002').data
lf_men = fp.series('LNS11000001').data
lf_women = fp.series('LNS11000002').data
# Concatenate data into a DataFrame
df = pd.concat([u,u_men,u_women,lf_men,lf_women],axis=1).dropna()
df.columns = ['Unemployment Rate','Unemployment Rate - Men','Unemployment Rate - Women','Labor Force - Men','Labor Force - Women']
df.index.name = 'Date'
# Export data as csv
df.to_csv('../Data/labor_force_data.csv')
# US unemployment rate
plt.plot(u)
plt.ylabel('Percent')
plt.title('US Unemployment Rate');
# Men's and women's unemployment rates
plt.plot(u_men,label='Men')
plt.plot(u_women,'--',label='Women')
plt.ylabel('Percent')
plt.title('Unemployment Rates')
plt.legend();
# Scatter plot of men's v. women's unemployment rates
plt.scatter(u_men,u_women,s=40,alpha=0.25)
plt.xlabel('Men (Percent)')
plt.ylabel('Women (Percent)')
plt.title('Unemployment Rates');
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The pass_air_data dataframe contains, for every year and route, the number of passengers carried and the list of airlines that fly that route.
Step2: Cleveland to Chicago, how many people fly this route?
Step3: Exercise
Step4: So let's have a look at the important nodes in this network, i.e. important airports in this network. We'll use pagerank, betweenness centrality and degree centrality.
Step5: Exercise
Step6: What about the degree distribution of this network?
Step7: Let's make a log-log plot to get a better overview of this.
Step8: Directed Graphs
Step9: Moving back to Airports
Step10: 'ANC' is the airport code of Anchorage airport in Alaska, and according to pagerank and betweenness centrality it is the most important airport in this network. Isn't that weird? Thoughts?
Step11: How reachable is this network?
Step12: Wait, What??? This network is not connected. That seems like a really stupid thing to do.
Step13: SPB, SSB, AIK anyone?
Step14: Strongly vs weakly connected graphs.
Step15: Exercise! (Actually this is a game
Step16: What about airlines? Can we find airline specific reachability?
Step17: Exercise
Step18: Exercise
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import networkx as nx
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import warnings
warnings.filterwarnings('ignore')
pass_air_data = pd.read_csv('datasets/passengers.csv')
pass_air_data.head()
# Create a MultiDiGraph from this dataset
passenger_graph = nx.from_pandas_edgelist(pass_air_data, source='ORIGIN', target='DEST', edge_attr=['YEAR', 'PASSENGERS', 'UNIQUE_CARRIER_NAME'], create_using=nx.MultiDiGraph())
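# A MultiDiGraph keeps parallel edges between the same airport pair -- one per
# year here -- indexed by integer keys in insertion order; with data running
# from 1990 to 2015, key 25 below is the 2015 edge.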
passenger_graph['CLE']['ORD'][25]
temp = [(i['YEAR'], i['PASSENGERS'])for i in dict(passenger_graph['CLE']['ORD']).values()]
x, y = zip(*temp)
plt.plot(x, y)
plt.show()
temp = pass_air_data.groupby(['YEAR'])['PASSENGERS'].transform(max) == pass_air_data['PASSENGERS']
pass_air_data[temp][pass_air_data.YEAR.isin([1990, 2015])]
pass_air_data[(pass_air_data['ORIGIN'] == 'LAX') & (pass_air_data['DEST'] == 'HNL')].plot('YEAR', 'PASSENGERS')
pass_air_data[(pass_air_data['ORIGIN'] == 'LAX') & (pass_air_data['DEST'] == 'SFO')].plot('YEAR', 'PASSENGERS')
# nx.pagerank(passenger_graph)
def year_network(G, year):
temp_g = nx.DiGraph()
for i in G.edges(data=True):
if i[2]['YEAR'] == year:
temp_g.add_edge(i[0], i[1], weight=i[2]['PASSENGERS'])
return temp_g
pass_2015 = year_network(passenger_graph, 2015)
len(pass_2015)
len(pass_2015.edges())
# Load in the GPS coordinates of all the airports
lat_long = pd.read_csv('datasets/GlobalAirportDatabase.txt', delimiter=':', header=None)
lat_long[lat_long[1].isin(list(pass_2015.nodes()))]
pos_dict = {}
for airport in lat_long[lat_long[1].isin(list(pass_2015.nodes()))].iterrows():
pos_dict[airport[1][1]] = (airport[1][15], airport[1][14])
pos_dict
plt.figure(figsize=(20, 9))
G = nx.subgraph(pass_2015, pos_dict.keys())
nx.draw_networkx_nodes(G, pos=pos_dict, node_size=10, alpha=0.6, node_color='b')
# nx.draw_networkx_edges(G, pos=pos_dict, width=0.1, arrows=False)
plt.show()
plt.figure(figsize=(20, 9))
x = [i[0] for i in pos_dict.values()]
y = [i[1] for i in pos_dict.values()]
plt.scatter(x, y)
plt.hist(list(nx.degree_centrality(pass_2015).values()))
plt.show()
d = {}
for i, j in dict(nx.degree(pass_2015)).items():
if j in d:
d[j] += 1
else:
d[j] = 1
x = np.log2(list((d.keys())))
y = np.log2(list(d.values()))
plt.scatter(x, y, alpha=0.4)
plt.show()
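# a roughly straight line on this log-log plot points to a heavy-tailed degree
# distribution: a handful of hub airports hold most of the connections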
G = nx.DiGraph()
G.add_edge(1, 2, weight=1)
# print(G.edges())
# G[1][2]
# G[2][1]
# G.is_directed()
# type(G)
G.add_edges_from([(1, 2), (3, 2), (4, 2), (5, 2), (6, 2), (7, 2)])
nx.draw_circular(G, with_labels=True)
G.in_degree()
nx.pagerank(G)
G.add_edge(5, 6)
nx.draw_circular(G, with_labels=True)
nx.pagerank(G)
G.add_edge(2, 8)
nx.draw_circular(G, with_labels=True)
nx.pagerank(G)
sorted(nx.pagerank(pass_2015, weight=None).items(), key=lambda x:x[1], reverse=True)[:10]
sorted(nx.betweenness_centrality(pass_2015).items(), key=lambda x:x[1], reverse=True)[0:10]
sorted(nx.degree_centrality(pass_2015).items(), key=lambda x:x[1], reverse=True)[0:10]
sorted(nx.betweenness_centrality(pass_2015, weight='weight').items(), key=lambda x:x[1], reverse=True)[0:10]
sorted(nx.pagerank(pass_2015, weight='weight').items(), key=lambda x:x[1], reverse=True)[0:10]
# nx.average_shortest_path_length(pass_2015)
list(nx.weakly_connected_components(pass_2015))
pass_air_data[(pass_air_data['YEAR'] == 2015) & (pass_air_data['ORIGIN'] == 'AIK')]
pass_2015.remove_nodes_from(['SPB', 'SSB', 'AIK'])
nx.is_weakly_connected(pass_2015)
nx.is_strongly_connected(pass_2015)
G = nx.DiGraph()
G.add_edge(1, 2)
G.add_edge(2, 3)
G.add_edge(3, 1)
nx.draw(G)
G.add_edge(3, 4)
nx.draw(G)
nx.is_strongly_connected(G)
list(nx.strongly_connected_components(pass_2015))
pass_air_data[(pass_air_data['YEAR'] == 2015) & (pass_air_data['DEST'] == 'TSP')]
# take the largest strongly connected component as a modifiable subgraph
pass_2015_strong = pass_2015.subgraph(max(nx.strongly_connected_components(pass_2015), key=len)).copy()
len(pass_2015_strong)
nx.average_shortest_path_length(pass_2015_strong)
sort_degree = sorted(nx.degree_centrality(pass_2015_strong).items(), key=lambda x:x[1], reverse=True)
top_count = 0
for n, v in sort_degree:
count = 0
for node, val in sort_degree:
if node != n:
if node not in pass_2015_strong.adj[n]:
pass_2015_strong.add_edge(n, node)
count += 1
if count == 25:
break
top_count += 1
if top_count == 20:
break
nx.average_shortest_path_length(pass_2015_strong)
passenger_graph['JFK']['SFO'][25]
def str_to_list(a):
return a[1:-1].split(', ')
for i in str_to_list(passenger_graph['JFK']['SFO'][25]['UNIQUE_CARRIER_NAME']):
print(i)
%%time
for origin, dest in passenger_graph.edges():
for key in passenger_graph[origin][dest]:
passenger_graph[origin][dest][key]['airlines'] = str_to_list(passenger_graph[origin][dest][key]['UNIQUE_CARRIER_NAME'])
united_network = nx.DiGraph()
for origin, dest in passenger_graph.edges():
if 25 in passenger_graph[origin][dest]:
if "'United Air Lines Inc.'" in passenger_graph[origin][dest][25]['airlines']:
united_network.add_edge(origin, dest, weight=passenger_graph[origin][dest][25]['PASSENGERS'])
len(united_network)
len(united_network.edges())
sorted(nx.pagerank(united_network, weight='weight').items(), key=lambda x:x[1], reverse=True)[0:10]
sorted(nx.degree_centrality(united_network).items(), key=lambda x:x[1], reverse=True)[0:10]
pass_air_data[(pass_air_data.ORIGIN == 'CLE') &
(pass_air_data.UNIQUE_CARRIER_NAME.str.contains('United Air Lines Inc.'))
].groupby('YEAR')['PASSENGERS'].sum().plot()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now we can define the transfer function using scipy
Step2: Step response
Step3: Once we have the response we can plot it using matplotlib
Step4: Simulating any arbitrary input
Step5: Now we'll plot the output and input and make our plot a little more fancy
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt
num = [2, 25]
den = [1, 4, 25]
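# i.e. the transfer function H(s) = (2s + 25) / (s^2 + 4s + 25)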
sys = signal.TransferFunction(num, den)
time, response = signal.step(sys)
plt.plot(time,response,label="Simulation")
plt.show()
t = np.linspace(0, 5)
u = 1 * t
tout, resp, x = signal.lsim(sys, u, t)
plt.plot(t, resp, label='Output')
plt.plot(t, u, label='Input')
plt.legend()
plt.title('Ramp Response')
plt.xlabel('Time (sec)')
plt.ylabel('Response')
plt.grid()
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First we define some functions. The make_imgs function renders the images for the animation, and make_anim builds the animation from the images the first function made.
Step2: API documentation is available at http
Step3: Here we define the Global Ocean Wave Analysis dataset namespace, the variable names, the period start and end, the location, and the name of the animation file that will be saved to your computer. We choose a time period when Hurricane Dorian was active near the Bahamas. First we use the variables VHM0_SW1 and VMDR_SW1, which describe the spectral significant primary swell waves. You can find more variables on the dataset detail page.
Step4: In the following cell we are downloading the data.
Step5: Now we open the file by using xarray.
Step6: We like to use Basemap to plot data on it. Here we define the area. You can find more information and documentation about Basemap here.
Step7: Now it is time to make images from every time step. We only show one time step here.
Step8: This is the part where we make the animation.
Step9: Now we download Significant Wave Height data to see the difference in images.
Step10: As explained above, significant wave height is the combined height of the sea and the swell waves. The hurricane now appears much rounder and with greater wave heights, because wind and swell waves are both taken into account here, whereas the image above showed only the primary swell waves.
Step11: We have seen the historical wave data; now let's see what the future brings. For that we use data from the Meteo France WAve Model (MFWAM) Global Forecast, which uses the same model as the analysis.
Step12: We merge the analysis and forecast datasets using xarray concat. It merges the two datasets along the 'time' dimension and, where the data conflict, it keeps the dd2 (analysis) data, as the analysis is more precise than the forecast.
|
<ASSISTANT_TASK:>
Python Code:
import os
from dh_py_access import package_api
import dh_py_access.lib.datahub as datahub
import xarray as xr
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
import numpy as np
import imageio
import shutil
import datetime
import matplotlib as mpl
mpl.rcParams['font.family'] = 'Avenir Lt Std'
mpl.rcParams.update({'font.size': 25})
print('matplotlib', mpl.__version__)
print ('imageio',imageio.__version__)
print ('xarray',xr.__version__)
print ('numpy',np.__version__)
def make_imgs(dd, lonmap,latmap, variable, vmin,vmax,folder,title):
vmin = vmin; vmax = vmax
for k in range(0,len(dd[variable])):
filename = folder + 'ani_' + str(k).rjust(3,'0') + '.png'
fig = plt.figure(figsize = (14,12))
ax = fig.add_subplot(111)
pcm = m.pcolormesh(lonmap,latmap,dd[variable][k].data,vmin = vmin, vmax = vmax,cmap='rainbow')
levs = np.arange(1,7,1)
S1 = m.contour(lonmap,latmap,dd[variable][k].data,levs,colors='black',linewidths=0.4,alpha = 0.6)
plt.clabel(S1,inline=1,inline_spacing=0,fontsize=7,fmt='%1.0f',colors='black')
m.fillcontinents(color='#58606F')
m.drawcoastlines(color='#222933')
m.drawcountries(color='#222933')
m.drawstates(color='#222933')
cbar = plt.colorbar(pcm,fraction=0.035, pad=0.03)
ttl = plt.title(title + '\n ' + str(dd[variable].time[k].data)[:-10],fontweight = 'bold')
ttl.set_position([.5, 1.05])
if not os.path.exists(folder):
os.mkdir(folder)
plt.savefig(filename)
if k == 10:
plt.show()
plt.close()
def make_anim(folder,anim_name):
files = sorted(os.listdir(folder))
fileList = []
for file in files:
if not file.startswith('.'):
complete_path = folder + file
fileList.append(complete_path)
writer = imageio.get_writer(anim_name, fps=4)
for im in fileList:
writer.append_data(imageio.imread(im))
writer.close()
print ('Animation is saved as ' + anim_name + ' under current working directory')
shutil.rmtree(folder)
API_key = open('APIKEY').read().strip()
server='api.planetos.com/'
version = 'v1'
time_start = '2019-08-30T00:00:00'
time_end = str(datetime.datetime.today().strftime('%Y-%m-%d') + 'T00:00:00')#'2019-09-06T00:00:00'
dataset_key = 'meteo_france_global_ocean_wave_analysis_daily'
variable1 = 'VHM0_SW1'
area = 'bah'
latitude_north = 40; latitude_south = 12
longitude_west = -89; longitude_east = -58
anim_name = variable1 + '_animation_' + str(datetime.datetime.strptime(time_start,'%Y-%m-%dT%H:%M:%S').year) + '.mp4'
dh=datahub.datahub(server,version,API_key)
package = package_api.package_api(dh,dataset_key,variable1,longitude_west,longitude_east,latitude_south,latitude_north,time_start,time_end,area_name=area+variable1)
package.make_package()
package.download_package()
dd1 = xr.open_dataset(package.local_file_name)
dd1['longitude'] = ((dd1.longitude+180) % 360) - 180
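# wrap longitudes from [0, 360) into [-180, 180) so they line up with the
# Basemap region defined below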
m = Basemap(projection='merc', lat_0 = 0, lon_0 = (longitude_east + longitude_west)/2,
resolution = 'l', area_thresh = 0.05,
llcrnrlon=longitude_west, llcrnrlat=latitude_south,
urcrnrlon=longitude_east, urcrnrlat=latitude_north)
lons,lats = np.meshgrid(dd1.longitude,dd1.latitude)
lonmap,latmap = m(lons,lats)
folder = './ani/'
if not os.path.exists(folder):
os.mkdir(folder)
title = 'Significant Primary Swell Wave Height [m]'
vmin = 0; vmax = 6
make_imgs(dd1, lonmap,latmap, variable1, vmin,vmax,folder,title)
make_anim(folder, anim_name)
variable2 = 'VHM0'
anim_name2 = variable2 + '_animation_' + str(datetime.datetime.strptime(time_start,'%Y-%m-%dT%H:%M:%S').year) + '.mp4'
package2 = package_api.package_api(dh,dataset_key,variable2,longitude_west,longitude_east,latitude_south,latitude_north,time_start,time_end,area_name=area + variable2)
package2.make_package()
package2.download_package()
dd2 = xr.open_dataset(package2.local_file_name)
title = 'Significant Wave Height [m]'
vmin = 0; vmax = 6
make_imgs(dd2, lonmap,latmap, variable2, vmin,vmax,folder,title)
make_anim(folder, anim_name2)
dataset_key2 = 'meteofrance_global_ocean_forecast'
reftime_start = str(datetime.datetime.today().strftime('%Y-%m-%d') + 'T00:00:00')#'2019-09-06T00:00:00'
reftime_end = str(datetime.datetime.today().strftime('%Y-%m-%d') + 'T00:00:00')#'2019-09-06T00:00:00'
package3 = package_api.package_api(dh,dataset_key2,variable2,longitude_west,longitude_east,latitude_south,latitude_north,reftime_start=reftime_start,reftime_end=reftime_end,area_name=area+variable2)
package3.make_package()
package3.download_package()
dd3 = xr.open_dataset(package3.local_file_name)
dd3 = dd3.drop('reftime')
dd_merged = xr.concat([dd2,dd3],dim='time')
title = 'Significant Wave Height [m]'
anim_name3 = variable2 + '_animation_forecast_analysis_' + str(datetime.datetime.strptime(time_start,'%Y-%m-%dT%H:%M:%S').year) + '.mp4'
vmin = 0; vmax = 6
make_imgs(dd_merged, lonmap,latmap, variable2, vmin,vmax,folder,title)
make_anim(folder, anim_name3)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Note from Junpeng Lao
Step2: 4.2 The seven scientists
Step3: 4.3 Repeated measurement of IQ
|
<ASSISTANT_TASK:>
Python Code:
# Data
x = np.array([1.1, 1.9, 2.3, 1.8])
n = len(x)
with pm.Model() as model1:
# prior
mu = pm.Normal('mu', mu=0, tau=.001)
sigma = pm.Uniform('sigma', lower=0, upper=10)
# observed
xi = pm.Normal('xi',mu=mu, tau=1/(sigma**2), observed=x)
# inference
    trace = pm.sample(1000, njobs=2)
pm.traceplot(trace[50:]);
from matplotlib.ticker import NullFormatter
nullfmt = NullFormatter() # no labels
y = trace['mu'][50:]
x = trace['sigma'][50:]
# definitions for the axes
left, width = 0.1, 0.65
bottom, height = 0.1, 0.65
bottom_h = left_h = left + width + 0.02
rect_scatter = [left, bottom, width, height]
rect_histx = [left, bottom_h, width, 0.2]
rect_histy = [left_h, bottom, 0.2, height]
# start with a rectangular Figure
plt.figure(1, figsize=(8, 8))
axScatter = plt.axes(rect_scatter)
axHistx = plt.axes(rect_histx)
axHisty = plt.axes(rect_histy)
# no labels
axHistx.xaxis.set_major_formatter(nullfmt)
axHisty.yaxis.set_major_formatter(nullfmt)
# the scatter plot:
axScatter.scatter(x, y, c=[[1, 1, 1]], alpha=.5)  # a 2-D row marks a single RGB colour, avoiding matplotlib's colour-mapping ambiguity
# now determine nice limits by hand:
binwidth1 = 0.25
axScatter.set_xlim((-.01, 10.5))
axScatter.set_ylim((-0, 5))
bins1 = np.linspace(-.01, 10.5, 20)
axHistx.hist(x, bins=bins1)
bins2 = np.linspace(-0, 5, 20)
axHisty.hist(y, bins=bins2, orientation='horizontal')
axHistx.set_xlim(axScatter.get_xlim())
axHisty.set_ylim(axScatter.get_ylim());
print('The mu estimation is: ', y.mean())
print('The sigma estimation is: ', x.mean())
# display the total number and percentage of divergent
divergent = trace['diverging']
print('Number of Divergent %d' % divergent.nonzero()[0].size)
divperc = divergent.nonzero()[0].size/len(trace)
print('Percentage of Divergent %.5f' % divperc)
# scatter plot for the identifcation of the problematic neighborhoods in parameter space
plt.figure(figsize=(6, 6))
y = trace['mu']
x = trace['sigma']
plt.scatter(x[divergent == 0], y[divergent == 0], color='r', alpha=.05)
plt.scatter(x[divergent == 1], y[divergent == 1], color='g', alpha=.5);
# data
x = np.array([-27.020,3.570,8.191,9.898,9.603,9.945,10.056])
n = len(x)
with pm.Model() as model2:
# prior
mu = pm.Normal('mu', mu=0, tau=.001)
lambda1 = pm.Gamma('lambda1', alpha=.01, beta=.01, shape=(n))
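    # shape=(n) gives every scientist an individual precision, so measurements
    # from scientists with large spread are automatically down-weighted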
# sigma = pm.Deterministic('sigma',1 / sqrt(lambda1))
# observed
xi = pm.Normal('xi',mu = mu, tau = lambda1, observed = x )
# inference
trace2 = pm.sample(5000, njobs=2)
burnin = 1000
pm.traceplot(trace2[burnin:]);
mu = trace2['mu'][burnin:]
lambda1 = trace2['lambda1'][burnin:]
print('The mu estimation is: ', mu.mean())
print('The sigma estimation is: ')
for i in np.mean(np.squeeze(lambda1),axis=0):
print(1 / np.sqrt(i))
# Data
y = np.array([[90,95,100],[105,110,115],[150,155,160]])
ntest = 3
nsbj = 3
import sys
eps = sys.float_info.epsilon
with pm.Model() as model3:
# mu_i ~ Uniform(0, 300)
mui = pm.Uniform('mui', 0, 300, shape=(nsbj,1))
# sg ~ Uniform(0, 100)
# sg = pm.Uniform('sg', .0, 100)
# It is more stable to use a Gamma prior
lambda1 = pm.Gamma('lambda1', alpha=.01, beta=.01)
sg = pm.Deterministic('sg',1 / np.sqrt(lambda1))
# y ~ Normal(mu_i, sg)
yd = pm.Normal('y', mu=mui, sd=sg, observed=y)
    trace3 = pm.sample(5000, njobs=2)
burnin = 500
pm.traceplot(trace3[burnin:]);
mu = trace3['mui'][burnin:]
sigma = trace3['sg'][burnin:]
print('The mu estimation is: ', np.mean(mu, axis=0))
print('The sigma estimation is: ',sigma.mean())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercise 1
Step2: Exercise 2
Step3: Exercise 3
Step4: Exercise 4
Step5: Exercise 5
Step6: Exercise 6
Step7: Exercise 7
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
# TODO
# TODO convert over50k to boolean
# TODO convert independend variables
# TODO (hint: use drop(columns,axis=1))
from sklearn.model_selection import train_test_split
# TODO
from sklearn.ensemble import RandomForestClassifier
# TODO
from plotting_utilities import plot_feature_importances
import matplotlib.pyplot as plt
%matplotlib inline
# TODO
# TODO predict
from sklearn.metrics import confusion_matrix
# TODO
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Some 'theory'
Step2: Using the filtering and reductions operations we have seen in the previous notebooks, we could do something like
Step3: And many more methods are available.
Step4: Exercise
Step5: Exercise
Step6: Exercise
Step7: Exercise
Step8: Exercise
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
try:
import seaborn
except ImportError:
pass
pd.options.display.max_rows = 10
df = pd.DataFrame({'key':['A','B','C','A','B','C','A','B','C'],
'data': [0, 5, 10, 5, 10, 15, 10, 15, 20]})
df
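# For comparison, a sketch of the manual split-apply-combine that
# groupby performs for us:
for key in ['A', 'B', 'C']:
    print(key, df[df['key'] == key]['data'].sum())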
df.groupby('key').aggregate(np.sum) # 'sum'
df.groupby('key').sum()
df = pd.read_csv("data/titanic.csv")
df.head()
df.groupby('Sex')['Age'].mean()
df['Survived'].sum() / len(df['Survived'])
df25 = df[df['Age'] <= 25]
df25['Survived'].sum() / len(df25['Survived'])
def survival_ratio(survived):
return survived.sum() / len(survived)
df.groupby('Sex')['Survived'].aggregate(survival_ratio)
df.groupby('Pclass')['Survived'].aggregate(survival_ratio).plot(kind='bar')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A few points
Step2: Images
Step3: Returning an Image object from an expression will automatically display it
Step4: Or you can pass an object with a rich representation to display
Step5: An image can also be displayed from raw data or a URL.
Step6: SVG images are also supported out of the box.
Step7: Embedded vs non-embedded Images
Step8: Here is the embedded version. Note that this image was pulled from the webcam when this code cell was originally run and stored in the Notebook. Unless we rerun this cell, this is not today's image.
Step9: Here is today's image from the same webcam at Berkeley (refreshed every minute, if you reload the notebook), visible only with an active internet connection, and it should be different from the previous one. Notebooks saved with this kind of image will be smaller and always reflect the current version of the source, but the image won't display offline.
Step11: Of course, if you re-run this Notebook, the two images will be the same again.
Step12: You can also use the %%html cell magic to accomplish the same thing.
Step13: JavaScript
Step14: Pass a string of JavaScript source code to the JavaScript object and then display it.
Step15: The same thing can be accomplished using the %%javascript cell magic
Step17: Here is a more complicated example that loads d3.js from a CDN, uses the %%html magic to load CSS styles onto the page and then runs one of the d3.js examples.
Step18: LaTeX
Step20: With the Latex class, you have to include the delimiters yourself. This allows you to use other LaTeX modes such as eqnarray
Step21: Or you can enter LaTeX directly with the %%latex cell magic
Step22: Audio
Step23: A NumPy array can be auralized automatically. The Audio class normalizes and encodes the data and embeds the resulting audio in the Notebook.
Step24: Video
Step25: Using the nascent video capabilities of modern browsers, you may also be able to display local
Step26: External sites
Step27: Links to local files
Step28: Alternatively, to generate links to all of the files in a directory, use the FileLinks object, passing '.' to indicate that we want links generated for the current working directory. Note that if there were other directories under the current directory, FileLinks would work in a recursive manner creating links to files in all sub-directories as well.
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import display
from IPython.display import (
display_pretty, display_html, display_jpeg,
display_png, display_json, display_latex, display_svg
)
from IPython.display import Image
i = Image(filename='../images/ipython_logo.png')
i
display(i)
Image(url='http://python.org/images/python-logo.gif')
from IPython.display import SVG
SVG(filename='../images/python_logo.svg')
from IPython.display import Image
img_url = 'http://www.lawrencehallofscience.org/static/scienceview/scienceview.berkeley.edu/html/view/view_assets/images/newview.jpg'
# by default Image data are embedded
Embed = Image(img_url)
# if kwarg `url` is given, the embedding is assumed to be false
SoftLinked = Image(url=img_url)
# In each case, embed can be specified explicitly with the `embed` kwarg
# ForceEmbed = Image(url=img_url, embed=True)
Embed
SoftLinked
from IPython.display import HTML
s = """<table>
<tr>
<th>Header 1</th>
<th>Header 2</th>
</tr>
<tr>
<td>row 1, cell 1</td>
<td>row 1, cell 2</td>
</tr>
<tr>
<td>row 2, cell 1</td>
<td>row 2, cell 2</td>
</tr>
</table>"""
h = HTML(s)
display(h)
%%html
<table>
<tr>
<th>Header 1</th>
<th>Header 2</th>
</tr>
<tr>
<td>row 1, cell 1</td>
<td>row 1, cell 2</td>
</tr>
<tr>
<td>row 2, cell 1</td>
<td>row 2, cell 2</td>
</tr>
</table>
from IPython.display import Javascript
js = Javascript('alert("hi")');
display(js)
%%javascript
alert("hi");
Javascript("""
$.getScript('//cdnjs.cloudflare.com/ajax/libs/d3/3.2.2/d3.v3.min.js')
""")
%%html
<style type="text/css">
circle {
fill: rgb(31, 119, 180);
fill-opacity: .25;
stroke: rgb(31, 119, 180);
stroke-width: 1px;
}
.leaf circle {
fill: #ff7f0e;
fill-opacity: 1;
}
text {
font: 10px sans-serif;
}
</style>
%%javascript
// element is the jQuery element we will append to
var e = element.get(0);
var diameter = 600,
format = d3.format(",d");
var pack = d3.layout.pack()
.size([diameter - 4, diameter - 4])
.value(function(d) { return d.size; });
var svg = d3.select(e).append("svg")
.attr("width", diameter)
.attr("height", diameter)
.append("g")
.attr("transform", "translate(2,2)");
d3.json("data/flare.json", function(error, root) {
var node = svg.datum(root).selectAll(".node")
.data(pack.nodes)
.enter().append("g")
.attr("class", function(d) { return d.children ? "node" : "leaf node"; })
.attr("transform", function(d) { return "translate(" + d.x + "," + d.y + ")"; });
node.append("title")
.text(function(d) { return d.name + (d.children ? "" : ": " + format(d.size)); });
node.append("circle")
.attr("r", function(d) { return d.r; });
node.filter(function(d) { return !d.children; }).append("text")
.attr("dy", ".3em")
.style("text-anchor", "middle")
.text(function(d) { return d.name.substring(0, d.r / 3); });
});
d3.select(self.frameElement).style("height", diameter + "px");
from IPython.display import Math
Math(r'F(k) = \int_{-\infty}^{\infty} f(x) e^{2\pi i k} dx')
from IPython.display import Latex
Latex(r"""\begin{eqnarray}
\nabla \times \vec{\mathbf{B}} -\, \frac1c\, \frac{\partial\vec{\mathbf{E}}}{\partial t} & = \frac{4\pi}{c}\vec{\mathbf{j}} \\
\nabla \cdot \vec{\mathbf{E}} & = 4 \pi \rho \\
\nabla \times \vec{\mathbf{E}}\, +\, \frac1c\, \frac{\partial\vec{\mathbf{B}}}{\partial t} & = \vec{\mathbf{0}} \\
\nabla \cdot \vec{\mathbf{B}} & = 0
\end{eqnarray}""")
%%latex
\begin{align}
\nabla \times \vec{\mathbf{B}} -\, \frac1c\, \frac{\partial\vec{\mathbf{E}}}{\partial t} & = \frac{4\pi}{c}\vec{\mathbf{j}} \\
\nabla \cdot \vec{\mathbf{E}} & = 4 \pi \rho \\
\nabla \times \vec{\mathbf{E}}\, +\, \frac1c\, \frac{\partial\vec{\mathbf{B}}}{\partial t} & = \vec{\mathbf{0}} \\
\nabla \cdot \vec{\mathbf{B}} & = 0
\end{align}
from IPython.display import Audio
Audio(url="http://www.nch.com.au/acm/8k16bitpcm.wav")
import numpy as np
max_time = 3
f1 = 220.0
f2 = 224.0
rate = 8000.0
L = 3
times = np.linspace(0, L, int(rate*L))  # the sample count must be an integer
signal = np.sin(2*np.pi*f1*times) + np.sin(2*np.pi*f2*times)
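# two tones 4 Hz apart (220 Hz and 224 Hz) interfere to produce an audible 4 Hz beat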
Audio(data=signal, rate=rate)
from IPython.display import YouTubeVideo
YouTubeVideo('sjfsUzECqK0')
from IPython.display import HTML
from base64 import b64encode
video = open("../images/animation.m4v", "rb").read()
video_encoded = b64encode(video).decode('ascii')
video_tag = '<video controls alt="test" src="data:video/x-m4v;base64,{0}">'.format(video_encoded)
HTML(data=video_tag)
from IPython.display import IFrame
IFrame('http://jupyter.org', width='100%', height=350)
from IPython.display import FileLink, FileLinks
FileLink('Cell Magics.ipynb')
FileLinks('.')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: generate training data
Step2: fit the model
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.font_manager
from sklearn import svm
packet_loss=np.log(np.linspace(0.0001, 1, 100))
latency=np.log(np.linspace(0.01, 200, 500))
xx, yy = np.meshgrid(packet_loss, latency)
xx, yy = np.meshgrid(np.linspace(-5, 5, 500), np.linspace(-5, 5, 500))
X = 0.3 * np.random.randn(100,2)
X_train = np.r_[X + 2, X - 2]
# Generate some regular novel observations
X = 0.3 * np.random.randn(20, 2)
X_test = np.r_[X + 2, X - 2]
# Generate some abnormal novel observations
X_outliers = np.random.uniform(low=-4, high=4, size=(20, 2))
clf = svm.OneClassSVM(nu=0.1, kernel="rbf", gamma=0.1)
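# nu is (roughly) an upper bound on the fraction of training points treated as
# outliers and a lower bound on the fraction of support vectors; gamma sets
# the RBF kernel width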
clf.fit(X_train)
y_pred_train = clf.predict(X_train)
y_pred_test = clf.predict(X_test)
y_pred_outliers = clf.predict(X_outliers)
n_error_train = y_pred_train[y_pred_train == -1].size
n_error_test = y_pred_test[y_pred_test == -1].size
n_error_outliers = y_pred_outliers[y_pred_outliers == 1].size
# plot the line, the points, and the nearest vectors to the plane
Z = clf.decision_function(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
plt.title("Novelty Detection")
plt.contourf(xx, yy, Z, levels=np.linspace(Z.min(), 0, 7), cmap=plt.cm.Blues_r)
a = plt.contour(xx, yy, Z, levels=[0], linewidths=2, colors='red')
plt.contourf(xx, yy, Z, levels=[0, Z.max()], colors='orange')
b1 = plt.scatter(X_train[:, 0], X_train[:, 1], c='white')
b2 = plt.scatter(X_test[:, 0], X_test[:, 1], c='green')
c = plt.scatter(X_outliers[:, 0], X_outliers[:, 1], c='red')
plt.axis('tight')
plt.xlim((-5, 5))
plt.ylim((-5, 5))
plt.legend([a.collections[0], b1, b2, c],
["learned frontier", "training observations",
"new regular observations", "new abnormal observations"],
loc="upper left",
prop=matplotlib.font_manager.FontProperties(size=11))
plt.xlabel(
"error train: %d/200 ; errors novel regular: %d/40 ; "
"errors novel abnormal: %d/40"
% (n_error_train, n_error_test, n_error_outliers))
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: MoveStrategy and MoveScheme
Step2: OpenPathSampling comes with a nice tool to visualize the move scheme. There are two main columns in the output of this visualization
Step3: MoveSchemes are built from MoveStrategy objects
Step4: Now when we visualize this, note the difference in the replica exchange block
Step5: What if you changed your mind, or wanted to go the other way? Of course, you could just create a new scheme from scratch. However, you can also append a NearestNeighborRepExStrategy after the AllSetRepExStrategy and, from that, return to nearest-neighbor replica exchange.
Step6: Combination strategies
Step7: Examples of practical uses
Step8: Now we have 7 replica exchange movers (5 not including MS-outer), as can be seen in the move tree visualization.
Step9: First crossing shooting point selection for some ensembles
Step10: Two different kinds of shooting for one ensemble
Step11: In the visualization of this, you'll see that we have 2 blocks of shooting moves
Step12: RepEx-Shoot-RepEx
Step13: You'll notice that the combo_mover we defined above is within a RandomChoiceMover
Step14: Modifying the probabilities of moves
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import openpathsampling as paths
from openpathsampling.visualize import PathTreeBuilder
from IPython.display import SVG, HTML
import openpathsampling.high_level.move_strategy as strategies # TODO: handle this better
# real fast setup of a small network
from openpathsampling import VolumeFactory as vf
cvA = paths.FunctionCV(name="xA", f=lambda s : s.xyz[0][0])
cvB = paths.FunctionCV(name="xB", f=lambda s : -s.xyz[0][0])
stateA = paths.CVDefinedVolume(cvA, float("-inf"), -0.5).named("A")
stateB = paths.CVDefinedVolume(cvB, float("-inf"), -0.5).named("B")
interfacesA = paths.VolumeInterfaceSet(cvA, float("-inf"),
[-0.5, -0.3, -0.1])
interfacesB = paths.VolumeInterfaceSet(cvB, float("-inf"),
[-0.5, -0.3, -0.1])
network = paths.MSTISNetwork(
[(stateA, interfacesA),
(stateB, interfacesB)],
ms_outers=paths.MSOuterTISInterface.from_lambdas(
{interfacesA: 0.0, interfacesB: 0.0}
)
)
scheme = paths.DefaultScheme(network)
move_vis = paths.visualize.MoveTreeBuilder.from_scheme(scheme)
SVG(move_vis.svg())
# example: switching between AllSetRepEx and NearestNeighborRepEx
scheme = paths.DefaultScheme(network)
scheme.append(strategies.AllSetRepExStrategy())
move_vis = paths.visualize.MoveTreeBuilder.from_scheme(scheme)
SVG(move_vis.svg())
scheme.append(strategies.NearestNeighborRepExStrategy(), force=True)
move_vis = paths.visualize.MoveTreeBuilder.from_scheme(scheme)
SVG(move_vis.svg())
# example: single replica
ens00 = network.sampling_transitions[0].ensembles[0]
ens02 = network.sampling_transitions[0].ensembles[2]
extra_repex = strategies.SelectedPairsRepExStrategy(ensembles=[ens00, ens02], replace=False)
scheme = paths.DefaultScheme(network)
scheme.append(extra_repex)
move_vis = paths.visualize.MoveTreeBuilder.from_scheme(scheme)
SVG(move_vis.svg())
# select the outermost ensemble in each sampling transition
special_ensembles = [transition.ensembles[-1] for transition in network.sampling_transitions]
alternate_shooting = strategies.OneWayShootingStrategy(
selector=paths.UniformSelector(), # TODO: change this
ensembles=special_ensembles
)
# note that replace=True is the default
scheme = paths.DefaultScheme(network)
scheme.movers = {} # TODO: this will be removed, and lines on either side combined, when all is integrated
scheme.append(alternate_shooting)
move_decision_tree = scheme.move_decision_tree()
# TODO: find a way to visualize
# example: add extra shooting (in a different group, preferably)
extra_shooting = strategies.OneWayShootingStrategy(
selector=paths.UniformSelector(), # TODO: change this
group='small_step_shooting'
)
scheme = paths.DefaultScheme(network)
scheme.append(extra_shooting)
move_vis = paths.visualize.MoveTreeBuilder.from_scheme(scheme)
SVG(move_vis.svg())
# example: custom subclass of `MoveStrategy`
class RepExShootRepExStrategy(strategies.MoveStrategy):
_level = strategies.levels.GROUP
# we define an init function mainly to set defaults for `replace` and `group`
def __init__(self, ensembles=None, group="repex_shoot_repex", replace=True, network=None):
super(RepExShootRepExStrategy, self).__init__(
ensembles=ensembles, group=group, replace=replace
)
def make_movers(self, scheme):
# if we replace, we remove these groups from the scheme.movers dictionary
if self.replace:
repex_movers = scheme.movers.pop('repex')
shoot_movers = scheme.movers.pop('shooting')
else:
repex_movers = scheme.movers['repex']
shoot_movers = scheme.movers['shooting']
# combine into a list for the SequentialMover
mover_list = repex_movers + shoot_movers + list(reversed(repex_movers))
combo_mover = paths.SequentialMover(mover_list)
return [combo_mover]
repex_shoot_repex = RepExShootRepExStrategy()
scheme = paths.DefaultScheme(network)
scheme.append(repex_shoot_repex)
# TODO: there appears to be a bug in MoveTreeBuilder with this scheme
move_vis = paths.visualize.MoveTreeBuilder.from_scheme(scheme)
SVG(move_vis.svg())
# TODO: This is done differently (and more easily) now
# example: getting into the details
#scheme = paths.DefaultScheme(network)
#move_decision_tree = scheme.move_decision_tree()
#ens = network.sampling_transitions[0].ensembles[-1]
#shooting_chooser = [m for m in move_decision_tree.movers if m.movers==scheme.movers['shooting']][0]
#idx_ens = [shooting_chooser.movers.index(m)
# for m in shooting_chooser.movers
# if m.ensemble_signature==((ens,), (ens,))]
#print shooting_chooser.weights
#for idx in idx_ens:
# shooting_chooser.weights[idx] *= 2
#print shooting_chooser.weights
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This is a cross post from the new emcee documentation.
Step2: Now we'll estimate the empirical autocorrelation function for each of these parallel chains and compare this to the true function.
Step3: This figure shows how the empirical estimate of the normalized autocorrelation function changes as more samples are generated.
Step4: Now let's estimate the autocorrelation time using these estimated autocorrelation functions.
Step5: In this figure, the true autocorrelation time is shown as a horizontal line and it should be clear that both estimators give outrageous results for the short chains.
Step6: Here's the marginalized density in the first dimension.
Step7: And here's the comparison plot showing how the autocorrelation time estimates converge with longer chains.
Step8: As before, the short chains give absurd estimates of $\tau$, but the new method converges faster and with lower variance than the old method.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%config InlineBackend.figure_format = "retina"
from __future__ import print_function
from matplotlib import rcParams
rcParams["savefig.dpi"] = 100
rcParams["figure.dpi"] = 100
rcParams["font.size"] = 20
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(123456)
# Build the celerite model:
import celerite
from celerite import terms
kernel = terms.RealTerm(log_a=0.0, log_c=-6.0)
kernel += terms.RealTerm(log_a=0.0, log_c=-2.0)
# The true autocorrelation time can be calculated analytically:
true_tau = sum(2*np.exp(t.log_a-t.log_c) for t in kernel.terms)
true_tau /= sum(np.exp(t.log_a) for t in kernel.terms)
true_tau
# Simulate a set of chains:
gp = celerite.GP(kernel)
t = np.arange(2000000)
gp.compute(t)
y = gp.sample(size=32)
# Let's plot a little segment with a few samples:
plt.plot(y[:3, :300].T)
plt.xlim(0, 300)
plt.xlabel("step number")
plt.ylabel("$f$")
plt.title("$\\tau_\mathrm{{true}} = {0:.0f}$".format(true_tau), fontsize=14);
def next_pow_two(n):
i = 1
while i < n:
i = i << 1
return i
def autocorr_func_1d(x, norm=True):
x = np.atleast_1d(x)
if len(x.shape) != 1:
raise ValueError("invalid dimensions for 1D autocorrelation function")
n = next_pow_two(len(x))
# Compute the FFT and then (from that) the auto-correlation function
f = np.fft.fft(x - np.mean(x), n=2*n)
acf = np.fft.ifft(f * np.conjugate(f))[:len(x)].real
acf /= 4*n
# Optionally normalize
if norm:
acf /= acf[0]
return acf
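# Wiener-Khinchin: the autocorrelation is the inverse FFT of the power
# spectrum; padding to length 2n avoids circular (wrap-around) correlations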
# Make plots of ACF estimate for a few different chain lengths
window = int(2*true_tau)
tau = np.arange(window+1)
f0 = kernel.get_value(tau) / kernel.get_value(0.0)
# Loop over chain lengths:
fig, axes = plt.subplots(1, 3, figsize=(12, 4), sharex=True, sharey=True)
for n, ax in zip([10, 100, 1000], axes):
nn = int(true_tau * n)
ax.plot(tau / true_tau, f0, "k", label="true")
ax.plot(tau / true_tau, autocorr_func_1d(y[0, :nn])[:window+1], label="estimate")
ax.set_title(r"$N = {0}\,\tau_\mathrm{{true}}$".format(n), fontsize=14)
ax.set_xlabel(r"$\tau / \tau_\mathrm{true}$")
axes[0].set_ylabel(r"$\rho_f(\tau)$")
axes[-1].set_xlim(0, window / true_tau)
axes[-1].set_ylim(-0.05, 1.05)
axes[-1].legend(fontsize=14);
fig, axes = plt.subplots(1, 3, figsize=(12, 4), sharex=True, sharey=True)
for n, ax in zip([10, 100, 1000], axes):
nn = int(true_tau * n)
ax.plot(tau / true_tau, f0, "k", label="true")
f = np.mean([autocorr_func_1d(y[i, :nn], norm=False)[:window+1]
for i in range(len(y))], axis=0)
f /= f[0]
ax.plot(tau / true_tau, f, label="estimate")
ax.set_title(r"$N = {0}\,\tau_\mathrm{{true}}$".format(n), fontsize=14)
ax.set_xlabel(r"$\tau / \tau_\mathrm{true}$")
axes[0].set_ylabel(r"$\rho_f(\tau)$")
axes[-1].set_xlim(0, window / true_tau)
axes[-1].set_ylim(-0.05, 1.05)
axes[-1].legend(fontsize=14);
# Automated windowing procedure following Sokal (1989)
def auto_window(taus, c):
m = np.arange(len(taus)) < c * taus
if np.any(m):
return np.argmin(m)
return len(taus) - 1
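# i.e. pick the smallest window M satisfying M >= c * tau(M) -- Sokal's
# self-consistent truncation rule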
# Following the suggestion from Goodman & Weare (2010)
def autocorr_gw2010(y, c=5.0):
f = autocorr_func_1d(np.mean(y, axis=0))
taus = 2.0*np.cumsum(f)-1.0
window = auto_window(taus, c)
return taus[window]
def autocorr_new(y, c=5.0):
f = np.zeros(y.shape[1])
for yy in y:
f += autocorr_func_1d(yy)
f /= len(y)
taus = 2.0*np.cumsum(f)-1.0
window = auto_window(taus, c)
return taus[window]
# Compute the estimators for a few different chain lengths
N = np.exp(np.linspace(np.log(100), np.log(y.shape[1]), 10)).astype(int)
gw2010 = np.empty(len(N))
new = np.empty(len(N))
for i, n in enumerate(N):
gw2010[i] = autocorr_gw2010(y[:, :n])
new[i] = autocorr_new(y[:, :n])
# Plot the comparisons
plt.loglog(N, gw2010, "o-", label="G&W 2010")
plt.loglog(N, new, "o-", label="DFM 2017")
ylim = plt.gca().get_ylim()
plt.plot(N, N / 50.0, "--k", label=r"$\tau = N/50$")
plt.axhline(true_tau, color="k", label="truth", zorder=-100)
plt.ylim(ylim)
plt.xlabel("number of samples, $N$")
plt.ylabel(r"$\tau$ estimates")
plt.legend(fontsize=14);
import emcee
def log_prob(p):
return np.logaddexp(-0.5*np.sum(p**2), -0.5*np.sum((p-4.0)**2))
sampler = emcee.EnsembleSampler(32, 3, log_prob)
sampler.run_mcmc(np.concatenate((np.random.randn(16, 3),
4.0+np.random.randn(16, 3)), axis=0),
500000, progress=True);
chain = sampler.get_chain()[:, :, 0].T
plt.hist(chain.flatten(), 100)
plt.gca().set_yticks([])
plt.xlabel(r"$\theta$")
plt.ylabel(r"$p(\theta)$");
# Compute the estimators for a few different chain lengths
N = np.exp(np.linspace(np.log(100), np.log(chain.shape[1]), 10)).astype(int)
gw2010 = np.empty(len(N))
new = np.empty(len(N))
for i, n in enumerate(N):
gw2010[i] = autocorr_gw2010(chain[:, :n])
new[i] = autocorr_new(chain[:, :n])
# Plot the comparisons
plt.loglog(N, gw2010, "o-", label="G&W 2010")
plt.loglog(N, new, "o-", label="DFM 2017")
ylim = plt.gca().get_ylim()
plt.plot(N, N / 50.0, "--k", label=r"$\tau = N/50$")
plt.ylim(ylim)
plt.xlabel("number of samples, $N$")
plt.ylabel(r"$\tau$ estimates")
plt.legend(fontsize=14);
from scipy.optimize import minimize
def autocorr_ml(y, thin=1, c=5.0):
# Compute the initial estimate of tau using the standard method
init = autocorr_new(y, c=c)
z = y[:, ::thin]
N = z.shape[1]
# Build the GP model
tau = max(1.0, init/thin)
kernel = terms.RealTerm(np.log(0.9*np.var(z)), -np.log(tau),
bounds=[(-5.0, 5.0), (-np.log(N), 0.0)])
kernel += terms.RealTerm(np.log(0.1*np.var(z)), -np.log(0.5*tau),
bounds=[(-5.0, 5.0), (-np.log(N), 0.0)])
gp = celerite.GP(kernel, mean=np.mean(z))
gp.compute(np.arange(z.shape[1]))
# Define the objective
def nll(p):
# Update the GP model
gp.set_parameter_vector(p)
# Loop over the chains and compute likelihoods
v, g = zip(*(
gp.grad_log_likelihood(z0, quiet=True)
for z0 in z
))
# Combine the datasets
return -np.sum(v), -np.sum(g, axis=0)
# Optimize the model
p0 = gp.get_parameter_vector()
bounds = gp.get_parameter_bounds()
soln = minimize(nll, p0, jac=True, bounds=bounds)
gp.set_parameter_vector(soln.x)
# Compute the maximum likelihood tau
a, c = kernel.coefficients[:2]
tau = thin * 2*np.sum(a / c) / np.sum(a)
return tau
# Calculate the estimate for a set of different chain lengths
ml = np.empty(len(N))
ml[:] = np.nan
for j, n in enumerate(N[1:8]):
i = j+1
thin = max(1, int(0.05*new[i]))
ml[i] = autocorr_ml(chain[:, :n], thin=thin)
# Plot the comparisons
plt.loglog(N, gw2010, "o-", label="G&W 2010")
plt.loglog(N, new, "o-", label="DFM 2017")
plt.loglog(N, ml, "o-", label="DFM 2017: ML")
ylim = plt.gca().get_ylim()
plt.plot(N, N / 50.0, "--k", label=r"$\tau = N/50$")
plt.ylim(ylim)
plt.xlabel("number of samples, $N$")
plt.ylabel(r"$\tau$ estimates")
plt.legend(fontsize=14);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Plot the different time series and PSDs
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>
#
# License: BSD-3-Clause
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt
import mne
from mne.time_frequency import fit_iir_model_raw
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
proj_fname = data_path + '/MEG/sample/sample_audvis_ecg-proj.fif'
raw = mne.io.read_raw_fif(raw_fname)
proj = mne.read_proj(proj_fname)
raw.add_proj(proj)
raw.info['bads'] = ['MEG 2443', 'EEG 053'] # mark bad channels
# Set up pick list: Gradiometers - bad channels
picks = mne.pick_types(raw.info, meg='grad', exclude='bads')
order = 5 # define model order
picks = picks[:1]
# Estimate AR models on raw data
b, a = fit_iir_model_raw(raw, order=order, picks=picks, tmin=60, tmax=180)
d, times = raw[0, 10000:20000] # look at one channel from now on
d = d.ravel() # make flat vector
innovation = signal.convolve(d, a, 'valid')
d_ = signal.lfilter(b, a, innovation) # regenerate the signal
d_ = np.r_[d_[0] * np.ones(order), d_] # dummy samples to keep signal length
plt.close('all')
plt.figure()
plt.plot(d[:100], label='signal')
plt.plot(d_[:100], label='regenerated signal')
plt.legend()
plt.figure()
plt.psd(d, Fs=raw.info['sfreq'], NFFT=2048)
plt.psd(innovation, Fs=raw.info['sfreq'], NFFT=2048)
plt.psd(d_, Fs=raw.info['sfreq'], NFFT=2048, linestyle='--')
plt.legend(('Signal', 'Innovation', 'Regenerated signal'))
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2.1 Incremental optimization
Step2: Since we are always searching for a minimum, the process always traces the "U" shape shown above, so the direction of each step can be indicated by $f(x_1) - f(x_0)$. In other words, although we cannot model the machine $f(x)$ itself, we can model the optimization process $z = f(x_i) - f(x_{i+1})$ with a function $g(z)$. As long as $g(z)$ is monotonic and crosses zero, it can in principle be arbitrary; different choices only affect the convergence speed, and all of them converge to a neighborhood of $w^*$.
Step3: Likewise, $g(z)$ only needs to be a function satisfying certain conditions: here, crossing zero, monotonically increasing on the positive axis and monotonically decreasing on the negative axis. The most common representative is the quadratic $g(z) = z^2$.
|
<ASSISTANT_TASK:>
Python Code:
show_image("./res/gradient_descent.jpg", figsize=(12,8))
show_image("./res/iterator.jpg")
show_image("./res/incr_opt.png", figsize=(10,5))
show_image("./res/approx.png", figsize=(10,5))
show_image("./res/model.png", figsize=(10,5))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Path of the trajectory
Step2: Converting the trajectory DCD -> XTC
Step3: Performing the trajectory conversion
Step4: Loading the new trajectory into VMD for review
Step5: Computing the RMSD with Gromacs 5
Step6: Computing the RMSD with option 3 'C-Alpha'
Step7: Creating the rmsd.dat file for visualization in XMGRACE
Step8: Creating the rmsd_residue.dat file for visualization with xmgrace
Step9: Creating the rmsd.dat file for visualization in Matplotlib
Step10: Creating the rmsd_residue.dat file for visualization in Matplotlib
Step11: RMSF
Step12: Computing the RMSF with option 3 'C-Alpha'
Step13: Creating the rmsf.dat file for visualization in XMGRACE
Step14: Creating the rmsf.dat file for visualization in Matplotlib
Step15: B-factors
Step16: Generating the file for visualization with Matplotlib
Step17: Secondary Structure
Step18: Computing the secondary structure
Step19: R-GYRATE
Step20: Computing the r-gyro with option (3) - C-alpha
Step21: Generating the rgyro.dat file for analysis with XMGRACE
Step22: Plotting the gyrate.xvg file with matplotlib
Step23: RMSD of the Alpha Helices
Step24: Data input
Step25: SASA
Step26: Running the SASA analysis with Gromacs4
Step27: Creating the sasa_residuo.dat file for output with XMGRACE
Step28: Loading the residue-sas.xvg file for visualization in Matplotlib
Step29: Creating the sasa.dat file for output with XMGRACE
Step30: Loading the solven-accessible-surface.xvg file for plotting with Matplotlib
Step31: RMSD MATRIX
Step32: Loading the rmsd_matrix script with vmd on the new trajectory
Step33: Plotting the output file
Step34: Minimum distance matrix
Step35: Computing the minimum distance matrix
Step36: Generating the files for visualization
Step37: Free Energy
Step38: Computing the rmsd and the r-gyro to obtain the minimum and maximum of each.
Step39: Writing the script to /tmp for use in the calculation
Step40: Performing the Free Energy calculations
Step41: Plotting with GNUplot
Step42: PCA
Step43: Once the matrix is computed, the eigenvalues and eigenvectors serve as input to generate the PCA.
Step44: Plotting the file with gnuplot
Step45: Disulfide bridge analysis
Step46: Creating the path
Step47: Loading the disulfide bridges
Step48: Obtaining the indices of the bridges
Step49: Ordering the bridges in the form ['N', 'CA', 'CB', 'SG', 'SG', 'CB', 'CA', 'N']
Step50: Creating the tcl files for computing the RMSD of the bridges
Step51: Running the tcl rmsd files with vmd
Step52: Generating the RMSD plots in matplotlib
Step53: INTRAMOLECULAR DIHEDRAL FREE ENERGY
Step54: Creating the tcl files for computing the dihedral angles
Step55: Running the dihedral-angle tcl files generated with VMD
Step56: Computing the intramolecular Free Energy for Bridge 1
Step57: Plotting Bridge 1 with GNUPLOT
Step58: Computing the intramolecular Free Energy for Bridge 2
Step59: Plotting Bridge 2 with GNUPLOT
Step60: Intermolecular Free Energy
Step61: Plotting the intermolecular Free Energy for bridges DB1 and DB2
Step62: Computing the histograms of the dihedrals
Step63: Bond angles of the intermolecular bridges
Step64: Copying the FES generation file
Step65: Generating the Tcl files for computing the angles.
Step66: Running the tcl files generated with VMD
Step67: Computing the Free Energy of the bonds of the bridges
Step68: Plotting the Free Energy of the angles with gnuplot
Step69: Computing the histograms of the bonds
Step70: Cluster generation
Step71: Computing the clusters with option (1 = Protein)
Step72: Loading the clusters for visualization in VMD
Step73: colorByRMSF
Step74: Copying the file to the data folder
Step75: Computing the RMSF for the protein analysis with option (1) Protein
Step76: Loading the colorByRMSF.vmd script in VMD
Step77: Plotting B-Factors with Chimera
Step78: Preparing the bfac.pdb file to obtain the B-factors column
Step79: Reviewing the structure of the generated file.
Step80: Generating the B-factors file for all atoms (TODO: adapt to extract the maximum per residue)
Step81: Loading the pdb file with Chimera to color by B-factors
Step82: Instructions for generating the B-factors image
Step83: Plotting SASA
Step84: Coloring the SASA
Step85: Restoring VMD's default configuration
Step86: Plotting the RGYRO
Step87: Coloring the RGYRO
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import pylab as pl
import matplotlib.patches as mpatches
import matplotlib.ticker as ticker
import os
import shutil
from IPython.display import Image
from matplotlib.ticker import FormatStrFormatter
ruta=os.getcwd()
c=input('Nombre de la trayectoria para realizar el análisis... Ejemplo: run001....')
if os.path.isdir(c):
indir = '/'+c
print (indir)
ruta_old_traj=ruta+indir
print (ruta)
print (ruta_old_traj)
else:
    print ('La carpeta '+c+' no existe...')
#
ruta_scripts=ruta+'/scripts_fimda'
print (ruta_scripts)
if os.path.exists(ruta_scripts):
    print ('Ruta identificada para búsqueda de scripts adicionales ===>',ruta_scripts)
else:
    print ('La carpeta de scripts adicionales no existe, copiar en '+ruta_scripts+' ..!!!')
#Verificando que exista la nueva carpeta para la conversión de trayectorias
#nuevaruta = ruta+'/'+indir+'_XTC'
nuevaruta = ruta+indir+'_Dinamica'
print ( nuevaruta )
if not os.path.exists(nuevaruta):
os.makedirs(nuevaruta)
print ('Se ha creado la ruta ===>',nuevaruta)
else:
print ("La ruta "+nuevaruta+" existe..!!!")
print ('Obtenemos los archivos a convertir')
#Buscamos el archivo DCD, PDB y PSF para realizar las operaciones
for filename in os.listdir(ruta_old_traj):
if filename.endswith('.dcd'):
dcd_file=filename
if filename.endswith('.psf'):
psf_file=filename
if filename.endswith('.pdb'):
pdb_file=filename
print ('pdb file =>', pdb_file)
print ('psf file =>', psf_file)
print ('dcd file =>', dcd_file)
print ( 'Nos vamos a ....', ruta_old_traj )
os.chdir( ruta_old_traj )
print ('\nEjecutando CATDCD para convertir la trayectoria....')
output_catdcd=!catdcd -otype trr -o output.trr $dcd_file
print (output_catdcd.n)
print ('\nEjecutando TRJCONV para convertir la trayectoria....')
output_trjconv=!trjconv -f output.trr -o output.xtc -timestep 20
#print (output_trjconv.n)
print ('\nBorrando archivos temporales de conversión...')
output_rm=!rm output.trr
print ('\nMoviendo los archivos de salida al directorio '+nuevaruta)
source_file=ruta_old_traj+'/output.xtc'
dest_file=nuevaruta+'/output.xtc'
shutil.move(source_file,dest_file)
print ('\nCopiando el archivo ionized.pdb a '+nuevaruta)
source_file=ruta_old_traj+'/ionized.pdb'
dest_file=nuevaruta+'/ionized.pdb'
shutil.copy(source_file,dest_file)
print ('\nCopiando el archivo ionized.psf a '+nuevaruta)
source_file=ruta_old_traj+'/ionized.psf'
dest_file=nuevaruta+'/ionized.psf'
shutil.copy(source_file,dest_file)
print('\nTrayectoria convertida, regresando a '+ruta)
os.chdir( ruta )
print ('Visualizando la nueva trayectoria')
file_psf=nuevaruta+'/'+psf_file
traj = nuevaruta+'/output.xtc'
!vmd $file_psf $traj
### Creando el directorio para el análisis del RMSD
#Verificando que exista la nueva carpeta para la conversión de trayectorias
#nuevaruta = ruta+'/'+indir+'_XTC'
ruta_rmsd = nuevaruta+'/rmsd'
print ( ruta_rmsd )
if not os.path.exists(ruta_rmsd):
os.makedirs(ruta_rmsd)
print ('Se ha creado la ruta ===>',ruta_rmsd)
else:
print ("La ruta "+ruta_rmsd+" existe..!!!")
print ( 'Nos vamos a ....', ruta_rmsd )
os.chdir( ruta_rmsd )
print ('Ejecutando el análisis de rmsd...')
!echo 3 3 | g_rms -f ../output.xtc -s ../ionized.pdb -a aver.xvg
#Inicializando vector
rmsd=[]
try:
archivo = open( 'rmsd.xvg' )
except IOError:
print ('No se pudo abrir el archivo o no existe·..')
i=0
for linea in archivo.readlines():
fila = linea.strip()
sl = fila.split()
cadena=sl[0]
if (not '#' in cadena) and (not '@' in cadena):
num=float(sl[0])
#num2=float(sl[1])
num=num/1000
rmsd.append(repr(num)+'\t'+sl[1]+'\n')
i=i+1
#Escribiendo el archivo RMSD
f = open('rmsd.dat', 'w')
#f.write('@ title "RMSD" \n')
f.write('@ xaxis label " Time (ns)" \n')
f.write('@ xaxis label char size 1.480000\n')
f.write('@ xaxis bar linewidth 3.0\n')
f.write('@ xaxis ticklabel char size 1.480000\n')
f.write('@ yaxis label " RMSD (nm)" \n')
f.write('@ yaxis label char size 1.480000\n')
f.write('@ yaxis bar linewidth 3.0\n')
f.write('@ yaxis ticklabel char size 1.480000\n')
f.write('@ s0 line linewidth 1.5\n')
f.write('@TYPE xy \n')
#f.write('@ subtitle "C-alpha after lsq fit to C-alpha" \n')
f.write("".join(rmsd))
f.close()
#Cargando el archivo para visualizar en xmgrace
!xmgrace rmsd.dat
#Cargando la imagen generada en xmgrace
Image(filename='rmsd.png')
#Inicializando vector
rmsd_residue=[]
try:
archivo_rmsd = open( 'aver.xvg' )
except IOError:
print ('No se pudo abrir el archivo o no existe·..')
i=1
for linea in archivo_rmsd.readlines():
fila = linea.strip()
sl = fila.split()
cadena=sl[0]
if (not '#' in cadena) and (not '@' in cadena):
num=int(sl[0])
print ('Residuo =>',num+1)
rmsd_residue.append(repr(num+1)+'\t'+sl[1]+'\n')
i=i+1
#Escribiendo el archivo RMSD_RESIDUE
f = open('rmsd_residue.dat', 'w')
#f.write('@ title "C-alpha" \n')
f.write('@ xaxis label "Residue" \n')
f.write('@ xaxis label char size 1.480000\n')
f.write('@ xaxis bar linewidth 3.0\n')
f.write('@ xaxis ticklabel char size 1.480000\n')
f.write('@ yaxis label " RMSD (nm)" \n')
f.write('@ yaxis label char size 1.480000\n')
f.write('@ yaxis bar linewidth 3.0\n')
f.write('@ yaxis ticklabel char size 1.480000\n')
f.write('@ s0 line linewidth 2.5\n')
f.write('@ s0 symbol 1\n')
f.write('@ s0 symbol size 1.000000\n')
f.write('@ s0 symbol color 1\n')
f.write('@ s0 symbol pattern 1\n')
f.write('@ s0 symbol fill color 2\n')
f.write('@ s0 symbol fill pattern 1\n')
f.write('@ s0 symbol linewidth 1.0\n')
f.write('@TYPE xy \n')
f.write("".join(rmsd_residue))
f.close()
!xmgrace rmsd_residue.dat
#Cargando la imagen generada en xmgrace
Image(filename='rmsd_residue.png')
data_rmsd=np.loadtxt('rmsd.xvg',comments=['#', '@'])
#Engrosar marco
fig=pl.figure(figsize=(20, 12), dpi=100, linewidth=3.0)
ax = fig.add_subplot(111)
for axis in ['top','bottom','left','right']:
ax.spines[axis].set_linewidth(4)
#Formateando los valores de los ejes
ax.yaxis.set_major_formatter(FormatStrFormatter('%.2f'))
pl.plot(data_rmsd[:,0]/1000, data_rmsd[:,1], linewidth = 2, markeredgewidth=3, color='black')
pl.xlabel("Time (ns)", fontsize = 40)
pl.ylabel('RMSD (nm)', fontsize = 40)
#pl.suptitle('RMSD', fontsize=50)
#pl.title('C-alpha after lsq fit to C-alpha', fontsize=30)
pl.xticks(fontsize=30)
pl.yticks(fontsize=30)
data_rmsd_res=np.loadtxt('aver.xvg',comments=['#', '@'])
#Engrosar marco
fig=pl.figure(figsize=(20, 12), dpi=100, linewidth=3.0)
ax = fig.add_subplot(111)
for axis in ['top','bottom','left','right']:
ax.spines[axis].set_linewidth(4)
#Formateando los valores de los ejes
ax.yaxis.set_major_formatter(FormatStrFormatter('%.2f'))
pl.plot(data_rmsd_res[:,0]+1, data_rmsd_res[:,1], '-o', color='black', markersize=25,
markerfacecolor='red',markeredgecolor='black',markeredgewidth=3, linewidth = 4, )
pl.xlabel("Residue", fontsize = 40)
pl.ylabel('RMSD (nm)', fontsize = 40)
#pl.title('C-alpha', fontsize=40)
pl.xticks(fontsize=30)
pl.yticks(fontsize=30)
pl.xlim(0, len(data_rmsd_res[:,1]))
### Creando el directorio para el análisis del RMSF
#Verificando que exista la nueva carpeta para la conversión de trayectorias
ruta_rmsf = nuevaruta+'/rmsf'
print ( ruta_rmsf )
if not os.path.exists(ruta_rmsf):
os.makedirs(ruta_rmsf)
print ('Se ha creado la ruta ===>',ruta_rmsf)
else:
print ("La ruta "+ruta_rmsf+" existe..!!!")
print ( 'Nos vamos a ....', ruta_rmsf )
os.chdir( ruta_rmsf )
print ('Ejecutando el análisis de rmsf...')
!echo 3 | g_rmsf -f ../output.xtc -s ../ionized.pdb -oq bfac.pdb -o rmsf.xvg -res
#Inicializando vector
rmsf=[]
rmsf_x=[]
rmsf_y=[]
try:
file_rmsf = open( 'rmsf.xvg' )
except IOError:
print ('No se pudo abrir el archivo o no existe·..')
i=0
for linea in file_rmsf.readlines():
fila = linea.strip()
sl = fila.split()
cadena=sl[0]
if (not '#' in cadena) and (not '@' in cadena):
print ('Residue =>',cadena)
rmsf.append(sl[0]+'\t'+sl[1]+'\n')
rmsf_x.append(int(sl[0]))
rmsf_y.append(float(sl[1]))
i=i+1
file_rmsf.close()
#Escribiendo el archivo RMSD
f = open('rmsf.dat', 'w')
#f.write('@ title "RMSF fluctuation" \n')
f.write('@ xaxis label " Residue" \n')
f.write('@ xaxis label char size 1.480000\n')
f.write('@ xaxis bar linewidth 3.0\n')
f.write('@ xaxis ticklabel char size 1.480000\n')
f.write('@ yaxis label "RMSF (nm)" \n')
f.write('@ yaxis label char size 1.480000\n')
f.write('@ yaxis bar linewidth 3.0\n')
f.write('@ yaxis ticklabel char size 1.480000\n')
f.write('@ s0 line linewidth 2.5\n')
f.write('@ s0 symbol 1\n')
f.write('@ s0 symbol size 1.000000\n')
f.write('@ s0 symbol color 1\n')
f.write('@ s0 symbol pattern 1\n')
f.write('@ s0 symbol fill color 2\n')
f.write('@ s0 symbol fill pattern 1\n')
f.write('@ s0 symbol linewidth 1.0\n')
f.write('@TYPE xy \n')
f.write("".join(rmsf))
f.close()
!xmgrace rmsf.dat
#Cargando la imagen generada en xmgrace
Image(filename='rmsf.png')
data_rmsf=np.loadtxt('rmsf.xvg',comments=['#', '@'])
#Engrosar marco
fig=pl.figure(figsize=(20, 12), dpi=100, linewidth=3.0)
ax = fig.add_subplot(111)
for axis in ['top','bottom','left','right']:
ax.spines[axis].set_linewidth(4)
#Formateando los valores de los ejes
ax.yaxis.set_major_formatter(FormatStrFormatter('%.2f'))
pl.plot(data_rmsf[:,0], data_rmsf[:,1], '-o', color='black', markersize=25,
markerfacecolor='red',markeredgecolor='black',markeredgewidth=3, linewidth = 4, )
pl.xlabel("Residue", fontsize = 40)
pl.ylabel('RMSF (nm)', fontsize = 40)
#pl.title('RMSF Fluctuation', fontsize=40)
pl.xticks(fontsize=30)
pl.yticks(fontsize=30)
pl.xlim(0, len(data_rmsf[:,1]))
#Inicializando vector
bfactors=[]
try:
file_bfactor = open( 'bfac.pdb' )
except IOError:
print ('No se pudo abrir el archivo o no existe·..')
i=0
for linea in file_bfactor.readlines():
fila = linea.strip()
sl = fila.split()
if (sl[0]=='ATOM'):
#print (sl[0])
idresidue=fila[23:26]
bfactor=fila[60:66]
print (idresidue + '\t'+bfactor)
bfactors.append(idresidue+'\t'+bfactor+'\n')
#i=i+1
#Escribiendo el archivo BFACTOR.dat
f = open('bfactor.dat', 'w')
#f.write('@ title "B-factors" \n')
f.write('@ xaxis label " Residue" \n')
f.write('@ xaxis label char size 1.480000\n')
f.write('@ xaxis bar linewidth 3.0\n')
f.write('@ xaxis ticklabel char size 1.480000\n')
f.write('@ yaxis label "B-factors (' +"\\"+'cE'+"\\"+'C)"\n')
f.write('@ yaxis label char size 1.480000\n')
f.write('@ yaxis bar linewidth 3.0\n')
f.write('@ yaxis ticklabel char size 1.480000\n')
f.write('@ s0 line linewidth 2.5\n')
f.write('@ s0 symbol 1\n')
f.write('@ s0 symbol size 1.000000\n')
f.write('@ s0 symbol color 1\n')
f.write('@ s0 symbol pattern 1\n')
f.write('@ s0 symbol fill color 2\n')
f.write('@ s0 symbol fill pattern 1\n')
f.write('@ s0 symbol linewidth 1.0\n')
f.write('@TYPE xy \n')
f.write("".join(bfactors))
f.close()
!xmgrace bfactor.dat
#Cargando la imagen generada en xmgrace
Image(filename='bfactor.png')
#Inicializando vector
bfactors=[]
try:
file_bfactor = open( 'bfac.pdb' )
except IOError:
print ('No se pudo abrir el archivo o no existe·..')
i=0
print ('Residuo' + '\t'+'bfactor')
for linea in file_bfactor.readlines():
fila = linea.strip()
sl = fila.split()
if (sl[0]=='ATOM'):
#print (sl[0])
idresidue=fila[23:26]
bfactor=fila[60:66]
print (idresidue + '\t'+bfactor)
bfactors.append(idresidue+'\t'+bfactor+'\n')
#i=i+1
#Escribiendo el archivo BFACTOR.dat
f = open('bfactor.dat', 'w')
f.write("".join(bfactors))
f.close()
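# Hedged sketch for the TODO noted in the step text (an assumption, not in
# the original): collapse the per-atom B-factors to one value per residue by
# taking the maximum, written to a hypothetical bfactor_max.dat.
_bf = np.loadtxt('bfactor.dat')
_residues = np.unique(_bf[:, 0])
_bmax = np.array([[r, _bf[_bf[:, 0] == r, 1].max()] for r in _residues])
np.savetxt('bfactor_max.dat', _bmax, fmt='%d\t%.2f')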
data_bfactor=np.loadtxt('bfactor.dat',comments=['#', '@'])
#Engrosar marco
fig=pl.figure(figsize=(20, 12), dpi=100, linewidth=3.0)
ax = fig.add_subplot(111)
for axis in ['top','bottom','left','right']:
ax.spines[axis].set_linewidth(4)
#Formateando los valores de los ejes
#ax.yaxis.set_major_formatter(FormatStrFormatter('%.2f'))
pl.plot(data_bfactor[:,0], data_bfactor[:,1], '-o', color='black', markersize=25,
markerfacecolor='red',markeredgecolor='black',markeredgewidth=3, linewidth = 4, )
pl.xlabel('Residue', fontsize = 40)
pl.ylabel('B-factors ('+ r'$\AA$'+')' , fontsize = 40)
#pl.title('B-Factors', fontsize=40)
pl.xticks(fontsize=30)
pl.yticks(fontsize=30)
pl.xlim(0, len(data_bfactor[:,1]))
### Creando el directorio para el análisis del RMSF
#Verificando que exista la nueva carpeta para la conversión de trayectorias
ruta_ss = nuevaruta+'/estructura'
print ( ruta_ss )
if not os.path.exists(ruta_ss):
os.makedirs(ruta_ss)
print ('Se ha creado la ruta ===>',ruta_ss)
else:
print ("La ruta "+ruta_ss+" existe..!!!")
print ( 'Nos vamos a ....', ruta_ss )
os.chdir( ruta_ss )
print ('Ejecutando el análisis de esctructura secundaria...')
!echo 5 | do_dssp -f ../output.xtc -s ../ionized.pdb -o sec_est.xpm -tu ns
print ('\n Convirtiendo el archivo a ps...')
!xpm2ps -f sec_est.xpm -by 6 -bx .1 -o est_sec.eps
print('\nConvirtiendo a png...')
!convert -density 600 est_sec.eps -resize 1024x1024 est_sec.png
print ('Cargando el archivo...')
Image(filename='est_sec.png', width=1024)
### Creando el directorio para el análisis del r-gyro
#Verificando que exista la nueva carpeta para la conversión de trayectorias
ruta_rgyro = nuevaruta+'/rgyro'
print ( ruta_rgyro )
if not os.path.exists(ruta_rgyro):
os.makedirs(ruta_rgyro)
print ('Se ha creado la ruta ===>',ruta_rgyro)
else:
print ("La ruta "+ruta_rgyro+" existe..!!!")
print ( 'Nos vamos a ....', ruta_rgyro)
os.chdir( ruta_rgyro )
print ('Ejecutando el análisis de rgyro...')
!echo 3 | g_gyrate -f ../output.xtc -s ../ionized.pdb -o gyrate.xvg
#Inicializando vector
rgyro=[]
try:
file_rmsf = open( 'gyrate.xvg' )
except IOError:
print ('No se pudo abrir el archivo o no existe·..')
i=0
for linea in file_rmsf.readlines():
fila = linea.strip()
sl = fila.split()
cadena=sl[0]
if (not '#' in cadena) and (not '@' in cadena):
num=float(sl[0])
#num2=float(sl[1])
num=num/1000
rgyro.append(repr(num)+'\t'+sl[1]+'\n')
i=i+1
#Escribiendo el archivo RGYRO.DAT
f = open('rgyro.dat', 'w')
#f.write('@ title "Radius of gyration" \n')
f.write('@ xaxis label " Time (ns)" \n')
f.write('@ xaxis label char size 1.480000\n')
f.write('@ xaxis bar linewidth 3.0\n')
f.write('@ xaxis ticklabel char size 1.480000\n')
f.write('@ yaxis label "Rg (nm)" \n')
f.write('@ yaxis label char size 1.480000\n')
f.write('@ yaxis bar linewidth 3.0\n')
f.write('@ yaxis ticklabel char size 1.480000\n')
f.write('@ s0 line linewidth 2.5\n')
f.write('@TYPE xy \n')
f.write("".join(rgyro))
f.close()
!xmgrace rgyro.dat
#Cargando la imagen generada en xmgrace
Image(filename='rgyro.png')
data_rgyro=np.loadtxt('gyrate.xvg',comments=['#', '@'])
#Engrosar marco
fig=pl.figure(figsize=(20, 12), dpi=100, linewidth=3.0)
ax = fig.add_subplot(111)
for axis in ['top','bottom','left','right']:
ax.spines[axis].set_linewidth(4)
#Formateando los valores de los ejes
ax.yaxis.set_major_formatter(FormatStrFormatter('%.2f'))
pl.plot(data_rgyro[:,0]/1000, data_rgyro[:,1], linewidth = 2, color='black')
pl.xlabel("Time (ns)", fontsize = 40)
pl.ylabel('Rg (nm)', fontsize = 40)
#pl.suptitle('Radius of gyration', fontsize=50)
pl.xticks(fontsize=30)
pl.yticks(fontsize=30)
### Creando el directorio para el análisis del RMSF
#Verificando que exista la nueva carpeta para la conversión de trayectorias
ruta_helix = nuevaruta+'/rmsd_helix'
print ( ruta_helix )
if not os.path.exists(ruta_helix):
os.makedirs(ruta_helix)
print ('Se ha creado la ruta ===>',ruta_helix)
else:
print ("La ruta "+ruta_helix+" existe..!!!")
print ( 'Nos vamos a ....', ruta_helix)
os.chdir( ruta_helix )
num=input('Número de hélices con las que cuenta la proteína:')
print (num)
if (int(num)==1):
indices_ha1=input('Proporciona el rango de índices de la Hélice 1:')
print (indices_ha1)
r_helix_1=1
r_helix_2=0
r_helix_3=0
r_helix_4=0
if (int(num)==2):
indices_ha1=input('Proporciona el rango de índices de la Hélice 1:')
print (indices_ha1)
indices_ha2=input('Proporciona el rango de índices de la Hélice 2:')
print (indices_ha2)
r_helix_1=1
r_helix_2=1
r_helix_3=0
r_helix_4=0
if (int(num)==3):
indices_ha1=input('Proporciona el rango de índices de la Hélice 1:')
print (indices_ha1)
indices_ha2=input('Proporciona el rango de índices de la Hélice 2:')
print (indices_ha2)
indices_ha3=input('Proporciona el rango de índices de la Hélice 3:')
print (indices_ha3)
r_helix_1=1
r_helix_2=1
r_helix_3=1
r_helix_4=0
if (int(num)==4):
indices_ha1=input('Proporciona el rango de índices de la Hélice 1:')
print (indices_ha1)
indices_ha2=input('Proporciona el rango de índices de la Hélice 2:')
print (indices_ha2)
indices_ha3=input('Proporciona el rango de índices de la Hélice 3:')
print (indices_ha3)
indices_ha4=input('Proporciona el rango de índices de la Hélice 4:')
print (indices_ha4)
r_helix_1=1
r_helix_2=1
r_helix_3=1
r_helix_4=1
#Script para vmd de la Hélice Alfa 2
psf=ruta_old_traj+'/'+psf_file
dcd=ruta_old_traj+'/'+dcd_file
if (r_helix_1==1):
f = open('ha1.tcl', 'w')
print(f)
f.write('set psfFile '+ psf+' \n')
f.write('set dcdFile '+ dcd+' \n')
f.write('\nmol load psf $psfFile dcd $dcdFile\n')
f.write('set outfile ' +'[open ' +'rmsd_ha1.dat'+' w]\n')
f.write('set nf [molinfo top get numframes]\n')
f.write('\n#RMSD calculation loop\n')
f.write('set f1 [atomselect top "'+indices_ha1+' " frame 0]\n')
f.write('for {set i 0} {$i < $nf} {incr i 1} {\n')
f.write(' set sel [atomselect top "'+indices_ha1+' " frame $i]\n')
f.write(' $sel move [measure fit $sel $f1]\n')
f.write(' set time [expr {$i +1}]\n')
f.write(' puts -nonewline $outfile "[measure rmsd $sel $f1]"\n')
f.write(' puts $outfile "$time $time"\n')
f.write('}\n')
f.write('close $outfile')
f.close()
if (r_helix_2==1):
f = open('ha2.tcl', 'w')
print(f)
f.write('set psfFile '+ psf+' \n')
f.write('set dcdFile '+ dcd+' \n')
f.write('\nmol load psf $psfFile dcd $dcdFile\n')
f.write('set outfile ' +'[open ' +'rmsd_ha2.dat'+' w]\n')
f.write('set nf [molinfo top get numframes]\n')
f.write('\n#RMSD calculation loop\n')
f.write('set f1 [atomselect top "'+indices_ha2+' " frame 0]\n')
f.write('for {set i 0} {$i < $nf} {incr i 1} {\n')
f.write(' set sel [atomselect top "'+indices_ha2+' " frame $i]\n')
f.write(' $sel move [measure fit $sel $f1]\n')
f.write(' set time [expr {$i +1}]\n')
f.write(' puts -nonewline $outfile "[measure rmsd $sel $f1]"\n')
f.write(' puts $outfile "$time $time"\n')
f.write('}\n')
f.write('close $outfile')
f.close()
if (r_helix_3==1):
f = open('ha3.tcl', 'w')
print(f)
f.write('set psfFile '+ psf+' \n')
f.write('set dcdFile '+ dcd+' \n')
f.write('\nmol load psf $psfFile dcd $dcdFile\n')
f.write('set outfile ' +'[open ' +'rmsd_ha3.dat'+' w]\n')
f.write('set nf [molinfo top get numframes]\n')
f.write('\n#RMSD calculation loop\n')
f.write('set f1 [atomselect top "'+indices_ha3+' " frame 0]\n')
f.write('for {set i 0} {$i < $nf} {incr i 1} {\n')
f.write(' set sel [atomselect top "'+indices_ha3+' " frame $i]\n')
f.write(' $sel move [measure fit $sel $f1]\n')
f.write(' set time [expr {$i +1}]\n')
f.write(' puts -nonewline $outfile "[measure rmsd $sel $f1]"\n')
f.write(' puts $outfile "$time $time"\n')
f.write('}\n')
f.write('close $outfile')
f.close()
if (r_helix_4==1):
f = open('ha4.tcl', 'w')
print(f)
f.write('set psfFile '+ psf+' \n')
f.write('set dcdFile '+ dcd+' \n')
f.write('\nmol load psf $psfFile dcd $dcdFile\n')
f.write('set outfile ' +'[open ' +'rmsd_ha4.dat'+' w]\n')
f.write('set nf [molinfo top get numframes]\n')
f.write('\n#RMSD calculation loop\n')
f.write('set f1 [atomselect top "'+indices_ha4+' " frame 0]\n')
f.write('for {set i 0} {$i < $nf} {incr i 1} {\n')
f.write(' set sel [atomselect top "'+indices_ha4+' " frame $i]\n')
f.write(' $sel move [measure fit $sel $f1]\n')
f.write(' set time [expr {$i +1}]\n')
f.write(' puts -nonewline $outfile "[measure rmsd $sel $f1]"\n')
f.write(' puts $outfile "$time $time"\n')
f.write('}\n')
f.write('close $outfile')
f.close()
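# Hedged refactor sketch (an assumption, not part of the original workflow):
# the four copy-pasted blocks above can be generated by one helper instead.
def write_helix_rmsd_tcl(k, sel, psf, dcd):
    # Write a VMD Tcl script that fits each frame of selection `sel` to
    # frame 0 and records "<rmsd> <frame>" lines in rmsd_ha<k>.dat.
    with open('ha%d.tcl' % k, 'w') as f:
        f.write('set psfFile %s\n' % psf)
        f.write('set dcdFile %s\n' % dcd)
        f.write('mol load psf $psfFile dcd $dcdFile\n')
        f.write('set outfile [open rmsd_ha%d.dat w]\n' % k)
        f.write('set nf [molinfo top get numframes]\n')
        f.write('set f1 [atomselect top "%s" frame 0]\n' % sel)
        f.write('for {set i 0} {$i < $nf} {incr i 1} {\n')
        f.write('    set sel [atomselect top "%s" frame $i]\n' % sel)
        f.write('    $sel move [measure fit $sel $f1]\n')
        f.write('    puts $outfile "[measure rmsd $sel $f1] [expr {$i + 1}]"\n')
        f.write('}\n')
        f.write('close $outfile\n')
# e.g.: write_helix_rmsd_tcl(1, indices_ha1, psf, dcd)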
if (r_helix_1==1):
#Calculando con VMD hélice 1
!vmd -dispdev text < ha1.tcl
if (r_helix_2==1):
#Calculando con VMD hélice 2
!vmd -dispdev text < ha2.tcl
if (r_helix_3==1):
#Calculando con VMD hélice 3
!vmd -dispdev text < ha3.tcl
if (r_helix_4==1):
#Calculando con VMD hélice 4
!vmd -dispdev text < ha4.tcl
if (int(num)==1):
#Graficando
data_ha1=np.loadtxt('rmsd_ha1.dat',comments=['#', '@'])
#Engrosar marco
fig=pl.figure(figsize=(20, 12), dpi=100, linewidth=3.0)
ax = fig.add_subplot(111)
for axis in ['top','bottom','left','right']:
ax.spines[axis].set_linewidth(4)
#Formateando los valores de los ejes
ax.yaxis.set_major_formatter(FormatStrFormatter('%.0f'))
#pl.plot(data_ha1[:,0], data_ha1[:,1], linewidth = 3)
pl.plot(data_ha1[:,1]*0.02, data_ha1[:,0]/10, linewidth = 3, color='black')
pl.xlabel("Time (ns)", fontsize = 40)
pl.ylabel('RMSD (nm)', fontsize = 40)
#pl.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
#pl.title('RMSD Helix Alfa', fontsize=50)
pl.xticks(fontsize=30)
pl.yticks(fontsize=30)
if (int(num)==2):
#Graficando
data_ha1=np.loadtxt('rmsd_ha1.dat',comments=['#', '@'])
data_ha2=np.loadtxt('rmsd_ha2.dat',comments=['#', '@'])
#Engrosar marco
fig=pl.figure(figsize=(20, 12), dpi=100, linewidth=3.0)
ax = fig.add_subplot(111)
for axis in ['top','bottom','left','right']:
ax.spines[axis].set_linewidth(4)
#Formateando los valores de los ejes
ax.yaxis.set_major_formatter(FormatStrFormatter('%.0f'))
#pl.plot(data_ha1[:,0], data_ha1[:,1], linewidth = 3)
pl.plot(data_ha1[:,1]*0.02, data_ha1[:,0]/10, linewidth = 3, color='black')
pl.plot(data_ha2[:,1]*0.02, data_ha2[:,0]/10, linewidth = 3, color='red')
pl.xlabel("Time (ns)", fontsize = 40)
pl.ylabel('RMSD (nm)', fontsize = 40)
#pl.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
#pl.title('RMSD Helix Alfa', fontsize=50)
pl.xticks(fontsize=30)
pl.yticks(fontsize=30)
if (int(num)==3):
#Graficando
data_ha1=np.loadtxt('rmsd_ha1.dat',comments=['#', '@'])
data_ha2=np.loadtxt('rmsd_ha2.dat',comments=['#', '@'])
data_ha3=np.loadtxt('rmsd_ha3.dat',comments=['#', '@'])
#Engrosar marco
fig=pl.figure(figsize=(20, 12), dpi=100, linewidth=3.0)
ax = fig.add_subplot(111)
for axis in ['top','bottom','left','right']:
ax.spines[axis].set_linewidth(4)
#Formateando los valores de los ejes
ax.yaxis.set_major_formatter(FormatStrFormatter('%.0f'))
#pl.plot(data_ha1[:,0], data_ha1[:,1], linewidth = 3)
pl.plot(data_ha1[:,1]*0.02, data_ha1[:,0]/10, linewidth = 3, color='black')
pl.plot(data_ha2[:,1]*0.02, data_ha2[:,0]/10, linewidth = 3, color='red')
pl.plot(data_ha3[:,1]*0.02, data_ha3[:,0]/10, linewidth = 3, color='green')
pl.xlabel("Time (ns)", fontsize = 40)
pl.ylabel('RMSD (nm)', fontsize = 40)
#pl.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
#pl.title('RMSD Helix Alfa', fontsize=50)
pl.xticks(fontsize=30)
pl.yticks(fontsize=30)
if (int(num)==4):
#Graficando
data_ha1=np.loadtxt('rmsd_ha1.dat',comments=['#', '@'])
data_ha2=np.loadtxt('rmsd_ha2.dat',comments=['#', '@'])
data_ha3=np.loadtxt('rmsd_ha3.dat',comments=['#', '@'])
data_ha4=np.loadtxt('rmsd_ha4.dat',comments=['#', '@'])
#Engrosar marco
fig=pl.figure(figsize=(20, 12), dpi=100, linewidth=3.0)
ax = fig.add_subplot(111)
for axis in ['top','bottom','left','right']:
ax.spines[axis].set_linewidth(4)
#Formateando los valores de los ejes
ax.yaxis.set_major_formatter(FormatStrFormatter('%.0f'))
#pl.plot(data_ha1[:,0], data_ha1[:,1], linewidth = 3)
pl.plot(data_ha1[:,1]*0.02, data_ha1[:,0]/10, linewidth = 3, color='black')
pl.plot(data_ha2[:,1]*0.02, data_ha2[:,0]/10, linewidth = 3, color='red')
pl.plot(data_ha3[:,1]*0.02, data_ha3[:,0]/10, linewidth = 3, color='green')
pl.plot(data_ha4[:,1]*0.02, data_ha4[:,0]/10, linewidth = 3, color='blue')
pl.xlabel("Time (ns)", fontsize = 40)
    pl.ylabel('RMSD (nm)', fontsize = 40)
#pl.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
#pl.title('RMSD Helix Alfa', fontsize=50)
pl.xticks(fontsize=30)
pl.yticks(fontsize=30)
### Creando el directorio para el análisis del SASA
### NOTA: se calcula con gromacs4 ya que arroja bien los resultados comparado con gromacs5
ruta_sasa = nuevaruta+'/sasa'
print ( ruta_sasa )
if not os.path.exists(ruta_sasa):
os.makedirs(ruta_sasa)
print ('Se ha creado la ruta ===>',ruta_sasa)
else:
print ("La ruta "+ruta_sasa+" existe..!!!")
print ( 'Nos vamos a ....', ruta_sasa )
os.chdir( ruta_sasa )
print ('Ejecutando el análisis de sasa con Gromacs 4 utilizando la opción 1 (protein)...')
!echo 1 1 | /opt/gromacs4/bin/g_sas -f ../output.xtc -s ../ionized.pdb -o solven-accessible-surface.xvg -oa atomic-sas.xvg -or residue-sas.xvg
#Inicializando vector
sasa_residuo=[]
try:
residue_sas = open( 'residue-sas.xvg' )
except IOError:
print ('No se pudo abrir el archivo o no existe·..')
i=0
for linea in residue_sas.readlines():
fila = linea.strip()
sl = fila.split()
cadena=sl[0]
if (not '#' in cadena) and (not '@' in cadena):
print ('Residue =>',cadena)
sasa_residuo.append(sl[0]+'\t'+sl[1]+'\n')
i=i+1
#Escribiendo el archivo RMSD
f = open('sasa-residuo.dat', 'w')
#f.write('@ title "Area per residue over the trajectory" \n')
f.write('@ xaxis label " Residue " \n')
f.write('@ xaxis label char size 1.480000\n')
f.write('@ xaxis bar linewidth 3.0\n')
f.write('@ xaxis ticklabel char size 1.480000\n')
f.write('@ yaxis label "Area (nm' +"\\"+'S2'+"\\N"+')"\n')
f.write('@ yaxis label char size 1.480000\n')
f.write('@ yaxis bar linewidth 3.0\n')
f.write('@ yaxis ticklabel char size 1.480000\n')
f.write('@ s0 line linewidth 2.5\n')
f.write('@ s0 symbol 1\n')
f.write('@ s0 symbol size 1.000000\n')
f.write('@ s0 symbol color 1\n')
f.write('@ s0 symbol pattern 1\n')
f.write('@ s0 symbol fill color 2\n')
f.write('@ s0 symbol fill pattern 1\n')
f.write('@ s0 symbol linewidth 1.0\n')
f.write('@TYPE xy \n')
f.write("".join(sasa_residuo))
f.close()
!xmgrace sasa-residuo.dat
#Cargando la imagen generada en xmgrace
Image(filename='sasa-residuo.png')
data_sasa_residue=np.loadtxt('residue-sas.xvg',comments=['#', '@'])
#Engrosar marco
fig=pl.figure(figsize=(20, 12), dpi=100, linewidth=3.0)
ax = fig.add_subplot(111)
for axis in ['top','bottom','left','right']:
ax.spines[axis].set_linewidth(4)
#Formateando los valores de los ejes
ax.yaxis.set_major_formatter(FormatStrFormatter('%.1f'))
pl.plot(data_sasa_residue[:,0], data_sasa_residue[:,1], '-o', color='black', markersize=25,
markerfacecolor='red',markeredgecolor='black',markeredgewidth=3, linewidth = 4, )
pl.xlabel("Residue", fontsize = 30)
#pl.ylabel('Area (nm2)', fontsize = 30)
pl.ylabel('Area ( nm'+ r'$\ ^2$'+')' , fontsize = 40)
#pl.title('Area per residue over the trajectory', fontsize=40)
pl.xticks(fontsize=30)
pl.yticks(fontsize=30)
pl.xlim(0, len(data_sasa_residue[:,1]))
#Inicializando vector
sasa=[]
try:
sasafile = open( 'solven-accessible-surface.xvg' )
except IOError:
print ('No se pudo abrir el archivo o no existe·..')
i=0
for linea in sasafile.readlines():
fila = linea.strip()
sl = fila.split()
cadena=sl[0]
if (not '#' in cadena) and (not '@' in cadena):
#print (cadena)
num=float(sl[0])
num=num/1000
sasa.append(repr(num)+'\t'+sl[1]+'\t'+sl[2]+'\t'+sl[3]+'\n')
i=i+1
cel2=float(sl[2])
print(cel2)
#Escribiendo el archivo RMSD
f = open('sasa.dat', 'w')
#f.write('@ title "Solven Accessible Surface" \n')
f.write('@ xaxis label " Time (ns) " \n')
f.write('@ xaxis label char size 1.480000\n')
f.write('@ xaxis bar linewidth 3.0\n')
f.write('@ xaxis ticklabel char size 1.480000\n')
f.write('@ yaxis label "Area (nm' +"\\"+'S2'+"\\N"+')"\n')
f.write('@ yaxis label char size 1.480000\n')
f.write('@ yaxis bar linewidth 3.0\n')
f.write('@ yaxis ticklabel char size 1.480000\n')
#f.write('@ s0 legend "Hydrophobic"\n')
#if (cel2>0):
#f.write('@ s1 legend "Hydrophilic"\n')
f.write('@TYPE xy \n')
f.write("".join(sasa))
f.close()
!xmgrace sasa.dat
#Cargando la imagen generada en xmgrace
Image(filename='sasa.png')
data_sasa=np.loadtxt('solven-accessible-surface.xvg',comments=['#', '@'])
#Engrosar marco
fig=pl.figure(figsize=(20, 12), dpi=100, linewidth=3.0)
ax = fig.add_subplot(111)
for axis in ['top','bottom','left','right']:
ax.spines[axis].set_linewidth(4)
#Formateando los valores de los ejes
#ax.yaxis.set_major_formatter(FormatStrFormatter('%.1f'))
pl.xlabel("Time (ns)", fontsize = 40)
pl.ylabel('Area ( nm'+ r'$\ ^2$'+')' , fontsize = 40)
#pl.title('Solvent Accessible Surface', fontsize=50)
pl.xticks(fontsize=30)
pl.yticks(fontsize=30)
dato=data_sasa[:,2]
dato2=dato[0]
if (dato2>0):
pl.plot(data_sasa[:,0]/1000, data_sasa[:,1], linewidth = 2, color='black')
pl.plot(data_sasa[:,0]/1000, data_sasa[:,2], linewidth = 2, color='red')
else:
pl.plot(data_sasa[:,0]/1000, data_sasa[:,1], linewidth = 2, color='black')
### Creando el directorio para el análisis del SASA
### NOTA: se calcula con gromacs4 ya que arroja bien los resultados comparado con gromacs5
ruta_m_rmsd = nuevaruta+'/matriz'
print ( ruta_m_rmsd )
if not os.path.exists(ruta_m_rmsd):
os.makedirs(ruta_m_rmsd)
print ('Se ha creado la ruta ===>',ruta_m_rmsd)
else:
print ("La ruta "+ruta_m_rmsd+" existe..!!!")
print ( 'Nos vamos a ....', ruta_m_rmsd )
os.chdir( ruta_m_rmsd )
print ('\nCopiando el archivo rmsd_matrix.tcl a '+ruta_m_rmsd)
source_file=ruta_scripts+'/rmsd_matriz/rmsd_matrix.tcl'
dest_file=ruta_m_rmsd+'/rmsd_matrix.tcl'
shutil.copy(source_file,dest_file)
#print ( 'Nos vemos a ....', ruta_old_traj )
#os.chdir( ruta_old_traj )
file_dcd=ruta_old_traj+'/'+dcd_file
file_psf=ruta_old_traj+'/'+psf_file
print (file_dcd)
print ('\nEjecutando CATDCD para obtener 100 frames de la trayectoria original....')
output_catdcd=!catdcd -o 100.dcd -stride 50 $file_dcd
print (output_catdcd.n)
#Arrancando VMD para cargar el script rmsd_matrix.tcl
!vmd 100.dcd $file_psf
ruta_matriz=os.getcwd()
if os.path.isfile('salida.dat'):
print ('El archivo salida.dat existe')
else:
print ('El archivo salida.dat no existe.. ejecutar desde MATRIZ DE RMSD...')
#Creando el gráfico
data_matriz=np.loadtxt('salida.dat',comments=['#', '@'])
print(data_matriz.shape)
pl.figure(figsize=(20, 12), dpi=100)
imgplot = pl.imshow(data_matriz, origin='lower', cmap=pl.cm.Greens, interpolation='nearest')
#imgplot = pl.imshow(data_matriz, origin='lower', cmap=pl.cm.coolwarm, interpolation='nearest')
pl.xlabel("Time (ns)", fontsize = 30)
pl.ylabel('Time (ns)', fontsize = 30)
#pl.suptitle('RMSD', fontsize=50)
#pl.title('C-Alpha RMSD matrix', fontsize=40)
pl.xticks(fontsize=20)
pl.yticks(fontsize=20)
pl.xlim(0, 100)
pl.ylim(0, 100)
pl.colorbar()
### Creando el directorio para el análisis del RMSF
#Verificando que exista la nueva carpeta para la conversión de trayectorias
ruta_matriz_dm = nuevaruta+'/matriz_dm'
print ( ruta_matriz_dm )
if not os.path.exists(ruta_matriz_dm):
os.makedirs(ruta_matriz_dm)
print ('Se ha creado la ruta ===>',ruta_matriz_dm)
else:
print ("La ruta "+ruta_matriz_dm+" existe..!!!")
print ( 'Nos vamos a ....', ruta_matriz_dm )
os.chdir( ruta_matriz_dm )
!echo 4 | g_mdmat -f ../output.xtc -s ../ionized.pdb -mean average -frames frames -dt 10000
!xpm2ps -f frames.xpm -o frames.eps
!xpm2ps -f average.xpm -o average.eps
print('\nConvirtiendo a png...')
!convert -density 600 frames.eps -resize 1024x1024 frames.png
!convert -density 600 average.eps -resize 1024x1024 average.png
print ('Cargando el archivo average...')
Image(filename='average.png', width=800)
### Creando el directorio para el análisis de la libre energía
ruta_f_energy = nuevaruta+'/free_energy'
print ( ruta_f_energy )
if not os.path.exists(ruta_f_energy):
os.makedirs(ruta_f_energy)
print ('Se ha creado la ruta ===>',ruta_f_energy)
else:
print ("La ruta "+ruta_f_energy+" existe..!!!")
print ( 'Nos vamos a ....', ruta_f_energy )
os.chdir( ruta_f_energy )
#Solicita la temperatura
t=input('Temperatura a la cual se realizó la simulación:')
temperatura=int(t)
print ('Temperatura=>',temperatura)
print ('Ejecutando el análisis de rmsd...')
!echo 3 3 | g_rms -f ../output.xtc -s ../ionized.pdb -a avgrp.xvg
print ('Ejecutando el análisis de rgyro...')
!echo 3 | g_gyrate -f ../output.xtc -s ../ionized.pdb -o gyrate.xvg
print ('\nCopiando el archivo generateFES.py a '+ruta_f_energy)
source_file=ruta_scripts+'/free_energy/generateFES.py'
dest_file=ruta_f_energy+'/generateFES.py'
shutil.copy(source_file,dest_file)
#Cambiando permisos de ejecución
!chmod +x generateFES.py
#Cargando valores del RMSD
data_rmsd=np.loadtxt('rmsd.xvg',comments=['#', '@'])
#Cargnaod valores del R-GYRO
data_rgyro=np.loadtxt('gyrate.xvg',comments=['#', '@'])
#Obteniendo los valores máximo y mínimo del rmsd
min_rmsd=np.amin(data_rmsd[:,1])
max_rmsd=np.amax(data_rmsd[:,1])
print ('Minimo RMSD=>',min_rmsd)
print ('Máximo RMSD=>',max_rmsd)
#Obteniendo los valores máximo y mínimo del r-gyro
min_rgyro=np.amin(data_rgyro[:,1])
max_rgyro=np.amax(data_rgyro[:,1])
print ('Minimo RGYRO=>',min_rgyro)
print ('Máximo RGYRO=>',max_rgyro)
#Creando los archivos de entrada para el script
np.savetxt('rmsd.dat',data_rmsd[:,1], fmt='%1.7f')
np.savetxt('rgyro.dat',data_rgyro[:,1], fmt='%1.7f')
!paste rgyro.dat rmsd.dat > fes.dat
#Ejecutando el script de FES
!python generateFES.py fes.dat $min_rgyro $max_rgyro $min_rmsd $max_rmsd 200 200 $temperatura FEES.dat
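# Hedged sketch of what generateFES.py presumably computes (an assumption
# about that script, kept here for reference): bin (Rg, RMSD) into a 2D
# histogram and convert probabilities to free energy, F = -kB*T*ln(P/Pmax).
kB = 0.0019872041  # Boltzmann constant in kcal/(mol K)
H, xedges, yedges = np.histogram2d(data_rgyro[:, 1], data_rmsd[:, 1], bins=200)
P = H / H.sum()
with np.errstate(divide='ignore'):
    F = -kB * temperatura * np.log(P / P.max())  # empty bins end up at +inf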
#Cargando el archivo generado para plotear con matplotlib
data_fes=np.loadtxt('FEES.dat',comments=['#', '@'])
# This loads the magics for gnuplot
%load_ext gnuplot_kernel
#Configurando la salida para GNUplot
%gnuplot inline pngcairo transparent enhanced font "arial,20" fontscale 1.0 size 1280,960; set zeroaxis;;
%%gnuplot
set output "free_energy.png"
set palette model RGB
set palette defined ( 0 '#000090',\
1 '#000fff',\
2 '#0090ff',\
3 '#0fffee',\
4 '#90ff70',\
5 '#ffee00',\
6 '#ff7000',\
7 '#ee0000',\
8 '#7f0000')
set view map
set dgrid3d
set pm3d interpolate 0,0
set xlabel "Rg (nm)
set ylabel "RMSD (nm)"
##Descomentar la siguiente línea de código en caso de que la escala comience con valor de 1 y ejecutar nuevamente
#set cbrange[8:10]
splot "FEES.dat" with pm3d
### Creando el directorio para el análisis del PCA
ruta_pca = nuevaruta+'/pca'
print ( ruta_pca )
if not os.path.exists(ruta_pca):
os.makedirs(ruta_pca)
print ('Se ha creado la ruta ===>',ruta_pca)
else:
print ("La ruta "+ruta_pca+" existe..!!!")
print ( 'Nos vamos a ....', ruta_pca )
os.chdir( ruta_pca )
#Calculando matriz de covarianza
!echo 1 1 | g_covar -s ../ionized.pdb -f ../output.xtc -o eigenvalues.xvg -v eigenvectors.trr -xpma covar.xpm
!echo 1 1 | g_anaeig -s ../ionized.pdb -f ../output.xtc -v eigenvectors.trr -eig eigenvalues.xvg -first 1 -last 2 -2d 2dproj_1_2.xvg
#pcaX, pcaY=np.loadtxt('2dproj_1_2.xvg',comments=['#', '@'], unpack=True)
data_pca=np.loadtxt('2dproj_1_2.xvg',comments=['#', '@'])
#Obteniendo los valores máximo y mínimo del pca
min_pcaX=np.amin(data_pca[:,0])
max_pcaX=np.amax(data_pca[:,0])
print ('Minimo PCA_X=>',min_pcaX)
print ('Máximo PCA_X=>',max_pcaX)
min_pcaY=np.amin(data_pca[:,1])
max_pcaY=np.amax(data_pca[:,1])
print ('Minimo PCA_Y=>',min_pcaY)
print ('Máximo PCA_Y=>',max_pcaY)
#Creando los archivos de entrada para el script
np.savetxt('PCA.dat',data_pca, fmt='%1.5f')
#Copiando el script generateFES de la carpeta Free_energy
print ('\nCopiando el archivo generateFES.py a '+ruta_pca+ ' desde '+ ruta_f_energy)
source_file=ruta_f_energy+'/generateFES.py'
dest_file=ruta_pca+'/generateFES.py'
shutil.copy(source_file,dest_file)
#Ejecutando el script de FES
!python generateFES.py PCA.dat $min_pcaX $max_pcaX $min_pcaY $max_pcaY 200 200 $temperatura FEES_PCA.dat
#Volver a cargar el kernel de gnuplot para limpiar su buffer
%reload_ext gnuplot_kernel
#Configurando la salida para GNUplot
%gnuplot inline pngcairo transparent enhanced font "arial,20" fontscale 1.0 size 1280,960; set zeroaxis;;
%%gnuplot
set output "pca.png"
set palette model RGB
set palette defined ( 0 '#000090',\
1 '#000fff',\
2 '#0090ff',\
3 '#0fffee',\
4 '#90ff70',\
5 '#ffee00',\
6 '#ff7000',\
7 '#ee0000',\
8 '#7f0000')
set view map
set dgrid3d
set pm3d interpolate 0,0
set xlabel "projection on eigenvector 1 (nm)"
set ylabel "projection on eigenvector 2 (nm)"
set title " "
## Uncomment the following line if the color scale starts at 1 and run this cell again
#set cbrange[8:10]
splot "FEES_PCA.dat" with pm3d
from htmd import *
### Creando el directorio para el análisis de los RMSD de los puentes
ruta_rmsd_diedros = nuevaruta+'/rmsd_diedros'
print ( ruta_rmsd_diedros )
if not os.path.exists(ruta_rmsd_diedros):
os.makedirs(ruta_rmsd_diedros)
print ('Se ha creado la ruta ===>',ruta_rmsd_diedros)
else:
print ("La ruta "+ruta_rmsd_diedros+" existe..!!!")
print ( 'Nos vamos a ....', ruta_rmsd_diedros)
os.chdir( ruta_rmsd_diedros )
# Cargando la molécula
mol = Molecule('../ionized.pdb')
# Solicitando los datos de entrada
px1l=input('Índice del DB1 izquierdo:')
px1r=input('Índice del DB1 derecho:')
px2l=input('Índice del DB2 izquierdo:')
px2r=input('Índice del DB2 derecho:')
revisa1=1
revisa2=1
if (revisa1>0):
#Obteniendo lado izquierdo del DB1
x1l_name=mol.get('name','resname CYS and noh and resid '+px1l)
x1l_index=mol.get('index','resname CYS and noh and resid '+px1l)
x1l_resid=mol.get('resid','resname CYS and noh and resid '+px1l)
#Obteniendo lado derecho del DB1
x1r_name=mol.get('name','resname CYS and noh and resid '+px1r)
x1r_index=mol.get('index','resname CYS and noh and resid '+px1r)
x1r_resid=mol.get('resid','resname CYS and noh and resid '+px1r)
if (revisa2>0):
#Obteniendo el lado izquierdo del DB2
x2l_name=mol.get('name','resname CYS and noh and resid '+px2l)
x2l_index=mol.get('index','resname CYS and noh and resid '+px2l)
x2l_resid=mol.get('resid','resname CYS and noh and resid '+px2l)
#Obteniendo el lado derecho del DB2
x2r_name=mol.get('name','resname CYS and noh and resid '+px2r)
x2r_index=mol.get('index','resname CYS and noh and resid '+px2r)
x2r_resid=mol.get('resid','resname CYS and noh and resid '+px2r)
#Obteniendo la lista de índices de los puentes
print ('Generando la lista de los índices para enviarlos')
db1x1l=[]
db1x2l=[]
db1x3m=[]
db1x2r=[]
db1x1r=[]
db1l_name_l=[]
db1l_index_l=[]
db1r_name_l=[]
db1r_index_l=[]
db2l_name_l=[]
db2l_index_l=[]
db2r_name_l=[]
db2r_index_l=[]
db3l_name_l=[]
db3l_index_l=[]
db3r_name_l=[]
db3r_index_l=[]
if (revisa1>0):
#Obteniendo los índices del DB1
for i in range(len(x1l_name)):
if (x1l_name[i]=='N' or x1l_name[i]=='CA' or x1l_name[i]=='CB' or x1l_name[i]=='SG'):
db1l_name_l.append(str(x1l_name[i]))
db1l_index_l.append(str(x1l_index[i]))
for i in range(len(x1r_name)):
if (x1r_name[i]=='N' or x1r_name[i]=='CA' or x1r_name[i]=='CB' or x1r_name[i]=='SG'):
db1r_name_l.append(str(x1r_name[i]))
db1r_index_l.append(str(x1r_index[i]))
print ('DB1 X1L =>',db1l_name_l)
print (db1l_index_l)
print ('DB1 X1R =>',db1r_name_l)
print (db1r_index_l)
if (revisa2>0):
#Obteniendo los índices del DB2
for i in range(len(x2l_name)):
if (x2l_name[i]=='N' or x2l_name[i]=='CA' or x2l_name[i]=='CB' or x2l_name[i]=='SG'):
db2l_name_l.append(str(x2l_name[i]))
db2l_index_l.append(str(x2l_index[i]))
for i in range(len(x2r_name)):
if (x2r_name[i]=='N' or x2r_name[i]=='CA' or x2r_name[i]=='CB' or x2r_name[i]=='SG'):
db2r_name_l.append(str(x2r_name[i]))
db2r_index_l.append(str(x2r_index[i]))
print ('DB2 X1L =>',db2l_name_l)
print (db2l_index_l)
print ('DB2 X1R =>',db2r_name_l)
print (db2r_index_l)
#Generando el DB1 completo ordenado
filas=8
col=2
DB1_i=[]
DB1_N=[]
DB2_i=[]
DB2_N=[]
DB3_i=[]
DB3_N=[]
for i in range(0,filas):
DB1_N.append([' '])
DB1_i.append(['0'])
DB2_N.append([' '])
DB2_i.append(['0'])
DB3_N.append([' '])
DB3_i.append(['0'])
if (revisa1>0):
#Cargando índices para el puente 1
for i in range(len(db1l_name_l)):
if db1l_name_l[i]=='N':
DB1_N[0] = db1l_name_l[i]
DB1_i[0]='index '+db1l_index_l[i]
if db1l_name_l[i]=='CA':
DB1_N[1] = db1l_name_l[i]
DB1_i[1]='index '+db1l_index_l[i]
if db1l_name_l[i]=='CB':
DB1_N[2] = db1l_name_l[i]
DB1_i[2]='index '+db1l_index_l[i]
if db1l_name_l[i]=='SG':
DB1_N[3] = db1l_name_l[i]
DB1_i[3]='index '+db1l_index_l[i]
for i in range(len(db1r_name_l)):
if db1r_name_l[i]=='SG':
DB1_N[4] = db1r_name_l[i]
DB1_i[4]='index '+db1r_index_l[i]
if db1r_name_l[i]=='CB':
DB1_N[5] = db1r_name_l[i]
DB1_i[5]='index '+db1r_index_l[i]
if db1r_name_l[i]=='CA':
DB1_N[6] = db1r_name_l[i]
DB1_i[6]='index '+db1r_index_l[i]
if db1r_name_l[i]=='N':
DB1_N[7] = db1r_name_l[i]
DB1_i[7]='index '+db1r_index_l[i]
print ('Puente DB1 = resid '+px1l+':'+px1r)
print ('Names DB1=>',DB1_i)
print ('Index DB1=>',DB1_N)
print ('\n')
if (revisa2>0):
#Cargando índices para el puente 2
for i in range(len(db2l_name_l)):
if db2l_name_l[i]=='N':
DB2_N[0] = db2l_name_l[i]
DB2_i[0]='index '+db2l_index_l[i]
if db2l_name_l[i]=='CA':
DB2_N[1] = db2l_name_l[i]
DB2_i[1]='index '+db2l_index_l[i]
if db2l_name_l[i]=='CB':
DB2_N[2] = db2l_name_l[i]
DB2_i[2]='index '+db2l_index_l[i]
if db2l_name_l[i]=='SG':
DB2_N[3] = db2l_name_l[i]
DB2_i[3]='index '+db2l_index_l[i]
for i in range(len(db2r_name_l)):
if db2r_name_l[i]=='SG':
DB2_N[4] = db2r_name_l[i]
DB2_i[4]='index '+db2r_index_l[i]
if db2r_name_l[i]=='CB':
DB2_N[5] = db2r_name_l[i]
DB2_i[5]='index '+db2r_index_l[i]
if db2r_name_l[i]=='CA':
DB2_N[6] = db2r_name_l[i]
DB2_i[6]='index '+db2r_index_l[i]
if db2r_name_l[i]=='N':
DB2_N[7] = db2r_name_l[i]
DB2_i[7]='index '+db2r_index_l[i]
print ('Puente DB2 = resid '+px2l+':'+px2r)
print ('Names DB2=>',DB2_i)
print ('Index DB2=>',DB2_N)
print ('\n')
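# Hedged refactor sketch (an assumption, not part of the original workflow):
# the name -> slot bookkeeping above is identical for both bridges and can be
# built with a small helper instead of the repeated if-chains.
def order_bridge(l_names, l_idx, r_names, r_idx):
    # Order a bridge as ['N','CA','CB','SG'] from the left cysteine followed
    # by ['SG','CB','CA','N'] from the right one, mirroring DB1_N / DB1_i.
    left = dict(zip(l_names, l_idx))
    right = dict(zip(r_names, r_idx))
    names = ['N', 'CA', 'CB', 'SG', 'SG', 'CB', 'CA', 'N']
    idx = ['index ' + (left if k < 4 else right)[n] for k, n in enumerate(names)]
    return names, idx
# e.g.: DB1_N, DB1_i = order_bridge(db1l_name_l, db1l_index_l, db1r_name_l, db1r_index_l)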
if (revisa1>0):
#Creando script para DB1_x1l
psf=ruta_old_traj+'/'+psf_file
dcd=ruta_old_traj+'/'+dcd_file
print(psf)
f1 = open('DB1_x1l.tcl', 'w')
print(f1)
f1.write('set psfFile '+ psf+' \n')
f1.write('set dcdFile '+ dcd+' \n')
f1.write('\nmol load psf $psfFile dcd $dcdFile\n')
f1.write('set outfile ' +'[open ' +'db1_x1l.dat'+' w]\n')
f1.write('set nf [molinfo top get numframes]\n')
f1.write('\n#RMSD calculation loop\n')
f1.write('set f1 [atomselect top "'+DB1_i[0]+' or '+DB1_i[1]+' or '+DB1_i[2]+' or '+DB1_i[3]+' " frame 0]\n')
f1.write('for {set i 0} {$i < $nf} {incr i 1} {\n')
f1.write(' set sel [atomselect top "'+DB1_i[0]+' or '+DB1_i[1]+' or '+DB1_i[2]+' or '+DB1_i[3]+' " frame $i]\n')
f1.write(' $sel move [measure fit $sel $f1]\n')
f1.write(' set time [expr {$i +1}]\n')
f1.write(' puts -nonewline $outfile "[measure rmsd $sel $f1]"\n')
f1.write(' puts $outfile " $time"\n')
f1.write('}\n')
f1.write('close $outfile')
f1.close()
#Creando script para DB1_x2l
psf=ruta_old_traj+'/'+psf_file
dcd=ruta_old_traj+'/'+dcd_file
print(psf)
f2 = open('DB1_x2l.tcl', 'w')
print(f2)
f2.write('set psfFile '+ psf+' \n')
f2.write('set dcdFile '+ dcd+' \n')
f2.write('\nmol load psf $psfFile dcd $dcdFile\n')
f2.write('set outfile ' +'[open ' +'db1_x2l.dat'+' w]\n')
f2.write('set nf [molinfo top get numframes]\n')
f2.write('\n#RMSD calculation loop\n')
f2.write('set f1 [atomselect top "'+DB1_i[1]+' or '+DB1_i[2]+' or '+DB1_i[3]+' or '+DB1_i[4]+' " frame 0]\n')
f2.write('for {set i 0} {$i < $nf} {incr i 1} {\n')
f2.write(' set sel [atomselect top "'+DB1_i[1]+' or '+DB1_i[2]+' or '+DB1_i[3]+' or '+DB1_i[4]+' " frame $i]\n')
f2.write(' $sel move [measure fit $sel $f1]\n')
f2.write(' set time [expr {$i +1}]\n')
f2.write(' puts -nonewline $outfile "[measure rmsd $sel $f1]"\n')
f2.write(' puts $outfile " $time"\n')
f2.write('}\n')
f2.write('close $outfile')
f2.close()
#Creando script para DB1_x3m
psf=ruta_old_traj+'/'+psf_file
dcd=ruta_old_traj+'/'+dcd_file
print(psf)
f3 = open('DB1_x3m.tcl', 'w')
print(f3)
f3.write('set psfFile '+ psf+' \n')
f3.write('set dcdFile '+ dcd+' \n')
f3.write('\nmol load psf $psfFile dcd $dcdFile\n')
f3.write('set outfile ' +'[open ' +'db1_x3m.dat'+' w]\n')
f3.write('set nf [molinfo top get numframes]\n')
f3.write('\n#RMSD calculation loop\n')
f3.write('set f1 [atomselect top "'+DB1_i[2]+' or '+DB1_i[3]+' or '+DB1_i[4]+' or '+DB1_i[5]+' " frame 0]\n')
f3.write('for {set i 0} {$i < $nf} {incr i 1} {\n')
f3.write(' set sel [atomselect top "'+DB1_i[2]+' or '+DB1_i[3]+' or '+DB1_i[4]+' or '+DB1_i[5]+' " frame $i]\n')
f3.write(' $sel move [measure fit $sel $f1]\n')
f3.write(' set time [expr {$i +1}]\n')
f3.write(' puts -nonewline $outfile "[measure rmsd $sel $f1]"\n')
f3.write(' puts $outfile " $time"\n')
f3.write('}\n')
f3.write('close $outfile')
f3.close()
#Creando script para DB1_x2r
psf=ruta_old_traj+'/'+psf_file
dcd=ruta_old_traj+'/'+dcd_file
print(psf)
f4 = open('DB1_x2r.tcl', 'w')
print(f4)
f4.write('set psfFile '+ psf+' \n')
f4.write('set dcdFile '+ dcd+' \n')
f4.write('\nmol load psf $psfFile dcd $dcdFile\n')
f4.write('set outfile ' +'[open ' +'db1_x2r.dat'+' w]\n')
f4.write('set nf [molinfo top get numframes]\n')
f4.write('\n#RMSD calculation loop\n')
f4.write('set f1 [atomselect top "'+DB1_i[3]+' or '+DB1_i[4]+' or '+DB1_i[5]+' or '+DB1_i[6]+' " frame 0]\n')
f4.write('for {set i 0} {$i < $nf} {incr i 1} {\n')
f4.write(' set sel [atomselect top "'+DB1_i[3]+' or '+DB1_i[4]+' or '+DB1_i[5]+' or '+DB1_i[6]+' " frame $i]\n')
f4.write(' $sel move [measure fit $sel $f1]\n')
f4.write(' set time [expr {$i +1}]\n')
f4.write(' puts -nonewline $outfile "[measure rmsd $sel $f1]"\n')
f4.write(' puts $outfile " $time"\n')
f4.write('}\n')
f4.write('close $outfile')
f4.close()
#Creando script para DB1_x1r
psf=ruta_old_traj+'/'+psf_file
dcd=ruta_old_traj+'/'+dcd_file
print(psf)
f5 = open('DB1_x1r.tcl', 'w')
print(f5)
f5.write('set psfFile '+ psf+' \n')
f5.write('set dcdFile '+ dcd+' \n')
f5.write('\nmol load psf $psfFile dcd $dcdFile\n')
f5.write('set outfile ' +'[open ' +'db1_x1r.dat'+' w]\n')
f5.write('set nf [molinfo top get numframes]\n')
f5.write('\n#RMSD calculation loop\n')
f5.write('set f1 [atomselect top "'+DB1_i[4]+' or '+DB1_i[5]+' or '+DB1_i[6]+' or '+DB1_i[7]+' " frame 0]\n')
f5.write('for {set i 0} {$i < $nf} {incr i 1} {\n')
f5.write(' set sel [atomselect top "'+DB1_i[4]+' or '+DB1_i[5]+' or '+DB1_i[6]+' or '+DB1_i[7]+' " frame $i]\n')
f5.write(' $sel move [measure fit $sel $f1]\n')
f5.write(' set time [expr {$i +1}]\n')
f5.write(' puts -nonewline $outfile "[measure rmsd $sel $f1]"\n')
f5.write(' puts $outfile " $time"\n')
f5.write('}\n')
f5.write('close $outfile')
f5.close()
if (revisa2>0):
##########################################################################################
## Creando los archivos para DB2
#######################################################################################
#Creando script para DB2_x1l
psf=ruta_old_traj+'/'+psf_file
dcd=ruta_old_traj+'/'+dcd_file
print(psf)
f6 = open('DB2_x1l.tcl', 'w')
print(f6)
f6.write('set psfFile '+ psf+' \n')
f6.write('set dcdFile '+ dcd+' \n')
f6.write('\nmol load psf $psfFile dcd $dcdFile\n')
f6.write('set outfile ' +'[open ' +'db2_x1l.dat'+' w]\n')
f6.write('set nf [molinfo top get numframes]\n')
f6.write('\n#RMSD calculation loop\n')
f6.write('set f1 [atomselect top "'+DB2_i[0]+' or '+DB2_i[1]+' or '+DB2_i[2]+' or '+DB2_i[3]+' " frame 0]\n')
f6.write('for {set i 0} {$i < $nf} {incr i 1} {\n')
f6.write(' set sel [atomselect top "'+DB2_i[0]+' or '+DB2_i[1]+' or '+DB2_i[2]+' or '+DB2_i[3]+' " frame $i]\n')
f6.write(' $sel move [measure fit $sel $f1]\n')
f6.write(' set time [expr {$i +1}]\n')
f6.write(' puts -nonewline $outfile "[measure rmsd $sel $f1]"\n')
f6.write(' puts $outfile " $time"\n')
f6.write('}\n')
f6.write('close $outfile')
f6.close()
    #Creating the script for DB2_x2l
psf=ruta_old_traj+'/'+psf_file
dcd=ruta_old_traj+'/'+dcd_file
print(psf)
f7 = open('DB2_x2l.tcl', 'w')
print(f7)
f7.write('set psfFile '+ psf+' \n')
f7.write('set dcdFile '+ dcd+' \n')
f7.write('\nmol load psf $psfFile dcd $dcdFile\n')
f7.write('set outfile ' +'[open ' +'db2_x2l.dat'+' w]\n')
f7.write('set nf [molinfo top get numframes]\n')
f7.write('\n#RMSD calculation loop\n')
f7.write('set f1 [atomselect top "'+DB2_i[1]+' or '+DB2_i[2]+' or '+DB2_i[3]+' or '+DB2_i[4]+' " frame 0]\n')
f7.write('for {set i 0} {$i < $nf} {incr i 1} {\n')
f7.write(' set sel [atomselect top "'+DB2_i[1]+' or '+DB2_i[2]+' or '+DB2_i[3]+' or '+DB2_i[4]+' " frame $i]\n')
f7.write(' $sel move [measure fit $sel $f1]\n')
f7.write(' set time [expr {$i +1}]\n')
f7.write(' puts -nonewline $outfile "[measure rmsd $sel $f1]"\n')
f7.write(' puts $outfile " $time"\n')
f7.write('}\n')
f7.write('close $outfile')
f7.close()
    #Creating the script for DB2_x3m
psf=ruta_old_traj+'/'+psf_file
dcd=ruta_old_traj+'/'+dcd_file
print(psf)
f8 = open('DB2_x3m.tcl', 'w')
print(f8)
f8.write('set psfFile '+ psf+' \n')
f8.write('set dcdFile '+ dcd+' \n')
f8.write('\nmol load psf $psfFile dcd $dcdFile\n')
f8.write('set outfile ' +'[open ' +'db2_x3m.dat'+' w]\n')
f8.write('set nf [molinfo top get numframes]\n')
f8.write('\n#RMSD calculation loop\n')
f8.write('set f1 [atomselect top "'+DB2_i[2]+' or '+DB2_i[3]+' or '+DB2_i[4]+' or '+DB2_i[5]+' " frame 0]\n')
f8.write('for {set i 0} {$i < $nf} {incr i 1} {\n')
f8.write(' set sel [atomselect top "'+DB2_i[2]+' or '+DB2_i[3]+' or '+DB2_i[4]+' or '+DB2_i[5]+' " frame $i]\n')
f8.write(' $sel move [measure fit $sel $f1]\n')
f8.write(' set time [expr {$i +1}]\n')
f8.write(' puts -nonewline $outfile "[measure rmsd $sel $f1]"\n')
f8.write(' puts $outfile " $time"\n')
f8.write('}\n')
f8.write('close $outfile')
f8.close()
    #Creating the script for DB2_x2r
psf=ruta_old_traj+'/'+psf_file
dcd=ruta_old_traj+'/'+dcd_file
print(psf)
f9 = open('DB2_x2r.tcl', 'w')
print(f9)
f9.write('set psfFile '+ psf+' \n')
f9.write('set dcdFile '+ dcd+' \n')
f9.write('\nmol load psf $psfFile dcd $dcdFile\n')
f9.write('set outfile ' +'[open ' +'db2_x2r.dat'+' w]\n')
f9.write('set nf [molinfo top get numframes]\n')
f9.write('\n#RMSD calculation loop\n')
f9.write('set f1 [atomselect top "'+DB2_i[3]+' or '+DB2_i[4]+' or '+DB2_i[5]+' or '+DB2_i[6]+' " frame 0]\n')
f9.write('for {set i 0} {$i < $nf} {incr i 1} {\n')
f9.write(' set sel [atomselect top "'+DB2_i[3]+' or '+DB2_i[4]+' or '+DB2_i[5]+' or '+DB2_i[6]+' " frame $i]\n')
f9.write(' $sel move [measure fit $sel $f1]\n')
f9.write(' set time [expr {$i +1}]\n')
f9.write(' puts -nonewline $outfile "[measure rmsd $sel $f1]"\n')
f9.write(' puts $outfile " $time"\n')
f9.write('}\n')
f9.write('close $outfile')
f9.close()
    #Creating the script for DB2_x1r
psf=ruta_old_traj+'/'+psf_file
dcd=ruta_old_traj+'/'+dcd_file
print(psf)
f10 = open('DB2_x1r.tcl', 'w')
print(f10)
f10.write('set psfFile '+ psf+' \n')
f10.write('set dcdFile '+ dcd+' \n')
f10.write('\nmol load psf $psfFile dcd $dcdFile\n')
f10.write('set outfile ' +'[open ' +'db2_x1r.dat'+' w]\n')
f10.write('set nf [molinfo top get numframes]\n')
f10.write('\n#RMSD calculation loop\n')
f10.write('set f1 [atomselect top "'+DB2_i[4]+' or '+DB2_i[5]+' or '+DB2_i[6]+' or '+DB2_i[7]+' " frame 0]\n')
f10.write('for {set i 0} {$i < $nf} {incr i 1} {\n')
f10.write(' set sel [atomselect top "'+DB2_i[4]+' or '+DB2_i[5]+' or '+DB2_i[6]+' or '+DB2_i[7]+' " frame $i]\n')
f10.write(' $sel move [measure fit $sel $f1]\n')
f10.write(' set time [expr {$i +1}]\n')
f10.write(' puts -nonewline $outfile "[measure rmsd $sel $f1]"\n')
f10.write(' puts $outfile " $time"\n')
f10.write('}\n')
f10.write('close $outfile')
f10.close()
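# The ten writer blocks above repeat the same template; below is a minimal
# refactoring sketch, not used above (the function name and argument order
# are hypothetical):
def write_rmsd_tcl(tcl_name, dat_name, selections, psf, dcd):
    """Write a VMD Tcl script that fits every frame to frame 0 over the
    combined selection and prints "<rmsd> <frame>" per line."""
    sel = ' or '.join(selections)
    with open(tcl_name, 'w') as fh:
        fh.write('set psfFile ' + psf + ' \n')
        fh.write('set dcdFile ' + dcd + ' \n')
        fh.write('\nmol load psf $psfFile dcd $dcdFile\n')
        fh.write('set outfile [open ' + dat_name + ' w]\n')
        fh.write('set nf [molinfo top get numframes]\n')
        fh.write('\n#RMSD calculation loop\n')
        fh.write('set f1 [atomselect top "' + sel + '" frame 0]\n')
        fh.write('for {set i 0} {$i < $nf} {incr i 1} {\n')
        fh.write('    set sel [atomselect top "' + sel + '" frame $i]\n')
        fh.write('    $sel move [measure fit $sel $f1]\n')
        fh.write('    set time [expr {$i +1}]\n')
        fh.write('    puts -nonewline $outfile "[measure rmsd $sel $f1]"\n')
        fh.write('    puts $outfile " $time"\n')
        fh.write('}\n')
        fh.write('close $outfile')
# e.g. write_rmsd_tcl('DB1_x1l.tcl', 'db1_x1l.dat', DB1_i[0:4], psf, dcd)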
if (revisa1>0):
    #Computing the DB1 X1L RMSD with VMD
    !vmd -dispdev text < DB1_x1l.tcl
    #Computing the DB1 X2L RMSD with VMD
    !vmd -dispdev text < DB1_x2l.tcl
    #Computing the DB1 X3M RMSD with VMD
    !vmd -dispdev text < DB1_x3m.tcl
    #Computing the DB1 X2R RMSD with VMD
    !vmd -dispdev text < DB1_x2r.tcl
    #Computing the DB1 X1R RMSD with VMD
    !vmd -dispdev text < DB1_x1r.tcl
if (revisa2>0):
    #Computing the DB2 X1L RMSD with VMD
    !vmd -dispdev text < DB2_x1l.tcl
    #Computing the DB2 X2L RMSD with VMD
    !vmd -dispdev text < DB2_x2l.tcl
    #Computing the DB2 X3M RMSD with VMD
    !vmd -dispdev text < DB2_x3m.tcl
    #Computing the DB2 X2R RMSD with VMD
    !vmd -dispdev text < DB2_x2r.tcl
    #Computing the DB2 X1R RMSD with VMD
    !vmd -dispdev text < DB2_x1r.tcl
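# Unit conventions used in the plots below: the frame index is multiplied by
# 0.02 to give time in ns (assumption: one saved frame every 20 ps), and the
# RMSD reported by VMD in angstroms is divided by 10 to give nm.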
escale_y=[]
fig = pl.figure(figsize=(25,8))
fig.subplots_adjust(hspace=.4, wspace=0.3)
#Formatting the axis tick values
#Thickening the panel frames
for i in range(1, 11):
    ax = fig.add_subplot(2, 5, i)
    for axis in ['top','bottom','left','right']:
        ax.spines[axis].set_linewidth(3)
ax.yaxis.set_major_formatter(FormatStrFormatter('%.2f'))
if (revisa1>0):
    #DB1 data
data_db1_x1l=np.loadtxt('db1_x1l.dat',comments=['#', '@'])
data_db1_x2l=np.loadtxt('db1_x2l.dat',comments=['#', '@'])
data_db1_x3m=np.loadtxt('db1_x3m.dat',comments=['#', '@'])
data_db1_x2r=np.loadtxt('db1_x2r.dat',comments=['#', '@'])
data_db1_x1r=np.loadtxt('db1_x1r.dat',comments=['#', '@'])
    sub1 = fig.add_subplot(251) # equivalent to plt.subplot(2, 5, 1)
#sub1.set_title('DB1_X1L')
sub1.set_xlabel('Time (ns)')
sub1.set_ylabel('RMSD (nm)')
sub1.plot(data_db1_x1l[:,1]*0.02, data_db1_x1l[:,0]/10, color='black', linewidth = 1, label='DB1_X1L')
x1,x2,y1,y2=sub1.axis()
escale_y.append(y2)
sub2 = fig.add_subplot(252)
#sub2.set_title('DB1_X2L')
sub2.set_xlabel('Time (ns)')
sub2.set_ylabel('RMSD (nm)')
sub2.plot(data_db1_x2l[:,1]*0.02, data_db1_x2l[:,0]/10, color='black', linewidth = 1, label='DB1_X2L')
x1,x2,y1,y2=sub2.axis()
escale_y.append(y2)
sub3 = fig.add_subplot(253)
#sub3.set_title('DB1_X3M')
sub3.set_xlabel('Time (ns)')
sub3.set_ylabel('RMSD (nm)')
sub3.plot(data_db1_x3m[:,1]*0.02, data_db1_x3m[:,0]/10, color='black', linewidth = 1, label='DB1_X3M')
x1,x2,y1,y2=sub3.axis()
escale_y.append(y2)
sub4 = fig.add_subplot(254)
#sub4.set_title('DB1_X2R')
sub4.set_xlabel('Time (ns)')
sub4.set_ylabel('RMSD (nm)')
sub4.plot(data_db1_x2r[:,1]*0.02, data_db1_x2r[:,0]/10, color='black', linewidth = 1, label='DB1_X2R')
x1,x2,y1,y2=sub4.axis()
escale_y.append(y2)
sub5 = fig.add_subplot(255)
#sub5.set_title('DB1_X1R')
sub5.set_xlabel('Time (ns)')
sub5.set_ylabel('RMSD (nm)')
sub5.plot(data_db1_x1r[:,1]*0.02, data_db1_x1r[:,0]/10, color='black', linewidth = 1, label='DB1_X1R')
x1,x2,y1,y2=sub5.axis()
escale_y.append(y2)
if (revisa2>0):
    #DB2 data
data_db2_x1l=np.loadtxt('db2_x1l.dat',comments=['#', '@'])
data_db2_x2l=np.loadtxt('db2_x2l.dat',comments=['#', '@'])
data_db2_x3m=np.loadtxt('db2_x3m.dat',comments=['#', '@'])
data_db2_x2r=np.loadtxt('db2_x2r.dat',comments=['#', '@'])
data_db2_x1r=np.loadtxt('db2_x1r.dat',comments=['#', '@'])
    #Plotting DB2
sub6 = fig.add_subplot(256)
#sub6.set_title('DB2_X1L')
sub6.set_xlabel('Time (ns)')
sub6.set_ylabel('RMSD (nm)')
sub6.plot(data_db2_x1l[:,1]*0.02, data_db2_x1l[:,0]/10, color='red', linewidth = 1, label='DB2_X1L')
x1,x2,y1,y2=sub6.axis()
escale_y.append(y2)
sub7 = fig.add_subplot(257)
#sub7.set_title('DB2_X2L')
sub7.set_xlabel('Time (ns)')
sub7.set_ylabel('RMSD (nm)')
sub7.plot(data_db2_x2l[:,1]*0.02, data_db2_x2l[:,0]/10, color='red', linewidth = 1, label='DB2_X2L')
x1,x2,y1,y2=sub7.axis()
escale_y.append(y2)
sub8 = fig.add_subplot(258)
#sub8.set_title('DB2_X3M')
sub8.set_xlabel('Time (ns)')
sub8.set_ylabel('RMSD (nm)')
sub8.plot(data_db2_x3m[:,1]*0.02, data_db2_x3m[:,0]/10, color='red', linewidth = 1, label='DB2_X3M')
x1,x2,y1,y2=sub8.axis()
escale_y.append(y2)
sub9 = fig.add_subplot(259)
#sub9.set_title('DB2_X2R')
sub9.set_xlabel('Time (ns)')
sub9.set_ylabel('RMSD (nm)')
sub9.plot(data_db2_x2r[:,1]*0.02, data_db2_x2r[:,0]/10, color='red', linewidth = 1, label='DB2_X2R')
x1,x2,y1,y2=sub9.axis()
escale_y.append(y2)
sub10 = fig.add_subplot(2,5,10)
#sub10.set_title('DB2_X1R')
sub10.set_xlabel('Time (ns)')
sub10.set_ylabel('RMSD (nm)')
sub10.plot(data_db2_x1r[:,1]*0.02, data_db2_x1r[:,0]/10, color='red', linewidth = 1, label='DB2_X1R')
x1,x2,y1,y2=sub10.axis()
escale_y.append(y2)
#escale_y
escale_y.sort(reverse=True)
escale_y
##Rescaling all panels to the common y maximum
# guarded: sub1..sub5 / sub6..sub10 only exist when the matching section ran
if (revisa1>0):
    sub1.axis((x1,x2,y1,escale_y[0]))
    sub2.axis((x1,x2,y1,escale_y[0]))
    sub3.axis((x1,x2,y1,escale_y[0]))
    sub4.axis((x1,x2,y1,escale_y[0]))
    sub5.axis((x1,x2,y1,escale_y[0]))
if (revisa2>0):
    sub6.axis((x1,x2,y1,escale_y[0]))
    sub7.axis((x1,x2,y1,escale_y[0]))
    sub8.axis((x1,x2,y1,escale_y[0]))
    sub9.axis((x1,x2,y1,escale_y[0]))
    sub10.axis((x1,x2,y1,escale_y[0]))
### Creating the directory for the intramolecular dihedral analysis of the bridges
ruta_diedros = nuevaruta+'/diedros_intra'
print ( ruta_diedros )
if not os.path.exists(ruta_diedros):
    os.makedirs(ruta_diedros)
    print ('Created the path ===>',ruta_diedros)
else:
    print ("The path "+ruta_diedros+" already exists!")
print ( 'Moving to ....', ruta_diedros)
os.chdir( ruta_diedros )
psf=ruta_old_traj+'/'+psf_file
dcd=ruta_old_traj+'/'+dcd_file
if (revisa1>0):
    #Creating the script for DB1_x1l
d1 = open('dihed_DB1_x1l.tcl', 'w')
print(d1)
d1.write('set psfFile '+ psf+' \n')
d1.write('set dcdFile '+ dcd+' \n')
d1.write('\nmol load psf $psfFile dcd $dcdFile\n')
d1.write('set outfile ' +'[open ' +'dihed_db1_x1l.dat'+' w]\n')
d1.write('set nf [molinfo top get numframes]\n')
d1.write(' \n')
d1.write('set selatoms1 [[atomselect top "protein and chain A and '+DB1_i[0]+'"] get index]\n')
d1.write('set selatoms2 [[atomselect top "protein and chain A and '+DB1_i[1]+'"] get index]\n')
d1.write('set selatoms3 [[atomselect top "protein and chain A and '+DB1_i[2]+'"] get index]\n')
d1.write('set selatoms4 [[atomselect top "protein and chain A and '+DB1_i[3]+'"] get index]\n')
d1.write('set dihed [list [lindex $selatoms1] [lindex $selatoms2] [lindex $selatoms3] [lindex $selatoms4] ]\n')
d1.write('for {set i 0} {$i < $nf} {incr i 1} {\n')
d1.write(' set x [measure dihed $dihed frame $i]\n')
d1.write(' set time [expr {$i +1}]\n')
d1.write(' puts $outfile "$time $x"\n')
d1.write('}\n')
d1.close()
    #Creating the script for DB1_x2l
d2 = open('dihed_DB1_x2l.tcl', 'w')
print(d2)
d2.write('set psfFile '+ psf+' \n')
d2.write('set dcdFile '+ dcd+' \n')
d2.write('\nmol load psf $psfFile dcd $dcdFile\n')
d2.write('set outfile ' +'[open ' +'dihed_db1_x2l.dat'+' w]\n')
d2.write('set nf [molinfo top get numframes]\n')
d2.write(' \n')
d2.write('set selatoms1 [[atomselect top "protein and chain A and '+DB1_i[1]+'"] get index]\n')
d2.write('set selatoms2 [[atomselect top "protein and chain A and '+DB1_i[2]+'"] get index]\n')
d2.write('set selatoms3 [[atomselect top "protein and chain A and '+DB1_i[3]+'"] get index]\n')
d2.write('set selatoms4 [[atomselect top "protein and chain A and '+DB1_i[4]+'"] get index]\n')
d2.write('set dihed [list [lindex $selatoms1] [lindex $selatoms2] [lindex $selatoms3] [lindex $selatoms4] ]\n')
d2.write('for {set i 0} {$i < $nf} {incr i 1} {\n')
d2.write(' set x [measure dihed $dihed frame $i]\n')
d2.write(' set time [expr {$i +1}]\n')
d2.write(' puts $outfile "$time $x"\n')
d2.write('}\n')
d2.close()
    #Creating the script for DB1_x3m
d3 = open('dihed_DB1_x3m.tcl', 'w')
print(d3)
d3.write('set psfFile '+ psf+' \n')
d3.write('set dcdFile '+ dcd+' \n')
d3.write('\nmol load psf $psfFile dcd $dcdFile\n')
d3.write('set outfile ' +'[open ' +'dihed_db1_x3m.dat'+' w]\n')
d3.write('set nf [molinfo top get numframes]\n')
d3.write(' \n')
d3.write('set selatoms1 [[atomselect top "protein and chain A and '+DB1_i[2]+'"] get index]\n')
d3.write('set selatoms2 [[atomselect top "protein and chain A and '+DB1_i[3]+'"] get index]\n')
d3.write('set selatoms3 [[atomselect top "protein and chain A and '+DB1_i[4]+'"] get index]\n')
d3.write('set selatoms4 [[atomselect top "protein and chain A and '+DB1_i[5]+'"] get index]\n')
d3.write('set dihed [list [lindex $selatoms1] [lindex $selatoms2] [lindex $selatoms3] [lindex $selatoms4] ]\n')
d3.write('for {set i 0} {$i < $nf} {incr i 1} {\n')
d3.write(' set x [measure dihed $dihed frame $i]\n')
d3.write(' set time [expr {$i +1}]\n')
d3.write(' puts $outfile "$time $x"\n')
d3.write('}\n')
d3.close()
    #Creating the script for DB1_x2r
d4 = open('dihed_DB1_x2r.tcl', 'w')
print(d4)
d4.write('set psfFile '+ psf+' \n')
d4.write('set dcdFile '+ dcd+' \n')
d4.write('\nmol load psf $psfFile dcd $dcdFile\n')
d4.write('set outfile ' +'[open ' +'dihed_db1_x2r.dat'+' w]\n')
d4.write('set nf [molinfo top get numframes]\n')
d4.write(' \n')
d4.write('set selatoms1 [[atomselect top "protein and chain A and '+DB1_i[3]+'"] get index]\n')
d4.write('set selatoms2 [[atomselect top "protein and chain A and '+DB1_i[4]+'"] get index]\n')
d4.write('set selatoms3 [[atomselect top "protein and chain A and '+DB1_i[5]+'"] get index]\n')
d4.write('set selatoms4 [[atomselect top "protein and chain A and '+DB1_i[6]+'"] get index]\n')
d4.write('set dihed [list [lindex $selatoms1] [lindex $selatoms2] [lindex $selatoms3] [lindex $selatoms4] ]\n')
d4.write('for {set i 0} {$i < $nf} {incr i 1} {\n')
d4.write(' set x [measure dihed $dihed frame $i]\n')
d4.write(' set time [expr {$i +1}]\n')
d4.write(' puts $outfile "$time $x"\n')
d4.write('}\n')
d4.close()
    #Creating the script for DB1_x1r
d5 = open('dihed_DB1_x1r.tcl', 'w')
print(d5)
d5.write('set psfFile '+ psf+' \n')
d5.write('set dcdFile '+ dcd+' \n')
d5.write('\nmol load psf $psfFile dcd $dcdFile\n')
d5.write('set outfile ' +'[open ' +'dihed_db1_x1r.dat'+' w]\n')
d5.write('set nf [molinfo top get numframes]\n')
d5.write(' \n')
d5.write('set selatoms1 [[atomselect top "protein and chain A and '+DB1_i[4]+'"] get index]\n')
d5.write('set selatoms2 [[atomselect top "protein and chain A and '+DB1_i[5]+'"] get index]\n')
d5.write('set selatoms3 [[atomselect top "protein and chain A and '+DB1_i[6]+'"] get index]\n')
d5.write('set selatoms4 [[atomselect top "protein and chain A and '+DB1_i[7]+'"] get index]\n')
d5.write('set dihed [list [lindex $selatoms1] [lindex $selatoms2] [lindex $selatoms3] [lindex $selatoms4] ]\n')
d5.write('for {set i 0} {$i < $nf} {incr i 1} {\n')
d5.write(' set x [measure dihed $dihed frame $i]\n')
d5.write(' set time [expr {$i +1}]\n')
d5.write(' puts $outfile "$time $x"\n')
d5.write('}\n')
d5.close()
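# As with the RMSD scripts, the dihedral writers above share one template; a
# refactoring sketch follows (hypothetical helper, not used above). Note it
# also closes the Tcl output channel, which the scripts above leave to VMD to
# do at exit:
def write_dihed_tcl(tcl_name, dat_name, atom_sels, psf, dcd):
    """Write a VMD Tcl script that measures the dihedral defined by four
    single-atom selections for every frame, printing "<frame> <angle>"."""
    with open(tcl_name, 'w') as fh:
        fh.write('set psfFile ' + psf + ' \n')
        fh.write('set dcdFile ' + dcd + ' \n')
        fh.write('\nmol load psf $psfFile dcd $dcdFile\n')
        fh.write('set outfile [open ' + dat_name + ' w]\n')
        fh.write('set nf [molinfo top get numframes]\n\n')
        for k, sel in enumerate(atom_sels, start=1):
            fh.write('set selatoms%d [[atomselect top "protein and chain A and %s"] get index]\n' % (k, sel))
        fh.write('set dihed [list [lindex $selatoms1] [lindex $selatoms2] [lindex $selatoms3] [lindex $selatoms4] ]\n')
        fh.write('for {set i 0} {$i < $nf} {incr i 1} {\n')
        fh.write('    set x [measure dihed $dihed frame $i]\n')
        fh.write('    set time [expr {$i +1}]\n')
        fh.write('    puts $outfile "$time $x"\n')
        fh.write('}\n')
        fh.write('close $outfile\n')
# e.g. write_dihed_tcl('dihed_DB1_x1l.tcl', 'dihed_db1_x1l.dat', DB1_i[0:4], psf, dcd)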
if (revisa2>0):
    #####################################################################
    ########## Bridge 2
    #####################################################################
    #Creating the script for DB2_x1l
d6 = open('dihed_DB2_x1l.tcl', 'w')
print(d6)
d6.write('set psfFile '+ psf+' \n')
d6.write('set dcdFile '+ dcd+' \n')
d6.write('\nmol load psf $psfFile dcd $dcdFile\n')
d6.write('set outfile ' +'[open ' +'dihed_db2_x1l.dat'+' w]\n')
d6.write('set nf [molinfo top get numframes]\n')
d6.write(' \n')
d6.write('set selatoms1 [[atomselect top "protein and chain A and '+DB2_i[0]+'"] get index]\n')
d6.write('set selatoms2 [[atomselect top "protein and chain A and '+DB2_i[1]+'"] get index]\n')
d6.write('set selatoms3 [[atomselect top "protein and chain A and '+DB2_i[2]+'"] get index]\n')
d6.write('set selatoms4 [[atomselect top "protein and chain A and '+DB2_i[3]+'"] get index]\n')
d6.write('set dihed [list [lindex $selatoms1] [lindex $selatoms2] [lindex $selatoms3] [lindex $selatoms4] ]\n')
d6.write('for {set i 0} {$i < $nf} {incr i 1} {\n')
d6.write(' set x [measure dihed $dihed frame $i]\n')
d6.write(' set time [expr {$i +1}]\n')
d6.write(' puts $outfile "$time $x"\n')
d6.write('}\n')
d6.close()
    #Creating the script for DB2_x2l
d7 = open('dihed_DB2_x2l.tcl', 'w')
print(d7)
d7.write('set psfFile '+ psf+' \n')
d7.write('set dcdFile '+ dcd+' \n')
d7.write('\nmol load psf $psfFile dcd $dcdFile\n')
d7.write('set outfile ' +'[open ' +'dihed_db2_x2l.dat'+' w]\n')
d7.write('set nf [molinfo top get numframes]\n')
d7.write(' \n')
d7.write('set selatoms1 [[atomselect top "protein and chain A and '+DB2_i[1]+'"] get index]\n')
d7.write('set selatoms2 [[atomselect top "protein and chain A and '+DB2_i[2]+'"] get index]\n')
d7.write('set selatoms3 [[atomselect top "protein and chain A and '+DB2_i[3]+'"] get index]\n')
d7.write('set selatoms4 [[atomselect top "protein and chain A and '+DB2_i[4]+'"] get index]\n')
d7.write('set dihed [list [lindex $selatoms1] [lindex $selatoms2] [lindex $selatoms3] [lindex $selatoms4] ]\n')
d7.write('for {set i 0} {$i < $nf} {incr i 1} {\n')
d7.write(' set x [measure dihed $dihed frame $i]\n')
d7.write(' set time [expr {$i +1}]\n')
d7.write(' puts $outfile "$time $x"\n')
d7.write('}\n')
d7.close()
    #Creating the script for DB2_x3m
d8 = open('dihed_DB2_x3m.tcl', 'w')
print(d8)
d8.write('set psfFile '+ psf+' \n')
d8.write('set dcdFile '+ dcd+' \n')
d8.write('\nmol load psf $psfFile dcd $dcdFile\n')
d8.write('set outfile ' +'[open ' +'dihed_db2_x3m.dat'+' w]\n')
d8.write('set nf [molinfo top get numframes]\n')
d8.write(' \n')
d8.write('set selatoms1 [[atomselect top "protein and chain A and '+DB2_i[2]+'"] get index]\n')
d8.write('set selatoms2 [[atomselect top "protein and chain A and '+DB2_i[3]+'"] get index]\n')
d8.write('set selatoms3 [[atomselect top "protein and chain A and '+DB2_i[4]+'"] get index]\n')
d8.write('set selatoms4 [[atomselect top "protein and chain A and '+DB2_i[5]+'"] get index]\n')
d8.write('set dihed [list [lindex $selatoms1] [lindex $selatoms2] [lindex $selatoms3] [lindex $selatoms4] ]\n')
d8.write('for {set i 0} {$i < $nf} {incr i 1} {\n')
d8.write(' set x [measure dihed $dihed frame $i]\n')
d8.write(' set time [expr {$i +1}]\n')
d8.write(' puts $outfile "$time $x"\n')
d8.write('}\n')
d8.close()
    #Creating the script for DB2_x2r
d9 = open('dihed_DB2_x2r.tcl', 'w')
print(d9)
d9.write('set psfFile '+ psf+' \n')
d9.write('set dcdFile '+ dcd+' \n')
d9.write('\nmol load psf $psfFile dcd $dcdFile\n')
d9.write('set outfile ' +'[open ' +'dihed_db2_x2r.dat'+' w]\n')
d9.write('set nf [molinfo top get numframes]\n')
d9.write(' \n')
d9.write('set selatoms1 [[atomselect top "protein and chain A and '+DB2_i[3]+'"] get index]\n')
d9.write('set selatoms2 [[atomselect top "protein and chain A and '+DB2_i[4]+'"] get index]\n')
d9.write('set selatoms3 [[atomselect top "protein and chain A and '+DB2_i[5]+'"] get index]\n')
d9.write('set selatoms4 [[atomselect top "protein and chain A and '+DB2_i[6]+'"] get index]\n')
d9.write('set dihed [list [lindex $selatoms1] [lindex $selatoms2] [lindex $selatoms3] [lindex $selatoms4] ]\n')
d9.write('for {set i 0} {$i < $nf} {incr i 1} {\n')
d9.write(' set x [measure dihed $dihed frame $i]\n')
d9.write(' set time [expr {$i +1}]\n')
d9.write(' puts $outfile "$time $x"\n')
d9.write('}\n')
d9.close()
    #Creating the script for DB2_x1r
d10 = open('dihed_DB2_x1r.tcl', 'w')
print(d10)
d10.write('set psfFile '+ psf+' \n')
d10.write('set dcdFile '+ dcd+' \n')
d10.write('\nmol load psf $psfFile dcd $dcdFile\n')
d10.write('set outfile ' +'[open ' +'dihed_db2_x1r.dat'+' w]\n')
d10.write('set nf [molinfo top get numframes]\n')
d10.write(' \n')
d10.write('set selatoms1 [[atomselect top "protein and chain A and '+DB2_i[4]+'"] get index]\n')
d10.write('set selatoms2 [[atomselect top "protein and chain A and '+DB2_i[5]+'"] get index]\n')
d10.write('set selatoms3 [[atomselect top "protein and chain A and '+DB2_i[6]+'"] get index]\n')
d10.write('set selatoms4 [[atomselect top "protein and chain A and '+DB2_i[7]+'"] get index]\n')
d10.write('set dihed [list [lindex $selatoms1] [lindex $selatoms2] [lindex $selatoms3] [lindex $selatoms4] ]\n')
d10.write('for {set i 0} {$i < $nf} {incr i 1} {\n')
d10.write(' set x [measure dihed $dihed frame $i]\n')
d10.write(' set time [expr {$i +1}]\n')
d10.write(' puts $outfile "$time $x"\n')
d10.write('}\n')
d10.close()
if (revisa1>0):
    #Computing the DB1 X1L dihedral with VMD
    !vmd -dispdev text < dihed_DB1_x1l.tcl
    #Computing the DB1 X2L dihedral with VMD
    !vmd -dispdev text < dihed_DB1_x2l.tcl
    #Computing the DB1 X3M dihedral with VMD
    !vmd -dispdev text < dihed_DB1_x3m.tcl
    #Computing the DB1 X2R dihedral with VMD
    !vmd -dispdev text < dihed_DB1_x2r.tcl
    #Computing the DB1 X1R dihedral with VMD
    !vmd -dispdev text < dihed_DB1_x1r.tcl
if (revisa2>0):
    #Computing the DB2 X1L dihedral with VMD
    !vmd -dispdev text < dihed_DB2_x1l.tcl
    #Computing the DB2 X2L dihedral with VMD
    !vmd -dispdev text < dihed_DB2_x2l.tcl
    #Computing the DB2 X3M dihedral with VMD
    !vmd -dispdev text < dihed_DB2_x3m.tcl
    #Computing the DB2 X2R dihedral with VMD
    !vmd -dispdev text < dihed_DB2_x2r.tcl
    #Computing the DB2 X1R dihedral with VMD
    !vmd -dispdev text < dihed_DB2_x1r.tcl
print ('\nCopying generateFES.py to '+ruta_diedros)
source_file=ruta_f_energy+'/generateFES.py'
dest_file=ruta_diedros+'/generateFES.py'
shutil.copy(source_file,dest_file)
#Setting execute permissions
!chmod +x generateFES.py
if (revisa1>0):
    #Loading the DB1_X1L values
    data_db1_x1l=np.loadtxt('dihed_db1_x1l.dat',comments=['#', '@'])
    #Loading the DB1_X1R values
    data_db1_x1r=np.loadtxt('dihed_db1_x1r.dat',comments=['#', '@'])
    #Getting the minimum and maximum of DB1_X1L
    min_x1l=np.amin(data_db1_x1l[:,1])
    max_x1l=np.amax(data_db1_x1l[:,1])
    print ('Minimum DB1_X1L=>',min_x1l)
    print ('Maximum DB1_X1L=>',max_x1l)
    #Getting the minimum and maximum of DB1_X1R
    min_x1r=np.amin(data_db1_x1r[:,1])
    max_x1r=np.amax(data_db1_x1r[:,1])
    print ('Minimum DB1_X1R=>',min_x1r)
    print ('Maximum DB1_X1R=>',max_x1r)
    #Creating the input files for the script
    np.savetxt('db1_x1l.dat',data_db1_x1l[:,1], fmt='%1.14f')
    np.savetxt('db1_x1r.dat',data_db1_x1r[:,1], fmt='%1.14f')
    !paste db1_x1l.dat db1_x1r.dat > DB1_x1_lr.dat
    #Running the FES script
    !python generateFES.py DB1_x1_lr.dat $min_x1l $max_x1l $min_x1r $max_x1r 200 200 $temperatura XL1_XR1.dat
    ###################################################################
    #Loading the DB1_X2L values
    data_db1_x2l=np.loadtxt('dihed_db1_x2l.dat',comments=['#', '@'])
    #Loading the DB1_X2R values
    data_db1_x2r=np.loadtxt('dihed_db1_x2r.dat',comments=['#', '@'])
    #Getting the minimum and maximum of DB1_X2L
    min_x2l=np.amin(data_db1_x2l[:,1])
    max_x2l=np.amax(data_db1_x2l[:,1])
    print ('Minimum DB1_X2L=>',min_x2l)
    print ('Maximum DB1_X2L=>',max_x2l)
    #Getting the minimum and maximum of DB1_X2R
    min_x2r=np.amin(data_db1_x2r[:,1])
    max_x2r=np.amax(data_db1_x2r[:,1])
    print ('Minimum DB1_X2R=>',min_x2r)
    print ('Maximum DB1_X2R=>',max_x2r)
    #Creating the input files for the script
    np.savetxt('db1_x2l.dat',data_db1_x2l[:,1], fmt='%1.14f')
    np.savetxt('db1_x2r.dat',data_db1_x2r[:,1], fmt='%1.14f')
    !paste db1_x2l.dat db1_x2r.dat > DB1_x2_lr.dat
    #Running the FES script
    !python generateFES.py DB1_x2_lr.dat $min_x2l $max_x2l $min_x2r $max_x2r 200 200 $temperatura XL2_XR2.dat
    ######################################################################################
    #Generating the files for X3M
    data_db1_x3m=np.loadtxt('dihed_db1_x3m.dat',comments=['#', '@'])
    #Getting the minimum and maximum of DB1_X3M
    min_x3m=np.amin(data_db1_x3m[:,1])
    max_x3m=np.amax(data_db1_x3m[:,1])
    print ('Minimum DB1_X3M=>',min_x3m)
    print ('Maximum DB1_X3M=>',max_x3m)
    print ('Minimum DB1_X1L=>',min_x1l)
    print ('Maximum DB1_X1L=>',max_x1l)
    print ('Minimum DB1_X2L=>',min_x2l)
    print ('Maximum DB1_X2L=>',max_x2l)
    print ('Minimum DB1_X1R=>',min_x1r)
    print ('Maximum DB1_X1R=>',max_x1r)
    print ('Minimum DB1_X2R=>',min_x2r)
    print ('Maximum DB1_X2R=>',max_x2r)
    #Creating the input files for the script
    np.savetxt('db1_x3m.dat',data_db1_x3m[:,1], fmt='%1.14f')
    !paste db1_x3m.dat db1_x1l.dat > DB1_x3m_x1l.dat
    !paste db1_x3m.dat db1_x2l.dat > DB1_x3m_x2l.dat
    !paste db1_x3m.dat db1_x1r.dat > DB1_x3m_x1r.dat
    !paste db1_x3m.dat db1_x2r.dat > DB1_x3m_x2r.dat
    #Running the FES script
    !python generateFES.py DB1_x3m_x1l.dat $min_x3m $max_x3m $min_x1l $max_x1l 200 200 $temperatura XM3_XL1.dat
    !python generateFES.py DB1_x3m_x2l.dat $min_x3m $max_x3m $min_x2l $max_x2l 200 200 $temperatura XM3_XL2.dat
    !python generateFES.py DB1_x3m_x1r.dat $min_x3m $max_x3m $min_x1r $max_x1r 200 200 $temperatura XM3_XR1.dat
    !python generateFES.py DB1_x3m_x2r.dat $min_x3m $max_x3m $min_x2r $max_x2r 200 200 $temperatura XM3_XR2.dat
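# For reference, a minimal sketch of the free-energy-surface construction that
# generateFES.py is assumed to perform: histogram the two dihedral series on a
# 200x200 grid and take F = -kB*T*ln(P), shifted so the global minimum is zero.
# The unit choice (kcal/mol) and grid handling are assumptions; the bundled
# script remains the authoritative implementation.
import numpy as np

def fes_2d(x, y, bins=200, temperature=300.0):
    kB = 0.0019872041  # Boltzmann constant in kcal/(mol K)
    H, xedges, yedges = np.histogram2d(x, y, bins=bins, density=True)
    with np.errstate(divide='ignore'):
        F = -kB * temperature * np.log(H)  # empty bins become +inf
    F -= F[np.isfinite(F)].min()  # shift the surface minimum to zero
    return F, xedges, yedges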
# This loads the magics for gnuplot
%reload_ext gnuplot_kernel
#Configuring the gnuplot output
%gnuplot inline pngcairo transparent enhanced font "arial,20" fontscale 1.0 size 1280,960; set zeroaxis;;
%%gnuplot
set output "db1_xl1_vs_xr1.png"
set palette model RGB
set palette defined ( 0 '#000090',\
1 '#000fff',\
2 '#0090ff',\
3 '#0fffee',\
4 '#90ff70',\
5 '#ffee00',\
6 '#ff7000',\
7 '#ee0000',\
8 '#7f0000')
set view map
set dgrid3d
set pm3d interpolate 0,0
set xlabel "{/=30 X@^L_1}"
set ylabel "{/=30 X@^R_1}"
set title "Free Energy Surface Intramolecular DB1"
##Uncomment the following line if the color scale starts at 1 and run again
#set cbrange[8:10]
splot "XL1_XR1.dat" with pm3d
# This loads the magics for gnuplot
%reload_ext gnuplot_kernel
#Configuring the gnuplot output
%gnuplot inline pngcairo transparent enhanced font "arial,20" fontscale 1.0 size 1280,960; set zeroaxis;;
%%gnuplot
set output "db1_xl2_vs_xr2.png"
set palette model RGB
set palette defined ( 0 '#000090',\
1 '#000fff',\
2 '#0090ff',\
3 '#0fffee',\
4 '#90ff70',\
5 '#ffee00',\
6 '#ff7000',\
7 '#ee0000',\
8 '#7f0000')
set view map
set dgrid3d
set pm3d interpolate 0,0
set xlabel "{/=30 X@^L_2}"
set ylabel "{/=30 X@^R_2}"
set title "Free Energy Surface Intramolecular DB1"
##Uncomment the following line if the color scale starts at 1 and run again
#set cbrange[8:10]
splot "XL2_XR2.dat" with pm3d
# This loads the magics for gnuplot
%reload_ext gnuplot_kernel
#Configuring the gnuplot output
%gnuplot inline pngcairo transparent enhanced font "arial,20" fontscale 1.0 size 1280,960; set zeroaxis;;
%%gnuplot
set output "db1_xm3_vs_xl1.png"
set palette model RGB
set palette defined ( 0 '#000090',\
1 '#000fff',\
2 '#0090ff',\
3 '#0fffee',\
4 '#90ff70',\
5 '#ffee00',\
6 '#ff7000',\
7 '#ee0000',\
8 '#7f0000')
set view map
set dgrid3d
set pm3d interpolate 0,0
set xlabel "{/=30 X@^M_3}"
set ylabel "{/=30 X@^L_1}"
set title "Free Energy Surface Intramolecular DB1"
##Uncomment the following line if the color scale starts at 1 and run again
#set cbrange[8:10]
splot "XM3_XL1.dat" with pm3d
# This loads the magics for gnuplot
%reload_ext gnuplot_kernel
#Configuring the gnuplot output
%gnuplot inline pngcairo transparent enhanced font "arial,20" fontscale 1.0 size 1280,960; set zeroaxis;;
%%gnuplot
set output "db1_xm3_vs_xl2.png"
set palette model RGB
set palette defined ( 0 '#000090',\
1 '#000fff',\
2 '#0090ff',\
3 '#0fffee',\
4 '#90ff70',\
5 '#ffee00',\
6 '#ff7000',\
7 '#ee0000',\
8 '#7f0000')
set view map
set dgrid3d
set pm3d interpolate 0,0
set xlabel "{/=30 X@^M_3}"
set ylabel "{/=30 X@^L_2}"
set title "Free Energy Surface Intramolecular DB1"
##Uncomment the following line if the color scale starts at 1 and run again
#set cbrange[8:10]
splot "XM3_XL2.dat" with pm3d
# This loads the magics for gnuplot
%reload_ext gnuplot_kernel
#Configuring the gnuplot output
%gnuplot inline pngcairo transparent enhanced font "arial,20" fontscale 1.0 size 1280,960; set zeroaxis;;
%%gnuplot
set output "db1_xm3_vs_xr2.png"
set palette model RGB
set palette defined ( 0 '#000090',\
1 '#000fff',\
2 '#0090ff',\
3 '#0fffee',\
4 '#90ff70',\
5 '#ffee00',\
6 '#ff7000',\
7 '#ee0000',\
8 '#7f0000')
set view map
set dgrid3d
set pm3d interpolate 0,0
set xlabel "{/=30 X@^M_3}"
set ylabel "{/=30 X@^R_2}"
set title "Free Energy Surface Intramolecular DB1"
##Uncomment the following line if the color scale starts at 1 and run again
#set cbrange[8:10]
splot "XM3_XR2.dat" with pm3d
# This loads the magics for gnuplot
%reload_ext gnuplot_kernel
#Configuring the gnuplot output
%gnuplot inline pngcairo transparent enhanced font "arial,20" fontscale 1.0 size 1280,960; set zeroaxis;;
%%gnuplot
set output "db1_xm3_vs_xr1.png"
set palette model RGB
set palette defined ( 0 '#000090',\
1 '#000fff',\
2 '#0090ff',\
3 '#0fffee',\
4 '#90ff70',\
5 '#ffee00',\
6 '#ff7000',\
7 '#ee0000',\
8 '#7f0000')
set view map
set dgrid3d
set pm3d interpolate 0,0
set xlabel "{/=30 X@^M_3}"
set ylabel "{/=30 X@^R_1}"
set title "Free Energy Surface Intramolecular DB1"
##Uncomment the following line if the color scale starts at 1 and run again
#set cbrange[8:10]
splot "XM3_XR1.dat" with pm3d
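# The six gnuplot cells above differ only in the output name, data file and
# axis labels; a sketch of a helper that assembles the shared preamble
# (hypothetical, not used above):
def fes_plot_commands(out_png, dat_file, xlabel, ylabel, title):
    palette = ("set palette defined ( 0 '#000090', 1 '#000fff', 2 '#0090ff',"
               " 3 '#0fffee', 4 '#90ff70', 5 '#ffee00', 6 '#ff7000',"
               " 7 '#ee0000', 8 '#7f0000')")
    return '\n'.join([
        'set output "%s"' % out_png,
        'set palette model RGB',
        palette,
        'set view map',
        'set dgrid3d',
        'set pm3d interpolate 0,0',
        'set xlabel "%s"' % xlabel,
        'set ylabel "%s"' % ylabel,
        'set title "%s"' % title,
        'splot "%s" with pm3d' % dat_file,
    ])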
if (revisa2>0):
    #Loading the DB2_X1L values
    data_db2_x1l=np.loadtxt('dihed_db2_x1l.dat',comments=['#', '@'])
    #Loading the DB2_X1R values
    data_db2_x1r=np.loadtxt('dihed_db2_x1r.dat',comments=['#', '@'])
    #Getting the minimum and maximum of DB2_X1L
    min_db2_x1l=np.amin(data_db2_x1l[:,1])
    max_db2_x1l=np.amax(data_db2_x1l[:,1])
    print ('Minimum DB2_X1L=>',min_db2_x1l)
    print ('Maximum DB2_X1L=>',max_db2_x1l)
    #Getting the minimum and maximum of DB2_X1R
    min_db2_x1r=np.amin(data_db2_x1r[:,1])
    max_db2_x1r=np.amax(data_db2_x1r[:,1])
    print ('Minimum DB2_X1R=>',min_db2_x1r)
    print ('Maximum DB2_X1R=>',max_db2_x1r)
    #Creating the input files for the script
    np.savetxt('db2_x1l.dat',data_db2_x1l[:,1], fmt='%1.14f')
    np.savetxt('db2_x1r.dat',data_db2_x1r[:,1], fmt='%1.14f')
    !paste db2_x1l.dat db2_x1r.dat > DB2_x1_lr.dat
    #Running the FES script
    !python generateFES.py DB2_x1_lr.dat $min_db2_x1l $max_db2_x1l $min_db2_x1r $max_db2_x1r 200 200 $temperatura DB2_XL1_XR1.dat
    ###################################################################
    #Loading the DB2_X2L values
    data_db2_x2l=np.loadtxt('dihed_db2_x2l.dat',comments=['#', '@'])
    #Loading the DB2_X2R values
    data_db2_x2r=np.loadtxt('dihed_db2_x2r.dat',comments=['#', '@'])
    #Getting the minimum and maximum of DB2_X2L
    min_db2_x2l=np.amin(data_db2_x2l[:,1])
    max_db2_x2l=np.amax(data_db2_x2l[:,1])
    print ('Minimum DB2_X2L=>',min_db2_x2l)
    print ('Maximum DB2_X2L=>',max_db2_x2l)
    #Getting the minimum and maximum of DB2_X2R
    min_db2_x2r=np.amin(data_db2_x2r[:,1])
    max_db2_x2r=np.amax(data_db2_x2r[:,1])
    print ('Minimum DB2_X2R=>',min_db2_x2r)
    print ('Maximum DB2_X2R=>',max_db2_x2r)
    #Creating the input files for the script
    np.savetxt('db2_x2l.dat',data_db2_x2l[:,1], fmt='%1.14f')
    np.savetxt('db2_x2r.dat',data_db2_x2r[:,1], fmt='%1.14f')
    !paste db2_x2l.dat db2_x2r.dat > DB2_x2_lr.dat
    #Running the FES script
    !python generateFES.py DB2_x2_lr.dat $min_db2_x2l $max_db2_x2l $min_db2_x2r $max_db2_x2r 200 200 $temperatura DB2_XL2_XR2.dat
    ######################################################################################
    #Loading the DB2_X3M values
    data_db2_x3m=np.loadtxt('dihed_db2_x3m.dat',comments=['#', '@'])
    #Getting the minimum and maximum of DB2_X3M
    min_db2_x3m=np.amin(data_db2_x3m[:,1])
    max_db2_x3m=np.amax(data_db2_x3m[:,1])
    print ('Minimum DB2_X3M=>',min_db2_x3m)
    print ('Maximum DB2_X3M=>',max_db2_x3m)
    print ('Minimum DB2_X1R=>',min_db2_x1r)
    print ('Maximum DB2_X1R=>',max_db2_x1r)
    print ('Minimum DB2_X2R=>',min_db2_x2r)
    print ('Maximum DB2_X2R=>',max_db2_x2r)
    print ('Minimum DB2_X1L=>',min_db2_x1l)
    print ('Maximum DB2_X1L=>',max_db2_x1l)
    print ('Minimum DB2_X2L=>',min_db2_x2l)
    print ('Maximum DB2_X2L=>',max_db2_x2l)
    #Creating the input files for the script
    np.savetxt('db2_x3m.dat',data_db2_x3m[:,1], fmt='%1.14f')
    !paste db2_x3m.dat db2_x1r.dat > DB2_x3m_x1r.dat
    !paste db2_x3m.dat db2_x2r.dat > DB2_x3m_x2r.dat
    !paste db2_x3m.dat db2_x1l.dat > DB2_x3m_x1l.dat
    !paste db2_x3m.dat db2_x2l.dat > DB2_x3m_x2l.dat
    #Running the FES script
    !python generateFES.py DB2_x3m_x1r.dat $min_db2_x3m $max_db2_x3m $min_db2_x1r $max_db2_x1r 200 200 $temperatura DB2_XM3_XR1.dat
    !python generateFES.py DB2_x3m_x2r.dat $min_db2_x3m $max_db2_x3m $min_db2_x2r $max_db2_x2r 200 200 $temperatura DB2_XM3_XR2.dat
    !python generateFES.py DB2_x3m_x1l.dat $min_db2_x3m $max_db2_x3m $min_db2_x1l $max_db2_x1l 200 200 $temperatura DB2_XM3_XL1.dat
    !python generateFES.py DB2_x3m_x2l.dat $min_db2_x3m $max_db2_x3m $min_db2_x2l $max_db2_x2l 200 200 $temperatura DB2_XM3_XL2.dat
# This loads the magics for gnuplot
%reload_ext gnuplot_kernel
#Configuring the gnuplot output
%gnuplot inline pngcairo transparent enhanced font "arial,20" fontscale 1.0 size 1280,960; set zeroaxis;;
%%gnuplot
set output "db2_xl1_vs_xr1.png"
set palette model RGB
set palette defined ( 0 '#000090',\
1 '#000fff',\
2 '#0090ff',\
3 '#0fffee',\
4 '#90ff70',\
5 '#ffee00',\
6 '#ff7000',\
7 '#ee0000',\
8 '#7f0000')
set view map
set dgrid3d
set pm3d interpolate 0,0
set xlabel "{/=30 X@^L_1}"
set ylabel "{/=30 X@^R_1}"
set title "Free Energy Surface Intramolecular DB2"
##Uncomment the following line if the color scale starts at 1 and run again
#set cbrange[8:10]
splot "DB2_XL1_XR1.dat" with pm3d
# This loads the magics for gnuplot
%reload_ext gnuplot_kernel
#Configuring the gnuplot output
%gnuplot inline pngcairo transparent enhanced font "arial,20" fontscale 1.0 size 1280,960; set zeroaxis;;
%%gnuplot
set output "db2_xl2_vs_xr2.png"
set palette model RGB
set palette defined ( 0 '#000090',\
1 '#000fff',\
2 '#0090ff',\
3 '#0fffee',\
4 '#90ff70',\
5 '#ffee00',\
6 '#ff7000',\
7 '#ee0000',\
8 '#7f0000')
set view map
set dgrid3d
set pm3d interpolate 0,0
set xlabel "{/=30 X@^L_2}"
set ylabel "{/=30 X@^R_2}"
set title "Free Energy Surface Intramolecular DB2"
##Uncomment the following line if the color scale starts at 1 and run again
#set cbrange[8:10]
splot "DB2_XL2_XR2.dat" with pm3d
# This loads the magics for gnuplot
%reload_ext gnuplot_kernel
#Configuring the gnuplot output
%gnuplot inline pngcairo transparent enhanced font "arial,20" fontscale 1.0 size 1280,960; set zeroaxis;;
%%gnuplot
set output "db2_xm3_vs_xl1.png"
set palette model RGB
set palette defined ( 0 '#000090',\
1 '#000fff',\
2 '#0090ff',\
3 '#0fffee',\
4 '#90ff70',\
5 '#ffee00',\
6 '#ff7000',\
7 '#ee0000',\
8 '#7f0000')
set view map
set dgrid3d
set pm3d interpolate 0,0
set xlabel "{/=30 X@^M_3}"
set ylabel "{/=30 X@^L_1}"
set title "Free Energy Surface Intramolecular DB2"
##Uncomment the following line if the color scale starts at 1 and run again
#set cbrange[8:10]
splot "DB2_XM3_XL1.dat" with pm3d
# This loads the magics for gnuplot
%reload_ext gnuplot_kernel
#Configuring the gnuplot output
%gnuplot inline pngcairo transparent enhanced font "arial,20" fontscale 1.0 size 1280,960; set zeroaxis;;
%%gnuplot
set output "db2_xm3_vs_xl2.png"
set palette model RGB
set palette defined ( 0 '#000090',\
1 '#000fff',\
2 '#0090ff',\
3 '#0fffee',\
4 '#90ff70',\
5 '#ffee00',\
6 '#ff7000',\
7 '#ee0000',\
8 '#7f0000')
set view map
set dgrid3d
set pm3d interpolate 0,0
set xlabel "{/=30 X@^M_3}"
set ylabel "{/=30 X@^L_2}"
set title "Free Energy Surface Intramolecular DB2"
##Uncomment the following line if the color scale starts at 1 and run again
#set cbrange[8:10]
splot "DB2_XM3_XL2.dat" with pm3d
# This loads the magics for gnuplot
%reload_ext gnuplot_kernel
#Configuring the gnuplot output
%gnuplot inline pngcairo transparent enhanced font "arial,20" fontscale 1.0 size 1280,960; set zeroaxis;;
%%gnuplot
set output "db2_xm3_vs_xr2.png"
set palette model RGB
set palette defined ( 0 '#000090',\
1 '#000fff',\
2 '#0090ff',\
3 '#0fffee',\
4 '#90ff70',\
5 '#ffee00',\
6 '#ff7000',\
7 '#ee0000',\
8 '#7f0000')
set view map
set dgrid3d
set pm3d interpolate 0,0
set xlabel "{/=30 X@^M_3}"
set ylabel "{/=30 X@^R_2}"
set title "Free Energy Surface Intramolecular DB2"
##Uncomment the following line if the color scale starts at 1 and run again
#set cbrange[8:10]
splot "DB2_XM3_XR2.dat" with pm3d
# This loads the magics for gnuplot
%reload_ext gnuplot_kernel
#Configuring the gnuplot output
%gnuplot inline pngcairo transparent enhanced font "arial,20" fontscale 1.0 size 1280,960; set zeroaxis;;
%%gnuplot
set output "db2_xm3_vs_xr1.png"
set palette model RGB
set palette defined ( 0 '#000090',\
1 '#000fff',\
2 '#0090ff',\
3 '#0fffee',\
4 '#90ff70',\
5 '#ffee00',\
6 '#ff7000',\
7 '#ee0000',\
8 '#7f0000')
set view map
set dgrid3d
set pm3d interpolate 0,0
set xlabel "{/=30 X@^M_3}"
set ylabel "{/=30 X@^R_1}"
set title "Free Energy Surface Intramolecular DB2"
##Uncomment the following line if the color scale starts at 1 and run again
#set cbrange[8:10]
splot "DB2_XM3_XR1.dat" with pm3d
############################################
#### Intermolecular DB1 - DB2 - X1L
#### (this section assumes both bridges were analyzed: revisa1>0 and revisa2>0)
############################################
#Building DB1-DB2-X1L
!paste db1_x1l.dat db2_x1l.dat > DB1_DB2_x1l.dat
print('Minimum DB1-X1L=>',min_x1l)
print('Maximum DB1-X1L=>',max_x1l)
print('Minimum DB2-X1L=>',min_db2_x1l)
print('Maximum DB2-X1L=>',max_db2_x1l)
#Running the FES script
!python generateFES.py DB1_DB2_x1l.dat $min_x1l $max_x1l $min_db2_x1l $max_db2_x1l 200 200 $temperatura DB1_DB2_X1L.dat
############################################
#### Intermolecular DB1 - DB2 - X2L
############################################
#Building DB1-DB2-X2L
!paste db1_x2l.dat db2_x2l.dat > DB1_DB2_x2l.dat
print('Minimum DB1-X2L=>',min_x2l)
print('Maximum DB1-X2L=>',max_x2l)
print('Minimum DB2-X2L=>',min_db2_x2l)
print('Maximum DB2-X2L=>',max_db2_x2l)
#Running the FES script
!python generateFES.py DB1_DB2_x2l.dat $min_x2l $max_x2l $min_db2_x2l $max_db2_x2l 200 200 $temperatura DB1_DB2_X2L.dat
############################################
#### Intermolecular DB1 - DB2 - X3M
############################################
#Building DB1-DB2-X3M
!paste db1_x3m.dat db2_x3m.dat > DB1_DB2_x3m.dat
print('Minimum DB1-X3M=>',min_x3m)
print('Maximum DB1-X3M=>',max_x3m)
print('Minimum DB2-X3M=>',min_db2_x3m)
print('Maximum DB2-X3M=>',max_db2_x3m)
#Running the FES script
!python generateFES.py DB1_DB2_x3m.dat $min_x3m $max_x3m $min_db2_x3m $max_db2_x3m 200 200 $temperatura DB1_DB2_X3M.dat
############################################
#### Intermolecular DB1 - DB2 - X2R
############################################
#Building DB1-DB2-X2R
!paste db1_x2r.dat db2_x2r.dat > DB1_DB2_x2r.dat
print('Minimum DB1-X2R=>',min_x2r)
print('Maximum DB1-X2R=>',max_x2r)
print('Minimum DB2-X2R=>',min_db2_x2r)
print('Maximum DB2-X2R=>',max_db2_x2r)
#Running the FES script
!python generateFES.py DB1_DB2_x2r.dat $min_x2r $max_x2r $min_db2_x2r $max_db2_x2r 200 200 $temperatura DB1_DB2_X2R.dat
############################################
#### Intermolecular DB1 - DB2 - X1R
############################################
#Building DB1-DB2-X1R
!paste db1_x1r.dat db2_x1r.dat > DB1_DB2_x1r.dat
print('Minimum DB1-X1R=>',min_x1r)
print('Maximum DB1-X1R=>',max_x1r)
print('Minimum DB2-X1R=>',min_db2_x1r)
print('Maximum DB2-X1R=>',max_db2_x1r)
#Running the FES script
!python generateFES.py DB1_DB2_x1r.dat $min_x1r $max_x1r $min_db2_x1r $max_db2_x1r 200 200 $temperatura DB1_DB2_X1R.dat
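# An in-Python equivalent of the shell "paste" calls used above (sketch):
# np.savetxt('DB1_DB2_x1r.dat',
#            np.column_stack((data_db1_x1r[:,1], data_db2_x1r[:,1])),
#            fmt='%1.14f')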
# This loads the magics for gnuplot
%reload_ext gnuplot_kernel
#Configuring the gnuplot output
%gnuplot inline pngcairo transparent enhanced font "arial,20" fontscale 1.0 size 1280,960; set zeroaxis;;
%%gnuplot
set output "DB1_DB2_X1L.png"
set palette model RGB
set palette defined ( 0 '#000090',\
1 '#000fff',\
2 '#0090ff',\
3 '#0fffee',\
4 '#90ff70',\
5 '#ffee00',\
6 '#ff7000',\
7 '#ee0000',\
8 '#7f0000')
set view map
set dgrid3d
set pm3d interpolate 0,0
set xlabel "{/=30 DB1 X@^L_1}"
set ylabel "{/=30 DB2 X@^L_1}"
set title "Free Energy Surface Intermolecular DB1-DB2"
##Uncomment the following line if the color scale starts at 1 and run again
#set cbrange[8:10]
splot "DB1_DB2_X1L.dat" with pm3d
# This loads the magics for gnuplot
%reload_ext gnuplot_kernel
#Configuring the gnuplot output
%gnuplot inline pngcairo transparent enhanced font "arial,20" fontscale 1.0 size 1280,960; set zeroaxis;;
%%gnuplot
set output "DB1_DB2_X2L.png"
set palette model RGB
set palette defined ( 0 '#000090',\
1 '#000fff',\
2 '#0090ff',\
3 '#0fffee',\
4 '#90ff70',\
5 '#ffee00',\
6 '#ff7000',\
7 '#ee0000',\
8 '#7f0000')
set view map
set dgrid3d
set pm3d interpolate 0,0
set xlabel "{/=30 DB1 X@^L_2}"
set ylabel "{/=30 DB2 X@^L_2}"
set title "Free Energy Surface Intermolecular DB1-DB2"
##Uncomment the following line if the color scale starts at 1 and run again
#set cbrange[8:10]
splot "DB1_DB2_X2L.dat" with pm3d
# This loads the magics for gnuplot
%reload_ext gnuplot_kernel
#Configuring the gnuplot output
%gnuplot inline pngcairo transparent enhanced font "arial,20" fontscale 1.0 size 1280,960; set zeroaxis;;
%%gnuplot
set output "DB1_DB2_X3M.png"
set palette model RGB
set palette defined ( 0 '#000090',\
1 '#000fff',\
2 '#0090ff',\
3 '#0fffee',\
4 '#90ff70',\
5 '#ffee00',\
6 '#ff7000',\
7 '#ee0000',\
8 '#7f0000')
set view map
set dgrid3d
set pm3d interpolate 0,0
set xlabel "{/=30 DB1 X@^M_3}"
set ylabel "{/=30 DB2 X@^M_3}"
set title "Free Energy Surface Intermolecular DB1-DB2"
##Uncomment the following line if the color scale starts at 1 and run again
#set cbrange[8:10]
splot "DB1_DB2_X3M.dat" with pm3d
# This loads the magics for gnuplot
%reload_ext gnuplot_kernel
#Configuring the gnuplot output
%gnuplot inline pngcairo transparent enhanced font "arial,20" fontscale 1.0 size 1280,960; set zeroaxis;;
%%gnuplot
set output "DB1_DB2_X2R.png"
set palette model RGB
set palette defined ( 0 '#000090',\
1 '#000fff',\
2 '#0090ff',\
3 '#0fffee',\
4 '#90ff70',\
5 '#ffee00',\
6 '#ff7000',\
7 '#ee0000',\
8 '#7f0000')
set view map
set dgrid3d
set xyplane 0
set pm3d interpolate 0,0
set xlabel "{/=30 DB1 X@^R_2}"
set ylabel "{/=30 DB2 X@^R_2}"
set title "Free Energy Surface Intermolecular DB1-DB2"
##Uncomment the following line if the color scale starts at 1 and run again
#set cbrange[8:10]
splot "DB1_DB2_X2R.dat" with pm3d
# This loads the magics for gnuplot
%reload_ext gnuplot_kernel
#Configuring the gnuplot output
%gnuplot inline pngcairo transparent enhanced font "arial,20" fontscale 1.0 size 1280,960; set zeroaxis;;
%%gnuplot
set output "DB1_DB2_X1R.png"
set palette model RGB
set palette defined ( 0 '#000090',\
1 '#000fff',\
2 '#0090ff',\
3 '#0fffee',\
4 '#90ff70',\
5 '#ffee00',\
6 '#ff7000',\
7 '#ee0000',\
8 '#7f0000')
set view map
set dgrid3d
set pm3d interpolate 0,0
set xlabel "{/=30 DB1 X@^R_1}"
set ylabel "{/=30 DB2 X@^R_1}"
set title "Free Energy Surface Intermolecular DB1-DB2"
##The following line pins the color-bar range; comment it out to autoscale
set cbrange[8:10]
splot "DB1_DB2_X1R.dat" with pm3d
hist_escale_y=[]
fig = pl.figure(figsize=(25,8))
fig.subplots_adjust(hspace=.4, wspace=.3)
#subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=None)
#left = 0.125 # the left side of the subplots of the figure
#right = 0.9 # the right side of the subplots of the figure
#bottom = 0.1 # the bottom of the subplots of the figure
#top = 0.9 # the top of the subplots of the figure
#wspace = 0.2 # the amount of width reserved for blank space between subplots
#hspace = 0.2 # the amount of height reserved for white space between subplots
#Formatting the axis tick values
#Thickening the panel frames
for i in range(1, 6):
    ax = fig.add_subplot(2, 5, i)
    for axis in ['top','bottom','left','right']:
        ax.spines[axis].set_linewidth(3)
    ax.yaxis.set_major_formatter(FormatStrFormatter('%.2f'))
#Loading the DB1 values
data_h_db1_x1l=np.loadtxt('db1_x1l.dat',comments=['#', '@'])
data_h_db1_x2l=np.loadtxt('db1_x2l.dat',comments=['#', '@'])
data_h_db1_x3m=np.loadtxt('db1_x3m.dat',comments=['#', '@'])
data_h_db1_x2r=np.loadtxt('db1_x2r.dat',comments=['#', '@'])
data_h_db1_x1r=np.loadtxt('db1_x1r.dat',comments=['#', '@'])
#Loading the DB2 values
data_h_db2_x1l=np.loadtxt('db2_x1l.dat',comments=['#', '@'])
data_h_db2_x2l=np.loadtxt('db2_x2l.dat',comments=['#', '@'])
data_h_db2_x3m=np.loadtxt('db2_x3m.dat',comments=['#', '@'])
data_h_db2_x2r=np.loadtxt('db2_x2r.dat',comments=['#', '@'])
data_h_db2_x1r=np.loadtxt('db2_x1r.dat',comments=['#', '@'])
sub1 = fig.add_subplot(251) # equivalent to plt.subplot(2, 5, 1)
sub1.set_xlabel('Angle (Degree) ', fontsize=10)
sub1.set_ylabel('P(Angle)')
# density=True replaces the normed=True keyword removed from newer matplotlib
n1, bins1, rectangles1 = sub1.hist(data_h_db1_x1l,100, density=True, color='black',histtype='step', linewidth=3)
n2, bins2, rectangles2 = sub1.hist(data_h_db2_x1l,100, density=True, color='red',histtype='step', linewidth=3)
x1,x2,y1,y2=sub1.axis()
hist_escale_y.append(y2)
sub2 = fig.add_subplot(252) # equivalent to plt.subplot(2, 5, 2)
sub2.set_xlabel('Angle (Degree) ', fontsize=10)
sub2.set_ylabel('P(Angle)')
n1, bins1, rectangles1 = sub2.hist(data_h_db1_x2l,100, density=True, color='black',histtype='step', linewidth=3)
n2, bins2, rectangles2 = sub2.hist(data_h_db2_x2l,100, density=True, color='red',histtype='step', linewidth=3)
x1,x2,y1,y2=sub2.axis()
hist_escale_y.append(y2)
sub3 = fig.add_subplot(253) # equivalent to plt.subplot(2, 5, 3)
sub3.set_xlabel('Angle (Degree) ', fontsize=10)
sub3.set_ylabel('P(Angle)')
n1, bins1, rectangles1 = sub3.hist(data_h_db1_x3m,100, density=True, color='black',histtype='step', linewidth=3)
n2, bins2, rectangles2 = sub3.hist(data_h_db2_x3m,100, density=True, color='red',histtype='step', linewidth=3)
x1,x2,y1,y2=sub3.axis()
hist_escale_y.append(y2)
sub4 = fig.add_subplot(254) # equivalent to plt.subplot(2, 5, 4)
sub4.set_xlabel('Angle (Degree) ', fontsize=10)
sub4.set_ylabel('P(Angle)')
n1, bins1, rectangles1 = sub4.hist(data_h_db1_x2r,100, density=True, color='black',histtype='step', linewidth=3)
n2, bins2, rectangles2 = sub4.hist(data_h_db2_x2r,100, density=True, color='red',histtype='step', linewidth=3)
x1,x2,y1,y2=sub4.axis()
hist_escale_y.append(y2)
sub5 = fig.add_subplot(255) # equivalent to plt.subplot(2, 5, 5)
sub5.set_xlabel('Angle (Degree) ', fontsize=10)
sub5.set_ylabel('P(Angle)')
n1, bins1, rectangles1 = sub5.hist(data_h_db1_x1r,100, density=True, color='black',histtype='step', linewidth=3)
n2, bins2, rectangles2 = sub5.hist(data_h_db2_x1r,100, density=True, color='red',histtype='step', linewidth=3)
x1,x2,y1,y2=sub5.axis()
hist_escale_y.append(y2)
#escale_y
hist_escale_y.sort(reverse=True)
hist_escale_y
##Rescaling all panels to the common y maximum
sub1.axis((x1,x2,y1,hist_escale_y[0]))
sub2.axis((x1,x2,y1,hist_escale_y[0]))
sub3.axis((x1,x2,y1,hist_escale_y[0]))
sub4.axis((x1,x2,y1,hist_escale_y[0]))
sub5.axis((x1,x2,y1,hist_escale_y[0]))
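# A more compact equivalent of the five rescaling calls above (sketch):
# for s in (sub1, sub2, sub3, sub4, sub5):
#     s.set_ylim(top=hist_escale_y[0])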
### Creating the directory for the INTERMOLECULAR bridge bond-angle analysis
ruta_bonds_puentes = nuevaruta+'/bonds_puentes'
print ( ruta_bonds_puentes )
if not os.path.exists(ruta_bonds_puentes):
    os.makedirs(ruta_bonds_puentes)
    print ('Created the path ===>',ruta_bonds_puentes)
else:
    print ("The path "+ruta_bonds_puentes+" already exists!")
print ( 'Moving to ....', ruta_bonds_puentes)
os.chdir( ruta_bonds_puentes )
print ('\nCopying generateFES.py to '+ruta_bonds_puentes)
source_file=ruta_scripts+'/free_energy/generateFES.py'
dest_file=ruta_bonds_puentes+'/generateFES.py'
shutil.copy(source_file,dest_file)
#Setting execute permissions
!chmod +x generateFES.py
psf=ruta_old_traj+'/'+psf_file
dcd=ruta_old_traj+'/'+dcd_file
print ('Bridge DB1=>',DB1_N)
print ('Bridge DB1=>',DB1_i)
print ('Bridge DB2=>',DB2_N)
print ('Bridge DB2=>',DB2_i)
puente=2
if (int(puente)==2):
    #Creating the script for the DB1 left bond angle
b1 = open('bond_DB1_left.tcl', 'w')
print(b1)
b1.write('set psfFile '+ psf+' \n')
b1.write('set dcdFile '+ dcd+' \n')
b1.write('\nmol load psf $psfFile dcd $dcdFile\n')
b1.write('set outfile ' +'[open ' +'bond_db1_left.dat'+' w]\n')
b1.write('set nf [molinfo top get numframes]\n')
b1.write(' \n')
b1.write('set selatoms1 [[atomselect top "protein and chain A and '+DB1_i[1]+'"] get index]\n')
b1.write('set selatoms2 [[atomselect top "protein and chain A and '+DB1_i[2]+'"] get index]\n')
b1.write('set selatoms3 [[atomselect top "protein and chain A and '+DB1_i[3]+'"] get index]\n')
b1.write('set angle [list [lindex $selatoms1] [lindex $selatoms2] [lindex $selatoms3] ]\n')
b1.write('for {set i 0} {$i < $nf} {incr i 1} {\n')
b1.write(' set x [measure angle $angle frame $i]\n')
b1.write(' set time [expr {$i +1}]\n')
b1.write(' puts $outfile "$time $x"\n')
b1.write('}\n')
b1.close()
    #Creating the script for the DB1 right bond angle
b2 = open('bond_DB1_right.tcl', 'w')
print(b2)
b2.write('set psfFile '+ psf+' \n')
b2.write('set dcdFile '+ dcd+' \n')
b2.write('\nmol load psf $psfFile dcd $dcdFile\n')
b2.write('set outfile ' +'[open ' +'bond_db1_right.dat'+' w]\n')
b2.write('set nf [molinfo top get numframes]\n')
b2.write(' \n')
b2.write('set selatoms1 [[atomselect top "protein and chain A and '+DB1_i[4]+'"] get index]\n')
b2.write('set selatoms2 [[atomselect top "protein and chain A and '+DB1_i[5]+'"] get index]\n')
b2.write('set selatoms3 [[atomselect top "protein and chain A and '+DB1_i[6]+'"] get index]\n')
b2.write('set angle [list [lindex $selatoms1] [lindex $selatoms2] [lindex $selatoms3] ]\n')
b2.write('for {set i 0} {$i < $nf} {incr i 1} {\n')
b2.write(' set x [measure angle $angle frame $i]\n')
b2.write(' set time [expr {$i +1}]\n')
b2.write(' puts $outfile "$time $x"\n')
b2.write('}\n')
b2.close()
    #Creating the script for the DB2 left bond angle
b3 = open('bond_DB2_left.tcl', 'w')
print(b3)
b3.write('set psfFile '+ psf+' \n')
b3.write('set dcdFile '+ dcd+' \n')
b3.write('\nmol load psf $psfFile dcd $dcdFile\n')
b3.write('set outfile ' +'[open ' +'bond_db2_left.dat'+' w]\n')
b3.write('set nf [molinfo top get numframes]\n')
b3.write(' \n')
b3.write('set selatoms1 [[atomselect top "protein and chain A and '+DB2_i[1]+'"] get index]\n')
b3.write('set selatoms2 [[atomselect top "protein and chain A and '+DB2_i[2]+'"] get index]\n')
b3.write('set selatoms3 [[atomselect top "protein and chain A and '+DB2_i[3]+'"] get index]\n')
b3.write('set angle [list [lindex $selatoms1] [lindex $selatoms2] [lindex $selatoms3] ]\n')
b3.write('for {set i 0} {$i < $nf} {incr i 1} {\n')
b3.write(' set x [measure angle $angle frame $i]\n')
b3.write(' set time [expr {$i +1}]\n')
b3.write(' puts $outfile "$time $x"\n')
b3.write('}\n')
b3.close()
    #Creating the script for the DB2 right bond angle
b4 = open('bond_DB2_right.tcl', 'w')
print(b4)
b4.write('set psfFile '+ psf+' \n')
b4.write('set dcdFile '+ dcd+' \n')
b4.write('\nmol load psf $psfFile dcd $dcdFile\n')
b4.write('set outfile ' +'[open ' +'bond_db2_right.dat'+' w]\n')
b4.write('set nf [molinfo top get numframes]\n')
b4.write(' \n')
b4.write('set selatoms1 [[atomselect top "protein and chain A and '+DB2_i[4]+'"] get index]\n')
b4.write('set selatoms2 [[atomselect top "protein and chain A and '+DB2_i[5]+'"] get index]\n')
b4.write('set selatoms3 [[atomselect top "protein and chain A and '+DB2_i[6]+'"] get index]\n')
b4.write('set angle [list [lindex $selatoms1] [lindex $selatoms2] [lindex $selatoms3] ]\n')
b4.write('for {set i 0} {$i < $nf} {incr i 1} {\n')
b4.write(' set x [measure angle $angle frame $i]\n')
b4.write(' set time [expr {$i +1}]\n')
b4.write(' puts $outfile "$time $x"\n')
b4.write('}\n')
b4.close()
    #Computing the DB1 left bond angle with VMD
    !vmd -dispdev text < bond_DB1_left.tcl
    #Computing the DB1 right bond angle with VMD
    !vmd -dispdev text < bond_DB1_right.tcl
    #Computing the DB2 left bond angle with VMD
    !vmd -dispdev text < bond_DB2_left.tcl
    #Computing the DB2 right bond angle with VMD
    !vmd -dispdev text < bond_DB2_right.tcl
    #Loading the DB1 left values
    data_bond_db1_left=np.loadtxt('bond_db1_left.dat',comments=['#', '@'])
    #Loading the DB1 right values
    data_bond_db1_right=np.loadtxt('bond_db1_right.dat',comments=['#', '@'])
    #Getting the minimum and maximum of DB1 Left
    min_bond1_left=np.amin(data_bond_db1_left[:,1])
    max_bond1_left=np.amax(data_bond_db1_left[:,1])
    print ('Minimum DB1_Left=>',min_bond1_left)
    print ('Maximum DB1_Left=>',max_bond1_left)
    #Getting the minimum and maximum of DB1 Right
    min_bond1_right=np.amin(data_bond_db1_right[:,1])
    max_bond1_right=np.amax(data_bond_db1_right[:,1])
    print ('Minimum DB1_Right=>',min_bond1_right)
    print ('Maximum DB1_Right=>',max_bond1_right)
    #Creating the input files for the script
    np.savetxt('bond_DB1_left.dat',data_bond_db1_left[:,1], fmt='%1.14f')
    np.savetxt('bond_DB1_right.dat',data_bond_db1_right[:,1], fmt='%1.14f')
    !paste bond_DB1_left.dat bond_DB1_right.dat > angles_DB1.dat
    #Running the FES script
    !python generateFES.py angles_DB1.dat $min_bond1_left $max_bond1_left $min_bond1_right $max_bond1_right 200 200 $temperatura Angles_DB1.dat
    ###################################################################
    #Loading the DB2 left values
    data_bond_db2_left=np.loadtxt('bond_db2_left.dat',comments=['#', '@'])
    #Loading the DB2 right values
    data_bond_db2_right=np.loadtxt('bond_db2_right.dat',comments=['#', '@'])
    #Getting the minimum and maximum of DB2 Left
    min_bond2_left=np.amin(data_bond_db2_left[:,1])
    max_bond2_left=np.amax(data_bond_db2_left[:,1])
    print ('Minimum DB2_Left=>',min_bond2_left)
    print ('Maximum DB2_Left=>',max_bond2_left)
    #Getting the minimum and maximum of DB2 Right
    min_bond2_right=np.amin(data_bond_db2_right[:,1])
    max_bond2_right=np.amax(data_bond_db2_right[:,1])
    print ('Minimum DB2_Right=>',min_bond2_right)
    print ('Maximum DB2_Right=>',max_bond2_right)
    #Creating the input files for the script
    np.savetxt('bond_DB2_left.dat',data_bond_db2_left[:,1], fmt='%1.14f')
    np.savetxt('bond_DB2_right.dat',data_bond_db2_right[:,1], fmt='%1.14f')
    !paste bond_DB2_left.dat bond_DB2_right.dat > angles_DB2.dat
    #Running the FES script
    !python generateFES.py angles_DB2.dat $min_bond2_left $max_bond2_left $min_bond2_right $max_bond2_right 200 200 $temperatura Angles_DB2.dat
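# The four angle-script blocks above follow the same template as the dihedral
# writers; a generator analogous to write_dihed_tcl (sketched earlier) using
# "measure angle" over three atoms would replace them, e.g. (hypothetical):
# write_angle_tcl('bond_DB1_left.tcl', 'bond_db1_left.dat', DB1_i[1:4], psf, dcd)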
# This loads the magics for gnuplot
%reload_ext gnuplot_kernel
#Configuring the gnuplot output
%gnuplot inline pngcairo transparent enhanced font "arial,20" fontscale 1.0 size 1280,960; set zeroaxis;;
%%gnuplot
set output "db1_a1_a2.png"
set palette model RGB
set palette defined ( 0 '#000090',\
1 '#000fff',\
2 '#0090ff',\
3 '#0fffee',\
4 '#90ff70',\
5 '#ffee00',\
6 '#ff7000',\
7 '#ee0000',\
8 '#7f0000')
set view map
set dgrid3d
set pm3d interpolate 0,0
set xlabel "{/=30 C@^1_{/Symbol a}}-{/=30 C@^1_{/Symbol b}}-{/=30 S@^1_{/Symbol g}}"
set ylabel "{/=30 C@^2_{/Symbol a}}-{/=30 C@^2_{/Symbol b}}-{/=30 S@^2_{/Symbol g}}"
set title "Free Energy Surface Angles DB1"
##Uncomment the following line if the color scale starts at 1 and run again
#set cbrange[8:10]
splot "Angles_DB1.dat" with pm3d
# This loads the magics for gnuplot
%reload_ext gnuplot_kernel
#Configuring the gnuplot output
%gnuplot inline pngcairo transparent enhanced font "arial,20" fontscale 1.0 size 1280,960; set zeroaxis;;
%%gnuplot
set output "db2_a1_a2.png"
set palette model RGB
set palette defined ( 0 '#000090',\
1 '#000fff',\
2 '#0090ff',\
3 '#0fffee',\
4 '#90ff70',\
5 '#ffee00',\
6 '#ff7000',\
7 '#ee0000',\
8 '#7f0000')
set view map
set dgrid3d
set pm3d interpolate 0,0
set xlabel "{/=30 C@^1_{/Symbol a}}-{/=30 C@^1_{/Symbol b}}-{/=30 S@^1_{/Symbol g}}"
set ylabel "{/=30 C@^2_{/Symbol a}}-{/=30 C@^2_{/Symbol b}}-{/=30 S@^2_{/Symbol g}}"
set title "Free Energy Surface Angles DB2"
##Uncomment the following line if the color scale starts at 1 and run again
#set cbrange[8:10]
splot "Angles_DB2.dat" with pm3d
bonds_escale_y=[]
#Loading the DB1 values
data_h_db1_left=np.loadtxt('bond_DB1_left.dat',comments=['#', '@'])
data_h_db1_right=np.loadtxt('bond_DB1_right.dat',comments=['#', '@'])
#Loading the DB2 values
data_h_db2_left=np.loadtxt('bond_DB2_left.dat',comments=['#', '@'])
data_h_db2_right=np.loadtxt('bond_DB2_right.dat',comments=['#', '@'])
#Thickening the panel frames
figb=pl.figure(figsize=(12, 10), dpi=100, linewidth=3.0)
figb.subplots_adjust(hspace=.5)
ax = figb.add_subplot(221)
for axis in ['top','bottom','left','right']:
ax.spines[axis].set_linewidth(4)
ax = figb.add_subplot(222)
for axis in ['top','bottom','left','right']:
ax.spines[axis].set_linewidth(4)
ax = figb.add_subplot(223)
for axis in ['top','bottom','left','right']:
ax.spines[axis].set_linewidth(4)
ax = figb.add_subplot(224)
for axis in ['top','bottom','left','right']:
ax.spines[axis].set_linewidth(4)
#Formatting the axis tick values
ax.yaxis.set_major_formatter(FormatStrFormatter('%.2f'))
bond1 = figb.add_subplot(221) # instead of plt.subplot(2, 2, 1)
#bond1.set_title('CA1 - CB1 - SY1')
bond1.set_xlabel('Angle (Degree)')
bond1.set_ylabel('P (Angle)')
n, bins, rectangles = bond1.hist(data_h_db1_left,100, density=True, color='black',histtype='step',linewidth=3)
x1,x2,y1,y2=bond1.axis()
bonds_escale_y.append(y2)
bond2 = figb.add_subplot(222) # instead of plt.subplot(2, 2, 1)
#bond2.set_title('CA2 - CB2 - SY2')
bond2.set_xlabel('Angle (Degree)')
bond2.set_ylabel('P (Angle)')
n, bins, rectangles = bond2.hist(data_h_db1_right,100, density=True, color='black',histtype='step', linewidth=3)
x1,x2,y1,y2=bond2.axis()
bonds_escale_y.append(y2)
bond3 = figb.add_subplot(223) # instead of plt.subplot(2, 2, 1)
#bond3.set_title('CA1 - CB1 - SY1')
bond3.set_xlabel('Angle (Degree)')
bond3.set_ylabel('P (Angle)')
n, bins, rectangles = bond3.hist(data_h_db2_left,100, density=True, color='red',histtype='step', linewidth=3)
x1,x2,y1,y2=bond3.axis()
bonds_escale_y.append(y2)
bond4 = figb.add_subplot(224) # instead of plt.subplot(2, 2, 1)
#bond4.set_title('CA2 - CB2 - SY2')
bond4.set_xlabel('Angle (Degree)')
bond4.set_ylabel('P (Angle)')
n, bins, rectangles = bond4.hist(data_h_db2_right,100, density=True, color='red',histtype='step', linewidth=3)
x1,x2,y1,y2=bond4.axis()
bonds_escale_y.append(y2)
#escale_y
bonds_escale_y.sort(reverse=True)
bonds_escale_y
##Rescaling the y-axes so all four histograms share the same maximum
bond1.axis((x1,x2,y1,bonds_escale_y[0]))
bond2.axis((x1,x2,y1,bonds_escale_y[0]))
bond3.axis((x1,x2,y1,bonds_escale_y[0]))
bond4.axis((x1,x2,y1,bonds_escale_y[0]))
### Creating the directory for the cluster analysis
ruta_clusters = nuevaruta+'/clusters'
print ( ruta_clusters )
if not os.path.exists(ruta_clusters):
    os.makedirs(ruta_clusters)
    print ('Created the path ===>',ruta_clusters)
else:
    print ("The path "+ruta_clusters+" already exists!")
print ( 'Changing directory to ....', ruta_clusters)
os.chdir( ruta_clusters )
!echo 1 1 | g_cluster -f ../output.xtc -s ../ionized.pdb -method gromos -cl out.pdb -g out.log -cutoff 0.2
!vmd out.pdb
### Creating the directory for the colorByRMSF analysis
ruta_colorByRMSF = nuevaruta+'/colorByRMSF'
print ( ruta_colorByRMSF )
if not os.path.exists(ruta_colorByRMSF):
    os.makedirs(ruta_colorByRMSF)
    print ('Created the path ===>',ruta_colorByRMSF)
else:
    print ("The path "+ruta_colorByRMSF+" already exists!")
print ( 'Changing directory to ....', ruta_colorByRMSF)
os.chdir( ruta_colorByRMSF )
print ('\nCopying the colorByRMSF.vmd file to '+ruta_colorByRMSF)
source_file=ruta_scripts+'/colorByRMSF/colorByRMSF.vmd'
dest_file=ruta_colorByRMSF+'/colorByRMSF.vmd'
shutil.copy(source_file,dest_file)
print ('Running the RMSF analysis...')
!echo 1 | g_rmsf -f ../output.xtc -s ../ionized.pdb -oq bfac.pdb -o rmsf.xvg
#Computing the minimum and maximum of the RMSF
#Loading the RMSF values
data_rmsf_gcolor=np.loadtxt('rmsf.xvg',comments=['#', '@'])
#Getting the maximum and minimum RMSF values
min_rmsf_gcolor=np.amin(data_rmsf_gcolor[:,1])
max_rmsf_gcolor=np.amax(data_rmsf_gcolor[:,1])
print ('Minimum_RMSF=>',min_rmsf_gcolor)
print ('Maximum_RMSF=>',max_rmsf_gcolor)
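# An added sketch (assumes the pylab-style `pl` alias imported in an earlier cell):
# quick in-notebook plot of the per-atom RMSF profile loaded above.
pl.plot(data_rmsf_gcolor[:,0], data_rmsf_gcolor[:,1], color='black')
pl.xlabel('Atom')
pl.ylabel('RMSF (nm)')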
# Loading the pdb with VMD
!vmd ../ionized.pdb
print ( 'Changing directory to ....', ruta_colorByRMSF )
os.chdir( ruta_colorByRMSF )
#Initializing vectors
rmsf=[]
rmsf_x=[]
rmsf_y=[]
try:
file_Bfactor = open( 'bfac.pdb' )
new_bfactor=open('bfac_new.pdb','w')
except IOError:
    print ('Could not open the file or it does not exist...')
i=0
for linea in file_Bfactor.readlines():
fila = linea.strip()
sl = fila.split()
cadena=sl[0]
if (cadena=='ATOM'):
if (len(sl)==12):
new_bfactor.write(linea)
else:
x=linea[0:60]
tempFactor=linea[60:66]
#print (x)
#print(tempFactor)
y=fila[67:]
#print (y)
enviar=x+' '+tempFactor+y
new_bfactor.write(enviar+'\n')
#print(enviar)
else:
#print (linea)
new_bfactor.write(linea)
new_bfactor.close()
!gedit bfac_new.pdb
#Initializing vector
bfactors_color=[]
try:
file_bfactor_color = open( 'bfac_new.pdb' )
except IOError:
    print ('Could not open the file or it does not exist...')
i=0
for linea in file_bfactor_color.readlines():
fila = linea.strip()
sl = fila.split()
if (sl[0]=='ATOM'):
#print (sl[0])
idresidue=fila[23:26]
bfactor=fila[60:66]
#print (idresidue + '\t'+bfactor)
bfactors_color.append(idresidue+'\t'+bfactor+'\n')
#i=i+1
#Writing the BFACTOR.dat file
f = open('protein_bfactor.dat', 'w')
#f.write('@ title "B-factors" \n')
f.write('@ xaxis label " Residue" \n')
f.write('@ xaxis label char size 1.480000\n')
f.write('@ xaxis bar linewidth 5.0\n')
f.write('@ xaxis ticklabel char size 1.480000\n')
f.write('@ yaxis label "B-factors (' +"\\"+'cE'+"\\"+'C)"\n')
f.write('@ yaxis label char size 1.480000\n')
f.write('@ yaxis bar linewidth 5.0\n')
f.write('@ yaxis ticklabel char size 1.480000\n')
f.write('@ s0 line linewidth 7\n')
f.write('@ s0 symbol 1\n')
f.write('@ s0 symbol size 1.000000\n')
f.write('@ s0 symbol color 1\n')
f.write('@ s0 symbol pattern 1\n')
f.write('@ s0 symbol fill color 2\n')
f.write('@ s0 symbol fill pattern 1\n')
f.write('@ s0 symbol linewidth 1.0\n')
f.write('@TYPE xy \n')
f.write("".join(bfactors_color))
f.close()
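# An added sketch: quick matplotlib preview of the same B-factor profile that
# xmgrace renders below (assumes `pl` from an earlier cell; entries are "residue\tbfactor").
bfac_vals = np.array([entry.split() for entry in bfactors_color], dtype=float)
pl.plot(bfac_vals[:,0], bfac_vals[:,1], color='black')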
!xmgrace protein_bfactor.dat
#Loading the image generated in xmgrace
Image(filename='protein_bfactor.png')
#Computing the minimum and maximum of the B-factor
#Loading the B-factor values
data_bfactor_color=np.loadtxt('protein_bfactor.dat',comments=['#', '@'])
#Getting the maximum and minimum B-factor values
min_bfactor_color=np.amin(data_bfactor_color[:,1])
max_bfactor_color=np.amax(data_bfactor_color[:,1])
print ('Minimum_B-Factor=>',min_bfactor_color)
print ('Maximum_B-Factor=>',max_bfactor_color)
!chimera bfac_new.pdb
##Loading the generated image
print ('Loading the file...')
Image(filename='image.png')
### Creating the directory for the SASA analysis in the VMD directory
print ('Changing directory to ', ruta)
os.chdir( ruta )
output_find=!find /usr/local -maxdepth 2 -type d -name vmd
print (output_find)
ruta_vmd=output_find[0]
print (ruta_vmd)
ruta_vmd_sasa = ruta_vmd+'/plugins/noarch/tcl/iceVMD1.0'
print ( ruta_vmd_sasa )
if not os.path.exists(ruta_vmd_sasa):
    os.makedirs(ruta_vmd_sasa)
    print ('Created the path ===>',ruta_vmd_sasa)
else:
    print ("The path "+ruta_vmd_sasa+" already exists!")
print ( 'Changing directory to ....', ruta_vmd_sasa )
os.chdir( ruta_vmd_sasa )
#Copying the generated files to the VMD plugins folder
print ('\nCopying the generated files to '+ruta_vmd_sasa)
source_file=ruta_scripts+'/iceVMD1.0/colorplot.tcl'
dest_file=ruta_vmd_sasa+'/colorplot.tcl'
shutil.copy(source_file,dest_file)
source_file=ruta_scripts+'/iceVMD1.0/multiplot.tcl'
dest_file=ruta_vmd_sasa+'/multiplot.tcl'
shutil.copy(source_file,dest_file)
source_file=ruta_scripts+'/iceVMD1.0/pkgIndex.tcl'
dest_file=ruta_vmd_sasa+'/pkgIndex.tcl'
shutil.copy(source_file,dest_file)
source_file=ruta_scripts+'/iceVMD1.0/vmdICE.tcl'
dest_file=ruta_vmd_sasa+'/vmdICE.tcl'
shutil.copy(source_file,dest_file)
print('\nFiles copied... Returning to... '+nuevaruta)
os.chdir( nuevaruta )
### Creating the directory for plotting the SASA
ruta_sasaColor = nuevaruta+'/sasaColor'
print ( ruta_sasaColor )
if not os.path.exists(ruta_sasaColor):
    os.makedirs(ruta_sasaColor)
    print ('Created the path ===>',ruta_sasaColor)
else:
    print ("The path "+ruta_sasaColor+" already exists!")
print ( 'Changing directory to ....', ruta_sasaColor )
os.chdir( ruta_sasaColor )
print ('\nCopying the configuration file to '+ruta_sasaColor)
source_file=ruta_scripts+'/iceVMD1.0/vmdrc'
dest_file=ruta_sasaColor+'/.vmdrc'
shutil.copy(source_file,dest_file)
!vmd ../ionized.psf ../output.xtc
#Deleting the vmd files
!rm -r $ruta_vmd_sasa
### Creating the directory for plotting the rgyro
ruta_gyroColor = nuevaruta+'/color_rgyro'
print ( ruta_gyroColor )
if not os.path.exists(ruta_gyroColor):
    os.makedirs(ruta_gyroColor)
    print ('Created the path ===>',ruta_gyroColor)
else:
    print ("The path "+ruta_gyroColor+" already exists!")
print ( 'Changing directory to ....', ruta_gyroColor )
os.chdir( ruta_gyroColor )
print ('\nCopying the colorRgyro.tcl script to '+ruta_gyroColor)
source_file=ruta_scripts+'/colorRgyro/colorRgyro.tcl'
dest_file=ruta_gyroColor+'/colorRgyro.tcl'
shutil.copy(source_file,dest_file)
!vmd ../ionized.psf ../output.xtc
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Navigating and Selecting Cells
Step2: We Can Run a Cell Multiple Times
Step3: Code Libraries can be imported via a Code Cell
Step4: Clearing Cell Outputs
Step5: Working Code Cells Harder
Step6: Notebooks Support Interactive Widgets that Help You Explore a Dataset
Step7: Using ipywidgets interact()
Step8: Specify the Range of Values Applied to an interact() slider
Step9: Specify the Step Size of a Slider
|
<ASSISTANT_TASK:>
Python Code:
print('hello world')
a=1
b=2
a+b
print(a)
#Run this cell multiple times
a=a+1
a
import numpy as np
np.pi
!ls
ls -l
import pandas as pd
pd.DataFrame({'col1':[1,2],'col2':['x','y']})
%matplotlib inline
import matplotlib.pyplot as plt
# Create 1000 evenly-spaced values from 0 to 2 pi
x = np.linspace(0, 2*np.pi, 1000)
#Plot a sine wave over those values
y = np.sin(x)
plt.plot(x, y)
#You can prevent the display of object details returned from the plot by:
## - adding a semi-colon (;) at the end of the final statement
#If no frequency value is specified, use the default setting: f=1
def sinplot(f=1):
#Define a range of x values
x = np.linspace(0, 2*np.pi, 1000)
#Plot a sine wave with the specified frequency over that range
y = np.sin(f*x)
#Plot the chart
plt.plot(x, y)
plt.show()
sinplot(f=3)
from ipywidgets import interact
interact(sinplot, f=5)
interact(sinplot, f=(0,20));
interact(sinplot, f=(0,20,5));
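# An extra sketch beyond the original steps: passing a float tuple makes
# interact() build a FloatSlider instead of an IntSlider.
interact(sinplot, f=(0.0, 20.0, 0.25));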
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now we make some plots and calculate some figures. For example, here's how the correlation (Pearson's r) changes by game number
Step2: Here's how the slope changes by game
Step3: You'll note that although the predictivity (as measured by r) is best at 40 games, the slope is still not 1--meaning we expect some regression still.
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
from pylab import *
%matplotlib inline
from pyarrow import ArrowIOError
from scrapenhl2.scrape import teams, team_info, schedules
from scrapenhl2.manipulate import manipulate as manip
generate = False
fname = '/Users/muneebalam/Desktop/team_game_data.csv'
if generate:
team_dfs = []
for season in range(2010, 2018):
for team in schedules.get_teams_in_season(season):
# Read in the PBP and get CF and CA counts by game, regular season only
try:
df = teams.get_team_pbp(season, team).query('Game < 30001')
except ArrowIOError:
continue
df = manip.filter_for_five_on_five(manip.filter_for_corsi(df))
df.loc[:, 'CFCA'] = df.Team.apply(lambda x: 'CF' if x == team_info.team_as_id(team) else 'CA')
grouped = df[['Game', 'CFCA']].assign(Count=1).groupby(['Game', 'CFCA'], as_index=False).count()
grouped = grouped.sort_values('Game') \
.pivot_table(index='Game', columns='CFCA', values='Count')
# Add in game number
grouped.loc[:, 'GameNum'] = 1
grouped.loc[:, 'GameNum'] = grouped.GameNum.cumsum()
# Calculate cumulative sums
grouped.loc[:, 'YTD_CF'] = grouped['CF'].cumsum()
grouped.loc[:, 'YTD_CA'] = grouped['CA'].cumsum()
grouped.loc[:, 'YTD_CF%'] = grouped['YTD_CF'] / (grouped['YTD_CF'] + grouped['YTD_CA'])
# Get EOY numbers and subtract
totals = grouped.max()
grouped.loc[:, 'ROY_CF'] = totals['YTD_CF'] - grouped['YTD_CF']
grouped.loc[:, 'ROY_CA'] = totals['YTD_CA'] - grouped['YTD_CA']
grouped.loc[:, 'ROY_CF%'] = grouped['ROY_CF'] / (grouped['ROY_CF'] + grouped['ROY_CA'])
team_dfs.append(grouped[['GameNum', 'YTD_CF%', 'ROY_CF%']].assign(Season=season,
Team=team_info.team_as_str(team)))
print('Done with', season)
data = pd.concat(team_dfs)
data.to_csv(fname, index=False)
data = pd.read_csv(fname)
data.head()
# Drop short season and current season, and get correlations
data2 = data[(data.Season != 2012) & (data.Season != 2017)].dropna()
corrs = data2.drop({'Season', 'Team'}, axis=1).groupby('GameNum').corr().drop('ROY_CF%', axis=1)
corrs = corrs[corrs['YTD_CF%'] < 1]
corrs = corrs.reset_index().drop('level_1', axis=1).rename(columns={'YTD_CF%': 'r'})
plot(corrs.GameNum, corrs.r)
xlabel('Game number')
ylabel('R')
title('Correlation between YTD CF% and ROY CF%')
# Now look at the slope
from scipy.stats import linregress
def get_slope(df):
m, b, r, p, e = linregress(df['YTD_CF%'], df['ROY_CF%'])
return m
plot(data2.groupby('GameNum').apply(get_slope), label = 'Slope')
xlabel('Game Number')
plot(corrs.r, label = 'R')
title('Correlation between YTD CF% and ROY CF%')
legend(loc=2)
tmp = data2[data2.GameNum == 40]
x = 'YTD_CF%'
y = 'ROY_CF%'
scatter(tmp[x], tmp[y], label='_nolegend')
xlabel('Season to date CF%')
ylabel('Rest of season CF%')
title('YTD and ROY CF% at game 40')
m, b, r, p, e = linregress(tmp[x], tmp[y])
xs = arange(0, 1, 0.01)
ys = m * xs + b
xlimits = xlim()
ylimits = ylim()
plot(xs, ys, color='k', ls='--', label='Best slope')
plot(xs, xs, color='k', ls=':', label='Slope=1')
xlim(*xlimits)
ylim(*ylimits)
legend(loc=2)
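# A small added sketch: use the game-40 fit above to predict the rest-of-year
# CF% for a hypothetical team sitting at 55% through 40 games.
print('Predicted ROY CF%% for a 55%% YTD team: %.3f' % (m * 0.55 + b))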
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Download FastQ files (1.6Gb)
Step2: Download reference and annotation files
Step3: The Reference must be altered to rename the headers so that they match the chromosome names used in the GenBank annotation files.
Step4: Sequana_coverage analysis (using the library)
Step5: Chromosome selection
Step6: Sequana_coverage analysis (using the standalone)
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
matplotlib.rcParams['figure.figsize'] = [10,7]
import synapseclient
l = synapseclient.login()
l.get("syn10641621", downloadLocation=".", ifcollision="overwrite.local")
l.get("syn10641896", downloadLocation=".", ifcollision="overwrite.local")
!sequana_coverage --download-reference CU329670
!sequana_coverage --download-reference CU329671
!sequana_coverage --download-reference CU329672
!sequana_coverage --download-reference X54421
!sequana_coverage --download-genbank CU329670
!sequana_coverage --download-genbank CU329671
!sequana_coverage --download-genbank CU329672
!sequana_coverage --download-genbank X54421
!cat CU*gbk X*gbk > S_pombe.gbk
!cat CU*.fa X*.fa> S_pombe.fa
files = ['CU329670.fa', "CU329671.fa", "CU329672.fa", "X54421.fa"]
with open("S_pombe.fa", "w") as fout:
for filename in files:
with open(filename, "r") as fin :
for line in fin.readlines():
if line.startswith(">"):
start, end = line.split(None, 1)
accession = start[1:].rsplit("|", 1)[1]
line = ">" + accession + " " + end
fout.write(line)
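# A quick added check: list the renamed FASTA headers to confirm they now
# begin with the accession numbers expected by the GenBank files.
!grep ">" S_pombe.fa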
# The mapping to obtain the sorted BAM file (uses BWA behing the scene)
!time sequana_mapping \
--file1 M14-19_J29_01_TAAGGCGA-TATCCTCT_L002_R1_001.fastq.gz \
--file2 M14-19_J29_01_TAAGGCGA-TATCCTCT_L002_R2_001.fastq.gz \
--reference S_pombe.fa --thread 4
# Build the BED file (unfiltered)
! time samtools depth -d 30000 S_pombe.fa.sorted.bam -aa > S_pombe.bed
%%time
from sequana import GenomeCov
# If chromosome length is >5Mb, we split the data. Here we now it is 5.5Mb, so let us
#slightly increase the chunksize.
b = GenomeCov("S_pombe.bed", "S_pombe.gbk", chunksize=6000000, low_threshold=-4, high_threshold=4)
b.compute_gc_content("S_pombe.fa")
chrom = b.chr_list[0]
chrom.run(20001, circular=True)
chrom.plot_coverage()
_ = ylim([0, 400])
chrom.plot_rois(0,1000000)
chrom.plot_gc_vs_coverage(bins=[80, 60], Nlevels=6)
!sequana_coverage --input S_pombe.filtered.bed --genbank S_pombe.gbk --reference S_pombe.fa
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code:
def countBits(n):
    count = 0
    while n:
        count += 1
        n >>= 1
    return count

n = 32
print("Minimum value of K is =", countBits(n))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k labelled examples and the test set about 19,000. Given these sizes, it should be possible to train models quickly on any machine.
Step4: Extract the dataset from the compressed .tar.gz file.
Step5: Problem 1
Step7: Now let's load the data in a more manageable format. Since, depending on your computer setup, you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk, and curate them independently. Later we'll merge them into a single dataset of manageable size.
Step8: Problem 2
Step9: Problem 3
Step10: Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match.
Step11: Problem 4
Step12: Problem 5
Step13: Problem 6
|
<ASSISTANT_TASK:>
Python Code:
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import matplotlib.pyplot as plt
import numpy as np
import os
import sys
import tarfile
from IPython.display import display, Image
from scipy import ndimage
from sklearn.linear_model import SGDClassifier
from six.moves.urllib.request import urlretrieve
from six.moves import cPickle as pickle
from sklearn.metrics import accuracy_score
# Config the matlotlib backend as plotting inline in IPython
%matplotlib inline
url = 'http://commondatastorage.googleapis.com/books1000/'
last_percent_reported = None
def download_progress_hook(count, blockSize, totalSize):
    """A hook to report the progress of a download. This is mostly intended for users with
    slow internet connections. Reports every 1% change in download progress."""
global last_percent_reported
percent = int(count * blockSize * 100 / totalSize)
if last_percent_reported != percent:
if percent % 5 == 0:
sys.stdout.write("%s%%" % percent)
sys.stdout.flush()
else:
sys.stdout.write(".")
sys.stdout.flush()
last_percent_reported = percent
def maybe_download(filename, expected_bytes, force=False):
    """Download a file if not present, and make sure it's the right size."""
if force or not os.path.exists(filename):
print('Attempting to download:', filename)
filename, _ = urlretrieve(url + filename, filename, reporthook=download_progress_hook)
print('\nDownload Complete!')
statinfo = os.stat(filename)
if statinfo.st_size == expected_bytes:
print('Found and verified', filename)
else:
raise Exception(
'Failed to verify ' + filename + '. Can you get to it with a browser?')
return filename
train_filename = maybe_download('notMNIST_large.tar.gz', 247336696)
test_filename = maybe_download('notMNIST_small.tar.gz', 8458043)
num_classes = 10
np.random.seed(133)
def maybe_extract(filename, force=False):
root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz
if os.path.isdir(root) and not force:
# You may override by setting force=True.
print('%s already present - Skipping extraction of %s.' % (root, filename))
else:
print('Extracting data for %s. This may take a while. Please wait.' % root)
tar = tarfile.open(filename)
sys.stdout.flush()
tar.extractall()
tar.close()
data_folders = [
os.path.join(root, d) for d in sorted(os.listdir(root))
if os.path.isdir(os.path.join(root, d))]
if len(data_folders) != num_classes:
raise Exception(
'Expected %d folders, one per class. Found %d instead.' % (
num_classes, len(data_folders)))
print(data_folders)
return data_folders
train_folders = maybe_extract(train_filename)
test_folders = maybe_extract(test_filename)
for letter in ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J']:
    # use `letter` instead of shadowing the built-in `str`
    root = 'notMNIST_small'
    path = os.listdir('{}/{}'.format(root, letter))[0]
    display(Image('{}/{}/{}'.format(root, letter, path)))
image_size = 28 # Pixel width and height.
pixel_depth = 255.0 # Number of levels per pixel.
def load_letter(folder, min_num_images):
    """Load the data for a single letter label."""
image_files = os.listdir(folder)
dataset = np.ndarray(shape=(len(image_files), image_size, image_size),
dtype=np.float32)
print(folder)
num_images = 0
for image in image_files:
image_file = os.path.join(folder, image)
try:
image_data = (ndimage.imread(image_file).astype(float) -
pixel_depth / 2) / pixel_depth
if image_data.shape != (image_size, image_size):
raise Exception('Unexpected image shape: %s' % str(image_data.shape))
dataset[num_images, :, :] = image_data
num_images = num_images + 1
except IOError as e:
print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.')
dataset = dataset[0:num_images, :, :]
if num_images < min_num_images:
raise Exception('Many fewer images than expected: %d < %d' %
(num_images, min_num_images))
print('Full dataset tensor:', dataset.shape)
print('Mean:', np.mean(dataset))
print('Standard deviation:', np.std(dataset))
return dataset
def maybe_pickle(data_folders, min_num_images_per_class, force=False):
dataset_names = []
for folder in data_folders:
set_filename = folder + '.pickle'
dataset_names.append(set_filename)
if os.path.exists(set_filename) and not force:
# You may override by setting force=True.
print('%s already present - Skipping pickling.' % set_filename)
else:
print('Pickling %s.' % set_filename)
dataset = load_letter(folder, min_num_images_per_class)
try:
with open(set_filename, 'wb') as f:
pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', set_filename, ':', e)
return dataset_names
train_datasets = maybe_pickle(train_folders, 45000)
test_datasets = maybe_pickle(test_folders, 1800)
with open(test_datasets[0], 'rb') as f:
a = pickle.load(f)
plt.imshow(a[0])
def make_arrays(nb_rows, img_size):
if nb_rows:
dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32)
labels = np.ndarray(nb_rows, dtype=np.int32)
else:
dataset, labels = None, None
return dataset, labels
def merge_datasets(pickle_files, train_size, valid_size=0):
num_classes = len(pickle_files)
valid_dataset, valid_labels = make_arrays(valid_size, image_size)
train_dataset, train_labels = make_arrays(train_size, image_size)
vsize_per_class = valid_size // num_classes
tsize_per_class = train_size // num_classes
start_v, start_t = 0, 0
end_v, end_t = vsize_per_class, tsize_per_class
end_l = vsize_per_class+tsize_per_class
for label, pickle_file in enumerate(pickle_files):
try:
with open(pickle_file, 'rb') as f:
letter_set = pickle.load(f)
# let's shuffle the letters to have random validation and training set
np.random.shuffle(letter_set)
if valid_dataset is not None:
valid_letter = letter_set[:vsize_per_class, :, :]
valid_dataset[start_v:end_v, :, :] = valid_letter
valid_labels[start_v:end_v] = label
start_v += vsize_per_class
end_v += vsize_per_class
train_letter = letter_set[vsize_per_class:end_l, :, :]
train_dataset[start_t:end_t, :, :] = train_letter
train_labels[start_t:end_t] = label
start_t += tsize_per_class
end_t += tsize_per_class
except Exception as e:
print('Unable to process data from', pickle_file, ':', e)
raise
return valid_dataset, valid_labels, train_dataset, train_labels
train_size = 200000
valid_size = 10000
test_size = 10000
valid_dataset, valid_labels, train_dataset, train_labels = merge_datasets(
train_datasets, train_size, valid_size)
_, _, test_dataset, test_labels = merge_datasets(test_datasets, test_size)
print('Training:', train_dataset.shape, train_labels.shape)
print('Validation:', valid_dataset.shape, valid_labels.shape)
print('Testing:', test_dataset.shape, test_labels.shape)
def randomize(dataset, labels):
permutation = np.random.permutation(labels.shape[0])
shuffled_dataset = dataset[permutation,:,:]
shuffled_labels = labels[permutation]
return shuffled_dataset, shuffled_labels
train_dataset, train_labels = randomize(train_dataset, train_labels)
test_dataset, test_labels = randomize(test_dataset, test_labels)
valid_dataset, valid_labels = randomize(valid_dataset, valid_labels)
pickle_file = 'notMNIST.pickle'
try:
f = open(pickle_file, 'wb')
save = {
'train_dataset': train_dataset,
'train_labels': train_labels,
'valid_dataset': valid_dataset,
'valid_labels': valid_labels,
'test_dataset': test_dataset,
'test_labels': test_labels,
}
pickle.dump(save, f, pickle.HIGHEST_PROTOCOL)
f.close()
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
statinfo = os.stat(pickle_file)
print('Compressed pickle size:', statinfo.st_size)
with open(pickle_file, 'rb') as f:
data = pickle.load(f)
train_dataset = data['train_dataset']
valid_dataset = data['valid_dataset']
test_dataset = data['test_dataset']
train_labels = data['train_labels']
valid_labels = data['valid_labels']
test_labels = data['test_labels']
train_0 = train_dataset[0]
train_1 = train_dataset[1]
diff = train_1 - train_0
plt.imshow(train_0)
plt.imshow(train_1)
plt.imshow(diff)
np.mean(np.abs(diff))
np.array_equal(train_0, train_1)
import hashlib
class MyImage:
def __init__(self, image):
self.image = image
def __eq__(self, other):
return np.array_equal(self.image, other.image)
def __hash__(self):
return hash(hashlib.sha1(self.image).hexdigest())
train_myimages = set(list(map(lambda x: MyImage(x), train_dataset)))
valid_myimages = set(list(map(lambda x: MyImage(x), valid_dataset)))
valid_myimages_no_overlap = valid_myimages - train_myimages
print(len(valid_myimages))
print(len(valid_myimages_no_overlap))
valid_dataset_no_overlap = np.array(list(map(lambda x: x.image, valid_myimages_no_overlap)))
valid_dataset_no_overlap.shape
train_labels
def reshape_dataset(dataset):
shape = dataset.shape
return dataset.reshape(shape[0], shape[1] * shape[2])
reshape_dataset(valid_dataset).shape
model = SGDClassifier(loss='log', alpha=0.01, penalty='l2')
model.fit(reshape_dataset(train_dataset), train_labels)
pred = model.predict(reshape_dataset(valid_dataset))
accuracy_score(pred, valid_labels)
pred = model.predict(reshape_dataset(test_dataset))
accuracy_score(pred, test_labels)
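# An added sketch: inspect where the classifier confuses letters on the test set
# (rows are true labels A-J, columns are predicted labels).
from sklearn.metrics import confusion_matrix
print(confusion_matrix(test_labels, pred))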
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The use of watermark is optional. You can install this IPython extension via "pip install watermark". For more information, please see https://github.com/rasbt/watermark.
Step2: Obtaining the IMDb movie review dataset
Step3: Shuffling the DataFrame
Step4: Optional
Step5: <hr>
Step6: Now let us print the contents of the vocabulary to get a better understanding of the underlying concepts
Step7: As we can see from executing the preceding command, the vocabulary is stored in a Python dictionary that maps the unique words to integer indices. Next, let us print the feature vectors that we just created
Step8: <br>
Step9: When we are analyzing text data, we often encounter words that occur across multiple documents from both classes. Those frequently occurring words typically don't contain useful or discriminatory information. In this subsection, we will learn about a useful technique called term frequency-inverse document frequency (tf-idf) that can be used to downweight those frequently occurring words in the feature vectors. The tf-idf can be defined as the product of the term frequency and the inverse document frequency: tf-idf(t, d) = tf(t, d) × idf(t, d).
Step10: As we saw in the previous subsection, the word 'is' had the largest term frequency in the 3rd document, being the most frequently occurring word. However, after transforming the same feature vector into tf-idfs, we see that the word 'is' is now associated with a relatively small tf-idf in the 3rd document, since it also occurs in the other two documents and is thus unlikely to contain any useful, discriminatory information.
Step11: If we repeated these calculations for all terms in the 3rd document, we'd obtain the following tf-idf vectors
Step12: <br>
Step13: <br>
Step14: <br>
Step15: Note
Step16: <hr>
Step17: By executing the code above, we created a simple data set of random integers that shall represent our class labels. Next, we fed the indices of 5 cross-validation folds (cv3_idx) to the cross_val_score scorer, which returned 5 accuracy scores -- these are the 5 accuracy values for the 5 test folds.
Step18: As we can see, the scores for the 5 folds are exactly the same as the ones from cross_val_score earlier.
Step19: As we can see, the result above is consistent with the average score computed by cross_val_score.
Step20: End comment.
|
<ASSISTANT_TASK:>
Python Code:
%load_ext watermark
%watermark -a 'Sebastian Raschka' -u -d -v -p numpy,pandas,matplotlib,sklearn,nltk
# Added version check for recent scikit-learn 0.18 checks
from distutils.version import LooseVersion as Version
from sklearn import __version__ as sklearn_version
import pyprind
import pandas as pd
import os
# change the `basepath` to the directory of the
# unzipped movie dataset
#basepath = '/Users/Sebastian/Desktop/aclImdb/'
basepath = './aclImdb'
labels = {'pos': 1, 'neg': 0}
pbar = pyprind.ProgBar(50000)
df = pd.DataFrame()
for s in ('test', 'train'):
for l in ('pos', 'neg'):
path = os.path.join(basepath, s, l)
for file in os.listdir(path):
with open(os.path.join(path, file), 'r', encoding='utf-8') as infile:
txt = infile.read()
df = df.append([[txt, labels[l]]], ignore_index=True)
pbar.update()
df.columns = ['review', 'sentiment']
import numpy as np
np.random.seed(0)
df = df.reindex(np.random.permutation(df.index))
df.to_csv('./movie_data.csv', index=False)
import pandas as pd
df = pd.read_csv('./movie_data.csv')
df.head(3)
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
count = CountVectorizer()
docs = np.array([
'The sun is shining',
'The weather is sweet',
'The sun is shining, the weather is sweet, and one and one is two'])
bag = count.fit_transform(docs)
print(count.vocabulary_)
print(bag.toarray())
np.set_printoptions(precision=2)
from sklearn.feature_extraction.text import TfidfTransformer
tfidf = TfidfTransformer(use_idf=True, norm='l2', smooth_idf=True)
print(tfidf.fit_transform(count.fit_transform(docs)).toarray())
tf_is = 3
n_docs = 3
idf_is = np.log((n_docs+1) / (3+1))
tfidf_is = tf_is * (idf_is + 1)
print('tf-idf of term "is" = %.2f' % tfidf_is)
tfidf = TfidfTransformer(use_idf=True, norm=None, smooth_idf=True)
raw_tfidf = tfidf.fit_transform(count.fit_transform(docs)).toarray()[-1]
raw_tfidf
l2_tfidf = raw_tfidf / np.sqrt(np.sum(raw_tfidf**2))
l2_tfidf
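# Added sanity check (sketch): the manually L2-normalized vector should match
# what TfidfTransformer computes with norm='l2' for the 3rd document.
tfidf_l2 = TfidfTransformer(use_idf=True, norm='l2', smooth_idf=True)
print(np.allclose(l2_tfidf, tfidf_l2.fit_transform(count.fit_transform(docs)).toarray()[-1]))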
df.loc[0, 'review'][-50:]
import re
def preprocessor(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text)
text = re.sub('[\W]+', ' ', text.lower()) +\
' '.join(emoticons).replace('-', '')
return text
preprocessor(df.loc[0, 'review'][-50:])
preprocessor("</a>This :) is :( a test :-)!")
df['review'] = df['review'].apply(preprocessor)
from nltk.stem.porter import PorterStemmer
porter = PorterStemmer()
def tokenizer(text):
return text.split()
def tokenizer_porter(text):
return [porter.stem(word) for word in text.split()]
tokenizer('runners like running and thus they run')
tokenizer_porter('runners like running and thus they run')
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
stop = stopwords.words('english')
[w for w in tokenizer_porter('a runner likes running and runs a lot')[-10:]
if w not in stop]
X_train = df.loc[:25000, 'review'].values
y_train = df.loc[:25000, 'sentiment'].values
X_test = df.loc[25000:, 'review'].values
y_test = df.loc[25000:, 'sentiment'].values
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.feature_extraction.text import TfidfVectorizer
if Version(sklearn_version) < '0.18':
from sklearn.grid_search import GridSearchCV
else:
from sklearn.model_selection import GridSearchCV
tfidf = TfidfVectorizer(strip_accents=None,
lowercase=False,
preprocessor=None)
param_grid = [{'vect__ngram_range': [(1, 1)],
'vect__stop_words': [stop, None],
'vect__tokenizer': [tokenizer, tokenizer_porter],
'clf__penalty': ['l1', 'l2'],
'clf__C': [1.0, 10.0, 100.0]},
{'vect__ngram_range': [(1, 1)],
'vect__stop_words': [stop, None],
'vect__tokenizer': [tokenizer, tokenizer_porter],
'vect__use_idf':[False],
'vect__norm':[None],
'clf__penalty': ['l1', 'l2'],
'clf__C': [1.0, 10.0, 100.0]},
]
lr_tfidf = Pipeline([('vect', tfidf),
('clf', LogisticRegression(random_state=0))])
gs_lr_tfidf = GridSearchCV(lr_tfidf, param_grid,
scoring='accuracy',
cv=5,
verbose=1,
n_jobs=-1)
gs_lr_tfidf.fit(X_train, y_train)
print('Best parameter set: %s ' % gs_lr_tfidf.best_params_)
print('CV Accuracy: %.3f' % gs_lr_tfidf.best_score_)
clf = gs_lr_tfidf.best_estimator_
print('Test Accuracy: %.3f' % clf.score(X_test, y_test))
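# An added sketch: because the vectorizer is part of the fitted pipeline,
# raw strings can be classified directly (1 = positive, 0 = negative).
print(clf.predict(['I loved this movie', 'this movie was a total waste of time']))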
from sklearn.linear_model import LogisticRegression
import numpy as np
if Version(sklearn_version) < '0.18':
from sklearn.cross_validation import StratifiedKFold
from sklearn.cross_validation import cross_val_score
else:
from sklearn.model_selection import StratifiedKFold
from sklearn.model_selection import cross_val_score
np.random.seed(0)
np.set_printoptions(precision=6)
y = [np.random.randint(3) for i in range(25)]
X = (y + np.random.randn(25)).reshape(-1, 1)
if Version(sklearn_version) < '0.18':
cv5_idx = list(StratifiedKFold(y, n_folds=5, shuffle=False, random_state=0))
else:
cv5_idx = list(StratifiedKFold(n_splits=5, shuffle=False, random_state=0).split(X, y))
cross_val_score(LogisticRegression(random_state=123), X, y, cv=cv5_idx)
if Version(sklearn_version) < '0.18':
from sklearn.grid_search import GridSearchCV
else:
from sklearn.model_selection import GridSearchCV
gs = GridSearchCV(LogisticRegression(), {}, cv=cv5_idx, verbose=3).fit(X, y)
gs.best_score_
cross_val_score(LogisticRegression(), X, y, cv=cv5_idx).mean()
import numpy as np
import re
from nltk.corpus import stopwords
def tokenizer(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text.lower())
text = re.sub('[\W]+', ' ', text.lower()) +\
' '.join(emoticons).replace('-', '')
tokenized = [w for w in text.split() if w not in stop]
return tokenized
def stream_docs(path):
with open(path, 'r', encoding='utf-8') as csv:
next(csv) # skip header
for line in csv:
text, label = line[:-3], int(line[-2])
yield text, label
next(stream_docs(path='./movie_data.csv'))
def get_minibatch(doc_stream, size):
docs, y = [], []
try:
for _ in range(size):
text, label = next(doc_stream)
docs.append(text)
y.append(label)
except StopIteration:
return None, None
return docs, y
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier
vect = HashingVectorizer(decode_error='ignore',
n_features=2**21,
preprocessor=None,
tokenizer=tokenizer)
clf = SGDClassifier(loss='log', random_state=1, n_iter=1)
doc_stream = stream_docs(path='./movie_data.csv')
import pyprind
pbar = pyprind.ProgBar(45)
classes = np.array([0, 1])
for _ in range(45):
X_train, y_train = get_minibatch(doc_stream, size=1000)
if not X_train:
break
X_train = vect.transform(X_train)
clf.partial_fit(X_train, y_train, classes=classes)
pbar.update()
X_test, y_test = get_minibatch(doc_stream, size=5000)
X_test = vect.transform(X_test)
print('Accuracy: %.3f' % clf.score(X_test, y_test))
clf = clf.partial_fit(X_test, y_test)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
series = pd.Series([np.array([1,2,3,4]), np.array([5,6,7,8]), np.array([9,10,11,12])], index=['file1', 'file2', 'file3'])
def g(s):
return pd.DataFrame.from_records(s.values,index=s.index)
df = g(series.copy())
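# An equivalent one-liner sketch (assumes all arrays in the Series share the same length):
df_alt = pd.DataFrame(np.vstack(series.values), index=series.index)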
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's load the processed data and feature scale Age and Fare
Step2: Select the features from data, and convert to numpy arrays
Step3: We want to perform algorithm tuning with CV now; to avoid information leakage, let's create a hold-out set
Step4: Below is a simple example of algorithm tuning with the rbf kernel of SVM.
Step5: It seems that gamma is good over a broad range; let's just take the middle of the flat part.
Step6: Of course, in real life you should perform parameter grid search in both C and gamma. Let's try out our new GridSearchCV tools learned in the morning.
Step7: Do we necessarily perform better than the simpler model?
|
<ASSISTANT_TASK:>
Python Code:
#import all the needed package
import numpy as np
import scipy as sp
import re
import pandas as pd
import sklearn
from sklearn.cross_validation import train_test_split,cross_val_score
from sklearn import metrics
import matplotlib
from matplotlib import pyplot as plt
%matplotlib inline
from sklearn.svm import SVC
data = pd.read_csv('data/train_processed.csv', index_col=0)
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().fit(data[['Age', 'Fare']])
data[['Age', 'Fare']] = scaler.transform(data[['Age', 'Fare']])
data["PSA"] = data["Pclass"]*data["Female"]*data["Age"]
data["SP"] = data["SibSp"]+data["Parch"]
data.head()
feature_cols=['PSA','SP','Pclass','Age','SibSp','Parch','Fare','Female','Title_Dr','Title_Lady','Title_Master','Title_Miss','Title_Mr','Title_Mrs','Title_Rev','Title_Sir']
X=data[feature_cols].values
y=data['Survived'].values
#create a holdout set
X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.2,random_state=42)
#tune the gamma parameter with our training set
scores_mean=[]
scores_std=[]
model=SVC()
model.C=1
gammas=np.logspace(-3,1,50)
for gamma in gammas:
model.gamma=gamma
scores=cross_val_score(model,X_train,y_train,cv=10,scoring='accuracy')
scores_mean.append(np.mean(scores))
scores_std.append(np.std(scores))
plt.semilogx(gammas,scores_mean,'.')
plt.show()
model.gamma=0.1
model.fit(X_train,y_train)
y_predta=model.predict(X_train)
y_pred=model.predict(X_test)
train_score=metrics.accuracy_score(y_train,y_predta)
test_score=metrics.accuracy_score(y_test,y_pred)
scores=cross_val_score(model,X,y,cv=10,scoring='accuracy')
cvscore=np.mean(scores)
cvscore_std=np.std(scores)
print(train_score,test_score,cvscore,cvscore_std)
from sklearn.cross_validation import StratifiedShuffleSplit
from sklearn.grid_search import GridSearchCV
C_range = np.logspace(-3, 3, 10)
gamma_range = np.logspace(-3, 3, 10)
param_grid = dict(gamma=gamma_range, C=C_range)
cv = StratifiedShuffleSplit(y_train, n_iter=5, test_size=0.3, random_state=42)
grid = GridSearchCV(SVC(kernel='rbf'), scoring="accuracy", param_grid=param_grid, cv=cv)
grid.fit(X_train, y_train)
print("The best parameters are %s with a score of %0.4f" % (grid.best_params_, grid.best_score_))
model.gamma=grid.best_params_['gamma']
model.C=grid.best_params_['C']
model.fit(X_train,y_train)
y_predta=model.predict(X_train)
y_pred=model.predict(X_test)
train_score=metrics.accuracy_score(y_train,y_predta)
test_score=metrics.accuracy_score(y_test,y_pred)
scores=cross_val_score(model,X,y,cv=10,scoring='accuracy')
cvscore=np.mean(scores)
cvscore_std=np.std(scores)
print(train_score,test_score,cvscore,cvscore_std)
model.fit(X,y)
holdout_data = pd.read_csv('data/test_processed.csv')
# rescale age and fare as we did for training data. This is critical
# Note that we can (and should) use the same scaler object we fit above to the training data
holdout_data[['Age', 'Fare']] = scaler.transform(holdout_data[['Age', 'Fare']])
holdout_data["PSA"] = holdout_data["Pclass"]*holdout_data["Female"]*holdout_data["Age"]
holdout_data["SP"] = holdout_data["SibSp"]+holdout_data["Parch"]
#use our new features.
#feature_cols=['Pclass','Age','SibSp','Parch','Fare','Female','Title_Dr','Title_Lady','Title_Master','Title_Miss','Title_Mr','Title_Mrs','Title_Rev','Title_Sir']
X_holdout=holdout_data[feature_cols]
X_holdout.head()
y_holdout=model.predict(X_holdout)
samplesubmit = pd.read_csv("data/titanic_submit_example.csv")
samplesubmit["Survived"]=np.int32(y_holdout)
samplesubmit.to_csv("data/titanic_submit_fancytitle.csv",index=False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Start by running Monte Carlo code for a single sphere
Step2: Sample sphere boundary sizes
Step3: Run Monte Carlo for each of the sphere boundary sizes
Step4: Sample distribution of sphere boundary radii
Step5: Calculate reflectance of bulk polydisperse film
Step6: Plot results
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import time
import structcol as sc
import structcol.refractive_index as ri
from structcol import montecarlo as mc
from structcol import detector as det
from structcol import phase_func_sphere as pfs
import matplotlib.pyplot as plt
import seaborn as sns
from scipy.special import factorial  # factorial moved from scipy.misc in newer SciPy releases
import os
# Properties of the source
wavelengths = sc.Quantity(np.arange(400., 801.,10),'nm') # wavelengths at which to calculate reflectance
# Geometric properties of the sample
num_diams = 3 # number of diams from which to sample,
# higher number takes longer but gives a more precise calculation
sphere_boundary_diam_mean = sc.Quantity(10,'um') # mean diameter of the microspheres
pdi = 0.2 # poldispersity index
particle_radius = sc.Quantity(160,'nm') # radii of the two species of particles
volume_fraction_bulk = sc.Quantity(0.63,'') # volume fraction of the spheres in the bulk film
volume_fraction_particles = sc.Quantity(0.55, '') # volume fraction of the particles in the sphere boundary
bulk_thickness = sc.Quantity('50 um') # thickness of the bulk film
boundary = 'sphere' # geometry of sample
boundary_bulk = 'film' # geometry of the bulk sample
# Refractive indices
n_particle = ri.n('vacuum', wavelengths) # refractive index of particle
n_matrix = ri.n('polystyrene', wavelengths) + 2e-5*1j # refractive index of matrix
n_matrix_bulk = ri.n('vacuum', wavelengths) # refractive index of the bulk matrix
n_medium = ri.n('vacuum', wavelengths) # refractive index of medium outside the bulk sample.
# Monte Carlo parameters
ntrajectories = 500 # number of trajectories to run with a spherical boundary
nevents = 300 # number of scattering events for each trajectory in a spherical boundary
ntrajectories_bulk = 1000 # number of trajectories to run in the bulk film
nevents_bulk = 300 # number of events to run in the bulk film
# Plot settings
sns.set_style('white') # sets white plotting background
# calculate diameter list to sample from
sphere_boundary_diameters = pfs.calc_diam_list(num_diams, sphere_boundary_diam_mean, pdi, equal_spacing = False)
reflectance_sphere = np.zeros(wavelengths.size)
p_bulk = np.zeros((sphere_boundary_diameters.size, wavelengths.size, 200))
mu_scat_bulk = sc.Quantity(np.zeros((sphere_boundary_diameters.size, wavelengths.size)),'1/um')
mu_abs_bulk = sc.Quantity(np.zeros((sphere_boundary_diameters.size, wavelengths.size)),'1/um')
for j in range(sphere_boundary_diameters.size):
# print radius to keep track of where we are in calculation
print('diameter: ' + str(sphere_boundary_diameters[j]))
for i in range(wavelengths.size):
# caculate the effective index of the sample
n_sample = ri.n_eff(n_particle[i], n_matrix[i], volume_fraction_particles)
# Calculate the phase function and scattering and absorption coefficients from the single scattering model
# (this absorption coefficient is of the scatterer, not of an absorber added to the system)
p, mu_scat, mu_abs = mc.calc_scat(particle_radius, n_particle[i], n_sample,
volume_fraction_particles, wavelengths[i])
# Initialize the trajectories
r0, k0, W0 = mc.initialize(nevents, ntrajectories, n_matrix_bulk[i], n_sample,
boundary, sample_diameter = sphere_boundary_diameters[j])
r0 = sc.Quantity(r0, 'um')
k0 = sc.Quantity(k0, '')
W0 = sc.Quantity(W0, '')
# Create trajectories object
trajectories = mc.Trajectory(r0, k0, W0)
# Generate a matrix of all the randomly sampled angles first
sintheta, costheta, sinphi, cosphi, _, _ = mc.sample_angles(nevents, ntrajectories, p)
# Create step size distribution
step = mc.sample_step(nevents, ntrajectories, mu_scat)
# Run photons
trajectories.absorb(mu_abs, step)
trajectories.scatter(sintheta, costheta, sinphi, cosphi)
trajectories.move(step)
# Calculate reflection and transmition
(refl_indices,
trans_indices,
_, _, _,
refl_per_traj, trans_per_traj,
_,_,_,_,
reflectance_sphere[i],
_,_, norm_refl, norm_trans) = det.calc_refl_trans(trajectories, sphere_boundary_diameters[j],
n_matrix_bulk[i], n_sample, boundary,
run_fresnel_traj = False,
return_extra = True)
### Calculate phase function and lscat ###
# use output of calc_refl_trans to calculate phase function, mu_scat, and mu_abs for the bulk
p_bulk[j,i,:], mu_scat_bulk[j,i], mu_abs_bulk[j,i] = pfs.calc_scat_bulk(refl_per_traj, trans_per_traj,
trans_indices,
norm_refl, norm_trans,
volume_fraction_bulk,
sphere_boundary_diameters[j],
n_matrix_bulk[i],
wavelengths[i],
plot=False, phi_dependent=False)
# sample
sphere_diams_sampled = pfs.sample_diams(pdi, sphere_boundary_diameters,
sphere_boundary_diam_mean,
ntrajectories_bulk,
nevents_bulk)
# plot
sns.distplot(np.ndarray.flatten(sphere_diams_sampled), kde = False)
plt.ylabel('number sampled')
plt.xlabel('diameter (' + str(sphere_boundary_diameters.units) + ')')
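# A quick added check: the relative spread of the sampled diameters should be
# close to the requested PDI (assuming the sampling is roughly Gaussian).
diams_flat = np.ndarray.flatten(sphere_diams_sampled)
print('sampled std/mean:', diams_flat.std() / diams_flat.mean())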
reflectance_bulk_poly = np.zeros(wavelengths.size)
for i in range(wavelengths.size):
# print the wavelength keep track of where we are in calculation
print('wavelength: ' + str(wavelengths[i]))
# Initialize the trajectories
r0, k0, W0 = mc.initialize(nevents_bulk, ntrajectories_bulk, n_medium[i], n_matrix_bulk[i],
boundary_bulk)
r0 = sc.Quantity(r0, 'um')
W0 = sc.Quantity(W0, '')
k0 = sc.Quantity(k0, '')
# Sample angles and calculate step size based on sampled radii
sintheta, costheta, sinphi, cosphi, step, _, _ = pfs.sample_angles_step_poly(nevents_bulk, ntrajectories_bulk,
p_bulk[:,i,:],
sphere_diams_sampled,
mu_scat_bulk[:,i],
param_list = sphere_boundary_diameters)
# Create trajectories object
trajectories = mc.Trajectory(r0, k0, W0)
# Run photons
trajectories.absorb(mu_abs_bulk[0,i], step) # Note: polydisperse absorption does not currently work in the bulk
# so we arbitrarily use index 0, assuming that all scattering events
# have the same amount of absorption
trajectories.scatter(sintheta, costheta, sinphi, cosphi)
trajectories.move(step)
# calculate reflectance
reflectance_bulk_poly[i], transmittance = det.calc_refl_trans(trajectories, bulk_thickness,
n_medium[i], n_matrix_bulk[i], boundary_bulk)
plt.figure()
plt.plot(wavelengths, reflectance_bulk_poly, linewidth = 3)
plt.ylim([0,1])
plt.xlim([400,800])
plt.xlabel('Wavelength (nm)')
plt.ylabel('Reflectance')
plt.title('Bulk Reflectance')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: SVD of a random matrix
Step2: Dealing with missing values
Step3: Eigendecomposition of a square symmetric matrix
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import unicode_literals, division, print_function, absolute_import
import numpy as np
np.random.seed(28)
import matplotlib.pyplot as plt
import tensorflow as tf
tf.set_random_seed(28)
import keras
from simec import SimilarityEncoder
%matplotlib inline
%load_ext autoreload
%autoreload 2
def msqe(A, B):
# compute the mean squared error between two matrices A and B
# obviously, A and B should be the same shape...
return np.mean((A - B) ** 2)
# generate a random matrix
n_input = 500
n_output = 700
A = np.random.rand(n_input, n_output)
# compute its SVD
U, s, Vh = np.linalg.svd(A, full_matrices=True)
U.shape, s.shape, Vh.shape
# make the eigenvalues of A a bit more extreme
S = np.zeros((n_input, n_output))
s[0] = s[1]+1
s[:10] *= 50.
s[10:20] *= 20.
s[20:100] *= 10.
S[:n_input, :n_input] = np.diag(s)
# recompute A and scale it to be in a somewhat reasonable range
A = np.dot(U, np.dot(S, Vh))
A = A/np.max(np.abs(A))
# recompute SVD again
U, s, Vh = np.linalg.svd(A, full_matrices=True)
S = np.zeros((n_input, n_output))
S[:n_input, :n_input] = np.diag(s)
# check that eigenvectors are orthogonal
np.dot(Vh[:100,:], Vh[:100,:].T)
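# Added sanity check (sketch): orthonormal rows imply Vh Vh^T is the identity.
print(np.allclose(np.dot(Vh[:100,:], Vh[:100,:].T), np.eye(100)))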
# inspect eigenvalues
plt.plot(s)
# mean squared error of approximation decreases with more embedding dim
for e_dim in [2, 10, 25, 50, 75, 100, 250, 400, 500]:
print("mse with %3i e_dim: %.8f" % (e_dim, msqe(A, np.dot(U[:,:e_dim], np.dot(S[:e_dim,:e_dim], Vh[:e_dim,:])))))
# factorize the matrix with a simec
X = np.eye(n_input)
mses = []
e_dims = [2, 10, 25, 50, 75, 100, 250, 400, 500, 750, 1000]
l_rates = {400: 0.004, 500: 0.0034, 750: 0.0032, 1000: 0.003}
for e_dim in e_dims:
model = SimilarityEncoder(n_input, e_dim, n_output, orth_reg=0.001 if e_dim > 500 else 0., opt=keras.optimizers.Adamax(lr=0.005 if e_dim <= 250 else l_rates[e_dim]))
model.fit(X, A, epochs=50)
mse = msqe(A, model.predict(X))
mses.append(mse)
print("mse with %4i e_dim: %.8f" % (e_dim, mse))
for i, e_dim in enumerate(e_dims):
print("mse with %4i e_dim: %.8f" % (e_dim, mses[i]))
# factorize the matrix with a simec - transpose works just as well
# this time we use the embedding we learned before as the weights of the last layer
# to get the mapping function for the other side of the equation (which is of course kind of useless
# here since we don't actually map feature vectors but only identiy vectors)
X1 = np.eye(n_input)
X2 = np.eye(n_output)
mses1 = []
mses2 = []
mses3 = []
e_dims = [2, 10, 25, 50, 75, 100, 250, 400, 500, 750, 1000]
for e_dim in e_dims:
model = SimilarityEncoder(n_input, e_dim, n_output, opt=keras.optimizers.Adamax(lr=0.005 if e_dim <= 250 else l_rates[e_dim]))
model.fit(X1, A, epochs=50)
mse = msqe(A, model.predict(X1))
mses1.append(mse)
print("mse with %4i e_dim: %.8f" % (e_dim, mse))
Y1 = model.transform(X1)
model = SimilarityEncoder(n_output, e_dim, n_input, W_ll=Y1.T, wll_frozen=True, opt=keras.optimizers.Adamax(lr=0.005 if e_dim <= 250 else l_rates[e_dim]))
model.fit(X2, A.T, epochs=50)
mse = msqe(A.T, model.predict(X2))
mses2.append(mse)
print("mse with %4i e_dim: %.8f" % (e_dim, mse))
Y2 = model.transform(X2)
# the dot product of both embeddings should also approximate A
mse = msqe(A, np.dot(Y1, Y2.T))
mses3.append(mse)
for i, e_dim in enumerate(e_dims):
print("mse with %4i e_dim: %.8f / %.8f / %.8f" % (e_dim, mses1[i], mses2[i], mses3[i]))
X = np.eye(n_input)
missing_targets = [0., 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
e_dims = [2, 10, 50, 100, 250, 500]
for e_dim in e_dims:
np.random.seed(15)
mses = []
mses_svd = []
mse_svd = msqe(A, np.dot(U[:,:e_dim], np.dot(S[:e_dim,:e_dim], Vh[:e_dim,:])))
for m in missing_targets:
print(m)
A_noisy = A.copy()
A_noisy[np.random.rand(*A_noisy.shape)<=m] = -100
model = SimilarityEncoder(n_input, e_dim, n_output, mask_value=-100, l2_reg_emb=0.00001,
l2_reg_out=0. if m < 0.7 else 0.00001, opt=keras.optimizers.Adamax(lr=0.025 if e_dim < 50 else 0.01))
model.fit(X, A_noisy, epochs=60)
mse = msqe(A, model.predict(X))
mses.append(mse)
A_noisy[A_noisy == -100] = np.mean(A)
U_n, s_n, Vh_n = np.linalg.svd(A_noisy, full_matrices=True)
S_n = np.zeros((n_input, n_output))
S_n[:n_input, :n_input] = np.diag(s_n)
mses_svd.append(msqe(A, np.dot(U_n[:,:e_dim], np.dot(S_n[:e_dim,:e_dim], Vh_n[:e_dim,:]))))
print(mses)
plt.figure();
plt.plot([0, missing_targets[-1]], [mse_svd, mse_svd], '--', linewidth=0.5, label='SVD noise free');
plt.plot(missing_targets, mses_svd, '-o', markersize=3, label='SVD');
plt.plot(missing_targets, mses, '-o', markersize=3, label='SimEc SVD');
plt.legend(loc=0);
plt.title('Matrix factorization of A (%i embedding dim)' % e_dim);
plt.xticks(missing_targets, missing_targets);
plt.xlabel('Fraction of Missing Entries');
plt.ylabel('Mean Squared Error');
# get 2 square symmetric matrices as AA^T and A^TA
S1 = np.dot(A, A.T)
S2 = np.dot(A.T, A)
# make sure their range of values is still sort of reasonable (for >100 we should probably rescale)
np.max(S1), np.min(S1), np.max(S2), np.min(S2)
# the corresponding SVD eigenvalues and -vectors should work here as well
for e_dim in [2, 10, 25, 50, 75, 100, 250, 400, 500]:
print("mse with %3i e_dim: %11.8f" % (e_dim, msqe(S1, np.dot(U[:,:e_dim], np.dot(S[:e_dim,:e_dim]**2, U.T[:e_dim,:])))))
S_out = np.zeros((n_output, n_output))
S_out[:n_input, :n_input] = S[:n_input, :n_input]
for e_dim in [2, 10, 25, 50, 75, 100, 250, 500, 700]:
print("mse with %3i e_dim: %11.8f" % (e_dim, msqe(S2, np.dot(Vh.T[:,:e_dim], np.dot(S_out[:e_dim,:e_dim]**2, Vh[:e_dim,:])))))
# factorize the similarity matrix S1 with a simec
X = np.eye(n_input)
mses1 = []
mses2 = []
e_dims = [2, 10, 25, 50, 75, 100, 250, 400, 500, 750, 1000]
l_rates = [0.1, 0.1, 0.1, 0.1, 0.1, 0.05, 0.05, 0.01, 0.01, 0.01, 0.01]
for i, e_dim in enumerate(e_dims):
model = SimilarityEncoder(n_input, e_dim, S1.shape[1], s_ll_reg=1., S_ll=S1, opt=keras.optimizers.Adamax(lr=l_rates[i]))
model.fit(X, S1, epochs=100)
mse = msqe(S1, model.predict(X))
mses1.append(mse)
print("mse with %4i e_dim: %11.8f" % (e_dim, mse))
# scalar product of emedding should also approximate S1
Y = model.transform(X)
mse = msqe(S1, np.dot(Y, Y.T))
mses2.append(mse)
print("mse with %4i e_dim: %11.8f" % (e_dim, mse))
for i, e_dim in enumerate(e_dims):
print("mse with %4i e_dim: %11.8f (%11.8f)" % (e_dim, mses1[i], mses2[i]))
# factorize the similarity matrix S2 with a simec
X = np.eye(n_output)
mses1 = []
mses2 = []
e_dims = [2, 10, 25, 50, 75, 100, 250, 400, 500, 750, 1000]
l_rates = [0.1, 0.1, 0.1, 0.1, 0.1, 0.05, 0.05, 0.01, 0.01, 0.01, 0.01]
for i, e_dim in enumerate(e_dims):
model = SimilarityEncoder(n_output, e_dim, S2.shape[1], s_ll_reg=1., S_ll=S2, opt=keras.optimizers.Adamax(lr=l_rates[i]))
model.fit(X, S2, epochs=60)
mse = msqe(S2, model.predict(X))
mses1.append(mse)
print("mse with %4i e_dim: %11.8f" % (e_dim, mse))
# scalar product of emedding should also approximate S1
Y = model.transform(X)
mse = msqe(S2, np.dot(Y, Y.T))
mses2.append(mse)
print("mse with %4i e_dim: %11.8f" % (e_dim, mse))
for i, e_dim in enumerate(e_dims):
print("mse with %4i e_dim: %11.8f (%11.8f)" % (e_dim, mses1[i], mses2[i]))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: All bench info will be stored in this dictionary so it's easy to compare with other platforms.
Step2: What's the platform
Step3: Make sure all motors are turned off to avoid breaking anything
Step4: We find the synchronization loop for pos/speed/load and monkey patch them for monitoring.
Step5: Now, we define our monitor and plotting functions.
Step6: We also define this follow-trajectory function, which applies a sinusoid to one motor (chosen below) and plots how close its real position stays to the target one
Step7: Now choose which motor you want to use for the follow trajectory test. It should be able to move freely from -20 to +20 degrees.
Step8: Benchmark
Step9: Normal usage
Step10: Without primitives
Step11: Without all sensors
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function, division
from ipywidgets import interact
%pylab inline
results = {}
import platform
p = platform.platform()
print(p)
results['platform'] = p
import sys
v = sys.version
print(v)
results['python'] = v
import pypot
import poppy.creatures
results['pypot'] = pypot.__version__
print('Pypot version: {}'.format(results['pypot']))
results['poppy-creature'] = poppy.creatures.__version__
print('Poppy-creature version: {}'.format(results['poppy-creature']))
from poppy.creatures import installed_poppy_creatures
RobotCls = None
def robot_selector(robot):
global RobotCls
RobotCls = robot
interact(robot_selector, robot=installed_poppy_creatures);
robot = RobotCls()
results['robot'] = RobotCls
for m in robot.motors:
m.compliant = True
import time
from pypot.dynamixel.syncloop import MetaDxlController
from pypot.dynamixel.controller import PosSpeedLoadDxlController
meta_controllers = [c for c in robot._controllers if isinstance(c, MetaDxlController)]
controllers = [cc for c in meta_controllers for cc in c.controllers if isinstance(cc, PosSpeedLoadDxlController)]
for c in controllers:
c.stop()
for c in controllers:
    def wrapped_update(c=c):  # bind the current controller now: closures capture loop variables late
        if not hasattr(c, 't'):
            c.t = []
        c.t.append(time.time())
        c.update()
    c._update = wrapped_update
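# Why the c=c default above matters (illustrative, not part of the benchmark): closures bind
# names late, so without it every wrapped_update would log onto the last controller of the loop.
# fns = [lambda: i for i in range(3)];     [f() for f in fns]  # -> [2, 2, 2]
# fns = [lambda i=i: i for i in range(3)]; [f() for f in fns]  # -> [0, 1, 2]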
for c in controllers:
c.start()
import psutil
def monitor(controllers, duration):
for c in controllers:
c.stop()
c.t = []
c.start()
cpu = []
start = time.time()
while time.time() - start < duration:
time.sleep(1.0)
cpu.append(psutil.cpu_percent())
print('Avg CPU usage: {}%'.format(mean(cpu)))
return {c: array(c.t) for c in controllers}
def freq_plot(logs):
for c, t in logs.items():
dt = diff(t)
freq = 1.0 / dt
        print('Avg freq for controller {}: {}Hz STD={}Hz'.format(c.ids, freq.mean(), freq.std()))
hist(freq)
xlim(0, 100)
def follow_trajectory(motor, duration=5, freq=50):
t = linspace(0, duration, duration * freq)
a1, f1 = 10.0, 1.0
a2, f2 = 5.0, 0.5
traj = a1 * sin(2 * pi * f1 * t) + a2 * sin(2 * pi * f2 * t)
rec = []
motor.compliant = False
motor.moving_speed = 0
motor.goal_position = 0
time.sleep(1.)
for p in traj:
motor.goal_position = p
rec.append(motor.present_position)
time.sleep(1.0 / freq)
motor.compliant = True
plot(traj)
plot(rec)
motor = None
def motor_selector(m):
global motor
motor = getattr(robot, m)
interact(motor_selector, m=[m.name for m in robot.motors]);
duration = 30
d = monitor(controllers, duration)
freq_plot(d)
results['normal'] = d
follow_trajectory(motor)
for p in robot.primitives:
p.stop()
robot._primitive_manager.stop()
d = monitor(controllers, duration)
freq_plot(d)
results['without primitive'] = d
follow_trajectory(motor)
for s in robot.sensors:
s.close()
d = monitor(controllers, duration)
freq_plot(d)
results['without sensor'] = d
follow_trajectory(motor)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We are going to download some prices, just as an example. We'll work on Apple (AAPL), Alphabet (former Google, GOOGL), Microsoft (MSFT), McDonald's (MCD), Coca-Cola (KO) over a ten-year period.
Step2: Now, let's calculate an annualized return for each of the stocks over the entire period.
Step3: Then we could compute an annual volatility over the period, daily sampled.
Step4: Let's join these stats into a single dataframe.
Step5: And let's make a chart, plotting volatities on X-axis and returns on Y-axis.
Step6: A few considerations
Step7: In order to calculate volatility, we need a covariance matrix. Luckily enough, Pandas has a function to do exactly that!
Step8: Let's check that the roots of the diagonal of the matrix are the (annualized) standard deviations of the stocks.
Step9: The volatility of the portfolio is easily computed.
Step10: So, we can plot the risk-reward profile of the portfolio on the previous chart.
Step11: We can notice that the portfolio risk profile is similar to old economy stocks, but the historical return benefits from the higher returns of tech stocks. Diversification at work!
Step12: Let's compute first two moments of the simulated portfolios.
Step13: And let's plot those fancy random portfolios on the risk-reward chart.
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import pandas_datareader.data as web
import matplotlib.pyplot as plt
import numpy as np
# Defines the chart color scheme using Tableau's Tableau10
plt.style.use('https://gist.githubusercontent.com/mbonix/8478091db6a2e6836341c2bb3f55b9fc/raw/7155235ed03e235c38b66c160d402192ad4d94d9/tableau10.mplstyle')
%matplotlib inline
# List of stocks tickers
STOCKS = ['NASDAQ:AAPL', 'NASDAQ:GOOGL', 'NASDAQ:MSFT', 'NYSE:MCD', 'NYSE:KO']
# Analysis period
START = '12-30-2006'
END = '12-31-2016'
data = web.DataReader(STOCKS, 'google', pd.to_datetime(START), pd.to_datetime(END))
# Only Close prices are relevant for analysis
prices = data.loc['Close', :, :]
prices.tail(10)
# CAGR over the 10-year window: (P_end / P_start) ** (1/10) - 1
compound_returns = ((prices.iloc[-1, :]).to_frame().T / (prices.iloc[0, :])) ** 0.1 - 1
compound_returns.index = ['Compound Annual Return']
compound_returns
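# Quick sanity check (illustrative): compounding the annual rate back over the 10 years
# should recover the total price ratio for each stock:
# (1 + compound_returns) ** 10  # == prices.iloc[-1, :] / prices.iloc[0, :]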
annual_volatility = prices.fillna(method='pad').pct_change().std().apply(lambda x: x * 260 ** 0.5).to_frame().T
annual_volatility.index = ['Annual Volatility']
annual_volatility
stock_moments = pd.concat((compound_returns, annual_volatility))
stock_moments
f = plt.figure(figsize=(10, 8));
ax = f.add_subplot(111);
for n in range(len(stock_moments.columns)):
ax.plot(stock_moments.loc['Annual Volatility', stock_moments.columns[n]], stock_moments.loc['Compound Annual Return', stock_moments.columns[n]], ls='', marker='o', label=stock_moments.columns[n]);
ax.legend(loc='best');
ax.set_xlabel(stock_moments.index[1]);
ax.set_ylabel(stock_moments.index[0]);
ax.set_title('Stock risk reward profile');
WEIGHTS = [0.2, 0.2, 0.2, 0.2, 0.2]
port_moments = pd.DataFrame(index=stock_moments.index, columns=['Equally-weighted Portfolio'])
port_moments.loc['Compound Annual Return', 'Equally-weighted Portfolio'] = (stock_moments.loc['Compound Annual Return', :] * WEIGHTS).sum()
port_moments
cov_mat = prices.fillna(method='pad').pct_change().cov().apply(lambda x: x * 260)
cov_mat
pd.DataFrame(data=[cov_mat.iloc[n, n] ** 0.5 for n in range(len(cov_mat.columns))], index=cov_mat.columns, columns=['Annual Volatility']).T
port_moments.loc['Annual Volatility', 'Equally-weighted Portfolio'] = np.array(WEIGHTS).dot(cov_mat.values).dot(np.array(WEIGHTS).T) ** 0.5
port_moments
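# Sanity check (illustrative): for a one-stock "portfolio", sqrt(w' Sigma w) reduces to
# that stock's own annual volatility
w_single = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
print(w_single.dot(cov_mat.values).dot(w_single.T) ** 0.5)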
f = plt.figure(figsize=(10, 8));
ax = f.add_subplot(111);
for n in range(len(stock_moments.columns)):
ax.plot(stock_moments.loc['Annual Volatility', stock_moments.columns[n]], stock_moments.loc['Compound Annual Return', stock_moments.columns[n]], ls='', marker='o', label=stock_moments.columns[n]);
ax.plot(port_moments.loc['Annual Volatility','Equally-weighted Portfolio'], port_moments.loc['Compound Annual Return', 'Equally-weighted Portfolio'], ls='', marker='d', label='Equally-weighted Portfolio');
ax.legend(loc='best');
ax.set_xlabel(stock_moments.index[1]);
ax.set_ylabel(stock_moments.index[0]);
ax.set_title('Stock risk reward profile');
wgts = pd.DataFrame(np.array(np.random.uniform(0, 1, (5000, 5))), columns=prices.columns)
w = wgts.apply(lambda x: x / wgts.sum(axis=1))
w
sim_ports = pd.DataFrame(index=w.index, columns=port_moments.index)
sim_ports.loc[:, 'Compound Annual Return'] = (stock_moments.loc['Compound Annual Return', :] * w).sum(axis=1)
sim_ports.loc[:, 'Annual Volatility'] = np.diag(w.values.dot(cov_mat.values).dot(w.values.T)) ** 0.5
sim_ports
f = plt.figure(figsize=(10, 8));
ax = f.add_subplot(111);
ax.plot(sim_ports['Annual Volatility'], sim_ports['Compound Annual Return'], ls='', marker='.', color='#cccccc', label='Random-weighted Portfolios');
for n in range(len(stock_moments.columns)):
ax.plot(stock_moments.loc['Annual Volatility', stock_moments.columns[n]], stock_moments.loc['Compound Annual Return', stock_moments.columns[n]], ls='', marker='o', label=stock_moments.columns[n]);
ax.plot(port_moments.loc['Annual Volatility','Equally-weighted Portfolio'], port_moments.loc['Compound Annual Return', 'Equally-weighted Portfolio'], ls='', marker='d', label='Equally-weighted Portfolio');
ax.legend(loc='best');
ax.set_xlabel(stock_moments.index[1]);
ax.set_ylabel(stock_moments.index[0]);
ax.set_title('Stock risk reward profile');
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read the Data
Step2: Check Class Imbalance
Step3: Model Construction and Cross-validation
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('ggplot')
import seaborn as sn
with open('SMSSpamCollection.txt') as fh:
lines = list(fh)
data = [(line.split()[0], ' '.join(line.split()[1:])) for line in lines]
data_df = pd.DataFrame.from_records(data, columns=['label', 'text'])
counts = data_df.label.value_counts()
ax = counts.plot(kind='bar', rot=0)
from sklearn.metrics import classification_report
from sklearn.pipeline import Pipeline
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.naive_bayes import MultinomialNB
from sklearn.feature_extraction.text import CountVectorizer
pipeline = Pipeline([
('counter', CountVectorizer(min_df=5, max_df=0.3, lowercase=True)),
('model', MultinomialNB(fit_prior=False))
])
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=1000)
predictions = cross_val_predict(
pipeline,
data_df.text.values,
data_df.label.values,
cv=cv)
report = classification_report(data_df.label.values, predictions)  # y_true first, then y_pred
print(report)
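# Optional (illustrative): a confusion matrix complements the report on imbalanced classes;
# row/column order follows the sorted label names of the SMS Spam Collection dataset
from sklearn.metrics import confusion_matrix
print(confusion_matrix(data_df.label.values, predictions))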
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set credentials.
Step2: Set the dates.
Step3: Category list.
Step4: Set the query.
Step5: Get bulk data from arXiv.
Step6: Set target language and create the instance.
Step7: Execute translations
|
<ASSISTANT_TASK:>
Python Code:
import os
from modules.DataArxiv import get_date
from modules.DataArxiv import execute_query
from modules.Translate import Translate
CREDENTIALS_JSON = "credentials.json"
CREDENTIALS_PATH = os.path.normpath(
os.path.join(os.getcwd(), CREDENTIALS_JSON)
)
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = CREDENTIALS_PATH
REF_DATE, PREV_DATE = get_date()
#REF_DATE, PREV_DATE = get_date(baseday='20170420', beforeNdays=7)
CATEGORY_LIST = {
'ml' : ["cat:stat.ML","cat:cs.AI","cat:cs.CC","cat:cs.CE","cat:cs.CG","cat:cs.CV","cat:cs.DC","cat:cs.IR","cat:cs.IT","cat:cs.NE"]
, 'ph' : ["hep-ph"]
, 'th' : ["hep-th"]
}
CATEGORY_KEY = 'ml'
CATEGORY = "+OR+".join(CATEGORY_LIST[CATEGORY_KEY])
QUERY = '''({})+AND+submittedDate:[{}0000+TO+{}0000]'''.format(
CATEGORY,PREV_DATE,REF_DATE
)
BULK = execute_query(QUERY, prune=True, start=0, max_results=200)
TARGET_LANG = 'ja'
TRANSLATE_CLIENT = Translate(TARGET_LANG)
TRANSLATE_CLIENT.check_arxiv(BULK, nmt=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Preparing the data
Step2: Counting word frequency
Step3: Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.
Step4: What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.
Step5: The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words.
Step6: Text to vector function
Step7: If you do this right, the following code should return
Step8: Now, run through our entire review data set and convert each review to a word vector.
Step9: Train, Validation, Test sets
Step10: Building the network
Step11: Intializing the model
Step12: Training the network
Step13: Testing
Step14: Try out your own text!
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import tensorflow as tf
import tflearn
from tflearn.data_utils import to_categorical
reviews = pd.read_csv('reviews.txt', header=None)
labels = pd.read_csv('labels.txt', header=None)
from collections import Counter
total_counts = Counter() # bag of words here
for _, row in reviews.iterrows():
total_counts.update(row[0].split(' '))
print("Total words in data set: ", len(total_counts))
vocab = sorted(total_counts, key=total_counts.get, reverse=True)[:10000]
print(vocab[:60])
print(vocab[-1], ': ', total_counts[vocab[-1]])
word2idx = {word: i for i, word in enumerate(vocab)} ## create the word-to-index dictionary here
def text_to_vector(text):
word_vector = np.zeros(len(vocab), dtype=np.int_)
for word in text.split(' '):
idx = word2idx.get(word, None)
if idx is None:
continue
else:
word_vector[idx] += 1
return np.array(word_vector)
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
word_vectors = np.zeros((len(reviews), len(vocab)), dtype=np.int_)
for ii, (_, text) in enumerate(reviews.iterrows()):
word_vectors[ii] = text_to_vector(text[0])
# Printing out the first 5 word vectors
word_vectors[:5, :23]
Y = (labels=='positive').astype(np.int_)
records = len(labels)
shuffle = np.arange(records)
np.random.shuffle(shuffle)
train_fraction = 0.9  # 90% of the records go to training, the remaining 10% to testing
train_split, test_split = shuffle[:int(records*train_fraction)], shuffle[int(records*train_fraction):]
trainX, trainY = word_vectors[train_split,:], to_categorical(Y.values[train_split], 2)
testX, testY = word_vectors[test_split,:], to_categorical(Y.values[test_split], 2)
trainY
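# Note (illustrative): to_categorical one-hot encodes the 0/1 sentiment labels, e.g.
# to_categorical([0, 1], 2) -> array([[1., 0.], [0., 1.]])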
# Network building
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
#### Your code ####
# Inputs
net = tflearn.input_data([None, 10000])
# Hidden layer(s)
net = tflearn.fully_connected(net, 200, activation='ReLU')
net = tflearn.fully_connected(net, 25, activation='ReLU')
# Output layer
net = tflearn.fully_connected(net, 2, activation='softmax')
net = tflearn.regression(net, optimizer='sgd',
learning_rate=0.1,
loss='categorical_crossentropy')
model = tflearn.DNN(net)
return model
model = build_model()
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=128, n_epoch=10)
predictions = (np.array(model.predict(testX))[:,0] >= 0.5).astype(np.int_)
test_accuracy = np.mean(predictions == testY[:,0], axis=0)
print("Test accuracy: ", test_accuracy)
# Helper function that uses your model to predict sentiment
def test_sentence(sentence):
positive_prob = model.predict([text_to_vector(sentence.lower())])[0][1]
print('Sentence: {}'.format(sentence))
print('P(positive) = {:.3f} :'.format(positive_prob),
'Positive' if positive_prob > 0.5 else 'Negative')
sentence = "Moonlight is by far the best movie of 2016."
test_sentence(sentence)
sentence = "It's amazing anyone could be talented enough to make something this spectacularly awful"
test_sentence(sentence)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Here, we simulate data using a literature-curated boolean gene
Step2: Plot the four realizations of time series.
Step3: Compute further visualizations.
Step4: Inspecting the genes of the fixed points, we can make the following annotation.
Step5: Reconstructing progression and branching using DPT
Step6: The "cuts" into branches are quite arbitrary. Use PAGA instead.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import scanpy.api as sc
sc.settings.verbosity = 3 # verbosity: errors (0), warnings (1), info (2), hints (3)
sc.settings.set_figure_params(dpi=80) # low dpi (dots per inch) yields small inline figures
sc.logging.print_versions()
adata = sc.tl.sim('krumsiek11')
sc.pl.sim(adata)
sc.tl.tsne(adata)
sc.pp.neighbors(adata, n_neighbors=30)
sc.tl.draw_graph(adata)
fate_labels = {0: 'progenitor', 159: 'monocyte', 319: 'erythrocyte',
459: 'megacaryocyte', 619: 'neutrophil'}
adata.uns['highlights'] = fate_labels
cell_type = np.array(['progenitor' for i in range(adata.n_obs)])
cell_type[80:160] = 'monocyte'
cell_type[240:320] = 'erythrocyte'
cell_type[400:480] = 'megakaryocyte'
cell_type[560:640] = 'neutrophil'
adata.obs['cell_type'] = cell_type
sc.pl.tsne(adata)
sc.pl.draw_graph(adata)
sc.pl.tsne(adata, color='cell_type')
sc.pl.draw_graph(adata, color='cell_type')
adata.uns['iroot'] = 0
sc.tl.dpt(adata, n_branchings=2)
sc.pl.tsne(adata, color='dpt_pseudotime')
sc.pl.draw_graph(adata, color='dpt_pseudotime')
sc.pl.diffmap(adata, color='dpt_pseudotime', projection='3d')
sc.pl.tsne(adata, color='dpt_groups')
sc.pl.draw_graph(adata, color='dpt_groups', title='simulated data: DPT groups')
sc.pl.diffmap(adata, color='dpt_groups', projection='3d')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The cuboctahedron and icosahedron are related by having the same edge length. The ratio of the two, in terms of volume, is
Step2: Icosa * sfactor = Cubocta.
Step3: The cuboctahedron that jitterbugs into an icosahedron takes twenty regular tetrahedrons -- in volume, eight of them so formed (the other twelve paired in six half-octahedra) -- into twenty irregular tetrahedrons in the corresponding regular icosahedron (same surface edge lengths).
Step4: From Figure 988.00 in Synergetics
Step5: Verifying S Module Volume
Step6: Lets look at the S module in more detail, and compute its volume from scratch, using a Python formula.
Step7: Setting a = 2 give us the following edges table
Step8: The S Train
Step9: SmallGuy's edges are 2R times 1/sfactor, since linear change is a 3rd root of volumetric change (when shape is held constant).
Step10: When going in the other direction (smaller to bigger), apply the S factor directly (not the reciprocal) since the volumes increase.
Step11: The SuperRT is the RT defined by the Jitterbug icosa (JB_icosa) and its dual, the Pentagonal Dodecahedron of tetravolume $3\sqrt{2}(\phi^2 + 1)$.
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import YouTubeVideo
YouTubeVideo("1VXDejQcAWY")
import gmpy2
gmpy2.get_context().precision=200
root2 = gmpy2.sqrt(2)
root7 = gmpy2.sqrt(7)
root5 = gmpy2.sqrt(5)
root3 = gmpy2.sqrt(3)
# phi
𝜙 = (gmpy2.sqrt(5) + 1)/2
# Synergetics modules
Smod = (𝜙 **-5)/2
Emod = (root2/8) * (𝜙 ** -3)
sfactor = Smod/Emod
print("sfactor: {:60.57}".format(sfactor))
sfactor = 2 * root2 * 𝜙 ** -2 # 2 * (7 - 3 * root5).sqrt()
print("sfactor: {:60.57}".format(sfactor))
# sfactor in terms of phi-scaled emods
e3 = Emod * 𝜙 ** -3
print("sfactor: {:60.57}".format(24*Emod + 8*e3))
# length of skew icosa edge EF Fig 988.13A below, embedded in
# octa of edge a=2
EF = 2 * gmpy2.sqrt(7 - 3 * root5)
print("sfactor: {:60.57}".format(EF))
icosatet = 1/sfactor
icosatet
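# Consistency check (illustrative): the IcosaTet is defined as the exact reciprocal of the S factor
print(abs(icosatet * sfactor - 1) < 1e-50)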
JB_icosa = 20 * icosatet
print("Icosahedron: {:60.57}".format(JB_icosa)) # for volume of JB icosahedron
icosa_within = 2.5 * sfactor * sfactor
icosa_within
smod = (4 - icosa_within)/24
print("smod: {:60.57}".format(smod))
(𝜙**-5)/2
print("smod: {:60.57}".format(smod))
import tetvols
# assume a = 1 D
a = 1
# common apex is F
FH = 1/𝜙
FE = sfactor/2
FG = root3 * FE/2
# connecting the base (same order, i.e. H, E, G)
HE = (3 - root5)/2
EG = FE/2
GH = EG
Smod = tetvols.ivm_volume((FH, FE, FG, HE, EG, GH))
print("smod: {:60.57}".format(Smod))
print("Octa Edge = 1")
print("FH: {:60.57}".format(FH))
print("FE: {:60.57}".format(FE))
print("FG: {:60.57}".format(FG))
print("HE: {:60.57}".format(HE))
print("EG: {:60.57}".format(EG))
print("GH: {:60.57}".format(GH))
print("Octa Edge = 2")
print("FH: {:60.57}".format(FH * 2))
print("FE: {:60.57}".format(FE * 2))
print("FG: {:60.57}".format(FG * 2))
print("HE: {:60.57}".format(HE * 2))
print("EG: {:60.57}".format(EG * 2))
print("GH: {:60.57}".format(GH * 2))
SmallGuy = 20 * (1/sfactor) ** 3
SmallGuy
print("SmallGuy: {:60.57}".format(SmallGuy))
print("SmallGuy Edge: {:56.54}".format(2 * (1/sfactor))) # SmallGuy edge
print("Icosahedron: {:56.53}".format(JB_icosa)) # for volume of JB icosahedron
Syn3 = gmpy2.sqrt(gmpy2.mpq(9,8))
JB_icosa = SmallGuy * sfactor * sfactor
print("JB Icosa: {:60.57}".format(JB_icosa))
JB_cubocta = JB_icosa * sfactor
print("JB Cubocta: {:60.57}".format(JB_cubocta))
SuperRT = JB_cubocta * Syn3
SuperRT # 20*S3
print("SuperRT: {:60.57}".format(SuperRT))
volume1 = SuperRT - JB_icosa
volume2 = (4 - 24*Smod) * (1/sfactor)
print("volume1: {:60.57}".format(volume1))
print("volume2: {:60.57}".format(volume2))
# one more application of the 1/sfactor gives the 2.5 cubocta
print("Edged 1 Cubocta: {:60.57}".format(volume2 * (1/sfactor)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Counts
Step2: Classes Preview
Step3: Let's Add Labels
Step4: Visualize All Spectra in a Class
Step5: Wavelength Ranges
Step6: Supremum
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import collections
import math
import matplotlib.pyplot as plt
import h5py
import csv
LABELS_FILE = 'data/ondrejov-dataset.csv'
with open(LABELS_FILE, newline='') as f:
labels = list(csv.DictReader(f))
counts = collections.Counter(map(lambda x: x['label'], labels))
print(counts)
for label, cnt in counts.items():
percents = cnt / len(labels) * 100
print('{} is {}%'.format(label, round(percents, 2)))
idx = np.arange(len(counts))
rects = plt.bar(idx, list(map(lambda x: x[1], sorted(counts.items()))))
plt.xticks(idx, ('emission', 'absorption', 'unknown', 'double-peak'))
plt.ylabel('number of spectra')
plt.xlabel('class')
plt.title('portion of each class in Ondřejov dataset');
f = h5py.File('data/data.hdf5')
spectra = f['spectra']
def plot_class(spectrum, ax, class_name):
ax.plot(spectrum[0], spectrum[1])
ax.set_title(class_name)
ax.set_xlabel('wavelength (Angstrom)')
ax.set_ylabel('flux')
ax.axvline(x=6562.8, color='black', label='H-alpha', alpha=0.25)
ax.legend()
fig, axs = plt.subplots(3, 1)
idents = ['lb160035', 'a201403300026', 'si220021']
classes = ['emission', 'absorption', 'double-peak']
for ident, ax, cl in zip(idents, axs, classes):
plot_class(spectra[ident], ax, cl)
fig.tight_layout()
for spectrum in labels:
ident = spectrum['id'].split('/')[-1]
spectra[ident].attrs['label'] = int(spectrum['label'])
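# Quick check (illustrative): every spectrum should now carry a 'label' attribute
print(all('label' in spectra[ident].attrs for ident in spectra))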
fig, (ax0, ax1, ax3) = plt.subplots(3, 1)
axs = [ax0, ax1, None, ax3]
for ident, data in spectra.items():
label = spectra[ident].attrs['label']
if label == 2:
continue
axs[label].plot(data[0], data[1], alpha=0.1, lw=0.5)
fig.tight_layout()
# find the spectrum whose wavelength coverage starts at the highest value
# x is a tuple; x[1] are the values, [0, 0] is the first wavelength
wave_starts = dict(map(lambda x: (x[0], x[1][0, 0]), spectra.items()))
starts_n, starts_bins, _ = plt.hist(list(wave_starts.values()))
plt.title('wavelenght starts')
starts_n, starts_bins
infimum = list(reversed(sorted(wave_starts.items(), key=lambda x: x[1])))[0][1]
print('infimum:', math.ceil(infimum), 'Angstrom')
list(reversed(sorted(wave_starts.items(), key=lambda x: x[1])))[:10]
def plot_spectrum(ident):
spectrum = spectra[ident]
plt.plot(spectrum[0], spectrum[1], label=ident)
plot_spectrum('la220044')
plot_spectrum('a201504060008')
plot_spectrum('a201504060037')
plot_spectrum('td210007')
plot_spectrum('qd260023')
plt.legend();
# find the spectrum whose wavelength coverage ends at the lowest value
# x is a tuple; x[1] are the values, [0, -1] is the last wavelength
wave_ends = dict(map(lambda x: (x[0], x[1][0, -1]), spectra.items()))
ends_n, ends_bins, _ = plt.hist(list(wave_ends.values()))
plt.title('wavelenght ends')
ends_n, ends_bins
supremum = list(sorted(wave_ends.items(), key=lambda x: x[1]))[0][1]
print('supremum:', math.floor(supremum), 'Angstrom')
list(sorted(wave_ends.items(), key=lambda x: x[1]))[:10]
plot_spectrum('pb060015')
plot_spectrum('lb160035')
plt.legend();
f.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create a raw data set, then compute season and apply basic filters
Step2: Show the sample counts for each parameter
Step3: Export TSS to a CSV file
|
<ASSISTANT_TASK:>
Python Code:
import numpy
import wqio
import pynsqd
import pycvc
def get_cvc_parameter(nsqdparam):
try:
cvcparam = list(filter(
lambda p: p['nsqdname'] == nsqdparam, pycvc.info.POC_dicts
))[0]['cvcname']
except IndexError:
cvcparam = numpy.nan
return cvcparam
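# Illustrative usage (hypothetical parameter name): look up the CVC name for an NSQD
# parameter, falling back to NaN when there is no mapping in pycvc.info.POC_dicts
# get_cvc_parameter('Cadmium, Total')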
def fix_nsqd_bacteria_units(df, unitscol='units'):
df[unitscol] = df[unitscol].replace(to_replace='MPN/100 mL', value='CFU/100 mL')
return df
nsqd_params = [
p['nsqdname']
for p in pycvc.info.POC_dicts
]
raw_data = pynsqd.NSQData().data
clean_data = (
raw_data
.query("primary_landuse != 'Unknown'")
.query("parameter in @nsqd_params")
.query("fraction == 'Total'")
.query("epa_rain_zone == 1")
.assign(station='outflow')
.assign(cvcparam=lambda df: df['parameter'].apply(get_cvc_parameter))
.assign(season=lambda df: df['start_date'].apply(wqio.utils.getSeason))
.drop('parameter', axis=1)
.rename(columns={'cvcparam': 'parameter'})
.pipe(fix_nsqd_bacteria_units)
.query("primary_landuse == 'Residential'")
)
clean_data.groupby(by=['parameter', 'season']).size().unstack(level='season')
(
clean_data
.query("parameter == 'Total Suspended Solids'")
.to_csv('NSQD_Res_TSS.csv', index=False)
)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's plot a random 100 images
Step2: Now, let's use a Neural Network with 1 hidden layer. The input dimensionality is X_train.shape[1], which is 784 in our example (the 28x28 pixel MNIST images).
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_mldata
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score
scaler = StandardScaler()
mnist = fetch_mldata('MNIST original')
# converting data to be of type float .astype(float) to supress
# data conversion warrning during scaling
X= pd.DataFrame(scaler.fit_transform(mnist['data'].astype(float)))
y= pd.DataFrame(mnist['target'].astype(int))
# This function plots the given sample set of images as a grid with labels
# if labels are available.
def plot_sample(S,labels=None):
m, n = S.shape;
example_width = int(np.round(np.sqrt(n)));
example_height = int((n / example_width));
# Compute number of items to display
display_rows = int(np.floor(np.sqrt(m)));
display_cols = int(np.ceil(m / display_rows));
fig = plt.figure()
for i in range(0,m):
arr = S[i,:]
arr = arr.reshape((example_width,example_height))
ax = fig.add_subplot(display_rows,display_cols , i+1)
ax.imshow(arr, aspect='auto', cmap=plt.get_cmap('gray'))
if labels is not None:
ax.text(0,0, '{}'.format(labels[i]), bbox={'facecolor':'white', 'alpha':0.8,'pad':2})
ax.axis('off')
plt.show()
samples = X.sample(100)
plot_sample(samples.as_matrix())
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
# since the data we have is one big array, we want to split it into training
# and testing sets, the split is 70% goes to training and 30% of data for testing
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
neural_network =(80,)
# for this excersize we are using MLPClassifier with lbfgs optimizer (the family of quasi-Newton methods). In my simple
# experiments it produces good quality outcome
clf = MLPClassifier(solver='lbfgs', alpha=1, hidden_layer_sizes=neural_network)
clf.fit(X_train, y_train[0].ravel())
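# Note (illustrative): hidden_layer_sizes=(80,) means a single hidden layer of 80 units;
# e.g. hidden_layer_sizes=(80, 40) would stack a second hidden layer of 40 units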
# So after the classifier is trained, lets see what it predicts on the test data
prediction = clf.predict(X_test)
quality = np.where(prediction == y_test[0].ravel(),1,0)
print ("Percentage of correct results is {:.04f}".format(accuracy_score(y_test,prediction)))
# I am going to use the same test set of data and will select 100 random examples from it.
# The top left corner of each image is the prediction from the Neural Network
samples = X_test.sample(100)
plot_sample(samples.as_matrix(),clf.predict(samples))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: (1b) Plural
Step3: (1c) Applying the function to the RDD
Step4: Note
Step5: (1e) Length of each word
Step6: (1f) Pair RDDs and tuples
Step7: Part 2
Step8: (2b) Computing the counts
Step9: (2c) reduceByKey
Step10: (2d) Chaining the commands
Step11: Part 3
Step12: (3b) Computing the average word count
Step14: Part 4
Step16: (4b) Normalizing the text
Step17: (4c) Loading the text file
Step18: (4d) Extracting the words
Step19: As you may have noticed, using the map() function produces a list for each line, creating an RDD that contains a list of lists.
Step20: Note
Step21: (4e) Removing empty lines
Step22: (4f) Word count
Step24: Part 5
Step28: (5b) Categorical Values
|
<ASSISTANT_TASK:>
Python Code:
from pyspark import SparkContext
sc = SparkContext()
ListaPalavras = ['gato', 'elefante', 'rato', 'rato', 'gato']
palavrasRDD = sc.parallelize(ListaPalavras, 4)
print type(palavrasRDD)
# EXERCICIO
def Plural(palavra):
    """Adds an 's' to `palavra`.

    Args:
        palavra (str): A string.

    Returns:
        str: A string with 's' added to it.
    """
    return "{0}{1}".format(palavra, "s")
print Plural('gato')
help(Plural)
assert Plural('rato')=='ratos', 'resultado incorreto!'
print 'OK'
# EXERCICIO
pluralRDD = palavrasRDD.map(Plural)#<COMPLETAR>
print pluralRDD.collect()
assert pluralRDD.collect()==['gatos','elefantes','ratos','ratos','gatos'], 'valores incorretos!'
print 'OK'
# EXERCICIO
pluralLambdaRDD = palavrasRDD.map(lambda x: "{0}{1}".format(x,"s"))#<COMPLETAR>
print pluralLambdaRDD.collect()
assert pluralLambdaRDD.collect()==['gatos','elefantes','ratos','ratos','gatos'], 'valores incorretos!'
print 'OK'
# EXERCICIO
pluralTamanho = (pluralRDD.map(lambda x: len(x))
#<COMPLETAR>
).collect()
print pluralTamanho
assert pluralTamanho==[5,9,5,5,5], 'valores incorretos'
print "OK"
# EXERCICIO
palavraPar = palavrasRDD.map(lambda x: (x,1))#<COMPLETAR>
print palavraPar.collect()
assert palavraPar.collect() == [('gato',1),('elefante',1),('rato',1),('rato',1),('gato',1)], 'valores incorretos!'
print "OK"
# EXERCICIO
palavrasGrupo = palavraPar.groupByKey()
for chave, valor in palavrasGrupo.collect():
print '{0}: {1}'.format(chave, list(valor))
assert sorted(palavrasGrupo.mapValues(lambda x: list(x)).collect()) == [('elefante', [1]), ('gato',[1, 1]), ('rato',[1, 1])], 'Valores incorretos!'
print "OK"
# EXERCICIO
contagemGroup = palavrasGrupo.mapValues(lambda x: sum (x))#<COMPLETAR>
print contagemGroup.collect()
assert list(sorted(contagemGroup.collect()))==[('elefante',1), ('gato',2), ('rato',2)], 'valores incorretos!'
print "OK"
# EXERCICIO
from operator import add
contagem = palavraPar.reduceByKey(add)#<COMPLETAR>
print contagem.collect()
assert sorted(contagem.collect())==[('elefante',1), ('gato',2), ('rato',2)], 'valores incorretos!'
print "OK"
# EXERCICIO
contagemFinal = palavrasRDD.map(lambda x: (x, 1)).reduceByKey(add)
contagemFinal = contagemFinal.collect()
print contagemFinal
assert sorted(contagemFinal)==[('elefante',1), ('gato',2), ('rato',2)], 'valores incorretos!'
print "OK"
# EXERCICIO
palavrasUnicas = (palavrasRDD.map(lambda x:(x,1)).reduceByKey(lambda y,z:1)).collect()#<COMPLETAR>
palavrasUnicas = len(palavrasUnicas)
print palavrasUnicas
assert palavrasUnicas==3, 'valor incorreto!'
print "OK"
# EXERCICIO
# add é equivalente a lambda x,y: x+y
palavrasRDD2 = sc.parallelize(contagemFinal)
#print palavrasRDD2.collect()
total = (palavrasRDD2.map(lambda x:(x[1])).reduce(add))
media = total / float(palavrasUnicas)
print total
print round(media, 2)
assert round(media, 2)==1.67, 'valores incorretos!'
print "OK"
# EXERCICIO
def contaPalavras(chavesRDD):
    """Creates a pair RDD with word counts from an RDD of words.

    Args:
        chavesRDD (RDD of str): An RDD consisting of words.

    Returns:
        RDD of (str, int): An RDD consisting of (word, count) tuples.
    """
    return chavesRDD.map(lambda x: (x, 1)).reduceByKey(add)
print contaPalavras(palavrasRDD).collect()
assert sorted(contaPalavras(palavrasRDD).collect())==[('elefante',1), ('gato',2), ('rato',2)], 'valores incorretos!'
print "OK"
# EXERCICIO
import re
def removerPontuacao(texto):
    """Removes punctuation, changes to lower case, and strips leading and trailing spaces.

    Note:
        Only spaces, letters, and numbers should be retained. Other characters should be
        eliminated (e.g. it's becomes its). Leading and trailing spaces should be removed after
        punctuation is removed.

    Args:
        texto (str): A string.

    Returns:
        str: The cleaned up string.
    """
    return re.sub(r'[^A-Za-z0-9 ]', '', texto).strip().lower()
print removerPontuacao('Ola, quem esta ai??!')
print removerPontuacao(' Sem espaco e_sublinhado!')
assert removerPontuacao(' O uso de virgulas, embora permitido, nao deve contar. ')=='o uso de virgulas embora permitido nao deve contar', 'string incorreta!'
print "OK"
# Just run this cell
import os.path
import urllib2
url = 'http://www.gutenberg.org/cache/epub/100/pg100.txt' # book url
arquivo = os.path.join('Data','Aula02','shakespeare.txt') # destination: 'Data/Aula02/shakespeare.txt'
if os.path.isfile(arquivo): # check whether we have already downloaded the file
    print 'File already exists!'
else:
    try:
        response = urllib2.urlopen(url)
        with open(arquivo, 'wb') as f: # save to disk so sc.textFile can read the path below
            f.write(response.read())
    except IOError:
        print 'Unable to download: {0}'.format(url)
# read the file with textFile and apply the removerPontuacao function
shakesRDD = (sc.textFile(arquivo).map(removerPontuacao))
# zipWithIndex produces (content, index) tuples where index is the position of the content in the sequence
# Ex.: sc.parallelize(['gato','cachorro','boi']).zipWithIndex() ==> [('gato',0), ('cachorro',1), ('boi',2)]
# sep.join() joins the strings of a list using the separator sep. Ex.: ','.join(['a','b','c']) ==> 'a,b,c'
print '\n'.join(shakesRDD
.zipWithIndex()
.map(lambda (linha, num): '{0}: {1}'.format(num,linha))
.take(15)
)
# EXERCICIO
shakesPalavrasRDD = shakesRDD.map(lambda x: x.split())#<COMPLETAR>
total = shakesPalavrasRDD.count()
print shakesPalavrasRDD.take(5)
print total
# EXERCICIO
shakesPalavrasRDD = shakesRDD.flatMap(lambda x: x.split())
total = shakesPalavrasRDD.count()
print shakesPalavrasRDD.top(5)
print total
#assert total==927631 or total == 928908, "valor incorreto de palavras!"
#print "OK"
assert shakesPalavrasRDD.top(5)==[u'zwaggerd', u'zounds', u'zounds', u'zounds', u'zounds'],'lista incorreta de palavras'
print "OK"
# EXERCICIO
shakesLimpoRDD = shakesPalavrasRDD.filter(lambda x: len(x)>0)#<COMPLETAR>
total = shakesLimpoRDD.count()
print total
assert total==882996, 'valor incorreto!'
print "OK"
# EXERCICIO
#print contaPalavras(shakesLimpoRDD).collect()
top15 = contaPalavras(shakesLimpoRDD).takeOrdered(15, lambda x: -x[1])#<COMPLETAR>
print '\n'.join(map(lambda (w, c): '{0}: {1}'.format(w, c), top15))
assert top15 == [(u'the', 27361), (u'and', 26028), (u'i', 20681), (u'to', 19150), (u'of', 17463),
(u'a', 14593), (u'you', 13615), (u'my', 12481), (u'in', 10956), (u'that', 10890),
(u'is', 9134), (u'not', 8497), (u'with', 7771), (u'me', 7769), (u'it', 7678)],'valores incorretos!'
print "OK"
import numpy as np
# Create a pNorm function that takes p as a parameter and returns a function that computes the p-norm
def pNorm(p):
    """Generates a function to calculate the p-Norm between two points.

    Args:
        p (int): The integer p.

    Returns:
        Dist: A function that calculates the p-Norm.
    """
    def Dist(x, y):
        return np.power(np.power(np.abs(x - y), p).sum(), 1/float(p))
    return Dist
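# Illustrative: pNorm(1) gives the Manhattan distance and pNorm(2) the Euclidean distance
print(pNorm(1)(np.array([0., 0.]), np.array([3., 4.]))) # -> 7.0
print(pNorm(2)(np.array([0., 0.]), np.array([3., 4.]))) # -> 5.0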
# Create an RDD with numeric values
numPointsRDD = sc.parallelize(enumerate(np.random.random(size=(10,100))))
# EXERCICIO
# Look among the PySpark commands for one that computes the cartesian product of the dataset with itself
cartPointsRDD = numPointsRDD.cartesian(numPointsRDD)
# Apply a map to turn our RDD into an RDD of tuples ((id1,id2), (vetor1,vetor2))
# TIP: first use take(1) and print the result to check the current format of the RDD
cartPointsParesRDD = cartPointsRDD.map(lambda ((x1,x2),(y1,y2)): ((x1,y1),(x2,y2)))
# Apply a map to compute the Euclidean distance between the pairs
Euclid = pNorm(2)
distRDD = cartPointsParesRDD.map(lambda ((x1,y1),(x2,y2)): Euclid(x2,y2))
# Find the maximum, minimum and mean distances, applying a map that transforms (key,value) --> value
# and using PySpark's built-in commands to compute min, max and mean
minv, maxv, meanv = distRDD.min(), distRDD.max(), distRDD.mean()
print minv, maxv, meanv
assert (minv.round(2), maxv.round(2), meanv.round(2))==(0.0, 4.70, 3.65), 'Valores incorretos'
print "OK"
# Create a function to compute the Hamming distance
def Hamming(x, y):
    """Calculates the Hamming distance between two binary vectors.

    Args:
        x, y (np.array): Array of binary integers x and y.

    Returns:
        H (int): The Hamming distance between x and y.
    """
    return (x != y).sum()
# Create a function to compute the Jaccard distance
def Jaccard(x, y):
    """Calculates the Jaccard distance between two binary vectors.

    Args:
        x, y (np.array): Array of binary integers x and y.

    Returns:
        J (int): The Jaccard distance between x and y.
    """
    return (x == y).sum()/float(np.maximum(x, y).sum())
# Create an RDD with categorical values
catPointsRDD = sc.parallelize(enumerate([['alto', 'caro', 'azul'],
['medio', 'caro', 'verde'],
['alto', 'barato', 'azul'],
['medio', 'caro', 'vermelho'],
['baixo', 'barato', 'verde'],
]))
# EXERCICIO
# Create an RDD of unique keys using flatMap
chavesRDD = catPointsRDD.flatMap(lambda x: (x[1])).distinct()#.zipWithIndex()
chaves = dict((v,k) for k,v in enumerate(chavesRDD.collect()))
nchaves = len(chaves)
print chaves, nchaves
assert chaves=={'alto': 0, 'medio': 1, 'baixo': 2, 'barato': 3, 'azul': 4, 'verde': 5, 'caro': 6, 'vermelho': 7}, 'valores incorretos!'
print "OK"
assert nchaves==8, 'número de chaves incorreta'
print "OK"
def CreateNP(atributos, chaves):
    """Binarize the categorical vector using a dictionary of keys.

    Args:
        atributos (list): List of attributes of a given object.
        chaves (dict): dictionary with the relation attribute -> index

    Returns:
        array (np.array): Binary array of attributes.
    """
    array = np.zeros(len(chaves))
    for atr in atributos:
        array[chaves[atr]] = 1
    return array
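# Illustrative: binarizing one object's attributes against the chaves dictionary
print(CreateNP(['alto', 'caro', 'azul'], chaves)) # ones at the indices of those three attributes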
# Convert the RDD to binary format, using the chaves dict
binRDD = catPointsRDD.map(lambda rec: (rec[0],CreateNP(rec[1], chaves)))
binRDD.collect()
# EXERCICIO
# Look among the PySpark commands for one that computes the cartesian product of the dataset with itself
cartBinRDD = binRDD.cartesian(binRDD)
# Apply a map to turn our RDD into an RDD of tuples ((id1,id2), (vetor1,vetor2))
# TIP: first use take(1) and print the result to check the current format of the RDD
cartBinParesRDD = cartBinRDD.map(lambda ((x1,x2),(y1,y2)): ((x1,y1),(x2,y2)))
# Apply a map to compute the Hamming and Jaccard distances between the pairs
hamRDD = cartBinParesRDD.map(lambda ((x1,y1),(x2,y2)): Hamming(x2,y2))
jacRDD = cartBinParesRDD.map(lambda ((x1,y1),(x2,y2)): Jaccard(x2,y2))
# Find the maximum, minimum and mean distances, applying a map that transforms (key,value) --> value
# and using PySpark's built-in commands to compute min, max and mean
Hmin, Hmax, Hmean = hamRDD.min(), hamRDD.max(), hamRDD.mean()
Jmin, Jmax, Jmean = jacRDD.min(), jacRDD.max(), jacRDD.mean()
print "\t\tMin\tMax\tMean"
print "Hamming:\t{:.2f}\t{:.2f}\t{:.2f}".format(Hmin, Hmax, Hmean )
print "Jaccard:\t{:.2f}\t{:.2f}\t{:.2f}".format( Jmin, Jmax, Jmean )
assert (Hmin.round(2), Hmax.round(2), Hmean.round(2)) == (0.00,6.00,3.52), 'valores incorretos'
print "OK"
assert (Jmin.round(2), Jmax.round(2), Jmean.round(2)) == (0.33,2.67,1.14), 'valores incorretos'
print "OK"
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import Packages
Step3: User Inputs
Step4: Custom Metrics
Step6: Define TFMA model evaluation specs
Step7: Run Evaluation
Step11: Save Evaluation in HTML Visualisations
|
<ASSISTANT_TASK:>
Python Code:
!pip install tensorflow_model_analysis==0.37.0 pandas==1.3.5 google_cloud_storage==1.43.0
# Visualisation-specific imports
import tensorflow_model_analysis as tfma
from tensorflow_model_analysis.view import render_slicing_metrics
from ipywidgets.embed import embed_minimal_html
import os
from google.cloud import storage
# TFMA Evaluation
import pandas as pd
from google.protobuf import text_format
import tensorflow_model_analysis as tfma
# This is the link to the predictions generated during the training pipeline, which are stored in GCS.
# These are the output of the "Predict Test Data" component, and are saved in a Dataset Artefact called
# "predictions", which then acts as the input to the "Evaluate test metrics for <challenger>/<champion> model" component
csv_file = 'gs://alvaro-sandbox/pipeline_root/805011877165/tensorflow-train-pipeline-20220223132851/predict-tensorflow-model_-2494514806493544448/predictions'
label_column_name = "total_fare" # Label column name (this is the ground truth)
pred_column_name = "predictions" # Model prediction column name
metrics_names = ["MeanSquaredError"] # Metric used to evaluate the model. Could be more than one (["MeanSquaredError", "<metric_name>"]
custom_metrics = {"SquaredPearson": "squared_pearson"} # Custom metric used to evaluate the model. If None used, leave it as custom_metrics = {}. If more
# than one is used, then custom_metrics = {"SquaredPearson": "squared_pearson", <"MetricName">: <"module_name">}
# Slicing types used during evaluation. If no slicing used, leave it as slicing_specs = []
slicing_specs=[
'feature_keys: ["payment_type"]',
'feature_keys: ["payment_type", "company"]',
'feature_values: [{key: "payment_type", value: "Cash"}]',
'feature_keys: ["company", "dayofweek"] feature_values: [{key: "payment_type", value: "Cash"}]',
]
# Location to pipeline assets. Used only if custom metrics are available
PIPELINE_FILES_GCS_PATH='gs://alvaro-sandbox/pipelines'
VERTEX_PROJECT_ID='datatonic-vertex-pipeline-dev'
# The custom metric module must be downloaded from GCS where it is being stored.
# If no custom metrics are used, this cell won't run anything.
if custom_metrics:
custom_metrics_path = f"{PIPELINE_FILES_GCS_PATH}/training/assets/tfma_custom_metrics"
storage_client = storage.Client(project=VERTEX_PROJECT_ID)
for custom_metric in custom_metrics.values():
with open(f"{custom_metric}.py", "wb") as fp:
storage_client.download_blob_to_file(f"{custom_metrics_path}/{custom_metric}.py", fp)
for custom_metric in custom_metrics.values():
assert f"{custom_metric}.py" in os.listdir(), f"Custom Metric module {custom_metric}.py could not be found at {custom_metrics_path}"
print(f"Downloaded custom metric module {custom_metric}.py to Notebook storage")
else:
print("No custom metrics were specified by the user")
df = pd.read_csv(csv_file) # Read predictions and convert to dataframe
# Iterate through all metrics
metrics_specs = ""
for metric in metrics_names:
metrics_specs += f'metrics {{ class_name: "{metric}" }}\n'
# Adding custom metrics if specified
if custom_metrics:
for class_name, module_name in custom_metrics.items():
metric_spec = f' {{ class_name: "{class_name}" module: "{module_name}" }}'
metrics_specs += f"metrics {metric_spec}\n"
# Iterate through all slices
slicing_spec_proto = "slicing_specs {}\n"
if slicing_specs:
for single_slice in slicing_specs:
slicing_spec_proto += f"slicing_specs {{ {single_slice} }}\n"
# Create evaluation configuration
protobuf = """
    ## Model information
    model_specs {{
      label_key: "{0}"
      prediction_key: "{1}"
    }}
    ## Post export metric information
    metrics_specs {{
      {2}
    }}
    ## Slicing information inc. overall
    {3}
"""
eval_config = text_format.Parse(
protobuf.format(
label_column_name, pred_column_name, metrics_specs, slicing_spec_proto
),
tfma.EvalConfig(),
)
print(eval_config)
eval_result = tfma.analyze_raw_data(df, eval_config=eval_config, output_path="eval_outputs/")
evaluation = eval_result.get_metrics_for_all_slices()
def get_key_value_pair(key_value_string):
    """String manipulation to obtain the key-value pair from the slicing specification. Currently TFMA only
    supports having a single key-value pair as part of a slicing specification. If this changes, this
    function must also change.

    Args:
        key_value_string (str): String containing the key-value pair. This string has the following naming convention:
            'feature_keys: ["<feature_key>"] feature_values: [{key: "<key>", value: "<value>"}]'. The string
            manipulation aims to obtain the <key> and <value> names.

    Returns:
        key (str): Key name given in slicing spec.
        value (str): Value name given in slicing spec.
    """
# Get key name
key = key_value_string\
.split("key:")[1]\
.split(",")[0]\
.replace('"',"")\
.replace("'","")\
.strip()
# Get value name
value = key_value_string\
.split("value:")[1]\
.split("}")[0]\
.replace('"',"")\
.replace("'","")\
.strip()
return key, value
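# Illustrative usage: parsing a single key-value slicing spec string
print(get_key_value_pair('feature_values: [{key: "payment_type", value: "Cash"}]')) # -> ('payment_type', 'Cash')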
def get_feature_keys(keys_string):
    """String manipulation to obtain all feature keys from a single slicing specification returned as a single list.

    Args:
        keys_string (str): String containing the feature keys. This string has the following naming convention:
            'feature_keys: ["<feature_one>", "<feature_two>"]'. The string manipulation aims to obtain
            all of the <feature_XX> keys in a single list

    Returns:
        feature_keys (list): List containing all feature keys in the given slice
    """
    feature_keys = [] # Initialise empty list
    # Get all keys as a list of strings
    # Need to convert the string 'feature_keys: ["<feature_one>", "<feature_two>"]'
    # into the list of strings ["<feature_one>", "<feature_two>"]
keys_list = keys_string\
.split("feature_keys:")[1]\
.lstrip()\
.split("[")[1]\
.split("]")[0]\
.split(",")
# Clean every string item in list
for onekey in keys_list:
keyname = onekey.replace('"',"").replace("'","").strip()
feature_keys.append(keyname)
return feature_keys
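# Illustrative usage: extracting the feature keys from a slicing spec string
print(get_feature_keys('feature_keys: ["payment_type", "company"]')) # -> ['payment_type', 'company']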
os.makedirs("html_outputs/", exist_ok=True) # Save files in this local folder
# Create an output file fore very slice type
for onespec in slicing_specs:
# If only feature keys are specified
if "feature_keys:" in onespec and "feature_values: " not in onespec:
spec_keys = get_feature_keys(onespec) # Get all keys as list of strings
specs = tfma.SlicingSpec(feature_keys=spec_keys) # Create slicing spec
plots_tfma = render_slicing_metrics(eval_result, slicing_spec=specs) # Plot metrics
embed_minimal_html(f'html_outputs/plots_{"_&_".join(spec_keys)}.html', views=[plots_tfma], title='Slicing Metrics')
# If only feature values are specified
elif "feature_values: " in onespec and "feature_keys:" not in onespec:
keyname, valname = get_key_value_pair(onespec) # Get key-value pair names
specs = tfma.SlicingSpec(feature_values={keyname:valname}) # Create slicing spec
plots_tfma = render_slicing_metrics(eval_result, slicing_spec=specs) # Plot metrics
embed_minimal_html(f'html_outputs/plots_{keyname}_-->_{valname}.html', views=[plots_tfma], title='Slicing Metrics')
# If a combination of feature keys and values are specified
elif "feature_keys:" in onespec and "feature_values: " in onespec:
keyname, valname = get_key_value_pair(onespec) # Get key-value pair names
spec_keys = get_feature_keys(onespec) # Get all keys as list of strings
specs = tfma.SlicingSpec(feature_keys=spec_keys,
feature_values={keyname:valname}) # Create slicing spec
plots_tfma = render_slicing_metrics(eval_result, slicing_spec=specs) # Plot metrics
embed_minimal_html(f'html_outputs/plots_{"_&_".join(spec_keys)}_<>_{keyname}_-->_{valname}.html', views=[plots_tfma], title='Slicing Metrics')
# Create a final plot without any slice, just for the overall metric
plots_tfma = render_slicing_metrics(eval_result)
embed_minimal_html(f'html_outputs/plots_overall.html', views=[plots_tfma], title='Slicing Metrics')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: 2. Display list of tables
Step4: 3. Reviewing the patient table
Step5: Questions
Step6: Questions
Step7: Questions
Step8: Questions
|
<ASSISTANT_TASK:>
Python Code:
# Import libraries
import pandas as pd
import matplotlib.pyplot as plt
import psycopg2
import os
import sqlite3
# Plot settings
%matplotlib inline
plt.style.use('ggplot')
fontsize = 20 # size for x and y ticks
plt.rcParams['legend.fontsize'] = fontsize
plt.rcParams.update({'font.size': fontsize})
# Connect to the database - which is assumed to be in the current directory
fn = 'eicu_demo.sqlite3'
con = sqlite3.connect(fn)
cur = con.cursor()
query = """
SELECT type, name
FROM sqlite_master
WHERE type='table'
ORDER BY name;
"""
list_of_tables = pd.read_sql_query(query,con)
list_of_tables
# query to load data from the patient table
query = """
SELECT *
FROM patient
"""
print(query)
# run the query and assign the output to a variable
patient_tab = pd.read_sql_query(query,con)
# display the first few rows of the dataframe
patient_tab.head()
# list all of the columns in the table
patient_tab.columns
# select a limited number of columns to view
columns = ['uniquepid', 'patientunitstayid','gender','age','unitdischargestatus']
patient_tab[columns].head()
# what are the unique values for age?
age_col = 'age'
patient_tab[age_col].sort_values().unique()
# create a column containing numerical ages
# If ‘coerce’, then invalid parsing will be set as NaN
agenum_col = 'age_num'
patient_tab[agenum_col] = pd.to_numeric(patient_tab[age_col], errors='coerce')
patient_tab[agenum_col].sort_values().unique()
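# Illustrative: errors='coerce' maps non-numeric ages such as '> 89' to NaN
# pd.to_numeric(pd.Series(['72', '> 89']), errors='coerce') # -> [72.0, NaN]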
# try plotting a histogram of ages
figsize = (18,8)
patient_tab[agenum_col].plot(kind='hist',
figsize=figsize,
fontsize=fontsize,
bins=15)
# set threshold based on 99th quantile
adweight_col = 'admissionweight'
quant = patient_tab[adweight_col].quantile(0.99)
patient_tab[patient_tab[adweight_col] > quant] = None
# describe the admission weights
patient_tab[adweight_col].describe()
# set threshold based on 99th quantile
disweight_col = 'dischargeweight'
quant = patient_tab[disweight_col].quantile(0.99)
patient_tab.loc[patient_tab[disweight_col] > quant, disweight_col] = None # null only the outlier weights, not whole rows
# describe the discharge weights
patient_tab[disweight_col].describe()
patient_tab['weight_change'] = patient_tab[adweight_col] - patient_tab[disweight_col]
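# Quick look (illustrative): summary statistics of the admission-to-discharge weight change
print(patient_tab['weight_change'].describe())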
# plot the weight changes
figsize = (18,8)
patient_tab['weight_change'].plot(kind='hist',
figsize=figsize,
fontsize=fontsize,
bins=50)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set parameters
Step2: Compute statistic
Step3: Visualize the clusters
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
# Eric Larson <larson.eric.d@gmail.com>
# License: BSD (3-clause)
import os.path as op
import numpy as np
from scipy import stats as stats
import mne
from mne import spatial_src_connectivity
from mne.stats import spatio_temporal_cluster_test, summarize_clusters_stc
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
stc_fname = data_path + '/MEG/sample/sample_audvis-meg-lh.stc'
subjects_dir = data_path + '/subjects'
src_fname = subjects_dir + '/fsaverage/bem/fsaverage-ico-5-src.fif'
# Load stc to in common cortical space (fsaverage)
stc = mne.read_source_estimate(stc_fname)
stc.resample(50, npad='auto')
# Read the source space we are morphing to
src = mne.read_source_spaces(src_fname)
fsave_vertices = [s['vertno'] for s in src]
stc = mne.morph_data('sample', 'fsaverage', stc, grade=fsave_vertices,
smooth=20, subjects_dir=subjects_dir)
n_vertices_fsave, n_times = stc.data.shape
tstep = stc.tstep
n_subjects1, n_subjects2 = 7, 9
print('Simulating data for %d and %d subjects.' % (n_subjects1, n_subjects2))
# Let's make sure our results replicate, so set the seed.
np.random.seed(0)
X1 = np.random.randn(n_vertices_fsave, n_times, n_subjects1) * 10
X2 = np.random.randn(n_vertices_fsave, n_times, n_subjects2) * 10
X1[:, :, :] += stc.data[:, :, np.newaxis]
# make the activity bigger for the second set of subjects
X2[:, :, :] += 3 * stc.data[:, :, np.newaxis]
# We want to compare the overall activity levels for each subject
X1 = np.abs(X1) # only magnitude
X2 = np.abs(X2) # only magnitude
print('Computing connectivity.')
connectivity = spatial_src_connectivity(src)
# Note that X needs to be a list of multi-dimensional array of shape
# samples (subjects_k) x time x space, so we permute dimensions
X1 = np.transpose(X1, [2, 1, 0])
X2 = np.transpose(X2, [2, 1, 0])
X = [X1, X2]
# Now let's actually do the clustering. This can take a long time...
# Here we set the threshold quite high to reduce computation.
p_threshold = 0.0001
f_threshold = stats.distributions.f.ppf(1. - p_threshold / 2.,
n_subjects1 - 1, n_subjects2 - 1)
print('Clustering.')
T_obs, clusters, cluster_p_values, H0 = clu =\
spatio_temporal_cluster_test(X, connectivity=connectivity, n_jobs=1,
threshold=f_threshold)
# Now select the clusters that are sig. at p < 0.05 (note that this value
# is multiple-comparisons corrected).
good_cluster_inds = np.where(cluster_p_values < 0.05)[0]
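# Illustrative: report how many clusters survived the corrected threshold
print('%d clusters significant at p < 0.05' % len(good_cluster_inds))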
print('Visualizing clusters.')
# Now let's build a convenient representation of each cluster, where each
# cluster becomes a "time point" in the SourceEstimate
fsave_vertices = [np.arange(10242), np.arange(10242)]
stc_all_cluster_vis = summarize_clusters_stc(clu, tstep=tstep,
vertices=fsave_vertices,
subject='fsaverage')
# Let's actually plot the first "time point" in the SourceEstimate, which
# shows all the clusters, weighted by duration
subjects_dir = op.join(data_path, 'subjects')
# blue blobs are for condition A != condition B
brain = stc_all_cluster_vis.plot('fsaverage', hemi='both', colormap='mne',
views='lateral', subjects_dir=subjects_dir,
time_label='Duration significant (ms)')
brain.save_image('clusters.png')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Reminder
Step2: If you print the results above, it will again be a JSON document, yet again unreadable. Let's again make it pretty.
|
<ASSISTANT_TASK:>
Python Code:
import googlemaps
from datetime import datetime
gmaps = googlemaps.Client(key='somesecretkeyhere')
# Geocoding an address
geocode_result = gmaps.geocode('1600 Amphitheatre Parkway, Mountain View, CA')
type(geocode_result)
from pprint import pprint
pprint(geocode_result)
# Look up an address with reverse geocoding
reverse_geocode_result = gmaps.reverse_geocode((40.714224, -73.961452))
pprint(reverse_geocode_result)
# Request directions via public transit
now = datetime.now()
directions_result = gmaps.directions("Yerevan, Armenia","Dilijan, Armenia", departure_time=now)
pprint(directions_result)
for i in directions_result:
legs = i["legs"]
for leg in legs:
dist = leg["distance"]
print dist
type(dist)
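# Note (illustrative): each leg's "distance" is a dict such as {'text': '97 km', 'value': 97000},
# where 'value' is the distance in meters and 'text' is the human-readable form used below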
loc1 = raw_input("Please, provide the start location: ")+", Armenia"
loc2 = raw_input("Please, provide the end location: ")+", Armenia"
directions_result = gmaps.directions(loc1,loc2, departure_time=datetime.now())
for i in directions_result:
legs = i["legs"]
for leg in legs:
dist = leg["distance"]
print("\n")
print("The distance between "+loc1+" and "+loc2+" is "+dist["text"])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: or read from a repository (a file or a database)
Step2: Note, that to execute the code above you will have to change the path where the data is stored. The dataset can be downloaded from http
Step3: You can access the data in the object as you would normally do in Python.
Step4: Reading from files
Step5: User defined functions
Step6: Now, instead of using lambda we will use the extractInformation(...) method to split and convert our dataset.
Step7: Transformations
Step8: You can combine more columns.
Step9: .filter(...)
Step10: .flatMap(...)
Step11: .distinct()
Step12: .sample(...)
Step13: Let's confirm that we really got 10% of all the records.
Step14: .leftOuterJoin(...)
Step15: If we used .join(...) method instead we would have gotten only the values for 'a' and 'b' as these two values intersect between these two RDDs.
Step16: Another useful method is the .intersection(...) that returns the records that are equal in both RDDs.
Step17: .repartition(...)
Step18: Actions
Step19: If you want somewhat randomized records you can use .takeSample(...) instead.
Step20: .reduce(...)
Step21: If the reducing function is not associative and commutative you will sometimes get wrong results depending on how your data is partitioned.
Step22: If we were to reduce the data by dividing the current result by the subsequent one, we would expect a value of 10
Step23: However, if you were to partition the data into 3 partitions, the result will be wrong.
Step24: The .reduceByKey(...) method works in a similar way to the .reduce(...) method but performs a reduction on a key-by-key basis.
Step25: .count()
Step26: It has the same effect as the method below but does not require shifting the data to the driver.
Step27: If your dataset is in the form of key-value pairs, you can use the .countByKey() method to get the counts of distinct keys.
Step28: .saveAsTextFile(...)
Step29: To read it back, you need to parse it back as, as before, all the rows are treated as strings.
Step30: .foreach(...)
|
<ASSISTANT_TASK:>
Python Code:
data = sc.parallelize(
[('Amber', 22), ('Alfred', 23), ('Skye',4), ('Albert', 12),
('Amber', 9)])
data_from_file = sc.\
textFile(
'/Users/drabast/Documents/PySpark_Data/VS14MORT.txt.gz',
4)
data_heterogenous = sc.parallelize([('Ferrari', 'fast'), {'Porsche': 100000}, ['Spain','visited', 4504]]).collect()
data_heterogenous
data_heterogenous[1]['Porsche']
data_from_file.take(1)
def extractInformation(row):
import re
import numpy as np
selected_indices = [
2,4,5,6,7,9,10,11,12,13,14,15,16,17,18,
19,21,22,23,24,25,27,28,29,30,32,33,34,
36,37,38,39,40,41,42,43,44,45,46,47,48,
49,50,51,52,53,54,55,56,58,60,61,62,63,
64,65,66,67,68,69,70,71,72,73,74,75,76,
77,78,79,81,82,83,84,85,87,89
]
'''
Input record schema
schema: n-m (o) -- xxx
n - position from
m - position to
o - number of characters
xxx - description
1. 1-19 (19) -- reserved positions
2. 20 (1) -- resident status
3. 21-60 (40) -- reserved positions
4. 61-62 (2) -- education code (1989 revision)
5. 63 (1) -- education code (2003 revision)
6. 64 (1) -- education reporting flag
7. 65-66 (2) -- month of death
8. 67-68 (2) -- reserved positions
9. 69 (1) -- sex
10. 70 (1) -- age: 1-years, 2-months, 4-days, 5-hours, 6-minutes, 9-not stated
11. 71-73 (3) -- number of units (years, months etc)
12. 74 (1) -- age substitution flag (if the age reported in positions 70-74 is calculated using dates of birth and death)
13. 75-76 (2) -- age recoded into 52 categories
14. 77-78 (2) -- age recoded into 27 categories
15. 79-80 (2) -- age recoded into 12 categories
16. 81-82 (2) -- infant age recoded into 22 categories
17. 83 (1) -- place of death
18. 84 (1) -- marital status
19. 85 (1) -- day of the week of death
20. 86-101 (16) -- reserved positions
21. 102-105 (4) -- current year
22. 106 (1) -- injury at work
23. 107 (1) -- manner of death
24. 108 (1) -- manner of disposition
25. 109 (1) -- autopsy
26. 110-143 (34) -- reserved positions
27. 144 (1) -- activity code
28. 145 (1) -- place of injury
29. 146-149 (4) -- ICD code
30. 150-152 (3) -- 358 cause recode
31. 153 (1) -- reserved position
32. 154-156 (3) -- 113 cause recode
33. 157-159 (3) -- 130 infant cause recode
34. 160-161 (2) -- 39 cause recode
35. 162 (1) -- reserved position
36. 163-164 (2) -- number of entity-axis conditions
37-56. 165-304 (140) -- list of up to 20 conditions
57. 305-340 (36) -- reserved positions
58. 341-342 (2) -- number of record axis conditions
59. 343 (1) -- reserved position
60-79. 344-443 (100) -- record axis conditions
80. 444 (1) -- reserve position
81. 445-446 (2) -- race
82. 447 (1) -- bridged race flag
83. 448 (1) -- race imputation flag
84. 449 (1) -- race recode (3 categories)
85. 450 (1) -- race recode (5 categories)
86. 461-483 (33) -- reserved positions
87. 484-486 (3) -- Hispanic origin
88. 487 (1) -- reserved
89. 488 (1) -- Hispanic origin/race recode
'''
record_split = re\
.compile(
r'([\s]{19})([0-9]{1})([\s]{40})([0-9\s]{2})([0-9\s]{1})([0-9]{1})([0-9]{2})' +
r'([\s]{2})([FM]{1})([0-9]{1})([0-9]{3})([0-9\s]{1})([0-9]{2})([0-9]{2})' +
r'([0-9]{2})([0-9\s]{2})([0-9]{1})([SMWDU]{1})([0-9]{1})([\s]{16})([0-9]{4})' +
r'([YNU]{1})([0-9\s]{1})([BCOU]{1})([YNU]{1})([\s]{34})([0-9\s]{1})([0-9\s]{1})' +
r'([A-Z0-9\s]{4})([0-9]{3})([\s]{1})([0-9\s]{3})([0-9\s]{3})([0-9\s]{2})([\s]{1})' +
r'([0-9\s]{2})([A-Z0-9\s]{7})([A-Z0-9\s]{7})([A-Z0-9\s]{7})([A-Z0-9\s]{7})' +
r'([A-Z0-9\s]{7})([A-Z0-9\s]{7})([A-Z0-9\s]{7})([A-Z0-9\s]{7})([A-Z0-9\s]{7})' +
r'([A-Z0-9\s]{7})([A-Z0-9\s]{7})([A-Z0-9\s]{7})([A-Z0-9\s]{7})([A-Z0-9\s]{7})' +
r'([A-Z0-9\s]{7})([A-Z0-9\s]{7})([A-Z0-9\s]{7})([A-Z0-9\s]{7})([A-Z0-9\s]{7})' +
r'([A-Z0-9\s]{7})([\s]{36})([A-Z0-9\s]{2})([\s]{1})([A-Z0-9\s]{5})([A-Z0-9\s]{5})' +
r'([A-Z0-9\s]{5})([A-Z0-9\s]{5})([A-Z0-9\s]{5})([A-Z0-9\s]{5})([A-Z0-9\s]{5})' +
r'([A-Z0-9\s]{5})([A-Z0-9\s]{5})([A-Z0-9\s]{5})([A-Z0-9\s]{5})([A-Z0-9\s]{5})' +
r'([A-Z0-9\s]{5})([A-Z0-9\s]{5})([A-Z0-9\s]{5})([A-Z0-9\s]{5})([A-Z0-9\s]{5})' +
r'([A-Z0-9\s]{5})([A-Z0-9\s]{5})([A-Z0-9\s]{5})([\s]{1})([0-9\s]{2})([0-9\s]{1})' +
r'([0-9\s]{1})([0-9\s]{1})([0-9\s]{1})([\s]{33})([0-9\s]{3})([0-9\s]{1})([0-9\s]{1})')
try:
rs = np.array(record_split.split(row))[selected_indices]
except:
rs = np.array(['-99'] * len(selected_indices))
return rs
# return record_split.split(row)
data_from_file_conv = data_from_file.map(extractInformation)
data_from_file_conv.map(lambda row: row).take(1)
data_2014 = data_from_file_conv.map(lambda row: int(row[16]))
data_2014.take(10)
data_2014_2 = data_from_file_conv.map(lambda row: (row[16], int(row[16])))
data_2014_2.take(10)
data_filtered = data_from_file_conv.filter(lambda row: row[5] == 'F' and row[21] == '0')
data_filtered.count()
data_2014_flat = data_from_file_conv.flatMap(lambda row: (row[16], int(row[16]) + 1))
data_2014_flat.take(10)
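# A toy contrast between .map() and .flatMap() (a sketch on made-up data):
# .map() returns one output element per input, .flatMap() flattens its outputs.
toy = sc.parallelize([1, 2, 3])
print(toy.map(lambda x: (x, x * 10)).collect())      # [(1, 10), (2, 20), (3, 30)]
print(toy.flatMap(lambda x: (x, x * 10)).collect())  # [1, 10, 2, 20, 3, 30]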
distinct_gender = data_from_file_conv.map(lambda row: row[5]).distinct().collect()
distinct_gender
fraction = 0.1
data_sample = data_from_file_conv.sample(False, fraction, 666)
data_sample.take(1)
print('Original dataset: {0}, sample: {1}'.format(data_from_file_conv.count(), data_sample.count()))
rdd1 = sc.parallelize([('a', 1), ('b', 4), ('c',10)])
rdd2 = sc.parallelize([('a', 4), ('a', 1), ('b', '6'), ('d', 15)])
rdd3 = rdd1.leftOuterJoin(rdd2)
rdd3.take(5)
rdd4 = rdd1.join(rdd2)
rdd4.collect()
rdd5 = rdd1.intersection(rdd2)
rdd5.collect()
rdd1 = rdd1.repartition(4)
len(rdd1.glom().collect())
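# .glom() wraps the contents of each partition in a list, so the length of
# the collected result equals the number of partitions (4 after repartition).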
data_first = data_from_file_conv.take(1)
data_first
data_take_sampled = data_from_file_conv.takeSample(False, 1, 667)
data_take_sampled
rdd1.map(lambda row: row[1]).reduce(lambda x, y: x + y)
data_reduce = sc.parallelize([1, 2, .5, .1, 5, .2], 1)
works = data_reduce.reduce(lambda x, y: x / y)
works
data_reduce = sc.parallelize([1, 2, .5, .1, 5, .2], 3)
data_reduce.reduce(lambda x, y: x / y)
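# Addition, by contrast, is associative and commutative, so the same data
# gives an identical sum regardless of partitioning (a quick sanity check):
print(sc.parallelize([1, 2, .5, .1, 5, .2], 1).reduce(lambda x, y: x + y))
print(sc.parallelize([1, 2, .5, .1, 5, .2], 3).reduce(lambda x, y: x + y))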
data_key = sc.parallelize([('a', 4),('b', 3),('c', 2),('a', 8),('d', 2),('b', 1),('d', 3)],4)
data_key.reduceByKey(lambda x, y: x + y).collect()
data_reduce.count()
len(data_reduce.collect()) # WRONG -- DON'T DO THIS!
data_key.countByKey().items()
data_key.saveAsTextFile('/Users/drabast/Documents/PySpark_Data/data_key.txt')
def parseInput(row):
import re
pattern = re.compile(r'\(\'([a-z])\', ([0-9])\)')
row_split = pattern.split(row)
return (row_split[1], int(row_split[2]))
data_key_reread = sc \
.textFile('/Users/drabast/Documents/PySpark_Data/data_key.txt') \
.map(parseInput)
data_key_reread.collect()
def f(x):
print(x)
data_key.foreach(f)
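# Note: .foreach() runs f on the executors, so in a real cluster the printed
# output ends up in the executors' logs rather than the driver console. To
# inspect values on the driver, collect a (small!) RDD instead, e.g.:
# print(data_key.collect())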
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Before applying artifact correction please learn about your actual artifacts
Step2: Define the ICA object instance
Step3: we avoid fitting ICA on crazy environmental artifacts that would
Step4: Plot ICA components
Step5: Component properties
Step6: we can see that the data were filtered so the spectrum plot is not
Step7: we can also take a look at multiple different components at once
Step8: Instead of opening individual figures with component properties, we can
Step9: Advanced artifact detection
Step10: We can take a look at the properties of that component, now using the
Step11: That component is showing a prototypical average vertical EOG time course.
Step12: These labels were used by the plotters and are added automatically
Step13: Exercise
Step14: What if we don't have an EOG channel?
Step15: The idea behind corrmap is that artefact patterns are similar across subjects
Step16: Remember, don't do this at home! Start by reading in a collection of ICA
Step17: We use our original ICA as reference.
Step18: Investigate our reference ICA
Step19: Which one is the bad EOG component?
Step20: Indeed it looks like an EOG, also in the average time course.
Step21: Now we can do the corrmap.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import mne
from mne.datasets import sample
from mne.preprocessing import ICA
from mne.preprocessing import create_eog_epochs, create_ecg_epochs
# getting some data ready
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
raw = mne.io.read_raw_fif(raw_fname, preload=True, add_eeg_ref=False)
raw.filter(1, 40, n_jobs=2) # 1Hz high pass is often helpful for fitting ICA
picks_meg = mne.pick_types(raw.info, meg=True, eeg=False, eog=False,
stim=False, exclude='bads')
n_components = 25 # if float, select n_components by explained variance of PCA
method = 'fastica' # for comparison with EEGLAB try "extended-infomax" here
decim = 3 # we need sufficient statistics, not all time points -> saves time
# we will also set state of the random number generator - ICA is a
# non-deterministic algorithm, but we want to have the same decomposition
# and the same order of components each time this tutorial is run
random_state = 23
ica = ICA(n_components=n_components, method=method, random_state=random_state)
print(ica)
reject = dict(mag=5e-12, grad=4000e-13)
ica.fit(raw, picks=picks_meg, decim=decim, reject=reject)
print(ica)
ica.plot_components() # can you spot some potential bad guys?
# first, component 0:
ica.plot_properties(raw, picks=0)
ica.plot_properties(raw, picks=0, psd_args={'fmax': 35.})
ica.plot_properties(raw, picks=[1, 2], psd_args={'fmax': 35.})
# uncomment the code below to test the interactive mode of plot_components:
# ica.plot_components(picks=range(10), inst=raw)
eog_average = create_eog_epochs(raw, reject=dict(mag=5e-12, grad=4000e-13),
picks=picks_meg).average()
# We simplify things by setting the maximum number of components to reject
n_max_eog = 1 # here we bet on finding the vertical EOG components
eog_epochs = create_eog_epochs(raw, reject=reject) # get single EOG trials
eog_inds, scores = ica.find_bads_eog(eog_epochs) # find via correlation
ica.plot_scores(scores, exclude=eog_inds) # look at r scores of components
# we can see that only one component is highly correlated and that this
# component got detected by our correlation analysis (red).
ica.plot_sources(eog_average, exclude=eog_inds) # look at source time course
ica.plot_properties(eog_epochs, picks=eog_inds, psd_args={'fmax': 35.},
image_args={'sigma': 1.})
print(ica.labels_)
ica.plot_overlay(eog_average, exclude=eog_inds, show=False)
# red -> before, black -> after. Yes! We remove quite a lot!
# to definitely register this component as a bad one to be removed
# there is the ``ica.exclude`` attribute, a simple Python list
ica.exclude.extend(eog_inds)
# from now on the ICA will reject this component even if no exclude
# parameter is passed, and this information will be stored to disk
# on saving
# uncomment this for reading and writing
# ica.save('my-ica.fif')
# ica = read_ica('my-ica.fif')
ecg_epochs = create_ecg_epochs(raw, tmin=-.5, tmax=.5)
ecg_inds, scores = ica.find_bads_ecg(ecg_epochs, method='ctps')
ica.plot_properties(ecg_epochs, picks=ecg_inds, psd_args={'fmax': 35.})
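# A sketch of the corresponding cleanup step (mirroring n_max_eog above;
# capping at one cardiac component is an assumption, not from the tutorial):
n_max_ecg = 1
ica.exclude.extend(ecg_inds[:n_max_ecg])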
from mne.preprocessing.ica import corrmap # noqa
# We'll start by simulating a group of subjects or runs from a subject
start, stop = [0, len(raw.times) - 1]
intervals = np.linspace(start, stop, 4, dtype=int)
icas_from_other_data = list()
raw.pick_types(meg=True, eeg=False) # take only MEG channels
for ii, start in enumerate(intervals):
if ii + 1 < len(intervals):
stop = intervals[ii + 1]
print('fitting ICA from {0} to {1} seconds'.format(start, stop))
this_ica = ICA(n_components=n_components, method=method).fit(
raw, start=start, stop=stop, reject=reject)
icas_from_other_data.append(this_ica)
print(icas_from_other_data)
reference_ica = ica
reference_ica.plot_components()
reference_ica.plot_sources(eog_average, exclude=eog_inds)
icas = [reference_ica] + icas_from_other_data
template = (0, eog_inds[0])
fig_template, fig_detected = corrmap(icas, template=template, label="blinks",
show=True, threshold=.8, ch_type='mag')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The udm2 asset
|
<ASSISTANT_TASK:>
Python Code:
from planet import api
import time
import os
import rasterio
from rasterio.plot import show
client = api.ClientV1()
# build a metadata filter: only scenes with at least 90% clear pixels
clear_filter = api.filters.range_filter("clear_percent", gte=90)
# show the structure of the filter
print(clear_filter)
# we are requesting PlanetScope 4 Band imagery
item_types = ['PSScene']
request = api.filters.build_search_request(clear_filter, item_types)
# this will cause an exception if there are any API related errors
results = client.quick_search(request)
# print out the ID of the most recent 10 images that matched
for item in results.items_iter(10):
print('%s' % item['id'])
item_type = "PSScene"
item_id = "20190228_172942_0f1a"
# activate assets
assets = client.get_assets_by_id("PSScene", item_id).get()
client.activate(assets["ortho_analytic_4b"])
client.activate(assets["ortho_udm2"])
# wait until activation completes
while True:
assets = client.get_assets_by_id("PSScene", item_id).get()
if "location" in assets["ortho_analytic_4b"] and "location" in assets["ortho_udm2"]:
print('assets activated')
break
time.sleep(10)
# start downloads
data_dir = 'data'
os.makedirs(data_dir, exist_ok=True)  # make sure the download directory exists
r1 = client.download(assets["ortho_analytic_4b"], callback=api.write_to_file(data_dir))
r2 = client.download(assets["ortho_udm2"], callback=api.write_to_file(data_dir))
# wait until downloads complete
r1.wait()
r2.wait()
img_file = os.path.join(data_dir, r1.get_body().name)
udm_file = os.path.join(data_dir, r2.get_body().name)
print("image: {}".format(img_file))
print("udm2: {}".format(udm_file))
with rasterio.open(udm_file) as src:
shadow_mask = src.read(3).astype(bool)
cloud_mask = src.read(6).astype(bool)
show(shadow_mask, title="shadow", cmap="binary")
show(cloud_mask, title="cloud", cmap="binary")
mask = shadow_mask + cloud_mask
show(mask, title="mask", cmap="binary")
with rasterio.open(img_file) as src:
profile = src.profile
img_data = src.read([3, 2, 1], masked=True) / 10000.0 # apply RGB ordering and scale down
show(img_data, title=item_id)
img_data.mask = mask
img_data = img_data.filled(fill_value=0)
show(img_data, title="masked image")
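# The `profile` captured above is otherwise unused; as a sketch, it can be
# recycled to persist the masked composite as a 3-band GeoTIFF (the output
# filename, band count and dtype here are assumptions for illustration):
out_file = os.path.join(data_dir, 'masked_rgb.tif')
profile.update(count=3, dtype='float64')
with rasterio.open(out_file, 'w', **profile) as dst:
    dst.write(img_data)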
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This crazy try-except construction is our way of making sure the notebooks will work when completed without actually providing complete code. You can either write your code directly in the except block, or delete the try, exec and except lines entirely (remembering to unindent the remaining lines in that case, because python).
Step2: This cell just prints out the string my_goals.
|
<ASSISTANT_TASK:>
Python Code:
class SolutionMissingError(Exception):
def __init__(self):
Exception.__init__(self,"You need to complete the solution for this code to work!")
def REPLACE_WITH_YOUR_SOLUTION():
raise SolutionMissingError
REMOVE_THIS_LINE = REPLACE_WITH_YOUR_SOLUTION
try:
exec(open('Solution/goals.py').read())
except IOError:
my_goals = REPLACE_WITH_YOUR_SOLUTION()
print(my_goals)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Load each array of Ps and Ts
Step2: 3. Load signals
Step3: 4. Plot peaks and troughs on top of signals
|
<ASSISTANT_TASK:>
Python Code:
import tarfile
fname_base = 'C:/gh/data/example/lfp_set_PsTs/out.29419325.'
Nfiles = 10
for n in range(Nfiles):
fname = fname_base + str(n) + '.tar.gz'
tar = tarfile.open(fname, "r:gz")
tar.extractall('C:/gh/data/example/lfp_set_PsTs/' + str(n) + '/')
tar.close()
import numpy as np
Ps = np.zeros(Nfiles, dtype=np.ndarray)
Ts = np.zeros(Nfiles, dtype=np.ndarray)
for n in range(Nfiles):
Ps[n] = np.load('C:/gh/data/example/lfp_set_PsTs/' + str(n) + '/out/Ps_data.npy')
Ts[n] = np.load('C:/gh/data/example/lfp_set_PsTs/' + str(n) + '/out/Ts_data.npy')
lfps = np.zeros(Nfiles, dtype=np.ndarray)
for n in range(Nfiles):
if n == 0:
lfps[n] = np.load('C:/gh/data/example/lfp_set/' + str(10) + '.npy')
else:
lfps[n] = np.load('C:/gh/data/example/lfp_set/' + str(n) + '.npy')
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure(figsize=(10,10))
for n in range(Nfiles):
plt.subplot(Nfiles, 1, n+1)
plt.plot(lfps[n], 'k')
plt.plot(Ps[n], lfps[n][Ps[n]], 'bo')
plt.plot(Ts[n], lfps[n][Ts[n]], 'ro')
if n == Nfiles-1:
plt.xlabel('Time (ms)')
else:
plt.xticks([])
plt.ylim((-3000,3000))
plt.yticks([-3000,0,3000])
if n == 0:
plt.ylabel('Voltage (uV)')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Define simulation parameters
Step3: Setup the simulation domain using parameters defined above
Step4: Solve for field profiles
Step5: Backward propagation (Fig. 1d)
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import autograd.numpy as npa
import skimage as sk
import copy
import matplotlib as mpl
mpl.rcParams['figure.dpi']=100
import matplotlib.pylab as plt
from autograd.scipy.signal import convolve as conv
from skimage.draw import circle, circle_perimeter
import sys
sys.path.append('../')
import ceviche.viz
from ceviche import fdfd_mf_ez, jacobian
from ceviche.optimizers import adam_optimize
from ceviche.modes import insert_mode
# from ceviche.helper import operator_proj, operator_blur
import collections
# Create a container for our slice coords to be used for sources and probes
Slice = collections.namedtuple('Slice', 'x y')
# Speed of light in vacuum
c0 = 3e+8
# Frequency of the source
omega=243e12 #rad/s and not Hz, typo in the actual paper
# Modulation frequency
omega_mod=132e12 #rad/s and not THz, typo in the actual paper
# Number of sidebands on each side of central wavelength (total number of bands: 2*Nsb + 1)
Nsb = 1
# Spatial resolution in meters
dl=50e-9
# Number of pixels in the PMLs in each direction
Npml=10
# Background permittivity
eps_bg=1.0
# Waveguide structure's relative permittivity
eps_si=12.25
# Modulation depth in both regions
mod_depth=1.0
# Modulation phases of left and right regions
mod_phi_l = 0
mod_phi_r = np.pi/2
# Number of modulation frequencies (different from number of sidebands)
Nfreq = 1
# Full width of simulation domain
W=4e-6
# Waveguide thickness
wg_width=1.1e-6
# Central section thickness
wc=2.0e-6
# Full length of simulation domain
L=55e-6
# Length of modulation region on each side
Lmod=19e-6
#Length of central section
Lmid=4.75e-6
# Number of pixels in x-direction
Nx=int(L/dl)
# Number of pixels in y-direction
Ny=int(W/dl)
# Space between the PMLs and the design region (in pixels)
space=20
# Width of the source/probe slices (in meters; converted to pixels where used)
slice_width=3*wg_width
print('Simulation domain is ' + str(Nx) + 'x' + str(Ny) + ' pixels')
def init_domain():
    """Initializes the domain and design region."""
# Permittivity initialization of the structure
epsr = eps_bg*np.ones((Nx, Ny))
## Geometry of waveguide
# Left waveguide section
wg_L1 = 0
wg_L2 = int(Nx/2) - int(Lmid/2/dl)
# Right waveguide section
wg_R1 = int(Nx/2) + int(Lmid/2/dl)
wg_R2 = int(Nx)
wg_top = int(Ny/2) + int(wg_width/2/dl)
wg_bot = int(Ny/2) - int(wg_width/2/dl)
#Central section
wg_c_top = int(Ny/2) + int(wc/2/dl)
wg_c_bot = int(Ny/2) - int(wc/2/dl)
## Permittivity of waveguide
epsr[wg_L1:wg_L2, wg_bot:wg_top] = eps_si
epsr[wg_R1:wg_R2, wg_bot:wg_top] = eps_si
epsr[wg_L2:wg_R1, wg_c_bot:wg_c_top] = eps_si
# Coordinates of source input
input_slice = Slice(x=np.array(Npml+1),
y=np.arange(int(Ny/2) - int(slice_width/2/dl), int(Ny/2) + int(slice_width/2/dl)))
# Coordinates of probe output
output_slice = Slice(x=np.array(Nx-Npml-1),
y=np.arange(int(Ny/2) - int(slice_width/2/dl), int(Ny/2) + int(slice_width/2/dl)))
# Modulation region (delta and phi)
delta = np.zeros((Nfreq, Nx, Ny))
delta[:, wg_L2-int(Lmod/dl):wg_L2, int(Ny/2):wg_top] = mod_depth
delta[:, wg_R1:wg_R1+int(Lmod/dl), int(Ny/2):wg_top] = mod_depth
phi = np.zeros((Nfreq, Nx, Ny))
phi[:, wg_L2-int(Lmod/dl):wg_L2, int(Ny/2):wg_top] = mod_phi_l
phi[:, wg_R1:wg_R1+int(Lmod/dl), int(Ny/2):wg_top] = mod_phi_r
return epsr, input_slice, output_slice, delta, phi
# Initialize the parametrization rho and the design region
epsr, input_slice, output_slice, delta, phi = init_domain()
slices = [input_slice, output_slice]
# Setup source and pad appropriately (notice source is at 'input' slice)
source_0 = np.array([insert_mode(omega, dl, input_slice.x, input_slice.y, epsr, m=1)])
pad_left = np.zeros([Nsb,Nx,Ny])
pad_right = np.zeros([Nsb,Nx,Ny])
source = np.concatenate( (np.concatenate((pad_left, source_0)), pad_right) )
# Define multi-frequency FDFD simulation with the epsr, delta, phi profiles
simulation = fdfd_mf_ez(omega, dl, epsr, omega_mod, delta, phi, Nsb, [Npml, Npml])
# Solve for electric and magnetic fields
Hx, Hy, Ez = simulation.solve(source)
# Plot results
fig, ax = plt.subplots(3,1, constrained_layout=True, figsize=(12,6))
ceviche.viz.real(Ez[Nsb + 0], outline=epsr, ax=ax[0], cbar=False)
ax[0].set_title('Ez profile at omega')
ceviche.viz.real(Ez[Nsb + 1], outline=epsr, ax=ax[1], cbar=False)
ax[1].set_title('Ez profile at omega+omega_mod')
for sl in slices:
ax[0].plot(sl.x*np.ones(len(sl.y)), sl.y, 'b-')
    ax[1].plot(sl.x*np.ones(len(sl.y)), sl.y, 'b-')
ceviche.viz.real(delta[0], outline=epsr, ax=ax[2], cmap='Greys')
ax[2].set_title('Modulation region')
# Initialize the parametrization rho and the design region
epsr, input_slice, output_slice, delta, phi = init_domain()
slices = [input_slice, output_slice]
# Setup source and pad appropriately (notice source is at 'output' slice this time)
source_0 = np.array([insert_mode(omega, dl, output_slice.x, output_slice.y, epsr, m=1)])
pad_left = np.zeros([Nsb,Nx,Ny])
pad_right = np.zeros([Nsb,Nx,Ny])
source = np.concatenate( (np.concatenate((pad_left, source_0)), pad_right) )
# Define multi-frequency FDFD simulation with the epsr, delta, phi profiles
simulation = fdfd_mf_ez(omega, dl, epsr, omega_mod, delta, phi, Nsb, [Npml, Npml])
# Solve for electric and magnetic fields
Hx, Hy, Ez = simulation.solve(source)
# Plot results
fig, ax = plt.subplots(3,1, constrained_layout=True, figsize=(12,6))
ceviche.viz.real(Ez[Nsb + 0], outline=epsr, ax=ax[0], cbar=False)
ax[0].set_title('Ez profile at omega')
ceviche.viz.real(Ez[Nsb + 1], outline=epsr, ax=ax[1], cbar=False)
ax[1].set_title('Ez profile at omega+omega_mod')
for sl in slices:
ax[0].plot(sl.x*np.ones(len(sl.y)), sl.y, 'b-')
    ax[1].plot(sl.x*np.ones(len(sl.y)), sl.y, 'b-')
ceviche.viz.real(delta[0], outline=epsr, ax=ax[2], cmap='Greys')
ax[2].set_title('Modulation region')
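# As a quick, back-of-the-envelope check (a sketch, not part of the original
# figure), compare the total |Ez|^2 carried in each sideband of the solution:
for sb in range(2 * Nsb + 1):
    print('sideband {:+d}: total |Ez|^2 = {:.3e}'.format(sb - Nsb, np.sum(np.abs(Ez[sb]) ** 2)))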
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables
Step9: 2. Key Properties --> Seawater Properties
Step10: 2.2. Eos Functional Temp
Step11: 2.3. Eos Functional Salt
Step12: 2.4. Eos Functional Depth
Step13: 2.5. Ocean Freezing Point
Step14: 2.6. Ocean Specific Heat
Step15: 2.7. Ocean Reference Density
Step16: 3. Key Properties --> Bathymetry
Step17: 3.2. Type
Step18: 3.3. Ocean Smoothing
Step19: 3.4. Source
Step20: 4. Key Properties --> Nonoceanic Waters
Step21: 4.2. River Mouth
Step22: 5. Key Properties --> Software Properties
Step23: 5.2. Code Version
Step24: 5.3. Code Languages
Step25: 6. Key Properties --> Resolution
Step26: 6.2. Canonical Horizontal Resolution
Step27: 6.3. Range Horizontal Resolution
Step28: 6.4. Number Of Horizontal Gridpoints
Step29: 6.5. Number Of Vertical Levels
Step30: 6.6. Is Adaptive Grid
Step31: 6.7. Thickness Level 1
Step32: 7. Key Properties --> Tuning Applied
Step33: 7.2. Global Mean Metrics Used
Step34: 7.3. Regional Metrics Used
Step35: 7.4. Trend Metrics Used
Step36: 8. Key Properties --> Conservation
Step37: 8.2. Scheme
Step38: 8.3. Consistency Properties
Step39: 8.4. Corrected Conserved Prognostic Variables
Step40: 8.5. Was Flux Correction Used
Step41: 9. Grid
Step42: 10. Grid --> Discretisation --> Vertical
Step43: 10.2. Partial Steps
Step44: 11. Grid --> Discretisation --> Horizontal
Step45: 11.2. Staggering
Step46: 11.3. Scheme
Step47: 12. Timestepping Framework
Step48: 12.2. Diurnal Cycle
Step49: 13. Timestepping Framework --> Tracers
Step50: 13.2. Time Step
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Step52: 14.2. Scheme
Step53: 14.3. Time Step
Step54: 15. Timestepping Framework --> Barotropic
Step55: 15.2. Time Step
Step56: 16. Timestepping Framework --> Vertical Physics
Step57: 17. Advection
Step58: 18. Advection --> Momentum
Step59: 18.2. Scheme Name
Step60: 18.3. ALE
Step61: 19. Advection --> Lateral Tracers
Step62: 19.2. Flux Limiter
Step63: 19.3. Effective Order
Step64: 19.4. Name
Step65: 19.5. Passive Tracers
Step66: 19.6. Passive Tracers Advection
Step67: 20. Advection --> Vertical Tracers
Step68: 20.2. Flux Limiter
Step69: 21. Lateral Physics
Step70: 21.2. Scheme
Step71: 22. Lateral Physics --> Momentum --> Operator
Step72: 22.2. Order
Step73: 22.3. Discretisation
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Step75: 23.2. Constant Coefficient
Step76: 23.3. Variable Coefficient
Step77: 23.4. Coeff Background
Step78: 23.5. Coeff Backscatter
Step79: 24. Lateral Physics --> Tracers
Step80: 24.2. Submesoscale Mixing
Step81: 25. Lateral Physics --> Tracers --> Operator
Step82: 25.2. Order
Step83: 25.3. Discretisation
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Step85: 26.2. Constant Coefficient
Step86: 26.3. Variable Coefficient
Step87: 26.4. Coeff Background
Step88: 26.5. Coeff Backscatter
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Step90: 27.2. Constant Val
Step91: 27.3. Flux Type
Step92: 27.4. Added Diffusivity
Step93: 28. Vertical Physics
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
Step96: 30.2. Closure Order
Step97: 30.3. Constant
Step98: 30.4. Background
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
Step100: 31.2. Closure Order
Step101: 31.3. Constant
Step102: 31.4. Background
Step103: 32. Vertical Physics --> Interior Mixing --> Details
Step104: 32.2. Tide Induced Mixing
Step105: 32.3. Double Diffusion
Step106: 32.4. Shear Mixing
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
Step108: 33.2. Constant
Step109: 33.3. Profile
Step110: 33.4. Background
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
Step112: 34.2. Constant
Step113: 34.3. Profile
Step114: 34.4. Background
Step115: 35. Uplow Boundaries --> Free Surface
Step116: 35.2. Scheme
Step117: 35.3. Embeded Seaice
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Step119: 36.2. Type Of Bbl
Step120: 36.3. Lateral Mixing Coef
Step121: 36.4. Sill Overflow
Step122: 37. Boundary Forcing
Step123: 37.2. Surface Pressure
Step124: 37.3. Momentum Flux Correction
Step125: 37.4. Tracers Flux Correction
Step126: 37.5. Wave Effects
Step127: 37.6. River Runoff Budget
Step128: 37.7. Geothermal Heating
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Step132: 40.2. Ocean Colour
Step133: 40.3. Extinction Depth
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Step135: 41.2. From Sea Ice
Step136: 41.3. Forced Mode Restoring
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'bnu', 'bnu-esm-1-1', 'ocean')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
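# For illustration only (an example choice -- the real entry must describe
# this model), a completed cell would look like:
# DOC.set_value("Primitive equations")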
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As always, let's do imports and initialize a logger and a new Bundle.
Step2: Dataset Parameters
Step3: For information on the included passband-dependent parameters (not mentioned below), see the section on the lc dataset.
Step4: compute_times / compute_phases
Step5: wavelengths
Step6: components
Step7: sigmas
Step8: profile_func
Step9: profile_rest
Step10: profile_sv
Step11: Synthetics
Step12: The model for a line profile dataset will expose flux-densities at each time and for each component where the corresponding wavelengths Parameter was not empty. Here since we used the default and exposed line-profiles for the entire system, we have a single entry per-time.
Step13: Plotting
Step14: Mesh Fields
|
<ASSISTANT_TASK:>
Python Code:
#!pip install -I "phoebe>=2.4,<2.5"
import phoebe
logger = phoebe.logger()
b = phoebe.default_binary()
b.add_dataset('lp', times=[0,1,2], wavelengths=phoebe.linspace(549, 551, 101))
print(b.get_dataset(kind='lp', check_visible=False))
print(b.get_dataset(kind='lp').times)
print(b.get_parameter(qualifier='compute_times'))
print(b.get_parameter(qualifier='compute_phases', context='dataset'))
print(b.get_parameter(qualifier='phases_t0'))
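# Hypothetical example (left commented out so the bundle used below is unchanged):
# compute the model over a custom phase grid instead of the dataset times, e.g.
# b.set_value('compute_phases', phoebe.linspace(0, 1, 101))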
print(b.filter(qualifier='wavelengths'))
print(b.get_parameter(qualifier='wavelengths', component='binary'))
print(b.filter(qualifier='flux_densities'))
print(b.get_parameter(qualifier='flux_densities',
component='binary',
time=0.0))
print(b.filter(qualifier='sigmas'))
print(b.get_parameter(qualifier='sigmas',
component='binary',
time=0))
print(b.get_parameter(qualifier='profile_func'))
print(b.get_parameter(qualifier='profile_rest'))
print(b.get_parameter(qualifier='profile_sv'))
b.run_compute(irrad_method='none')
print(b.filter(context='model').twigs)
print(b.filter(qualifier='flux_densities', context='model'))
print(b.get_parameter(qualifier='flux_densities', context='model', time=0))
afig, mplfig = b.filter(dataset='lp01', context='model', time=0).plot(show=True)
b.add_dataset('mesh', times=[0], dataset='mesh01')
print(b.get_parameter(qualifier='columns').choices)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: this is super simple - the input is being multiplied by a weight and returned. The power of NN's lies in the weights and how we update them.
Step3: Before we move on, being able to manipulate vectors is a cornerstone technique for Deep Learning. Some functions to do vector math
Step4: Testing the wikipedia example of a dot product
Step5: So we've just made a function to do a dot product.
Step6: Weights are a superpower. We can make many predictions from just a single input
Step7: Note that the 3 predictions are completely separate.
Step8: In the above, each output has its own set of weights, which it uses to take the dot product with the input to calculate that output.
Step9: numpy
Step10: There is a lot more about numpy - see TK
Step11: Now this approach is inefficient, and it can even make the goal impossible to reach if, say, we use a large step value.
Step12: a neural net with descent
Step13: Chapter 5
Step14: the below is a shitfight
Step15: The dataset contains a visual representation of the image data
Step16: digits.target contains what each datapoint represents
Step17: So the image above looks like a zero, and accordingly digits.target[0], the label for digits.data[0], is equal to 0.
Step18: So the dot product of each input vector and the weights gives us 10 outputs
Step19: And the dot product of the hidden layer's output with the output weights gives the final prediction
Step20: Now, the output numbers seem pretty high, so we might want to normalize the incoming data
Step21: Now calculating the hidden layer
Step22: Using the sigmoid function to calculate the output of the hidden layer
Step23: The hidden layer gives 10 outputs for each of the 1797 data points.
Step24: What are the output errors?
Step25: Now, how do we adjust the weights given that we have the output errors?
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np               # used by the numpy-based cells further down
import matplotlib.pyplot as plt  # used for the training plots later on

# the network
weight = 0.1
def neural_network(input, weight):
prediction = input * weight
return prediction
# using the network to predict something
number_of_toes = [8.5, 9.5, 10, 9]
input = number_of_toes[0]
pred = neural_network(input,weight)
pred
weights = [0.1, 0.2, 0]
def neural_network(input, weights):
pred = w_sum(input, weights)
return pred
toes = [8.5, 9.5, 9.9, 9.0]
wlrec = [0.65, 0.8, 0.8, 0.9]
nfans = [1.2, 1.3, 0.5, 1.0]
def w_sum(a, b):
    """Takes in 2 vectors of the same length,
    multiplies the ith item of each list with each other
    and adds them all up.
    OR - we pair up each number in the two vectors, multiply
    every pair, then add up the results.
    """
    assert(len(a) == len(b))
    return sum([a[i]*b[i] for i in range(len(a))])
for input in zip(toes, wlrec, nfans):
print(neural_network(input, weights))
def ele_mul(vec_a, vec_b):
assert(len(vec_a) == len(vec_b))
return [vec_a[i] * vec_b[i] for i in range(len(vec_a))]
def vector_sum(vec_a):
return sum(vec_a)
def ele_add(vec_a, vec_b):
assert(len(vec_a) == len(vec_b))
return [vec_a[i] + vec_b[i] for i in range(len(vec_a))]
def vector_average(vec_a):
return sum(vec_a) / len(vec_a)
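# Quick sanity check of the helpers above (illustrative values):
print(ele_mul([1, 2], [3, 4]))     # -> [3, 8]
print(ele_add([1, 2], [3, 4]))     # -> [4, 6]
print(vector_average([4.0, 6.0]))  # -> 5.0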
a = [2,2,4]
b = [3,3,9]
# to get the dot product of a and b
vector_sum(ele_mul(a,b))
a = [1, 3, -5]
b = [4, -2, -1]
print('my func:', vector_sum(ele_mul(a,b)))
print('numpy dot func:', np.dot(a,b))
weights = np.array([0.1, 0.2, 0])
def neural_network(input, weights):
pred = input.dot(weights)
return pred
toes = np.array([8.5, 9.5, 9.9, 9.0])
wlrec = np.array([0.65, 0.8, 0.8, 0.9])
nfans = np.array([1.2, 1.3, 0.5, 1.0])
# input corresponds to every entry for the first game of the season
# to go through all the inputs
for input in zip(toes,wlrec,nfans):
print(neural_network(np.array(input), weights))
weights = [0.3, 0.2, 0.9]
def neural_network(input, weights):
pred = input * np.array(weights)
return pred
wlrec = [0.65, 0.9, 0.8, 0.8]
for input in wlrec:
print(neural_network(input, weights))
#toes #win #fans
weights = [[0.1, 0.1, -0.3],
[0.1, 0.2, 0.0],
[0.0, 1.3, 0.1]]
def neural_network(input, weights):
pred = vect_mat_mul(input,weights)
return pred
def vect_mat_mul(vect,matrix):
out = []
for m in matrix:
out.append(np.dot(vect,m))
return out
toes = [8.5, 9.5, 9.9, 9.0]
wlrec = [0.65, 0.8, 0.8, 0.9]
nfans = [1.2, 1.3, 0.5, 1.0]
for input in zip(toes,wlrec,nfans):
print(neural_network(input, weights))
w_hid = [[0.1,0.2,-0.1],
[-0.1,0.1,0.9],
[0.1,0.4,0.1]]
w_out = [[0.3,1.1,-0.3],
[-0.1,0.2,0.0],
[0.0,1.3,0.1]]
weights = [w_hid, w_out]
def neural_net(input, weights):
hid = np.dot(input, weights[0])
# now we feed the output of the first layer into the next
pred = np.dot(hid, weights[1])
return pred
toes = [8.5, 9.5, 9.9, 9.0]
wlrec = [0.65,0.8, 0.8, 0.9]
nfans = [1.2, 1.3, 0.5, 1.0]
input = list(zip(toes, wlrec, nfans))[0]
neural_net(input,weights)
a = np.array([0,1,2,3])
b = np.array([4,5,6,7])
c = np.array([[0,1,2,3],
[4,5,6,7]])
d = np.zeros((2,4))#(2x4 matrix of zeros)
e = np.random.rand(2,5) # random 2x5
a* c
weight = 0.5
input = 0.5
target = 0.8
step_amount = 0.001
for iteration in range(1101):
pred = input * weight
err = (pred - target) ** 2
#print(f"error {err} Preds {pred}")
up_pred = input * (weight + step_amount)
up_error = (up_pred - target) ** 2
down_pred = input * (weight - step_amount)
down_error = (down_pred - target) ** 2
if up_error < down_error:
weight += step_amount
else:
weight -= step_amount
print(f'final prediction: {input * weight} | weight: {weight}')
weight = 0.5
input = 0.5
target = 0.8
alpha = .01
predictions, errors = [], []
for i in range(1500):
pred = input * weight
error = (pred - target) ** 2
# delta measures by how much the target was missed
delta = pred - target
weight_delta = delta * input
weight = weight - weight_delta * alpha
predictions.append(pred)
errors.append(error)
#if i % 220 == 0:
# print(f'Step {i:2d} Error {error:f} Prediction {pred:f}')
plt.xlabel('iterations')
plt.grid()
plt.plot(errors, label="Errors")
plt.plot(predictions, label="Predictions")
plt.legend();
inputs = np.array([0.25,1.0,0.1])
target = 70 # my weight
weights = np.array([1,3,1])
learning_rate = 0.1
for epoch in range(40):
pred = np.dot(inputs, weights)
error = (pred - target)**2
    delta_error = inputs * (pred - target)  # fixed: was 'input', a stale variable from earlier cells
weights = weights - (delta_error * learning_rate)
print('pred:', pred)
print('weights:', weights)
# step 1: empty network
weights = [ [0.1, 0.1, -0.3], #hurt
[0.1, 0.2, 0.0], #win
[0.0, 1.3, 0.1] ]#sad?
def neural_network(input, weights):
pred = np.dot(input, weights)
return pred
#Predict
toes = [8.5, 9.5, 9.9, 9.0]
wlrec = [0.65,0.8, 0.8, 0.9]
nfans = [1.2, 1.3, 0.5, 1.0]
hurt = [0.1, 0.0, 0.0, 0.1]
win =[1, 1, 0, 1]
sad = [0.1, 0.0, 0.1, 0.2]
alpha = 0.01
input = [toes[0],wlrec[0],nfans[0]]
true = [hurt[0], win[0], sad[0]]
pred = neural_network(input, weights)
error = [0,0,0]
delta = [0,0,0]
for i in range(len(true)):
error[i] = (pred[i] - true[i]) ** 2
delta[i] = pred[i] - true[i]
def outer_prod(vec_a, vec_b):
out = np.zeros((len(vec_a), len(vec_b)))
for i in range(len(vec_a)):
for j in range(len(vec_b)):
out[i][j] = vec_a[i]*vec_b[j]
return out
#updating the weights
weight_deltas = outer_prod(input,delta) * alpha
weights = np.array(weights) - weight_deltas
neural_network([toes[1],wlrec[1],nfans[1]], weights), [hurt[1], win[1], sad[1]]
# Import datasets, classifiers and performance metrics
from sklearn import datasets, svm, metrics
# The digits dataset
digits = datasets.load_digits()
print(digits.data.shape)
digits.data # the actual data
plt.matshow(digits.images[0])
print(digits.target.shape)
digits.target
inputs = digits.data
targets = digits.target
#one weight per input pixel
hidden_weights = np.random.random([64,10])
output_weights = np.random.random(10)
learning_rate = 0.01
epochs = 10
def sigmoid(x):
return 1 / (1 + np.exp(-x))
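# sigmoid squashes any real input into (0, 1); a quick illustrative check:
print(sigmoid(np.array([-5.0, 0.0, 5.0])))  # ~[0.0067, 0.5, 0.9933]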
hidden_input = np.dot(inputs[0], hidden_weights)
hidden_output = sigmoid(hidden_input)
hidden_output.shape
np.dot(hidden_output, output_weights)  # the hidden layer's 10 values dotted with the 10 output weights
#inputs = np.random.random([len(digits.data),64]) # a blank input
#for i, item in enumerate(digits.data):
# inputs[i] = ((item - digits.data.mean()) / digits.data.std())
print(inputs.shape)
inputs
hidden_inputs = np.dot(inputs, hidden_weights)
print(hidden_inputs.shape)
hidden_inputs
hidden_outputs = sigmoid(hidden_inputs)
print(hidden_outputs.shape)
hidden_outputs
output_pred = np.dot(hidden_outputs, output_weights)
final_outputs = output_pred.round(0)
print(final_outputs.shape)
final_outputs
output_errors = (final_outputs - digits.target)**2
print(output_errors.shape)
output_errors
output_delta = final_outputs - digits.target
output_delta[:10]
#weights
ih_weight = np.random.random([64,10])
ho_weight = np.random.random(10)
learning_rate = 0.01
epochs = 20
for epoch in range(epochs):
    for input, target in zip(digits.data, digits.target):
        hidden_layer = np.dot(input, ih_weight)
        hidden_outputs = sigmoid(hidden_layer)      # was sigmoid(hidden_inputs), a stale variable from earlier cells
        output = np.dot(hidden_outputs, ho_weight)  # keep the raw prediction; rounding it would kill the gradients
        # so now we have a prediction. time to back propagate!
        # starting with the output errors (the output is linear, so its delta is just the error):
        error = output - target
        output_delta = error
        output_weight_delta = hidden_outputs * output_delta
        ho_weight -= output_weight_delta * learning_rate
        # then push the error back through the hidden layer (sigmoid derivative):
        hidden_delta = output_delta * ho_weight * hidden_outputs * (1 - hidden_outputs)
        ih_weight_delta = np.outer(input, hidden_delta)
        ih_weight -= ih_weight_delta * learning_rate
output_weight_delta
ho_weight
ih_weight_delta.shape
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We can now listen to the resulting audio files to see which of the two onset detection functions works better for our audio example.
Step2: Finally, let's plot the onset detection functions we computed and the audio with onsets marked by vertical lines. Inspecting these plots, we can easily see how the hfc method picked up the hi-hats, while the complex method also detected the kicks.
|
<ASSISTANT_TASK:>
Python Code:
import essentia  # needed for essentia.Pool and essentia.array below
from essentia.standard import *
from tempfile import TemporaryDirectory
# Load audio file.
audio = MonoLoader(filename='../../../test/audio/recorded/hiphop.mp3')()
# 1. Compute the onset detection function (ODF).
# The OnsetDetection algorithm provides various ODFs.
od_hfc = OnsetDetection(method='hfc')
od_complex = OnsetDetection(method='complex')
# We need the auxiliary algorithms to compute magnitude and phase.
w = Windowing(type='hann')
fft = FFT() # Outputs a complex FFT vector.
c2p = CartesianToPolar() # Converts it into a pair of magnitude and phase vectors.
# Compute both ODF frame by frame. Store results to a Pool.
pool = essentia.Pool()
for frame in FrameGenerator(audio, frameSize=1024, hopSize=512):
magnitude, phase = c2p(fft(w(frame)))
pool.add('odf.hfc', od_hfc(magnitude, phase))
pool.add('odf.complex', od_complex(magnitude, phase))
# 2. Detect onset locations.
onsets = Onsets()
onsets_hfc = onsets(# This algorithm expects a matrix, not a vector.
essentia.array([pool['odf.hfc']]),
# You need to specify weights, but if we use only one ODF
# it doesn't actually matter which weight to give it
[1])
onsets_complex = onsets(essentia.array([pool['odf.complex']]), [1])
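# Illustrative check: how many onsets each detection function produced.
print(len(onsets_hfc), len(onsets_complex))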
# Add onset markers to the audio and save it to a file.
# We use beeps instead of white noise and stereo signal as it's more distinctive.
# We want to keep beeps in a separate audio channel.
# Add them to a silent audio and use the original audio as another channel. Mux both into a stereo signal.
silence = [0.] * len(audio)
beeps_hfc = AudioOnsetsMarker(onsets=onsets_hfc, type='beep')(silence)
beeps_complex = AudioOnsetsMarker(onsets=onsets_complex, type='beep')(silence)
audio_hfc = StereoMuxer()(audio, beeps_hfc)
audio_complex = StereoMuxer()(audio, beeps_complex)
# Write audio to files in a temporary directory.
temp_dir = TemporaryDirectory()
AudioWriter(filename=temp_dir.name + '/hiphop_onsets_hfc_stereo.mp3', format='mp3')(audio_hfc)
AudioWriter(filename=temp_dir.name + '/hiphop_onsets_complex_stereo.mp3', format='mp3')(audio_complex)
import IPython
IPython.display.Audio('../../../test/audio/recorded/hiphop.mp3')
IPython.display.Audio(temp_dir.name + '/hiphop_onsets_hfc_stereo.mp3')
IPython.display.Audio(temp_dir.name + '/hiphop_onsets_complex_stereo.mp3')
import matplotlib.pyplot as plt
%matplotlib inline
import numpy
n_frames = len(pool['odf.hfc'])
frames_position_samples = numpy.array(range(n_frames)) * 512
fig, ((ax1, ax2, ax3, ax4)) = plt.subplots(4, 1, sharex=True, sharey=False, figsize=(15, 16))
ax1.set_title('HFC ODF')
ax1.plot(frames_position_samples, pool['odf.hfc'], color='magenta')
ax2.set_title('Complex ODF')
ax2.plot(frames_position_samples, pool['odf.complex'], color='red')
ax3.set_title('Audio waveform and the estimated onset positions (HFC ODF)')
ax3.plot(audio)
for onset in onsets_hfc:
ax3.axvline(x=onset*44100, color='magenta')
ax4.set_title('Audio waveform and the estimated onset positions (complex ODF)')
ax4.plot(audio)
for onset in onsets_complex:
ax4.axvline(x=onset*44100, color='red')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Cleaning incorrectly broken lines
Step2: Problem: the first line contains the variable names
Step3: How do we handle the incorrectly broken lines?
Step4: 3. Reading the data and splitting the columns correctly
Step5: 4. Counting
Step6: 5. Cleaning the tweet text
Step7: Installing twitter_text
Step8: Getting the cleaned tweet text
Step9: Food for thought:
|
<ASSISTANT_TASK:>
Python Code:
bigfile = open('/Users/chengjun/百度云同步盘/Writing/OWS/ows-raw.txt', 'rb')
chunkSize = 1000000
chunk = bigfile.readlines(chunkSize)
print len(chunk)
with open("/Users/chengjun/GitHub/cjc2016/data/ows_tweets_sample.txt", 'w') as f:
for i in chunk:
f.write(i)
with open("/Users/chengjun/GitHub/cjc2016/data/ows_tweets_sample.txt", 'rb') as f:
lines = f.readlines()
# total number of lines
len(lines)
# look at the first line
lines[0]
varNames = lines[0].replace('\n', '').split(',')
varNames
len(varNames)
lines[1344]
with open("/Users/chengjun/GitHub/cjc2016/data/ows_tweets_sample_clean.txt", 'w') as f:
    right_line = '' # the correct line, starting as an empty string
    blocks = [] # lines confirmed as correct get appended to blocks
for line in lines:
right_line += line.replace('\n', ' ')
line_length = len(right_line.split(','))
if line_length >= 14:
blocks.append(right_line)
right_line = ''
for i in blocks:
f.write(i + '\n')
len(blocks)
blocks[1344]
with open("/Users/chengjun/GitHub/cjc2016/data/ows_tweets_sample_clean4.txt", 'w') as f:
    right_line = '' # the correct line, starting as an empty string
    blocks = [] # lines confirmed as correct get appended to blocks
for line in lines:
right_line += line.replace('\n', ' ').replace('\r', ' ')
line_length = len(right_line.split(','))
if line_length >= 14:
blocks.append(right_line)
right_line = ''
for i in blocks:
f.write(i + '\n')
blocks[1344]
# Note: you may need to change the path below
with open("/Users/chengjun/GitHub/cjc2016/data/ows_tweets_sample_clean.txt", 'rb') as f:
chunk = f.readlines()
len(chunk)
chunk[:3]
import csv
clean_lines = (line.replace('\x00','') for line in chunk[1:])
lines = csv.reader(clean_lines, delimiter=',', quotechar='"')
from collections import defaultdict
data_dict = defaultdict(int)
line_num = 0
lines = csv.reader((line.replace('\x00','') for line in chunk[1:]), delimiter=',', quotechar='"')
for i in lines:
line_num +=1
    data_dict[i[8]] +=1 # i[8] is the user
data_dict.items()[:5]
print line_num
%matplotlib inline
from matplotlib.font_manager import FontProperties
import matplotlib.pyplot as plt
import matplotlib
#matplotlib.rcParams['font.sans-serif'] = ['Microsoft YaHei'] # set the default font
matplotlib.rc("savefig", dpi=100)
font = FontProperties(fname=r'/Users/chengjun/github/cjc2016/data/msyh.ttf', size=14) # note: change this path to your local copy
plt.hist(data_dict.values())
#plt.yscale('log')
#plt.xscale('log')
plt.xlabel(u'发帖数', fontproperties=font)
plt.ylabel(u'人数', fontproperties=font)
plt.show()
tweet_dict = defaultdict(int)
for i in data_dict.values():
tweet_dict[i] += 1
plt.loglog(tweet_dict.keys(), tweet_dict.values(), 'ro',linewidth=2)
plt.xlabel(u'推特数', fontproperties=font)
plt.ylabel(u'人数', fontproperties=font)
plt.show()
import numpy as np
import statsmodels.api as sm
def powerPlot(d_value, d_freq, color, marker):
d_freq = [i + 1 for i in d_freq]
d_prob = [float(i)/sum(d_freq) for i in d_freq]
#d_rank = ss.rankdata(d_value).astype(int)
x = np.log(d_value)
y = np.log(d_prob)
xx = sm.add_constant(x, prepend=True)
res = sm.OLS(y,xx).fit()
constant,beta = res.params
r2 = res.rsquared
plt.plot(d_value, d_prob, linestyle = '', color = color, marker = marker)
plt.plot(d_value, np.exp(constant+x*beta),"red")
plt.xscale('log'); plt.yscale('log')
plt.text(max(d_value)/2,max(d_prob)/10,
r'$\beta$ = ' + str(round(beta,2)) +'\n' + r'$R^2$ = ' + str(round(r2, 2)))
histo, bin_edges = np.histogram(data_dict.values(), 15)
bin_center = 0.5*(bin_edges[1:] + bin_edges[:-1])
powerPlot(bin_center,histo, 'r', 'o')
#lg=plt.legend(labels = [u'Tweets', u'Fit'], loc=3, fontsize=20)
plt.ylabel(u'概率', fontproperties=font)
plt.xlabel(u'推特数', fontproperties=font)
plt.show()
import statsmodels.api as sm
from collections import defaultdict
import numpy as np
def powerPlot(data):
d = sorted(data, reverse = True )
d_table = defaultdict(int)
for k in d:
d_table[k] += 1
d_value = sorted(d_table)
d_value = [i+1 for i in d_value]
d_freq = [d_table[i]+1 for i in d_value]
d_prob = [float(i)/sum(d_freq) for i in d_freq]
#d_rank = ss.rankdata(d_value).astype(int)
x = np.log(d_value)
y = np.log(d_prob)
xx = sm.add_constant(x, prepend=True)
res = sm.OLS(y,xx).fit()
constant,beta = res.params
r2 = res.rsquared
plt.plot(d_value, d_prob, 'ro')
plt.plot(d_value, np.exp(constant+x*beta),"red")
plt.xscale('log'); plt.yscale('log')
plt.text(max(d_value)/2,max(d_prob)/5,
'Beta = ' + str(round(beta,2)) +'\n' + 'R squared = ' + str(round(r2, 2)))
plt.title('Distribution')
plt.ylabel('P(K)')
plt.xlabel('K')
plt.show()
powerPlot(data_dict.values())
tweet = '''RT @AnonKitsu: ALERT!!!!!!!!!!COPS ARE KETTLING PROTESTERS IN PARK W HELICOPTERS AND PADDYWAGONS!!!!
#OCCUPYWALLSTREET #OWS #OCCUPYNY PLEASE @chengjun @mili http://computational-communication.com
http://ccc.nju.edu.cn RT !!HELP!!!!'''
import re
import twitter_text
import re
tweet = '''RT @AnonKitsu: ALERT!!!!!!!!!!COPS ARE KETTLING PROTESTERS IN PARK W HELICOPTERS AND PADDYWAGONS!!!!
#OCCUPYWALLSTREET #OWS #OCCUPYNY PLEASE @chengjun @mili http://computational-communication.com
http://ccc.nju.edu.cn RT !!HELP!!!!'''
rt_patterns = re.compile(r"(RT|via)((?:\b\W*@\w+)+)", re.IGNORECASE)
rt_user_name = rt_patterns.findall(tweet)[0][1].strip(' @')
rt_user_name
import re
tweet = '''@AnonKitsu: ALERT!!!!!!!!!!COPS ARE KETTLING PROTESTERS IN PARK W HELICOPTERS AND PADDYWAGONS!!!!
#OCCUPYWALLSTREET #OWS #OCCUPYNY PLEASE @chengjun @mili http://computational-communication.com
http://ccc.nju.edu.cn RT !!HELP!!!!'''
rt_patterns = re.compile(r"(RT|via)((?:\b\W*@\w+)+)", re.IGNORECASE)
rt_user_name = rt_patterns.findall(tweet)
print rt_user_name
if rt_user_name:
    print 'it exists.'
else:
print 'None'
import re
def extract_rt_user(tweet):
rt_patterns = re.compile(r"(RT|via)((?:\b\W*@\w+)+)", re.IGNORECASE)
rt_user_name = rt_patterns.findall(tweet)
if rt_user_name:
rt_user_name = rt_user_name[0][1].strip(' @')
else:
rt_user_name = None
return rt_user_name
tweet = '''@AnonKitsu: ALERT!!!!!!!!!!COPS ARE KETTLING PROTESTERS IN PARK W HELICOPTERS AND PADDYWAGONS!!!!
#OCCUPYWALLSTREET #OWS #OCCUPYNY PLEASE @chengjun @mili http://computational-communication.com
http://ccc.nju.edu.cn RT !!HELP!!!!'''
print extract_rt_user(tweet)
def extract_tweet_text(tweet, at_names, urls):
for i in at_names:
tweet = tweet.replace(i, '')
for j in urls:
tweet = tweet.replace(j, '')
marks = ['RT @', '@', '"', '#', '\n', '\t', ' ']
for k in marks:
tweet = tweet.replace(k, '')
return tweet
import twitter_text
tweet = '''RT @AnonKitsu: ALERT!!!!!!!!!!COPS ARE KETTLING PROTESTERS IN PARK W HELICOPTERS AND PADDYWAGONS!!!!
#OCCUPYWALLSTREET #OWS #OCCUPYNY PLEASE @chengjun @mili http://computational-communication.com
http://ccc.nju.edu.cn RT !!HELP!!!!'''
ex = twitter_text.Extractor(tweet)
at_names = ex.extract_mentioned_screen_names()
urls = ex.extract_urls()
hashtags = ex.extract_hashtags()
rt_user = extract_rt_user(tweet)
tweet_text = extract_tweet_text(tweet, at_names, urls)
print at_names, urls, hashtags, rt_user,'-------->', tweet_text
import csv
lines = csv.reader((line.replace('\x00','') for line in chunk[1:]), delimiter=',', quotechar='"')
tweets = [i[1] for i in lines]
for tweet in tweets[:5]:
ex = twitter_text.Extractor(tweet)
at_names = ex.extract_mentioned_screen_names()
urls = ex.extract_urls()
hashtags = ex.extract_hashtags()
rt_user = extract_rt_user(tweet)
tweet_text = extract_tweet_text(tweet, at_names, urls)
print at_names, urls, hashtags, rt_user,
print tweet_text
import csv
lines = csv.reader((line.replace('\x00','') for line in chunk[1:]), delimiter=',', quotechar='"')
tweet_user_data = [(i[1], i[8]) for i in lines]
for tweet,user in tweet_user_data:
rt_user = extract_rt_user(tweet)
if rt_user:
print rt_user, ',', user
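# A compact alternative (illustrative): tally the retweet edges with a Counter
# instead of printing each pair.
from collections import Counter
edge_count = Counter()
for tweet, user in tweet_user_data:
    rt_user = extract_rt_user(tweet)
    if rt_user:
        edge_count[(rt_user, user)] += 1
print edge_count.most_common(5)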
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's take a look at what the images look like:
Step2: 1.2 Building the model
Step3: If sigmoid is replaced with relu here, accuracy drops by a lot. Why is that?
Step4: 1.3 Training the model
Step5: We plot the loss and accuracy recorded during training
Step6: 1.4 Evaluating the model
Step7: 1.5 Complete code
Step8: 2. Normalization
Step9: The raw values lie between 0 and 255, so next we normalize the data.
Step10: A few notes on the code above:
Step11: 3. Callbacks: TensorBoard, EarlyStopping, ModelCheckpoint
Step12: Starting TensorBoard is simple:
Step13: During the first few epochs the DNN's loss decreases very slowly; the main causes of this are:
Step14: 5.2 selu
Step15: 5.3 dropout
Step16: 6. Complete code
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt  # needed by the plotting cells below
from tensorflow import keras

fashion_mnist = keras.datasets.fashion_mnist
(x_train_all,y_train_all),(x_test,y_test) = fashion_mnist.load_data()
x_valid,x_train = x_train_all[:5000],x_train_all[5000:]
y_valid,y_train = y_train_all[:5000],y_train_all[5000:]
print(x_train.shape,y_train.shape)
print(x_valid.shape,y_valid.shape)
print(x_test.shape,y_test.shape)
def show_single_image(img_arr):
plt.imshow(img_arr, cmap='binary')
plt.show()
show_single_image(x_train[0])
def show_imgs(n_rows, n_cols, x_data, y_data, class_names):
assert len(x_data) == len(y_data)
assert n_rows * n_cols < len(x_data)
plt.figure(figsize = (n_cols * 1.4, n_rows * 1.6))
for row in range(n_rows):
for col in range(n_cols):
index = n_cols * row + col
plt.subplot(n_rows, n_cols, index+1)
plt.imshow(x_data[index], cmap="binary",
interpolation = 'nearest')
plt.axis('off')
plt.title(class_names[y_data[index]])
plt.show()
class_names = ['T-shirt', 'Trouser', 'Pullover', 'Dress',
'Coat', 'Sandal', 'Shirt', 'Sneaker',
'Bag', 'Ankle boot']
show_imgs(3, 5, x_train, y_train, class_names)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28,28])) #将输入的二维数组展开成一维向量
model.add(keras.layers.Dense(300,activation='sigmoid'))
model.add(keras.layers.Dense(100,activation='sigmoid'))
model.add(keras.layers.Dense(10,activation='softmax'))
model.compile(loss='sparse_categorical_crossentropy',
optimizer='sgd',
metrics=['accuracy'])
model.layers
model.summary()
history = model.fit(x_train,y_train,epochs=10,validation_data=(x_valid,y_valid))
type(history)
history.history
def print_learning_curves(history):
pd.DataFrame(history.history).plot(figsize=(10,5))
plt.grid(True)
plt.gca().set_ylim(0,1)
plt.show()
print_learning_curves(history)
model.evaluate(x_test,y_test)
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import os
import tensorflow as tf
from tensorflow import keras
fashion_mnist = keras.datasets.fashion_mnist
(x_train_all,y_train_all),(x_test,y_test) = fashion_mnist.load_data()
x_valid,x_train = x_train_all[:5000],x_train_all[5000:]
y_valid,y_train = y_train_all[:5000],y_train_all[5000:]
print(x_train.shape,y_train.shape)
print(x_valid.shape,y_valid.shape)
print(x_test.shape,y_test.shape)
def show_single_image(img_arr):
plt.imshow(img_arr, cmap='binary')
plt.show()
show_single_image(x_train[0])
def show_imgs(n_rows, n_cols, x_data, y_data, class_names):
assert len(x_data) == len(y_data)
assert n_rows * n_cols < len(x_data)
plt.figure(figsize = (n_cols * 1.4, n_rows * 1.6))
for row in range(n_rows):
for col in range(n_cols):
index = n_cols * row + col
plt.subplot(n_rows, n_cols, index+1)
plt.imshow(x_data[index], cmap="binary",
interpolation = 'nearest')
plt.axis('off')
plt.title(class_names[y_data[index]])
plt.show()
class_names = ['T-shirt', 'Trouser', 'Pullover', 'Dress',
'Coat', 'Sandal', 'Shirt', 'Sneaker',
'Bag', 'Ankle boot']
show_imgs(3, 5, x_train, y_train, class_names)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28,28])) #将输入的二维数组展开成一维向量
model.add(keras.layers.Dense(300,activation='sigmoid'))
model.add(keras.layers.Dense(100,activation='sigmoid'))
model.add(keras.layers.Dense(10,activation='softmax'))
model.compile(loss='sparse_categorical_crossentropy',
optimizer='sgd',
metrics=['accuracy'])
model.layers
model.summary()
history = model.fit(x_train,y_train,epochs=10,validation_data=(x_valid,y_valid))
type(history)
history.history
def print_learning_curves(history):
pd.DataFrame(history.history).plot(figsize=(10,5))
plt.grid(True)
plt.gca().set_ylim(0,1)
plt.show()
print_learning_curves(history)
model.evaluate(x_test,y_test)
print(np.max(x_train), np.min(x_train))
#scaler = sklearn.preprocessing.StandardScaler()
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
x_train_scaler = scaler.fit_transform(
x_train.astype(np.float32).reshape(-1,1)).reshape(-1,28,28)
x_valid_scaler = scaler.transform(
x_valid.astype(np.float32).reshape(-1,1)).reshape(-1,28,28)
x_test_scaler = scaler.transform(
x_test.astype(np.float32).reshape(-1,1)).reshape(-1,28,28)
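# Sanity check (illustrative): after standardization the training set should
# have roughly zero mean and unit standard deviation.
print(x_train_scaler.mean(), x_train_scaler.std())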
history = model.fit(x_train_scaler,y_train,epochs=10,validation_data=(x_valid_scaler,y_valid))
model.evaluate(x_test_scaler,y_test)
logdir = './callbacks'
if not os.path.exists(logdir):
os.mkdir(logdir)
output_model_file = os.path.join(logdir,"fashion_mnist_model.h5")
callbacks = [
keras.callbacks.TensorBoard(logdir),
keras.callbacks.ModelCheckpoint(output_model_file, save_best_only = True),
keras.callbacks.EarlyStopping(patience=5,min_delta=1e-3)
]
history = model.fit(x_train_scaler, y_train, epochs=10,
validation_data=(x_valid_scaler,y_valid),
callbacks = callbacks)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28,28]))
for _ in range(20):
model.add(keras.layers.Dense(10,activation='relu'))
model.add(keras.layers.Dense(10,activation='softmax'))
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28,28]))
for _ in range(20):
model.add(keras.layers.Dense(10,activation='relu'))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Dense(10,activation='softmax'))
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28,28]))
for _ in range(20):
model.add(keras.layers.Dense(10,activation='selu'))
model.add(keras.layers.Dense(10,activation='softmax'))
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28,28]))
for _ in range(20):
model.add(keras.layers.Dense(10,activation='selu'))
    model.add(keras.layers.AlphaDropout(rate=0.5))# dropout added only at the last layer
    # AlphaDropout: 1. keeps the mean and variance unchanged 2. also preserves the self-normalizing property
# model.add(keras.layers.Dropout(rate=0.5))
model.add(keras.layers.Dense(10,activation='softmax'))
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import tensorflow as tf
from tensorflow import keras
import sklearn
import os
#load the data
fashion_mnist = keras.datasets.fashion_mnist
(x_train_all,y_train_all),(x_test,y_test) = fashion_mnist.load_data()
x_valid,x_train = x_train_all[:5000],x_train_all[5000:]
y_valid,y_train = y_train_all[:5000],y_train_all[5000:]
#normalize the training data
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
x_train_scaler = scaler.fit_transform(
x_train.astype(np.float32).reshape(-1,1)).reshape(-1,28,28)
x_valid_scaler = scaler.transform(
x_valid.astype(np.float32).reshape(-1,1)).reshape(-1,28,28)
x_test_scaler = scaler.transform(
x_test.astype(np.float32).reshape(-1,1)).reshape(-1,28,28)
#build and compile the model
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28,28]))
for _ in range(20):
model.add(keras.layers.Dense(100,activation='selu'))
model.add(keras.layers.AlphaDropout(rate=0.5))
model.add(keras.layers.Dense(10,activation='softmax'))
model.compile(loss='sparse_categorical_crossentropy',
optimizer='sgd',
metrics=['accuracy'])
#define the callbacks
logdir = './callbacks'
if not os.path.exists(logdir):
os.mkdir(logdir)
output_model_file = os.path.join(logdir,"fashion_mnist_model.h5")
callbacks = [
keras.callbacks.TensorBoard(logdir),
keras.callbacks.ModelCheckpoint(output_model_file, save_best_only = True),
keras.callbacks.EarlyStopping(patience=5,min_delta=1e-3)
]
#train the model
history = model.fit(x_train_scaler, y_train, epochs=10,
validation_data=(x_valid_scaler,y_valid),
callbacks = callbacks)
#inspect the model's performance
def print_learning_curves(history):
pd.DataFrame(history.history).plot(figsize=(10,5))
plt.grid(True)
plt.gca().set_ylim(0,1)
plt.show()
print_learning_curves(history)
model.evaluate(x_test_scaler,y_test)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <font color='red'>Please put your datahub API key into a file called APIKEY and place it to the notebook folder or assign your API key directly to the variable API_key!</font>
Step2: At first, we need to define the dataset name and a variable we want to use.
Step3: Then we define spatial range. We decided to analyze US, where unfortunately catastrofic wildfires are taking place at the moment and influeces air quality.
Step4: Download the data with package API
Step5: Work with the downloaded files
Step6: Here we are making a Basemap of the US that we will use for showing the data.
Step7: Now it is time to plot all the data. A great way to do it is to make an interactive widget, where you can choose time stamp by using a slider.
Step8: Let's include an image from the last time-step as well, because GitHub Preview doesn't show the time slider images.
Step9: With the function below we will save images you saw above to the local filesystem as a GIF, so it is easily shareable with others.
Step10: To see data more specifically we need to choose the location. This time we decided to look into the place where PM2.5 is highest. Seems like at the moment it is the Santa Barbara area, where the Thomas Fire is taking place.
Step11: In the plot below we can see the PM2.5 forecast on the surface layer. Note that the time zone on the graph is UTC while the time zone in Santa Barbara is UTC-08
Step12: Finally, we will remove the package we downloaded.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib notebook
%matplotlib inline
import numpy as np
import dh_py_access.lib.datahub as datahub
import xarray as xr
import matplotlib.pyplot as plt
import ipywidgets as widgets
from mpl_toolkits.basemap import Basemap
import dh_py_access.package_api as package_api
import matplotlib.colors as colors
import warnings
import shutil
import imageio
import os
warnings.filterwarnings("ignore")
server = 'api.planetos.com'
API_key = open('APIKEY').readlines()[0].strip() #'<YOUR API KEY HERE>'
version = 'v1'
dh = datahub.datahub(server,version,API_key)
dataset = 'cams_nrt_forecasts_global'
variable_name1 = 'pm2p5'
area_name = 'USA'
latitude_north = 49.138; longitude_west = -128.780
latitude_south = 24.414; longitude_east = -57.763
package_cams = package_api.package_api(dh,dataset,variable_name1,longitude_west,longitude_east,latitude_south,latitude_north,area_name=area_name)
package_cams.make_package()
package_cams.download_package()
dd1 = xr.open_dataset(package_cams.local_file_name)
dd1['longitude'] = ((dd1.longitude+180) % 360) - 180
dd1['pm2p5_micro'] = dd1.pm2p5 * 1000000000.
dd1.pm2p5_micro.data[dd1.pm2p5_micro.data < 0] = np.nan
m = Basemap(projection='merc', lat_0 = 55, lon_0 = -4,
resolution = 'i', area_thresh = 0.05,
llcrnrlon=longitude_west, llcrnrlat=latitude_south,
urcrnrlon=longitude_east, urcrnrlat=latitude_north)
lons,lats = np.meshgrid(dd1.longitude.data,dd1.latitude.data)
lonmap,latmap = m(lons,lats)
vmax = np.nanmax(dd1.pm2p5_micro.data)
vmin = 2
def loadimg(k):
fig=plt.figure(figsize=(10,7))
ax = fig.add_subplot(111)
pcm = m.pcolormesh(lonmap,latmap,dd1.pm2p5_micro.data[k],
norm = colors.LogNorm(vmin=vmin, vmax=vmax),cmap = 'rainbow')
ilat,ilon = np.unravel_index(np.nanargmax(dd1.pm2p5_micro.data[k]),dd1.pm2p5_micro.data[k].shape)
m.plot(lonmap[ilat,ilon],latmap[ilat,ilon],'ro')
m.drawcoastlines()
m.drawcountries()
m.drawstates()
cbar = plt.colorbar(pcm,fraction=0.02, pad=0.040,ticks=[10**0, 10**1, 10**2,10**3])
cbar.ax.set_yticklabels([0,10,100,1000])
plt.title(str(dd1.pm2p5_micro.time[k].data)[:-10])
cbar.set_label('micrograms m^3')
print("Maximum: ","%.2f" % np.nanmax(dd1.pm2p5_micro.data[k]))
plt.show()
widgets.interact(loadimg, k=widgets.IntSlider(min=0,max=(len(dd1.pm2p5_micro.data)-1),step=1,value=0, layout=widgets.Layout(width='100%')))
loadimg(10)
def make_ani():
folder = './anim/'
for k in range(len(dd1.pm2p5_micro)):
filename = folder + 'ani_' + str(k).rjust(3,'0') + '.png'
if not os.path.exists(filename):
fig=plt.figure(figsize=(10,7))
ax = fig.add_subplot(111)
pcm = m.pcolormesh(lonmap,latmap,dd1.pm2p5_micro.data[k],
norm = colors.LogNorm(vmin=vmin, vmax=vmax),cmap = 'rainbow')
m.drawcoastlines()
m.drawcountries()
m.drawstates()
cbar = plt.colorbar(pcm,fraction=0.02, pad=0.040,ticks=[10**0, 10**1, 10**2,10**3])
cbar.ax.set_yticklabels([0,10,100,1000])
plt.title(str(dd1.pm2p5_micro.time[k].data)[:-10])
ax.set_xlim()
cbar.set_label('micrograms m^3')
if not os.path.exists(folder):
os.mkdir(folder)
plt.savefig(filename,bbox_inches = 'tight')
plt.close()
files = sorted(os.listdir(folder))
images = []
for file in files:
if not file.startswith('.'):
filename = folder + file
images.append(imageio.imread(filename))
kargs = { 'duration': 0.1,'quantizer':2,'fps':5.0}
imageio.mimsave('cams_pm2p5.gif', images, **kargs)
print ('GIF is saved as cams_pm2p5.gif under current working directory')
shutil.rmtree(folder)
make_ani()
ilat,ilon = np.unravel_index(np.nanargmax(dd1.pm2p5_micro.data[1]),dd1.pm2p5_micro.data[1].shape)
lon_max = -121.9; lat_max = 37.33 #dd1.latitude.data[ilat]
data_in_spec_loc = dd1.sel(longitude = lon_max,latitude=lat_max,method='nearest')
print ('Latitude ' + str(lat_max) + ' ; Longitude ' + str(lon_max))
fig = plt.figure(figsize=(10,5))
plt.plot(data_in_spec_loc.time,data_in_spec_loc.pm2p5_micro, '*-',linewidth = 1,c='blue',label = dataset)
plt.xlabel('Time')
plt.title('PM2.5 forecast for San Jose')
plt.grid()
os.remove(package_cams.local_file_name)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Bin the data along the last axis in bins of size bin_size, starting from the end of each row (any leftover elements at the front are dropped), and compute the mean of each bin.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
data = np.array([[4, 2, 5, 6, 7],
[ 5, 4, 3, 5, 7]])
bin_size = 3
new_data = data[:, ::-1]  # reverse each row so the bins are formed from the end
# keep only a whole number of bins, group into bins of size bin_size,
# average within each bin, then flip back to the original orientation
bin_data_mean = new_data[:,:(data.shape[1] // bin_size) * bin_size].reshape(data.shape[0], -1, bin_size).mean(axis=-1)[:,::-1]
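# Illustrative check: with the 2x5 example above and bin_size = 3, only the
# last 3 columns of each row form a complete bin, so we expect [[6.], [5.]].
print(bin_data_mean)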
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Pull Data from BigQuery
Step2: The query below selects everything you'll need to build a regression model to predict the closing price of AAPL stock. The model will be very simple for the purposes of demonstrating BQML functionality. The only features you'll use as input into the model are the previous day's closing price and a three day trend value. The trend value can only take on two values, either -1 or +1. If the AAPL stock price has increased over any two of the previous three days then the trend will be +1. Otherwise, the trend value will be -1.
Step3: View the first five rows of the query's output. Note that the object df containing the query output is a Pandas Dataframe.
Step4: Visualize data
Step5: You can also embed the trend_3_day variable into the time series above.
Step6: Build a Regression Model in Scikit-Learn
Step7: The model's predictions are more or less in line with the truth. However, the utility of the model depends on the business context (i.e. you won't be making any money with this model). It's fair to question whether the variable trend_3_day even adds to the performance of the model
|
<ASSISTANT_TASK:>
Python Code:
%%bash
bq mk -d ai4f
bq load --autodetect --source_format=CSV ai4f.AAPL10Y gs://cloud-training/ai4f/AAPL10Y.csv
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from sklearn import linear_model
from sklearn.metrics import mean_squared_error
from sklearn.metrics import r2_score
plt.rc('figure', figsize=(12, 8.0))
%%bigquery?
%%bigquery df
WITH
raw AS (
SELECT
date,
close,
LAG(close, 1) OVER(ORDER BY date) AS min_1_close,
LAG(close, 2) OVER(ORDER BY date) AS min_2_close,
LAG(close, 3) OVER(ORDER BY date) AS min_3_close,
LAG(close, 4) OVER(ORDER BY date) AS min_4_close
FROM
`ai4f.AAPL10Y`
ORDER BY
date DESC ),
raw_plus_trend AS (
SELECT
date,
close,
min_1_close,
IF (min_1_close - min_2_close > 0, 1, -1) AS min_1_trend,
IF (min_2_close - min_3_close > 0, 1, -1) AS min_2_trend,
IF (min_3_close - min_4_close > 0, 1, -1) AS min_3_trend
FROM
raw ),
train_data AS (
SELECT
date,
close,
min_1_close AS day_prev_close,
IF (min_1_trend + min_2_trend + min_3_trend > 0, 1, -1) AS trend_3_day
FROM
raw_plus_trend
ORDER BY
date ASC )
SELECT
*
FROM
train_data
print(type(df))
df.dropna(inplace=True)
df.head()
df.plot(x='date', y='close');
start_date = '2018-06-01'
end_date = '2018-07-31'
plt.plot(
'date', 'close', 'k--',
data = (
df.loc[pd.to_datetime(df.date).between(start_date, end_date)]
)
)
plt.scatter(
'date', 'close', color='b', label='pos trend',
data = (
        df.loc[(df.trend_3_day == 1) & pd.to_datetime(df.date).between(start_date, end_date)]
)
)
plt.scatter(
'date', 'close', color='r', label='neg trend',
data = (
df.loc[(df.trend_3_day == -1) & pd.to_datetime(df.date).between(start_date, end_date)]
)
)
plt.legend()
plt.xticks(rotation = 90);
df.shape
features = ['day_prev_close', 'trend_3_day']
target = 'close'
X_train, X_test = df.loc[:2000, features], df.loc[2000:, features]
y_train, y_test = df.loc[:2000, target], df.loc[2000:, target]
# Create linear regression object
regr = linear_model.LinearRegression(fit_intercept=False)
# Train the model using the training set
regr.fit(X_train, y_train)
# Make predictions using the testing set
y_pred = regr.predict(X_test)
# The mean squared error
print('Root Mean Squared Error: {0:.2f}'.format(np.sqrt(mean_squared_error(y_test, y_pred))))
# Explained variance score: 1 is perfect prediction
print('Variance Score: {0:.2f}'.format(r2_score(y_test, y_pred)))
plt.scatter(y_test, y_pred)
plt.plot([140, 240], [140, 240], 'r--', label='perfect fit')
plt.xlabel('Actual')
plt.ylabel('Predicted')
plt.legend();
print('Root Mean Squared Error: {0:.2f}'.format(np.sqrt(mean_squared_error(y_test, X_test.day_prev_close))))
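# Illustrative follow-up: refit using only day_prev_close to see whether
# trend_3_day improves on a one-feature model (these variable names are hypothetical).
regr_simple = linear_model.LinearRegression(fit_intercept=False)
regr_simple.fit(X_train[['day_prev_close']], y_train)
y_pred_simple = regr_simple.predict(X_test[['day_prev_close']])
print('RMSE (day_prev_close only): {0:.2f}'.format(
    np.sqrt(mean_squared_error(y_test, y_pred_simple))))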
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import section specific modules
Step2: 8.1 Calibration as a Least Squares Problem
Step3: Figure 8.1.1
Step4: Our hour angle range is from -6h to 6h, and our declination is set to $60^{\circ}$.
Step5: Figure 8.1.2
Step6: 8.1.2. Unpolarized Calibration
Step7: We now use create_vis_mat to create an example $\boldsymbol{\mathcal{M}}$ and $\boldsymbol{\mathcal{D}}$. Note that
Step8: We now plot the baseline entries of $\boldsymbol{\mathcal{M}}$ and $\boldsymbol{\mathcal{D}}$.
Step9: Figure 8.1.3
Step10: We are now able to define a wrapper function create_G_LM that in turn calls optimize.leastsq.
Step11: We may now calibrate $\boldsymbol{\mathcal{D}}$ by using create_G_LM.
Step12: The above function works by vectorizing the real and imaginary parts of $\boldsymbol{\mathcal{D}}$ and $\boldsymbol{\mathcal{M}}$ before handing them to the least-squares solver.
Step13: We plot the corrected visibilities below. Note that the model and corrected visibilities align well, implying that calibration was successful.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import HTML
HTML('../style/course.css') #apply general CSS
from scipy import optimize
%pylab inline
pylab.rcParams['figure.figsize'] = (15, 10)
HTML('../style/code_toggle.html')
lam = 3e8/1.4e9 #observational wavelenth
print("lam = ",lam)
b = np.array([100,200,300])/lam
print("b [wavelengths] = ",b)
plt.plot(np.array([0,100,200]),np.array([0,0,0]),'ro')
plt.xlim([-250,250])
plt.ylim([-100,100])
plt.xlabel("East-West [m]", fontsize=18)
plt.ylabel("North-South [m]", fontsize=18)
plt.title("ENU-coordinates of three element interferometer.", fontsize=18)
plt.show()
H = np.linspace(-6,6,600)*(np.pi/12) #Hour angle in radians
delta = 60*(np.pi/180) #Declination in radians
u = np.zeros((len(b),len(H)))
v = np.zeros((len(b),len(H)))
for k in range(len(b)):
u[k,:] = b[k]*np.cos(H)
v[k,:] = b[k]*np.sin(H)*np.sin(delta)
plt.plot(u[k,:],v[k,:],"r")
plt.plot(-u[k,:],-v[k,:],"b")
plt.xlabel("$u$ [rad$^{-1}$]", fontsize=18)
plt.ylabel("$v$ [rad$^{-1}$]", fontsize=18)
plt.title("$uv$-Coverage of three element interferometer", fontsize=18)
plt.show()
u_m = np.zeros((len(b),len(b),len(H)))
v_m = np.zeros((len(b),len(b),len(H)))
u_m[0,1,:] = u[0,:] #the first two entries denote p and q and the third index denotes time
u_m[1,2,:] = u[1,:]
u_m[0,2,:] = u[2,:]
v_m[0,1,:] = v[0,:]
v_m[1,2,:] = v[1,:]
v_m[0,2,:] = v[2,:]
'''Creates the observed visibilities
point_sources - skymodel of point sources - (amplitude, l, m)
u_m - the u coordinates of observation (packed in a 2D structure)
v_m - the v coordinates of observation (packed in a 2D structure)
g - the antenna gain error vector
sig - the noise
'''
def create_vis_mat(point_sources,u_m,v_m,g,sig):
    D = np.zeros(u_m.shape, dtype=complex) # one complex visibility per antenna pair per time-step
G = np.diag(g)
#Step 1: Create Model Visibility Matrix
for k in range(len(point_sources)): #for each point source
l_0 = point_sources[k,1]
m_0 = point_sources[k,2]
D = D + point_sources[k,0]*np.exp(-2*np.pi*1j*(u_m*l_0+v_m*m_0))
for t in range(D.shape[2]): #for each time-step
#Step 2: Corrupting the Visibilities
D[:,:,t] = np.dot(G,D[:,:,t])
D[:,:,t] = np.dot(D[:,:,t],G.conj())
#Step 3: Adding Noise
D[:,:,t] = D[:,:,t] + sig*np.random.randn(u_m.shape[0],u_m.shape[1]) \
+ sig*np.random.randn(u_m.shape[0],u_m.shape[1])*1j
return D
point_sources = np.array([(1,0,0),(0.5,(1*np.pi)/180,(0*np.pi)/180)]) #l and m are measures in radians
g = np.array([1.2+1.3j,1.1-1.5j,-1.3+0.7j])
sig = 0.1
D = create_vis_mat(point_sources,u_m,v_m,g,sig) #we corrupt our data and we add noise
M = create_vis_mat(point_sources,u_m,v_m,np.array([1,1,1]),0) #no corruption and no noise
fig = plt.figure()
timeslots = np.cumsum(np.ones((len(M[0,1,:]),)))
#We only plot the real part of visibilities
#Plotting Baseline 01
ax = plt.subplot("311")
ax.set_title("$m_{01}$ (blue) and $d_{01}$ (green)", fontsize=18)
ax.plot(timeslots,M[0,1,:].real)
ax.plot(timeslots,D[0,1,:].real)
ax.set_xlabel("Timeslot", fontsize=18)
ax.set_ylabel("Jy", fontsize=18)
ax.set_xlim([1,len(M[0,1,:])])
y_t = ax.get_yticks()
y_t = y_t[::2]
ax.set_yticks(y_t)
#Plotting Baseline 02
ax = plt.subplot("312")
ax.set_title("$m_{02}$ (blue) and $d_{02}$ (green)", fontsize=18)
ax.plot(timeslots,M[0,2,:].real)
ax.plot(timeslots,D[0,2,:].real)
ax.set_xlabel("Timeslot", fontsize=18)
ax.set_ylabel("Jy", fontsize=18)
ax.set_xlim([1,len(M[0,1,:])])
y_t = ax.get_yticks()
y_t = y_t[::2]
ax.set_yticks(y_t)
#Plotting Baseline 12
ax = plt.subplot("313")
ax.set_title("$m_{12}$ (blue) and $d_{12}$ (green)", fontsize=18)
ax.plot(timeslots,M[1,2,:].real)
ax.plot(timeslots,D[1,2,:].real)
ax.set_xlabel("Timeslot", fontsize=18)
ax.set_ylabel("Jy", fontsize=18)
ax.set_xlim([1,len(M[0,1,:])])
y_t = ax.get_yticks()
y_t = y_t[::2]
ax.set_yticks(y_t)
plt.tight_layout()
plt.show()
'''Unpolarized direction independent calibration entails finding the G that minimizes ||R-GMG^H||.
This function evaluates D-GMG^H.
g is a vector containing the real and imaginary components of the antenna gains.
d is a vector containing a vecotrized R (observed visibilities), real and imaginary.
m is a vector containing a vecotrized M (predicted), real and imaginary.
r is a vector containing the residuals.
'''
def err_func(g,d,m):
Nm = len(d)//2
N = len(g)//2
G = np.diag(g[0:N]+1j*g[N:])
    D = np.reshape(d[0:Nm],(N,N))+np.reshape(d[Nm:],(N,N))*1j # matricization
M = np.reshape(m[0:Nm],(N,N))+np.reshape(m[Nm:],(N,N))*1j
T = np.dot(G,M)
T = np.dot(T,G.conj())
R = D - T
r_r = np.ravel(R.real) #vectorization
r_i = np.ravel(R.imag)
r = np.hstack([r_r,r_i])
return r
'''This function finds argmin G ||D-GMG^H|| using Levenberg-Marquardt. It uses the optimize.leastsq scipy to perform
the actual minimization.
D is your observed visibilities matrx.
M is your predicted visibilities.
g the antenna gains.
G = gg^H.'''
def create_G_LM(D,M):
N = D.shape[0] #number of antennas
temp =np.ones((D.shape[0],D.shape[1]) ,dtype=complex)
G = np.zeros(D.shape,dtype=complex)
g = np.zeros((D.shape[0],D.shape[2]),dtype=complex)
for t in range(D.shape[2]): #perform calibration per time-slot
g_0 = np.ones((2*N,)) # first antenna gain guess
g_0[N:] = 0
d_r = np.ravel(D[:,:,t].real) #vectorization of observed + seperating real and imag
d_i = np.ravel(D[:,:,t].imag)
d = np.hstack([d_r,d_i])
m_r = np.ravel(M[:,:,t].real) #vectorization of model + seperating real and imag
m_i = np.ravel(M[:,:,t].imag)
m = np.hstack([m_r,m_i])
g_lstsqr_temp = optimize.leastsq(err_func, g_0, args=(d, m))
g_lstsqr = g_lstsqr_temp[0]
G_m = np.dot(np.diag(g_lstsqr[0:N]+1j*g_lstsqr[N:]),temp)
G_m = np.dot(G_m,np.diag((g_lstsqr[0:N]+1j*g_lstsqr[N:]).conj()))
g[:,t] = g_lstsqr[0:N]+1j*g_lstsqr[N:] #creating antenna gain vector
G[:,:,t] = G_m
return g,G
glm,Glm = create_G_LM(D,M)
R_c = Glm**(-1)*D
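# Illustrative check: compare the recovered gain amplitudes with the true gains g.
# (Only amplitudes are compared, since G = gg^H is invariant under a global phase.)
print(np.mean(np.abs(glm), axis=1))  # mean estimated per-antenna gain amplitude
print(np.abs(g))                     # true gain amplitudes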
fig = plt.figure()
timeslots = np.cumsum(np.ones((len(M[0,1,:]),)))
#We only plot the real part of visibilities
#Plotting Baseline 01
ax = plt.subplot("311")
ax.set_title("$m_{01}$ (blue) and $d_{01}^{(c)}$ (green)", fontsize=18)
ax.plot(timeslots,M[0,1,:].real)
ax.plot(timeslots,R_c[0,1,:].real)
ax.set_xlabel("Timeslot", fontsize=18)
ax.set_ylabel("Jy", fontsize=18)
ax.set_xlim([1,len(M[0,1,:])])
y_t = ax.get_yticks()
y_t = y_t[::2]
ax.set_yticks(y_t)
#Plotting Baseline 02
ax = plt.subplot("312")
ax.set_title("$m_{02}$ (blue) and $d_{02}^{(c)}$ (green)", fontsize=18)
ax.plot(timeslots,M[0,2,:].real)
ax.plot(timeslots,R_c[0,2,:].real)
ax.set_xlabel("Timeslot", fontsize=18)
ax.set_ylabel("Jy", fontsize=18)
ax.set_xlim([1,len(M[0,1,:])])
y_t = ax.get_yticks()
y_t = y_t[::2]
ax.set_yticks(y_t)
#Plotting Baseline 12
ax = plt.subplot("313")
ax.set_title("$m_{12}$ (blue) and $d_{12}^{(c)}$ (green)", fontsize=18)
ax.plot(timeslots,M[1,2,:].real)
ax.plot(timeslots,R_c[1,2,:].real)
ax.set_xlabel("Timeslot", fontsize=18)
ax.set_ylabel("Jy", fontsize=18)
ax.set_xlim([1,len(M[0,1,:])])
y_t = ax.get_yticks()
y_t = y_t[::2]
ax.set_yticks(y_t)
plt.tight_layout()
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <h5>First
Step2: <h3>When did River Grove open, when did the last owners take over, and how many companies have owned the facility?</h3>
Step3: <h3>How many visible complaints have there been under the current ownership?</h3>
Step4: <h3>How many online complaints have there been under previous ownership?</h3>
Step5: <h3>How many complaints occurred in the two years before the current owners took over?</h3>
Step6: <h3>What are the names River Grove is listed under on the public-facing website, and in what historical order?</h3>
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
df = pd.read_csv('../../data/processed/complaints-3-29-scrape.csv')
owners = pd.read_csv('../../data/raw/APD_HistOwner.csv')
owners.rename(columns={'HOW_IdNumber':'owner_id','HOW_CcmuNumber': 'fac_ccmunumber', 'HOW_DateActive':'license_date'}, inplace=True)
owners['license_date'] = pd.to_datetime(owners['license_date'])
owners = owners[['fac_ccmunumber','license_date','owner_id']]
#Last ownership change
owners[owners['fac_ccmunumber']=='50M132']
#Slice of public River Grove complaints
rg = df[(df['facility_id']=='50M132') & (df['public']=='online')]
rg[rg['incident_date']>='2015-04-01'].count()[0]
rg[rg['incident_date']<'2015-04-01'].count()[0]
rg[(rg['incident_date']<'2015-04-01') & (rg['incident_date']>'2013-04-01')].count()[0]
rg[['incident_date','online_fac_name']].drop_duplicates(subset='online_fac_name', keep='first').sort_values('incident_date', ascending=False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: You can project lines with multiple vertices also
Step2: Look at this line on a map
Step3: Using cartopy's io shapefile interface to read Hunza basin shapefile
|
<ASSISTANT_TASK:>
Python Code:
import cartopy.crs as ccrs

osgb = ccrs.OSGB()
geod = ccrs.Geodetic()
# Convert from Ordnance Survey GB to lon/lat:
easting = 291813.424
northing = 92098.387
lon, lat = geod.transform_point(
x=easting, y=northing, src_crs=osgb)
print(lon, lat)
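# Illustrative round trip: project the lon/lat back to OSGB; this should
# recover approximately (291813.424, 92098.387).
x, y = osgb.transform_point(x=lon, y=lat, src_crs=geod)
print(x, y)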
# check with mapx, this is UL corner of EASE-Grid 2.0 N!
e2n = ccrs.LambertAzimuthalEqualArea(central_latitude=90.0)
lon, lat = geod.transform_point(
x = -9000000.,
y = 9000000.,
src_crs=e2n)
print(lon, lat)
import shapely.geometry as sgeom
new_york = [-74.0060, 40.7128] # lon, lat
honolulu = [-157.8583, 21.3069] # lon, lat
line = sgeom.LineString([new_york, honolulu])
pc = ccrs.PlateCarree()
lines = pc.project_geometry(line, pc)
lines
help(lines)
%matplotlib notebook
import matplotlib.pyplot as plt
plt.figure()
ax = plt.axes(projection=pc)
ax.add_geometries(
[lines], pc,
edgecolor='blue', facecolor='none', lw=2)
ax.coastlines()
plt.show()
# Will this work on E2N? No: I think it's because original line coords are in lat/lon?
#e2n_lines = e2n.project_geometry(line, e2n)
#plt.figure()
#ax = plt.axes(projection=e2n)
#ax.add_geometries(
# [e2n_lines], e2n,
# edgecolor='blue', facecolor='none', lw=2)
#ax.coastlines()
#plt.show()
import cartopy.crs as ccrs
import cartopy.io.shapereader as shpreader
import shapely.geometry as sgeom
bfile = '/Users/brodzik/Desktop/GIS_data/basins/IN_Hunza_at_DainyorBridge.shp'
reader = shpreader.Reader(bfile)
record = next(reader.records())
record
record.attributes
record.bounds
record.geometry
help(record)
e2n = ccrs.LambertAzimuthalEqualArea(central_latitude=90.)
e2n_basin = e2n.project_geometry(record.geometry)
plt.figure()
ax = plt.axes(projection=e2n)
ax.set_extent([0., 9000000., -9000000., 0.], crs=e2n)
ax.gridlines(color='gray', linestyle='--')
ax.coastlines()
ax.add_geometries(
[e2n_basin], e2n,
edgecolors='red', facecolor='none', lw=2)
plt.tight_layout()
plt.show()
from netCDF4 import Dataset
cetbfile = '/Users/brodzik/cetb_data/v1.3/AQUA_AMSRE/2004164/NSIDC-0630-EASE2_N3.125km-AQUA_AMSRE-2004164-36H-M-SIR-RSS-v1.3.nc'
f = Dataset(cetbfile, 'r', 'netCDF4')
f
import numpy as np
tb = np.squeeze(f.variables['TB'][:])
tb.shape
plt.figure()
ax = plt.axes(projection=e2n)
ax.set_extent([-9000000., 9000000., -9000000., 9000000.], crs=e2n)
ax.gridlines(color='gray', linestyle='--')
ax.coastlines()
# ax.imshow(tb, transform=e2n)
#ax.add_geometries(
# [e2n_basin], e2n,
# edgecolors='red', facecolor='none', lw=2)
plt.tight_layout()
plt.show()
#ax.imshow(tb, extent=extent, transform=rob, origin='upper')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2) Write a function that counts how many vowels there are in a string.
Step2: 3) Write a function that returns the number of words in a string. Words are separated by spaces and punctuation marks (, ; ? ! .)
Step3: 4) Write a function that takes two strings as parameters and returns the number of occurrences of the first string in the second.
Step4: 6) Write a function that converts a string written in UpperCamelCase to lowercase_with_underscores.
Step5: 7) Write a function that takes an integer char_len and a variable number of parameters (strings) and checks whether every two neighboring strings obey the following rule
Step6: Lab 2
Step7: 2) Write a function that takes a list of numbers and returns a list of the prime numbers found in it.
Step8: 3) Let a tuple (x,y) be the representation of a point in a Cartesian system. Write a function that takes a list of points as a parameter and returns a list of unique tuples (a,b,c) representing the unique lines determined by those points ((a,b,c) corresponds to the line ax + by + c = 0).
Step9: 4) Write a function that takes two lists a and b as parameters and returns
Step10: 6) Write a function that takes a variable number of lists and an integer x as parameters. Return a list containing the elements that appear exactly x times across the given lists. Example
Step11: Lab 3
Step12: 2) Write a function that takes a string as a parameter and returns a dictionary in which the keys are the characters of the string and the values are the number of occurrences of each character in the given text.
Step13: 3) Compare two dictionaries without using the "==" operator and return a tuple of lists of differences, as follows
Step14: 4) Let build_xml_element be a function that takes the following parameters
Step15: 5) Let validate_dict be a function that takes as parameters a set of tuples representing validation rules for a dictionary with string keys and string values, and a dictionary. A rule is defined as follows
Step16: 7) Let a global dictionary be defined similarly to the one above, except that the functions given as its values may take any combination of parameters. Write a function apply_function that receives the name of an operation as a parameter and applies the corresponding function to the given arguments. Implement it so that adding a new function does not require modifying apply_function.
Step17: 9) Write a function that takes a variable number of sets and returns a dictionary with the following operations between all pairs of sets
|
<ASSISTANT_TASK:>
Python Code:
def gcd(a,b):
while b:
a,b = b,a%b
return a
def gcdMultiple(*args):
    if len(args) < 2:
        return -1
    # fold gcd over all arguments, carrying the running result
    fin = args[0]
    for a in args[1:]:
        fin = gcd(fin, a)
    return fin
'''
def all_cmmdc(*args):
if len(args) == 0:
return -1
cm = cmmdc(args[0],args[1])
for i in args[2:]:
cm = cmmdc(cm,1)
return cm
print(all_cmmdc(2,8,6))
'''
#easier with functools reduce
#gcd(6,9) #is good
gcdMultiple(3,7,8,9)
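# For comparison, the reduce-based one-liner hinted at in the comment above;
# a sketch assuming the gcd() helper defined earlier.
from functools import reduce
def gcd_multiple_reduce(*args):
    return reduce(gcd, args) if len(args) >= 2 else -1
print gcd_multiple_reduce(3, 7, 8, 9)   # 1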
def count_vowels(string):
num_vowels=0
for char in string:
if char in "aeiouAEIOU":
num_vowels = num_vowels+1
return num_vowels
print count_vowels("This is a sentence.") #6
def countWords(text):
#you should take into account various separators
# oneliner example:
wordList = ''.join(ch if ch.isalnum() else ' ' for ch in text).split()
#wordList = text.split(' ')
return len(wordList)
print countWords("Hello, hello; gigi ?") #3
def special_chars_str(special_str):
print special_str.find('\t')
special_str = "\t Ceva si inca ceva\n"
special_chars_str(special_str)
def camelcase_to_underscores(code):
    result = []
    for i, ch in enumerate(code):
        if ch.isupper() and i > 0:
            result.append('_')   # underscore before every internal capital
        result.append(ch.lower())
    return ''.join(result)
print camelcase_to_underscores('UpperCamelCase') #upper_camel_case
def bigPrime(string):
    number = []
    numbers = []
    for ch in string:
        if ch.isdigit():
            number.append(ch)
        elif number:                       # a digit run just ended
            numbers.append(''.join(number))
            number = []
    if number:                             # flush a trailing digit run
        numbers.append(''.join(number))
    return numbers
print bigPrime('Ceva01ceffa212kjsda1') #01, 212, 1
def fibonacci(n):
fibo_list = [0, 1]
for i in range(2, n):
fibo_list.append(fibo_list[i-1] + fibo_list[i-2])
return fibo_list
fibo_list = fibonacci(6)
print fibo_list, ", length =", len(fibo_list)
def isPrime(n):
if n==2 or n==3: return True
if n%2==0 or n<2: return False
for i in range(3,int(n**0.5)+1,2): # only odd numbers
if n%i==0:
return False
return True
def prime_sublist(numbers):
sublist = []
for i in numbers:
if isPrime(i):
sublist.append(i)
return sublist
prime_sublist([0, 1, 2, 3, 4, 5])
def lines():
pass
points = [(1,2), (0,1), (3,1)]
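# Hedged sketch for the lines() stub above: for each pair of points,
# (a, b, c) = (y2-y1, x1-x2, x2*y1 - x1*y2) gives a*x + b*y + c = 0;
# dividing by the gcd makes coincident lines compare equal (sign aside).
import itertools
def lines_from_points(pts):
    result = set()
    for (x1, y1), (x2, y2) in itertools.combinations(pts, 2):
        a, b, c = y2 - y1, x1 - x2, x2 * y1 - x1 * y2
        g = gcd(gcd(abs(a), abs(b)), abs(c)) or 1   # uses gcd() defined earlier
        result.add((a / g, b / g, c / g))
    return list(result)
print lines_from_points(points)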
import itertools
def combinations(x,k):
return list(itertools.combinations(x,k))
x = [1, 2, 3, 4]
k = 3
print combinations(x,k)
def sort_special(tpl):
pass
sort_special([('abc', 'bcd'), ('abc', 'zza')])
def makeOperations(a,b):
operations = []
join = a | b
print('join: ', join)
operations.append(join)
intersection = a & b
print('intersection: ', intersection)
operations.append(intersection)
diffA = a - b
diffB = b - a
print('a-b: ', diffA)
print('b-a: ', diffB)
operations.append(diffA)
operations.append(diffB)
return tuple(operations)
a = set([1,2,3,4])
b = set([3,4,5,6])
makeOperations(a,b)
def createFreqDict(phrase):
#chList = phrase.split(' ')
#print(dict(chList))
freqDict = {}
for i in phrase:
freqDict[i] = 0
print(freqDict)
for i in phrase:
freqDict[i] += 1
print(freqDict)
createFreqDict('Ana are mere')
dict1 = {'a': 1, 'b': 2, 'c': 3, 'd': 4}
dict2 = {'c': 3, 'd': 5, 'e': 5}
dict1Keys = set(dict1.keys())
dict2Keys = set(dict2.keys())
intersection = dict1Keys & dict2Keys
print intersection
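# Hedged sketch completing the comparison: the differences as a tuple of lists,
# still without using "==" on the dictionaries themselves.
def compare_dicts(d1, d2):
    only_in_first = [k for k in d1 if k not in d2]
    only_in_second = [k for k in d2 if k not in d1]
    different_values = [k for k in d1 if k in d2 and d1[k] != d2[k]]
    return (only_in_first, only_in_second, different_values)
print compare_dicts(dict1, dict2)   # (['a', 'b'], ['e'], ['d']) -- key order may vary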
def build_xml_element(tag, content, **kwargs):
    # a leading underscore lets callers pass reserved words such as "class"
    attrs = ' '.join('%s="%s"' % (key.lstrip('_'), value) for key, value in kwargs.items())
    return '<%s %s>%s</%s>' % (tag, attrs, content, tag)
print build_xml_element("a", "Hello there", href="http://python.org", _class="my-link", id="someid")
rules = {
"+": lambda a, b: a + b,
"*": lambda a, b: a * b,
"/": lambda a, b: a / b,
"%": lambda a, b: a % b
}
def apply_operator(operator, a, b):
return rules[operator](a,b)
apply_operator("+", 2, 3)
def tuple_unique(my_set):
return (len(my_set),0)
my_set = {1,1,2,2,3,3,4,5,5}
print tuple_unique(my_set)
import json
def set_operations_serialize(*args):
if len(args) < 2:
print 'You need at least 2 operands'
return
operations = {}
a = args[0]
b = args[1]
str_a = "{" + ", ".join(str(elem) for elem in a) + "}"
str_b = "{" + ", ".join(str(elem) for elem in b) + "}"
join = a | b
operations[str_a + " | " + str_b] = list(join)
print 'join: ', join
intersection = a & b
operations[str_a + " & " + str_b] = list(intersection)
print 'intersection: ', intersection
diffA = a - b
diffB = b - a
operations[str_a + " - " + str_b] = list(diffA)
operations[str_b + " - " + str_a] = list(diffB)
print 'a-b: ', diffA
print 'b-a: ', diffB
return operations
result = set_operations_serialize({1,2}, {2,3})
print result, type(result)
with open('data.json', 'w+') as fp:
json.dump(result, fp, sort_keys=True, indent=4)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Convenience function for easy plotting
Step2: The problem can be solved by superposition. For this we write the analytical solution as $s(x,t) = \sum_i \Delta h_i \, \mathrm{erfc}\left(\sqrt{\frac{x^2 S}{4\, kD\, (t - t_{c,i})}}\right)$ for $t > t_{c,i}$, where each term is the response to the head change $\Delta h_i$ applied at time $t_{c,i}$.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
import scipy.special as sp
def newfig(title='?', xlabel='?', ylabel='?', xlim=None, ylim=None,
xscale='linear', yscale='linear', size_inches=(14, 8)):
'''Setup a new axis for plotting'''
fig, ax = plt.subplots()
fig.set_size_inches(size_inches)
ax.set_title(title)
ax.set_xlabel(xlabel)
ax.set_ylabel(ylabel)
ax.set_xscale(xscale)
ax.set_yscale(yscale)
if xlim is not None: ax.set_xlim(xlim)
if ylim is not None: ax.set_ylim(ylim)
ax.grid(True)
return ax
# The aquifer properties
kD = 400 # m2/d
S = 0.1 # [-]
# using arrays instead of lists is convenient in computations
A = np.array([1.0, -0.5, +0.5, -0.25]) # [m] river stage
dA = np.diff(np.hstack((0, A))) # [m] the series of head changes.
tc = np.array([0.5, 0.8, 1.0, 2.0]) # [d] change time series
x =50 # distance to be considered
t = np.linspace(0, 5, 501) # period to be considered (5 days)
t[0] = 1e-6 # prevents division by zero
# Intialize the groundwater head to be all zeros
sriv = np.zeros_like(t)
s = np.zeros_like(t)
# set up the plot
ax = newfig('superposition in space', 'time [d]', 'head [m]')
# Loop over all changes times
for tci, dai in zip(tc, dA):
u = np.sqrt(x ** 2 * S / (4 * kD * (t[t>tci] - tci)))
ds = dai * sp.erfc(u)
ax.plot(t[t > tci], ds, '--', label=f'change at t={tci:.2}d')
s[ t > tci] += ds # add change due to this stage to total
    sriv[t > tci] += dai * sp.erfc(u * 0.)  # simply multiply u by 0 for the situation at x=0 (the river)
ax.plot(t, s, linewidth=2, label='total effect')
ax.plot(t, sriv, linewidth=2, label='river stage')
plt.legend()
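# Extra check (a sketch using the same parameters as above): the superposed
# response at several distances, showing attenuation and delay away from the river.
ax2 = newfig('response at several distances', 'time [d]', 'head [m]')
for xi in [10., 50., 200.]:
    s_x = np.zeros_like(t)
    for tci, dai in zip(tc, dA):
        u = np.sqrt(xi ** 2 * S / (4 * kD * (t[t > tci] - tci)))
        s_x[t > tci] += dai * sp.erfc(u)
    ax2.plot(t, s_x, label='x = {:.0f} m'.format(xi))
ax2.legend()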
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now let us consider a pair of variables $y_1$ and $y_2$, drawn from a bivariate Gaussian distribution. The joint probability density for $y_1$ and $y_2$ is
Step2: To make the relation to time-series data a bit more obvious, let's plot the two variables side by side, then see what happens to one variable when we observe (fix) the other.
Step3: Now consider $N$ variables drawn from a multivariate Gaussian distribution
Step4: Textbooks
Step5: The Matern family
Step6: Periodic kernels...
Step7: Combining kernels
|
<ASSISTANT_TASK:>
Python Code:
# Assumed imports for this notebook excerpt (the original setup cell is not shown):
import numpy as np
import pylab as pl
from ipywidgets import interact, widgets
from scipy.spatial.distance import cdist
from numpy.linalg import inv
def gauss1d(x,mu,sig):
    return np.exp(-(x-mu)**2/sig**2/2.)/np.sqrt(2*np.pi)/sig
def pltgauss1d(sig=1):
mu=0
x = np.r_[-4:4:101j]
pl.figure(figsize=(10,7))
pl.plot(x, gauss1d(x,mu,sig),'k-');
pl.axvline(mu,c='k',ls='-');
pl.axvline(mu+sig,c='k',ls='--');
pl.axvline(mu-sig,c='k',ls='--');
pl.axvline(mu+2*sig,c='k',ls=':');
pl.axvline(mu-2*sig,c='k',ls=':');
pl.xlim(x.min(),x.max());
pl.ylim(0,1);
pl.xlabel(r'$y$');
pl.ylabel(r'$p(y)$');
return
interact(pltgauss1d,
sig=widgets.FloatSlider(value=1.0,
min=0.5,
max=2.0,
step=0.25,
description=r'$\sigma$',
readout_format='.2f'));
def gauss2d(x1,x2,mu1,mu2,sig1,sig2,rho):
z = (x1-mu1)**2/sig1**2 + (x2-mu2)**2/sig2**2 - 2*rho*(x1-mu1)*(x2-mu2)/sig1/sig2
e = np.exp(-z/2/(1-rho**2))
return e/(2*np.pi*sig1*sig2*np.sqrt(1-rho**2))
def pltgauss2d(rho=0,show_cond=0):
mu1, sig1 = 0,1
mu2, sig2 = 0,1
y2o = -1
x1 = np.r_[-4:4:101j]
x2 = np.r_[-4:4:101j]
x22d,x12d = np.mgrid[-4:4:101j,-4:4:101j]
y = gauss2d(x12d,x22d,mu1,mu2,sig1,sig2,rho)
y1 = gauss1d(x1,mu1,sig1)
y2 = gauss1d(x2,mu2,sig2)
    mu12 = mu1 + rho * sig1 * (y2o - mu2) / sig2   # conditional mean of y1 given y2 = y2o
    sig12 = sig1 * np.sqrt(1. - rho ** 2)          # conditional standard deviation
y12 = gauss1d(x1,mu12,sig12)
pl.figure(figsize=(10,10))
ax1 = pl.subplot2grid((3,3),(1,0),colspan=2,rowspan=2,aspect='equal')
v = np.array([0.02,0.1,0.3,0.6]) * y.max()
CS = pl.contour(x1,x2,y,v,colors='k')
if show_cond: pl.axhline(y2o,c='r')
pl.xlabel(r'$y_1$');
pl.ylabel(r'$y_2$');
pl.xlim(x1.min(),x1.max())
ax1.xaxis.set_major_locator(pl.MaxNLocator(5, prune = 'both'))
ax1.yaxis.set_major_locator(pl.MaxNLocator(5, prune = 'both'))
ax2 = pl.subplot2grid((3,3),(0,0),colspan=2,sharex=ax1)
pl.plot(x1,y1,'k-')
if show_cond: pl.plot(x1,y12,'r-')
pl.ylim(0,0.8)
pl.ylabel(r'$p(y_1)$')
pl.setp(ax2.get_xticklabels(), visible=False)
ax2.xaxis.set_major_locator(pl.MaxNLocator(5, prune = 'both'))
ax2.yaxis.set_major_locator(pl.MaxNLocator(4, prune = 'upper'))
pl.xlim(x1.min(),x1.max())
ax3 = pl.subplot2grid((3,3),(1,2),rowspan=2,sharey=ax1)
pl.plot(y2,x2,'k-')
if show_cond: pl.axhline(y2o,c='r')
pl.ylim(x2.min(),x2.max());
pl.xlim(0,0.8);
pl.xlabel(r'$p(y_2)$')
pl.setp(ax3.get_yticklabels(), visible=False)
ax3.xaxis.set_major_locator(pl.MaxNLocator(4, prune = 'upper'))
ax3.yaxis.set_major_locator(pl.MaxNLocator(5, prune = 'both'))
pl.subplots_adjust(hspace=0,wspace=0)
return
interact(pltgauss2d,
rho=widgets.FloatSlider(min=-0.8,max=0.8,step=0.4,description=r'$\rho$',value=0),
show_cond=widgets.Checkbox(value=True,description='show conditional distribution'));
def SEKernel(par, x1, x2):
A, Gamma = par
D2 = cdist(x1.reshape(len(x1),1), x2.reshape(len(x2),1),
metric = 'sqeuclidean')
return A * np.exp(-Gamma*D2)
A = 1.0
Gamma = 0.01
x = np.array([-1,1])
K = SEKernel([A,Gamma],x,x)
m = np.zeros(len(x))
sig = np.sqrt(np.diag(K))
pl.figure(figsize=(15,7))
pl.subplot(121)
for i in range(len(x)):
pl.plot([x[i]-0.1,x[i]+0.1],[m[i],m[i]],'k-')
pl.fill_between([x[i]-0.1,x[i]+0.1],
[m[i]+sig[i],m[i]+sig[i]],
[m[i]-sig[i],m[i]-sig[i]],color='k',alpha=0.2)
pl.xlim(-2,2)
pl.ylim(-2,2)
pl.xlabel(r'$x$')
pl.ylabel(r'$y$');
def Pred_GP(CovFunc, CovPar, xobs, yobs, eobs, xtest):
# evaluate the covariance matrix for pairs of observed inputs
K = CovFunc(CovPar, xobs, xobs)
# add white noise
K += np.identity(xobs.shape[0]) * eobs**2
# evaluate the covariance matrix for pairs of test inputs
Kss = CovFunc(CovPar, xtest, xtest)
# evaluate the cross-term
Ks = CovFunc(CovPar, xtest, xobs)
# invert K
Ki = inv(K)
# evaluate the predictive mean
m = np.dot(Ks, np.dot(Ki, yobs))
# evaluate the covariance
cov = Kss - np.dot(Ks, np.dot(Ki, Ks.T))
return m, cov
xobs = np.array([-1])
yobs = np.array([1.0])
eobs = 0.0001
pl.subplot(122)
pl.errorbar(xobs,yobs,yerr=eobs,capsize=0,fmt='k.')
x = np.array([1])
m,C=Pred_GP(SEKernel,[A,Gamma],xobs,yobs,eobs,x)
sig = np.sqrt(np.diag(C))
for i in range(len(x)):
pl.plot([x[i]-0.1,x[i]+0.1],[m[i],m[i]],'k-')
pl.fill_between([x[i]-0.1,x[i]+0.1],
[m[i]+sig[i],m[i]+sig[i]],
[m[i]-sig[i],m[i]-sig[i]],color='k',alpha=0.2)
pl.xlim(-2,2)
pl.ylim(-2,2)
pl.xlabel(r'$x$')
pl.ylabel(r'$y$');
xobs = np.array([-1,1,2])
yobs = np.array([1,-1,0])
eobs = np.array([0.0001,0.1,0.1])
pl.figure(figsize=(15,7))
pl.subplot(121)
pl.errorbar(xobs,yobs,yerr=eobs,capsize=0,fmt='k.')
Gamma = 0.5
x = np.array([-2.5,-2,-1.5,-0.5, 0.0, 0.5,1.5,2.5])
m,C=Pred_GP(SEKernel,[A,Gamma],xobs,yobs,eobs,x)
sig = np.sqrt(np.diag(C))
for i in range(len(x)):
pl.plot([x[i]-0.1,x[i]+0.1],[m[i],m[i]],'k-')
pl.fill_between([x[i]-0.1,x[i]+0.1],
[m[i]+sig[i],m[i]+sig[i]],
[m[i]-sig[i],m[i]-sig[i]],color='k',alpha=0.2)
pl.xlim(-3,3)
pl.ylim(-3,3)
pl.xlabel(r'$x$')
pl.ylabel(r'$y$');
pl.subplot(122)
pl.errorbar(xobs,yobs,yerr=eobs,capsize=0,fmt='k.')
x = np.linspace(-3,3,100)
m,C=Pred_GP(SEKernel,[A,Gamma],xobs,yobs,eobs,x)
sig = np.sqrt(np.diag(C))
pl.plot(x,m,'k-')
pl.fill_between(x,m+sig,m-sig,color='k',alpha=0.2)
pl.xlim(-3,3)
pl.ylim(-3,3)
pl.xlabel(r'$x$')
pl.ylabel(r'$y$');
def kernel_SE(X1,X2,par):
p0 = 10.0**par[0]
p1 = 10.0**par[1]
D2 = cdist(X1,X2,'sqeuclidean')
K = p0 * np.exp(- p1 * D2)
return np.matrix(K)
def kernel_Mat32(X1,X2,par):
p0 = 10.0**par[0]
p1 = 10.0**par[1]
DD = cdist(X1, X2, 'euclidean')
arg = np.sqrt(3) * abs(DD) / p1
K = p0 * (1 + arg) * np.exp(- arg)
return np.matrix(K)
def kernel_RQ(X1,X2,par):
p0 = 10.0**par[0]
p1 = 10.0**par[1]
alpha = par[2]
D2 = cdist(X1, X2, 'sqeuclidean')
K = p0 * (1 + D2 / (2*alpha*p1))**(-alpha)
return np.matrix(K)
def kernel_Per(X1,X2,par):
p0 = 10.0**par[0]
p1 = 10.0**par[1]
period = par[2]
DD = cdist(X1, X2, 'euclidean')
K = p0 * np.exp(- p1*(np.sin(np.pi * DD / period))**2)
return np.matrix(K)
def kernel_QP(X1,X2,par):
p0 = 10.0**par[0]
p1 = 10.0**par[1]
period = par[2]
p3 = 10.0**par[3]
DD = cdist(X1, X2, 'euclidean')
D2 = cdist(X1, X2, 'sqeuclidean')
K = p0 * np.exp(- p1*(np.sin(np.pi * DD / period))**2 - p3 * D2)
return np.matrix(K)
def add_wn(K,lsig):
sigma=10.0**lsig
N = K.shape[0]
return K + sigma**2 * np.identity(N)
def get_kernel(name):
if name == 'SE': return kernel_SE
elif name == 'RQ': return kernel_RQ
elif name == 'M32': return kernel_Mat32
elif name == 'Per': return kernel_Per
elif name == 'QP': return kernel_QP
else:
print 'No kernel called %s - using SE' % name
return kernel_SE
def pltsamples1(par0=0.0, par1=0.0, wn = 0.0):
x = np.r_[-5:5:201j]
X = np.matrix([x]).T # scipy.spatial.distance expects matrices
kernel = get_kernel('SE')
K = kernel(X,X,[par0,par1])
K = add_wn(K,wn)
fig=pl.figure(figsize=(10,4))
ax1 = pl.subplot2grid((1,3), (0, 0), aspect='equal')
pl.imshow(np.sqrt(K),interpolation='nearest',vmin=0,vmax=10)
pl.title('Covariance matrix')
ax2 = pl.subplot2grid((1,3), (0,1),colspan=2)
np.random.seed(0)
for i in range(3):
y = np.random.multivariate_normal(np.zeros(len(x)),K)
pl.plot(x,y-i*2)
pl.xlim(-5,5)
pl.ylim(-8,5)
pl.xlabel('x')
pl.ylabel('y')
pl.title('Samples from %s prior' % 'SE')
pl.tight_layout()
interact(pltsamples1,
par0=widgets.FloatSlider(min=-1,max=1,step=0.5,description=r'$\log_{10} A$',value=0),
par1=widgets.FloatSlider(min=-1,max=1,step=0.5,description=r'$\log_{10} \Gamma$',value=0),
wn=widgets.FloatSlider(min=-2,max=0,step=1,description=r'$\log_{10} \sigma$',value=-2)
);
# Function to plot samples from kernel
def pltsamples2(par2=0.5, kernel_shortname='SE'):
x = np.r_[-5:5:201j]
X = np.matrix([x]).T # scipy.spatial.distance expects matrices
kernel = get_kernel(kernel_shortname)
K = kernel(X,X,[0.0,0.0,par2])
fig=pl.figure(figsize=(10,4))
ax1 = pl.subplot2grid((1,3), (0, 0), aspect='equal')
pl.imshow(np.sqrt(K),interpolation='nearest',vmin=0,vmax=10)
pl.title('Covariance matrix')
ax2 = pl.subplot2grid((1,3), (0,1),colspan=2)
np.random.seed(0)
for i in range(3):
y = np.random.multivariate_normal(np.zeros(len(x)),K)
pl.plot(x,y-i*2)
pl.xlim(-5,5)
pl.ylim(-8,5)
pl.xlabel('x')
pl.ylabel('y')
pl.title('Samples from %s prior' % kernel_shortname)
pl.tight_layout()
interact(pltsamples2,
par2=widgets.FloatSlider(min=0.25,max=1,step=0.25,description=r'$\alpha$',value=0.5),
kernel_shortname=widgets.RadioButtons(options=['SE','M32','RQ'], value='SE',description='kernel')
);
# Function to plot samples from kernel
def pltsamples3(par2=2.0, par3=2.0,kernel_shortname='Per'):
x = np.r_[-5:5:201j]
X = np.matrix([x]).T # scipy.spatial.distance expects matrices
kernel = get_kernel(kernel_shortname)
K = kernel(X,X,[0.0,0.0,par2,par3])
fig=pl.figure(figsize=(10,4))
ax1 = pl.subplot2grid((1,3), (0, 0), aspect='equal')
pl.imshow(np.sqrt(K),interpolation='nearest',vmin=0,vmax=10)
pl.title('Covariance matrix')
ax2 = pl.subplot2grid((1,3), (0,1),colspan=2)
np.random.seed(0)
for i in range(3):
y = np.random.multivariate_normal(np.zeros(len(x)),K)
pl.plot(x,y-i*2)
pl.xlim(-5,5)
pl.ylim(-8,5)
pl.xlabel('x')
pl.ylabel('y')
pl.title('Samples from %s prior' % kernel_shortname)
pl.tight_layout()
interact(pltsamples3,
par2=widgets.FloatSlider(min=1,max=3,step=1,description=r'$P$',value=2),
par3=widgets.FloatSlider(min=-2,max=0,step=1,description=r'$\log\Gamma_2$',value=-1),
kernel_shortname=widgets.RadioButtons(options=['Per','QP'], value='QP',description='kernel')
);
import george
x2d, y2d = np.mgrid[-3:3:0.1,-3:3:0.1]
x = x2d.ravel()
y = y2d.ravel()
N = len(x)
X = np.zeros((N,2))
X[:,0] = x
X[:,1] = y
k1 = george.kernels.ExpSquaredKernel(1.0,ndim=2)
s1 = george.GP(k1).sample(X).reshape(x2d.shape)
k2 = george.kernels.ExpSquaredKernel(1.0,ndim=2,axes=1) + george.kernels.ExpSquaredKernel(0.2,ndim=2,axes=0)
s2 = george.GP(k2).sample(X).reshape(x2d.shape)
pl.figure(figsize=(10,5))
pl.subplot(121)
pl.contourf(x2d,y2d,s1)
pl.xlim(x.min(),x.max())
pl.ylim(y.min(),y.max())
pl.xlabel(r'$x$')
pl.ylabel(r'$y$')
pl.title('RBF')
pl.subplot(122)
pl.contourf(x2d,y2d,s2)
pl.xlim(x.min(),x.max())
pl.ylim(y.min(),y.max())
pl.xlabel(r'$x$')
pl.title('ARD');
# Function to plot samples from kernel
def pltsamples3(par2=0.5,par3=0.5, kernel_shortname='SE'):
x = np.r_[-5:5:201j]
X = np.matrix([x]).T # scipy.spatial.distance expects matrices
kernel = get_kernel(kernel_shortname)
K = kernel(X,X,[0.0,0.0,par2,par3])
fig=pl.figure(figsize=(10,4))
ax1 = pl.subplot2grid((1,3), (0, 0), aspect='equal')
pl.imshow(np.sqrt(K),interpolation='nearest',vmin=0,vmax=10)
pl.title('Covariance matrix')
ax2 = pl.subplot2grid((1,3), (0,1),colspan=2)
np.random.seed(0)
for i in range(5):
y = np.random.multivariate_normal(np.zeros(len(x)),K)
pl.plot(x,y)
pl.xlim(-5,5)
pl.ylim(-5,5)
pl.xlabel('x')
pl.ylabel('y')
pl.title('Samples from %s prior' % kernel_shortname)
pl.tight_layout()
interact(pltsamples3,
par2=widgets.FloatSlider(min=1,max=3,step=1,description=r'$P$',value=2),
par3=widgets.FloatSlider(min=-2,max=0,step=1,description=r'$\log_{10}\Gamma_2$',value=-1.),
kernel_shortname=widgets.RadioButtons(options=['Per','QP'], value='Per',description='kernel')
);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data
Step2: The RandomForestClassifier model
Step3: Learning curves for deeper trees
|
<ASSISTANT_TASK:>
Python Code:
from sklearn import ensemble , cross_validation, learning_curve, metrics
import numpy as np
import pandas as pd
import xgboost as xgb
%pylab inline
bioresponce = pd.read_csv('bioresponse.csv', header=0, sep=',')
bioresponce.head()
bioresponce_target = bioresponce.Activity.values
bioresponce_data = bioresponce.iloc[:, 1:]
n_trees = [1] + range(10, 55, 5)
%%time
scoring = []
for n_tree in n_trees:
estimator = ensemble.RandomForestClassifier(n_estimators = n_tree, min_samples_split=5, random_state=1)
score = cross_validation.cross_val_score(estimator, bioresponce_data, bioresponce_target,
scoring = 'accuracy', cv = 3)
scoring.append(score)
scoring = np.asmatrix(scoring)
scoring
pylab.plot(n_trees, scoring.mean(axis = 1), marker='.', label='RandomForest')
pylab.grid(True)
pylab.xlabel('n_trees')
pylab.ylabel('score')
pylab.title('Accuracy score')
pylab.legend(loc='lower right')
%%time
xgb_scoring = []
for n_tree in n_trees:
estimator = xgb.XGBClassifier(learning_rate=0.1, max_depth=5, n_estimators=n_tree, min_child_weight=3)
score = cross_validation.cross_val_score(estimator, bioresponce_data, bioresponce_target,
scoring = 'accuracy', cv = 3)
xgb_scoring.append(score)
xgb_scoring = np.asmatrix(xgb_scoring)
xgb_scoring
pylab.plot(n_trees, scoring.mean(axis = 1), marker='.', label='RandomForest')
pylab.plot(n_trees, xgb_scoring.mean(axis = 1), marker='.', label='XGBoost')
pylab.grid(True)
pylab.xlabel('n_trees')
pylab.ylabel('score')
pylab.title('Accuracy score')
pylab.legend(loc='lower right')
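# Hedged sketch for the "learning curves for deeper trees" step, using the
# learning_curve module imported at the top (old sklearn API); depth is left
# unrestricted (the RandomForest default), i.e. fully grown trees.
deep_estimator = ensemble.RandomForestClassifier(n_estimators=50, random_state=1)
train_sizes, train_scores, test_scores = learning_curve.learning_curve(
    deep_estimator, bioresponce_data, bioresponce_target,
    train_sizes=np.linspace(0.1, 1.0, 5), cv=3, scoring='accuracy')
pylab.plot(train_sizes, train_scores.mean(axis=1), marker='.', label='train')
pylab.plot(train_sizes, test_scores.mean(axis=1), marker='.', label='cross-validation')
pylab.grid(True)
pylab.xlabel('train size')
pylab.ylabel('score')
pylab.legend(loc='lower right')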
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Each split is in a structured json file with a number of questions and answers for each passage (or context). We’ll take this apart into parallel lists of contexts, questions, and answers (note that the contexts here are repeated since there are multiple questions per context)
Step2: The contexts and questions are just strings. The answers are dicts containing the subsequence of the passage with the correct answer as well as an integer indicating the character at which the answer begins. In order to train a model on this data we need (1) the tokenized context/question pairs, and (2) integers indicating at which token positions the answer begins and ends.
Step3: Now train_answers and val_answers include the character end positions and the corrected start positions. Next, let’s tokenize our context/question pairs. 🤗 Tokenizers can accept parallel lists of sequences and encode them together as sequence pairs.
Step4: Next we need to convert our character start/end positions to token start/end positions. When using 🤗 Fast Tokenizers, we can use the <b>built in char_to_token()</b> method.
Step5: Our data is ready. Let’s just put it in a PyTorch/TensorFlow dataset so that we can easily use it for training. In PyTorch, we define a custom Dataset class. In TensorFlow, we pass a tuple of (inputs_dict, labels_dict) to the from_tensor_slices method.
Step6: Now we can use a DistilBert model with a QA head for training
Step7: The data and model are both ready to go. You can train the model with Trainer/TFTrainer exactly as in the sequence classification example above. If using native PyTorch, replace labels with start_positions and end_positions in the training example. If using Keras’s fit, we need to make a minor modification to handle this example since it involves multiple model outputs.
|
<ASSISTANT_TASK:>
Python Code:
!mkdir squad
!wget https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v2.0.json -O /content/squad/train-v2.0.json
!wget https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v2.0.json -O /content/squad/dev-v2.0.json
import json
from pathlib import Path
def loadJSONData(filename):
with open(filename) as jsonDataFile:
data = json.load(jsonDataFile)
return data
# The data has multiple titles.
# Every title has multiple paragraphs, and each paragraph's text is stored in 'context'.
# Every paragraph has multiple questions, and every question has multiple answers with an answer start index.
# If an answer exists, 'is_impossible' is False.
def preprocessSQUAD(JSONData):
contextList = []
questionsList = []
answersList = []
titlesCount = len(JSONData['data'])
BaseData = JSONData['data']
print("Length Of JSON Data : ", titlesCount)
for titleID in (range(titlesCount)):
title = BaseData[titleID]['title']
#print("Title : ", title);
paragraphs = BaseData[titleID]['paragraphs']
paragraphCount = len(paragraphs)
for paraID in range(paragraphCount):
context = paragraphs[paraID]['context']
#print("Context : ",context);
questions = paragraphs[paraID]['qas']
questionCount = len(questions)
for questionID in range(questionCount):
# No Need to Process Questions whose Answers are not present
if (questions[questionID]['is_impossible'] == True):
continue
questionText = questions[questionID]['question']
answers = questions[questionID]['answers']
#The SQUAD answer is a List and in DEV most of times there are multiple answers
for answer in answers:
#Prepare The list of Context, Question and Answers parallely.
contextList.append(context)
questionsList.append(questionText)
answersList.append(answer)
print("Length of Context, Questions and Answers" , len (contextList), " , ", len(questionsList), " , ", len(answersList) )
return contextList, questionsList, answersList
def read_squad(path):
dataInJSON = loadJSONData(path)
return preprocessSQUAD(dataInJSON)
train_contexts, train_questions, train_answers = read_squad('/content/squad/train-v2.0.json')
print("Length of Context, Questions and Answers" , len (train_contexts), " , ", len(train_questions), " , ", len(train_answers) )
val_contexts, val_questions, val_answers = read_squad('/content/squad/dev-v2.0.json')
print("Length of Context, Questions and Answers" , len (val_contexts), " , ", len(val_questions), " , ", len(val_answers) )
def add_end_idx(answers, contexts):
offByOneCount = 0
offByTwoCount = 0
exactCount = 0
for answer, context in zip(answers, contexts):
# extract Answers and Start Positions
#print(answer)
answerText = answer['text']
answerStartIndex = answer['answer_start']
# calculate the end positions
answerEndIndex = answerStartIndex + len (answerText)
#print("Answer : ",answerText)
#print("AnswerStartIndex : ",answerStartIndex," AnswerEndIndex : ",answerEndIndex )
# Check if Answers are off by 1 or 2 and fix
if context[answerStartIndex:answerEndIndex] == answerText:
answer['answer_end'] = answerEndIndex
exactCount = exactCount + 1
# Answer is off by 1 char
elif context[answerStartIndex - 1:answerEndIndex - 1] == answerText:
answer['answer_start'] = answerStartIndex - 1
answer['answer_end'] = answerEndIndex - 1
offByOneCount = offByOneCount + 1
elif context[answerStartIndex + 1:answerEndIndex + 1] == answerText:
answer['answer_start'] = answerStartIndex + 1
answer['answer_end'] = answerEndIndex + 1
offByOneCount = offByOneCount + 1
# Answer is off by 2 chars
elif context[answerStartIndex - 2:answerEndIndex - 2] == answerText:
answer['answer_start'] = answerStartIndex - 2
answer['answer_end'] = answerEndIndex - 2
offByTwoCount = offByTwoCount + 1
elif context[answerStartIndex + 2:answerEndIndex + 2] == answerText:
answer['answer_start'] = answerStartIndex + 2
answer['answer_end'] = answerEndIndex + 2
offByTwoCount = offByTwoCount + 1
else:
print("!!Answer is outside correctable range!!")
print ("OffByOne : " , offByOneCount, " , OffByTwo : ", offByTwoCount, " exact : ", exactCount)
add_end_idx(train_answers, train_contexts)
add_end_idx(val_answers, val_contexts)
!pip install transformers==4.0.1
from transformers import DistilBertTokenizerFast
tokenizer = DistilBertTokenizerFast.from_pretrained('distilbert-base-uncased')
# Your code here
train_encodings = tokenizer(train_contexts, train_questions, truncation=True, padding=True)
# Your code here
val_encodings = tokenizer(val_contexts, val_questions, truncation=True, padding=True)
def add_token_positions(encodings, answers):
start_positions = []
end_positions = []
# Your code here
for answerIndex in range(len(answers)):
#print (answers[answerIndex])
start_positions.append(encodings.char_to_token(answerIndex, answers[answerIndex]['answer_start']))
end_positions.append(encodings.char_to_token(answerIndex, answers[answerIndex]['answer_end'] - 1))
# if None, the answer passage has been truncated
if start_positions[-1] is None:
start_positions[-1] = tokenizer.model_max_length
if end_positions[-1] is None:
end_positions[-1] = tokenizer.model_max_length
encodings.update({'start_positions': start_positions, 'end_positions': end_positions})
add_token_positions(train_encodings, train_answers)
add_token_positions(val_encodings, val_answers)
import tensorflow as tf
# Your code here
train_dataset = tf.data.Dataset.from_tensor_slices((
{key: train_encodings[key] for key in ['input_ids', 'attention_mask']},
{key: train_encodings[key] for key in ['start_positions', 'end_positions']}
))
# Your code here
val_dataset = tf.data.Dataset.from_tensor_slices((
{key: val_encodings[key] for key in ['input_ids', 'attention_mask']},
{key: val_encodings[key] for key in ['start_positions', 'end_positions']}
))
from transformers import TFDistilBertForQuestionAnswering
# Your code here
model = TFDistilBertForQuestionAnswering.from_pretrained("distilbert-base-uncased")
# Keras will expect a tuple when dealing with labels
# Write your code here to replace labels with start_positions and end_positions in the training example
train_dataset = train_dataset.map(lambda x, y: (x, (y['start_positions'], y['end_positions'])))
# Keras will assign a separate loss for each output and add them together. So we'll just use the standard CE loss
# instead of using the built-in model.compute_loss, which expects a dict of outputs and averages the two terms.
# Note that this means the loss will be 2x of when using TFTrainer since we're adding instead of averaging them.
# Your code here
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
model.distilbert.return_dict = False # if using 🤗 Transformers >3.02, make sure outputs are tuples
# Your code here
optimizer = tf.keras.optimizers.Adam(learning_rate=5e-5)
model.compile(optimizer=optimizer, loss=loss) # can also use any keras loss fn
model.fit(train_dataset.shuffle(1000).batch(16), epochs=3, batch_size=16)
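# Hedged usage sketch: pull an answer span for one validation example with the
# fine-tuned model. Indexing the outputs ([0]/[1]) works whether the model
# returns a tuple or a ModelOutput object.
example_inputs = tokenizer(val_contexts[0], val_questions[0], truncation=True, return_tensors="tf")
example_outputs = model(dict(example_inputs))
start = int(tf.argmax(example_outputs[0], axis=-1)[0])   # start logits
end = int(tf.argmax(example_outputs[1], axis=-1)[0])     # end logits
answer_ids = example_inputs["input_ids"][0].numpy()[start:end + 1]
print(tokenizer.decode(answer_ids))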
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Transfusion
|
<ASSISTANT_TASK:>
Python Code:
# Assumed imports for this notebook excerpt (the original setup cell, including
# the plot_confusion_matrix helper, is not shown):
import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm, preprocessing, cross_validation
from sklearn import metrics as skmetrics
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression(fit_intercept=False,C=1e7)
lr.fit(PhiX,Pos)
lr.coef_
PosHatPhi = lr.predict(PhiX)
with plt.style.context(('seaborn-white')):
plot_confusion_matrix(skmetrics.confusion_matrix(Pos,PosHatPhi),[0,1],
title="Confusion matrix for linear fit")
def Phi(x,deg=2):
x = np.concatenate(([1],x))
curvec = x.copy()
for l in xrange(deg-1):
Xout = np.outer(x,curvec)
curvec = np.array([Xout[i,j] for i in xrange(len(x)) for j in xrange(i,len(curvec))])
return(curvec)
PhiX = np.array([Phi(x,deg=2) for x in X])
beta = np.array([0.,-1.,-.5,-2.,8.,-.8])
Pos = PhiX[:,:6].dot(beta) > .3*np.random.randn(n)
_ = plt.scatter(X[~Pos,0],X[~Pos,1],c='b',label='neg')
_ = plt.scatter(X[Pos,0],X[Pos,1],c='r',label='pos')
_ = plt.title("Actual Labels")
_ = plt.legend(loc=2)
Xte = np.random.uniform(size=2*n)
Xte.shape = (n,2)
PXte = np.array([Phi(x) for x in Xte])
lr = LogisticRegression(fit_intercept=False,C=1e7)
lr.fit(PhiX,Pos)
PhiX = np.array([Phi(x,deg=2) for x in X])
PosTe = PXte[:,:6].dot(beta) > .3*np.random.randn(n)
yhatte = lr.decision_function(PXte)
fpr, tpr, _ = skmetrics.roc_curve(PosTe,yhatte)
_ = plt.plot(fpr,tpr)
_ = plt.title('ROC curve')
_ = plt.xlabel('FPR')
_ = plt.ylabel('TPR')
def mse_pred(X,y,Xte,yte,deg=2):
PhiX = np.array([Phi(x,deg=deg) for x in X])
lr = LogisticRegression(fit_intercept=False)
lr.fit(PhiX,y)
n = len(y)
PXte = np.array([Phi(x,deg=deg) for x in Xte])
yhat = lr.predict(PhiX)
yhatte = lr.predict(PXte)
mse_tr = np.sum((yhat - y)**2.)/n
mse_te = np.sum((yhatte - yte)**2.)/n
return((mse_tr,mse_te))
PXte = np.array([Phi(x) for x in Xte])
PosTe = PXte[:,:6].dot(beta) > .5*np.random.randn(n)
MSEs = [mse_pred(X,Pos*1.,Xte,PosTe*1.,deg=k) for k in xrange(1,9)]
mse_trs, mse_tes = tuple(zip(*MSEs))
plt.plot(range(1,9),mse_trs,label="Training Error")
plt.plot(range(1,9),mse_tes,label="Test Error")
plt.xlabel('degree')
plt.ylabel('0-1 loss')
_ = plt.legend()
Dtrans = np.loadtxt("transfusion.data",dtype=np.str_,delimiter=",")
X = np.array(Dtrans[1:,0:4],dtype=float)
y = np.array(Dtrans[1:,4],dtype=float)
X = preprocessing.scale(X)
Xtr, Xva, ytr, yva = cross_validation.train_test_split(X, y, test_size=0.3)
print X.shape, Xva.shape
G = np.arange(10.)*.1 + .1
poly_TE = []
for gamma in G:
clf = svm.SVC(kernel="poly",degree=3,gamma=gamma,C=1.)
clf.fit(Xtr,ytr)
yhat = clf.predict(Xva)
poly_TE.append(np.sum(np.abs(yhat - yva)))
gamma_poly = G[np.argmin(poly_TE)]
G = .5**np.arange(16.)*32.
rbf_TE = []
for gamma in G:
clf = svm.SVC(kernel="rbf",gamma=gamma,C=3.)
clf.fit(Xtr,ytr)
yhat = clf.predict(Xva)
rbf_TE.append(np.sum(np.abs(yhat - yva)))
gamma_rbf = G[np.argmin(rbf_TE)]
plt.plot(np.log(G),rbf_TE)
plt.ylabel("Test Error")
plt.xlabel("log(bandwidth)")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: The following is code copied from EPRV/fakedata.py to generate a realistic fake spectrum
Step4: First step
Step5: Next
Step6: and repeat
|
<ASSISTANT_TASK:>
Python Code:
# Assumed imports for this excerpt; doppler() and Pdot() come from the EPRV
# repository (EPRV/fakedata.py mentioned above) and are not redefined here.
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import minimize
def oned_gaussian(xs, mm, sig):
return np.exp(-0.5 * (xs - mm) ** 2 / sig ** 2) / np.sqrt(2. * np.pi * sig)
def make_synth(rv, xs, ds, ms, sigs):
    """
    `rv`: radial velocity in m/s (or same units as `c` above)
    `xs`: `[M]` array of wavelength values
    `ds`: depths at line centers
    `ms`: locations of the line centers in rest wavelength
    `sigs`: Gaussian sigmas of lines
    """
synths = np.ones_like(xs)
for d, m, sig in zip(ds, ms, sigs):
synths *= np.exp(d *
oned_gaussian(xs * doppler(rv), m, sig))
return synths
def make_data(N, xs, ds, ms, sigs):
    """
    `N`: number of spectra to make
    `xs`: `[M]` array of wavelength values
    `ds`: depth-like parameters for lines
    `ms`: locations of the line centers in rest wavelength
    `sigs`: Gaussian sigmas of lines
    """
np.random.seed(2361794231)
M = len(xs)
data = np.zeros((N, M))
ivars = np.zeros((N, M))
rvs = 30000. * np.random.uniform(-1., 1., size=N) # 30 km/s bc Earth ; MAGIC
for n, rv in enumerate(rvs):
ivars[n, :] = 10000. # s/n = 100 ; MAGIC
data[n, :] = make_synth(rv, xs, ds, ms, sigs)
data[n, :] += np.random.normal(size=M) / np.sqrt(ivars[n, :])
return data, ivars, rvs
fwhms = [0.1077, 0.1113, 0.1044, 0.1083, 0.1364, 0.1, 0.1281,
0.1212, 0.1292, 0.1526, 0.1575, 0.1879] # FWHM of Gaussian fit to line (A)
sigs = np.asarray(fwhms) / 2. / np.sqrt(2. * np.log(2.)) # Gaussian sigma (A)
ms = [4997.967, 4998.228, 4998.543, 4999.116, 4999.508, 5000.206, 5000.348,
5000.734, 5000.991, 5001.229, 5001.483, 5001.87] # line center (A)
ds = [-0.113524, -0.533461, -0.030569, -0.351709, -0.792123, -0.234712, -0.610711,
-0.123613, -0.421898, -0.072386, -0.147218, -0.757536] # depth of line center (normalized flux)
ws = np.ones_like(ds) # dimensionless weights
dx = 0.01 # A
xs = np.arange(4998. + 0.5 * dx, 5002., dx) # A
N = 16
data, ivars, true_rvs = make_data(N, xs, ds, ms, sigs)
data = np.log(data)
data_xs = np.log(xs)
def add_tellurics(xs, all_data, true_rvs, lambdas, strengths, dx):
N, M = np.shape(all_data)
tellurics = np.ones_like(xs)
for ll, s in zip(lambdas, strengths):
tellurics *= np.exp(-s * oned_gaussian(xs, ll, dx))
plt.plot(xs, tellurics)
all_data *= np.repeat([tellurics,],N,axis=0)
return all_data
n_tellurics = 16 # magic
telluric_sig = 3.e-6 # magic
telluric_xs = np.random.uniform(data_xs[0], data_xs[-1], n_tellurics)
strengths = 0.01 * np.random.uniform(size = n_tellurics) ** 2. # magic numbers
all_data = np.exp(data)
all_data = add_tellurics(data_xs, all_data, true_rvs, telluric_xs, strengths, telluric_sig)
data = np.log(all_data)
def make_template(all_data, rvs, xs, dx):
    """
    `all_data`: `[N, M]` array of pixels
    `rvs`: `[N]` array of RVs
    `xs`: `[M]` array of wavelength values
    `dx`: linear spacing desired for template wavelength grid (A)
    """
(N,M) = np.shape(all_data)
all_xs = np.empty_like(all_data)
for i in range(N):
all_xs[i,:] = xs + np.log(doppler(rvs[i])) # shift to rest frame
all_data, all_xs = np.ravel(all_data), np.ravel(all_xs)
tiny = 10.
template_xs = np.arange(min(all_xs)-tiny*dx, max(all_xs)+tiny*dx, dx)
template_ys = np.nan + np.zeros_like(template_xs)
for i,t in enumerate(template_xs):
ind = (all_xs >= t-dx/2.) & (all_xs < t+dx/2.)
if np.sum(ind) > 0:
template_ys[i] = np.nanmedian(all_data[ind])
ind_nan = np.isnan(template_ys)
template_ys[ind_nan] = np.interp(template_xs[ind_nan], template_xs[~ind_nan], template_ys[~ind_nan])
return template_xs, template_ys
def subtract_template(data_xs, data, model_xs_t, model_ys_t, rvs_t):
(N,M) = np.shape(data)
data_sub = np.copy(data)
for n,v in enumerate(rvs_t):
model_ys_t_shifted = Pdot(data_xs, model_xs_t, model_ys_t, v)
data_sub[n,:] -= np.ravel(model_ys_t_shifted)
if n == 0:
plt.plot(data_xs, data[n,:], color='k')
plt.plot(data_xs, data_sub[n,:], color='blue')
plt.plot(data_xs, np.ravel(model_ys_t_shifted), color='red')
return data_sub
x0_star = true_rvs + np.random.normal(0., 100., N)
x0_t = np.zeros(N)
model_xs_star, model_ys_star = make_template(data, x0_star, data_xs, np.log(6000.01) - np.log(6000.))
model_xs_t, model_ys_t = make_template(data, x0_t, data_xs, np.log(6000.01) - np.log(6000.))
def chisq_star(rvs_star, rvs_t, data_xs, data, ivars, model_xs_star, model_ys_star, model_xs_t, model_ys_t):
pd_star = Pdot(data_xs, model_xs_star, model_ys_star, rvs_star)
pd_t = Pdot(data_xs, model_xs_t, model_ys_t, rvs_t)
pd = pd_star + pd_t
return np.sum((data - pd)**2 * ivars)
def chisq_t(rvs_t, rvs_star, data_xs, data, ivars, model_xs_star, model_ys_star, model_xs_t, model_ys_t):
pd_star = Pdot(data_xs, model_xs_star, model_ys_star, rvs_star)
pd_t = Pdot(data_xs, model_xs_t, model_ys_t, rvs_t)
pd = pd_star + pd_t
return np.sum((data - pd)**2 * ivars)
soln_star = minimize(chisq_star, x0_star, args=(x0_t, data_xs, data, ivars, model_xs_star, model_ys_star, model_xs_t, model_ys_t),
method='BFGS', options={'disp':True, 'gtol':1.e-2, 'eps':1.5e-5})['x']
soln_t = minimize(chisq_t, x0_t, args=(soln_star, data_xs, data, ivars, model_xs_star, model_ys_star, model_xs_t, model_ys_t),
method='BFGS', options={'disp':True, 'gtol':1.e-2, 'eps':1.5e-5})['x']
x0_star = soln_star
x0_t = soln_t
print np.std(x0_star - true_rvs)
print np.std(x0_t)
data_star = subtract_template(data_xs, data, model_xs_t, model_ys_t, x0_t)
data_t = subtract_template(data_xs, data, model_xs_star, model_ys_star, x0_star)
plt.plot(data_xs, data[0,:], color='black')
plt.plot(model_xs_star, model_ys_star, color='red')
plt.plot(model_xs_t, model_ys_t, color='green')
plt.plot(data_xs, data_star[0,:], color='blue')
plt.plot(data_xs, data_t[0,:], color='red')
true_star = np.log(make_data(N, xs, ds, ms, sigs)[0])
plt.plot(data_xs, true_star[0,:], color='k')
plt.plot(data_xs, data_star[0,:], color='blue')
plt.plot(model_xs_star, model_ys_star, color='red')
soln_star = minimize(chisq_star, x0_star, args=(x0_t, data_xs, data, ivars, model_xs_star, model_ys_star, model_xs_t, model_ys_t),
method='BFGS', options={'disp':True, 'gtol':1.e-2, 'eps':1.5e-5})['x']
soln_t = minimize(chisq_t, x0_t, args=(soln_star, data_xs, data, ivars, model_xs_star, model_ys_star, model_xs_t, model_ys_t),
method='BFGS', options={'disp':True, 'gtol':1.e-2, 'eps':1.5e-5})['x']
print np.std(soln_star - true_rvs)
print np.std(soln_t)
for n in range(5):
x0_star = soln_star
x0_t = soln_t
data_star = subtract_template(data_xs, data, model_xs_t, model_ys_t, x0_t)
data_t = subtract_template(data_xs, data, model_xs_star, model_ys_star, x0_star)
model_xs_star, model_ys_star = make_template(data_star, x0_star, data_xs, np.log(6000.01) - np.log(6000.))
model_xs_t, model_ys_t = make_template(data_t, x0_t, data_xs, np.log(6000.01) - np.log(6000.))
soln_star = minimize(chisq_star, x0_star, args=(x0_t, data_xs, data, ivars, model_xs_star, model_ys_star, model_xs_t, model_ys_t),
method='BFGS', options={'disp':True, 'gtol':1.e-2, 'eps':1.5e-5})['x']
soln_t = minimize(chisq_t, x0_t, args=(soln_star, data_xs, data, ivars, model_xs_star, model_ys_star, model_xs_t, model_ys_t),
method='BFGS', options={'disp':True, 'gtol':1.e-2, 'eps':1.5e-5})['x']
print "iter {0}: star std = {1:.2f}, telluric std = {2:.2f}".format(n, np.std(soln_star - true_rvs), np.std(soln_t))
true_star = np.log(make_data(N, xs, ds, ms, sigs)[0])
plt.plot(data_xs, true_star[0,:], color='k')
plt.plot(data_xs, data_star[0,:], color='blue')
plt.plot(data_xs, data_t[0,:], color='red')
plt.plot(data_xs, data[0,:], color='k')
plt.plot(data_xs, data_star[0,:] + data_t[0,:], color='red')
plt.plot(data_xs, data[10,:], color='k')
plt.plot(data_xs, data_star[10,:] + data_t[10,:], color='red')
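# Hedged diagnostic: rms residual between the data and the combined
# star + telluric reconstruction plotted above.
resid = data - (data_star + data_t)
print "residual rms:", np.sqrt(np.mean(resid ** 2))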
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables
Step9: 2. Key Properties --> Seawater Properties
Step10: 2.2. Eos Functional Temp
Step11: 2.3. Eos Functional Salt
Step12: 2.4. Eos Functional Depth
Step13: 2.5. Ocean Freezing Point
Step14: 2.6. Ocean Specific Heat
Step15: 2.7. Ocean Reference Density
Step16: 3. Key Properties --> Bathymetry
Step17: 3.2. Type
Step18: 3.3. Ocean Smoothing
Step19: 3.4. Source
Step20: 4. Key Properties --> Nonoceanic Waters
Step21: 4.2. River Mouth
Step22: 5. Key Properties --> Software Properties
Step23: 5.2. Code Version
Step24: 5.3. Code Languages
Step25: 6. Key Properties --> Resolution
Step26: 6.2. Canonical Horizontal Resolution
Step27: 6.3. Range Horizontal Resolution
Step28: 6.4. Number Of Horizontal Gridpoints
Step29: 6.5. Number Of Vertical Levels
Step30: 6.6. Is Adaptive Grid
Step31: 6.7. Thickness Level 1
Step32: 7. Key Properties --> Tuning Applied
Step33: 7.2. Global Mean Metrics Used
Step34: 7.3. Regional Metrics Used
Step35: 7.4. Trend Metrics Used
Step36: 8. Key Properties --> Conservation
Step37: 8.2. Scheme
Step38: 8.3. Consistency Properties
Step39: 8.4. Corrected Conserved Prognostic Variables
Step40: 8.5. Was Flux Correction Used
Step41: 9. Grid
Step42: 10. Grid --> Discretisation --> Vertical
Step43: 10.2. Partial Steps
Step44: 11. Grid --> Discretisation --> Horizontal
Step45: 11.2. Staggering
Step46: 11.3. Scheme
Step47: 12. Timestepping Framework
Step48: 12.2. Diurnal Cycle
Step49: 13. Timestepping Framework --> Tracers
Step50: 13.2. Time Step
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Step52: 14.2. Scheme
Step53: 14.3. Time Step
Step54: 15. Timestepping Framework --> Barotropic
Step55: 15.2. Time Step
Step56: 16. Timestepping Framework --> Vertical Physics
Step57: 17. Advection
Step58: 18. Advection --> Momentum
Step59: 18.2. Scheme Name
Step60: 18.3. ALE
Step61: 19. Advection --> Lateral Tracers
Step62: 19.2. Flux Limiter
Step63: 19.3. Effective Order
Step64: 19.4. Name
Step65: 19.5. Passive Tracers
Step66: 19.6. Passive Tracers Advection
Step67: 20. Advection --> Vertical Tracers
Step68: 20.2. Flux Limiter
Step69: 21. Lateral Physics
Step70: 21.2. Scheme
Step71: 22. Lateral Physics --> Momentum --> Operator
Step72: 22.2. Order
Step73: 22.3. Discretisation
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Step75: 23.2. Constant Coefficient
Step76: 23.3. Variable Coefficient
Step77: 23.4. Coeff Background
Step78: 23.5. Coeff Backscatter
Step79: 24. Lateral Physics --> Tracers
Step80: 24.2. Submesoscale Mixing
Step81: 25. Lateral Physics --> Tracers --> Operator
Step82: 25.2. Order
Step83: 25.3. Discretisation
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Step85: 26.2. Constant Coefficient
Step86: 26.3. Variable Coefficient
Step87: 26.4. Coeff Background
Step88: 26.5. Coeff Backscatter
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Step90: 27.2. Constant Val
Step91: 27.3. Flux Type
Step92: 27.4. Added Diffusivity
Step93: 28. Vertical Physics
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
Step96: 30.2. Closure Order
Step97: 30.3. Constant
Step98: 30.4. Background
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
Step100: 31.2. Closure Order
Step101: 31.3. Constant
Step102: 31.4. Background
Step103: 32. Vertical Physics --> Interior Mixing --> Details
Step104: 32.2. Tide Induced Mixing
Step105: 32.3. Double Diffusion
Step106: 32.4. Shear Mixing
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
Step108: 33.2. Constant
Step109: 33.3. Profile
Step110: 33.4. Background
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
Step112: 34.2. Constant
Step113: 34.3. Profile
Step114: 34.4. Background
Step115: 35. Uplow Boundaries --> Free Surface
Step116: 35.2. Scheme
Step117: 35.3. Embeded Seaice
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Step119: 36.2. Type Of Bbl
Step120: 36.3. Lateral Mixing Coef
Step121: 36.4. Sill Overflow
Step122: 37. Boundary Forcing
Step123: 37.2. Surface Pressure
Step124: 37.3. Momentum Flux Correction
Step125: 37.4. Tracers Flux Correction
Step126: 37.5. Wave Effects
Step127: 37.6. River Runoff Budget
Step128: 37.7. Geothermal Heating
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Step132: 40.2. Ocean Colour
Step133: 40.3. Extinction Depth
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Step135: 41.2. From Sea Ice
Step136: 41.3. Forced Mode Restoring
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'test-institute-2', 'sandbox-1', 'ocean')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Attributes
Step2: How far are two nodes? Paths and centralities.
|
<ASSISTANT_TASK:>
Python Code:
import networkx as nx
G=nx.Graph()
G.add_node(1)
G.add_nodes_from([2,3,"hey"])
print G.nodes()
G.add_edge(1,2)
e=(2,3)
G.add_edge(*e)
G.add_edges_from([(1,2),(1,3), (3, 'hey')])
print G.edges()
print G.neighbors(1)
H=nx.DiGraph(G) # create a DiGraph using the connections from G
print H.edges()
G[1][3]['color']='blue'
%matplotlib inline
import matplotlib.pyplot as plt
nx.draw(G)
plt.savefig("data/graph.png")
G=nx.Graph()
G.graph[1]=('May',1)
G.graph['foo']='spam'
print G.graph
G.add_edge(1, 2, weight=4.7 )
G.add_edges_from([(3,4),(4,5)], color='red')
G.add_edges_from([(1,2,{'color':'blue'}), (2,3,{'weight':8})])
G[1][2]['weight'] = 4.7
G.edge[1][2]['weight'] = 4
print G.edges(data = True)
G.node[1]['room'] = 714
G.node[2]['name'] = 'tiny'
print G.nodes(data = True)
%matplotlib inline
import matplotlib.pyplot as plt
#nx.draw(G)
pos = nx.spring_layout(G)
labels = {1: "node 1" }
nx.draw(G, pos, with_labels=False, node_size = 800, edge_color = 'b', alpha = 0.4)
nx.draw_networkx_labels(G, pos, labels, fontsize=14)
import networkx as nx
G = nx.newman_watts_strogatz_graph(20, 3, 0.5, seed=None)
#G = nx.barabasi_albert_graph(100, 50)
#G = nx.gnm_random_graph(200, 150)
path = nx.shortest_path(G, 1, 15)
#print len(path), path
labels = {n:str(n) for n in path}
rest = set(G.nodes())-set(path)
d = nx.degree(G)
#size=[d[n] * 50 for n in d if n in path]
size=[d[n] * 10 for n in d]
%matplotlib inline
import matplotlib.pyplot as plt
#pos = nx.spring_layout(G)
#nx.draw(G, pos, node_size = 100, alpha = 0.4)
pos = nx.spring_layout(G, k = 1/float(3))
#pos = nx.spectral_layout(G, scale = 0.2)
nx.draw_networkx_nodes(G, pos, nodelist = path, node_color="b", node_size = size)
nx.draw_networkx_nodes(G, pos, nodelist = list(rest), node_color="r", node_size = 5, alpha = 0.4)
nx.draw_networkx_edges(G, pos, alpha = 0.2)
nx.draw_networkx_labels(G, pos, labels)
plt.show()
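# --- Illustrative addition (not part of the original notebook) ---
# The description above also mentions centralities, which the code stops short of;
# as a hedged sketch, two standard networkx centrality measures are computed on
# the same graph G used for the shortest-path example.
deg_cent = nx.degree_centrality(G)        # degree normalised by (n - 1)
btw_cent = nx.betweenness_centrality(G)   # fraction of shortest paths passing through each node
top5 = sorted(btw_cent.items(), key=lambda kv: kv[1], reverse=True)[:5]
print top5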
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load .sofa file
Step2: SOFA content
Step3: Save as npy file
|
<ASSISTANT_TASK:>
Python Code:
import os
from netCDF4 import Dataset
import numpy as np
import sys
sys.path.insert(0, '../')
from sound_field_analysis import io
sofa_file_name = 'sofa/mit_kemar_large_pinna.sofa'
sofa_file = Dataset(sofa_file_name, 'r', format='NETCDF4')
print('sofa_file: ' + str(sofa_file))
print('SourcePosition: ' + str(sofa_file['SourcePosition']))
print('Data.IR: ' + str(sofa_file['Data.IR']))
print('Ear distance: ' + str(sofa_file['ReceiverPosition'][1, 1] - sofa_file['ReceiverPosition'][0, 1]) + ' m')
# extract IRs
HRIRs_l = np.squeeze(sofa_file['Data.IR'][:, 0, :])
HRIRs_r = np.squeeze(sofa_file['Data.IR'][:, 1, :])
Az, El, R = np.squeeze(np.hsplit(sofa_file['SourcePosition'][:], 3))
fs = sofa_file['Data.SamplingRate'][:][0]
sofa_file.close()
azimuth = Az / 180 * np.pi
colatitude = np.pi / 2 - El / 180 * np.pi
hrir_full_l = io.ArraySignal(io.TimeSignal(HRIRs_l, fs), io.SphericalGrid(azimuth, colatitude, R))
hrir_full_r = io.ArraySignal(io.TimeSignal(HRIRs_r, fs), io.SphericalGrid(azimuth, colatitude, R))
np.save(os.path.splitext(os.path.basename(sofa_file_name))[0] + '_L', hrir_full_l)
np.save(os.path.splitext(os.path.basename(sofa_file_name))[0] + '_R', hrir_full_r)
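# --- Illustrative addition (not part of the original notebook) ---
# Hedged sanity check: reload one of the files just written. np.save pickles the
# ArraySignal object into a 0-d object array, so allow_pickle=True is assumed to be
# available and .item() is used to unwrap the object again.
reloaded_l = np.load(os.path.splitext(os.path.basename(sofa_file_name))[0] + '_L.npy', allow_pickle=True)
hrir_l_check = reloaded_l.item()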
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Pruning comprehensive guide
Step2: Define the model
Step3: Prune some layers (sequential and functional)
Step4: In this example, the layer type was used to decide which layers to prune, but the easiest way to prune a specific layer is to set its name attribute and look for that name in the clone_function.
Step5: More readable, but potentially lower model accuracy
Step6: Sequential example
Step7: Prune a custom Keras layer, or modify parts of a layer to be pruned.
Step8: Train the model
Step9: For non-Colab users, you can see the results of a previous run of this code block on TensorBoard.dev.
Step10: For non-Colab users, you can see the results of a previous run of this code block on TensorBoard.dev.
Step11: The code above applies in general. The code below is needed only for the HDF5 model format (not HDF5 weights or other formats).
Step12: Deploy the pruned model
Step13: Hardware-specific optimizations
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
! pip install -q tensorflow-model-optimization
import tensorflow as tf
import numpy as np
import tensorflow_model_optimization as tfmot
%load_ext tensorboard
import tempfile
input_shape = [20]
x_train = np.random.randn(1, 20).astype(np.float32)
y_train = tf.keras.utils.to_categorical(np.random.randn(1), num_classes=20)
def setup_model():
model = tf.keras.Sequential([
tf.keras.layers.Dense(20, input_shape=input_shape),
tf.keras.layers.Flatten()
])
return model
def setup_pretrained_weights():
model = setup_model()
model.compile(
loss=tf.keras.losses.categorical_crossentropy,
optimizer='adam',
metrics=['accuracy']
)
model.fit(x_train, y_train)
_, pretrained_weights = tempfile.mkstemp('.tf')
model.save_weights(pretrained_weights)
return pretrained_weights
def get_gzipped_model_size(model):
# Returns size of gzipped model, in bytes.
import os
import zipfile
_, keras_file = tempfile.mkstemp('.h5')
model.save(keras_file, include_optimizer=False)
_, zipped_file = tempfile.mkstemp('.zip')
with zipfile.ZipFile(zipped_file, 'w', compression=zipfile.ZIP_DEFLATED) as f:
f.write(keras_file)
return os.path.getsize(zipped_file)
setup_model()
pretrained_weights = setup_pretrained_weights()
base_model = setup_model()
base_model.load_weights(pretrained_weights) # optional but recommended.
model_for_pruning = tfmot.sparsity.keras.prune_low_magnitude(base_model)
model_for_pruning.summary()
# Create a base model
base_model = setup_model()
base_model.load_weights(pretrained_weights) # optional but recommended for model accuracy
# Helper function uses `prune_low_magnitude` to make only the
# Dense layers train with pruning.
def apply_pruning_to_dense(layer):
if isinstance(layer, tf.keras.layers.Dense):
return tfmot.sparsity.keras.prune_low_magnitude(layer)
return layer
# Use `tf.keras.models.clone_model` to apply `apply_pruning_to_dense`
# to the layers of the model.
model_for_pruning = tf.keras.models.clone_model(
base_model,
clone_function=apply_pruning_to_dense,
)
model_for_pruning.summary()
print(base_model.layers[0].name)
# Use `prune_low_magnitude` to make the `Dense` layer train with pruning.
i = tf.keras.Input(shape=(20,))
x = tfmot.sparsity.keras.prune_low_magnitude(tf.keras.layers.Dense(10))(i)
o = tf.keras.layers.Flatten()(x)
model_for_pruning = tf.keras.Model(inputs=i, outputs=o)
model_for_pruning.summary()
# Use `prune_low_magnitude` to make the `Dense` layer train with pruning.
model_for_pruning = tf.keras.Sequential([
tfmot.sparsity.keras.prune_low_magnitude(tf.keras.layers.Dense(20, input_shape=input_shape)),
tf.keras.layers.Flatten()
])
model_for_pruning.summary()
class MyDenseLayer(tf.keras.layers.Dense, tfmot.sparsity.keras.PrunableLayer):
def get_prunable_weights(self):
# Prune bias also, though that usually harms model accuracy too much.
return [self.kernel, self.bias]
# Use `prune_low_magnitude` to make the `MyDenseLayer` layer train with pruning.
model_for_pruning = tf.keras.Sequential([
tfmot.sparsity.keras.prune_low_magnitude(MyDenseLayer(20, input_shape=input_shape)),
tf.keras.layers.Flatten()
])
model_for_pruning.summary()
# Define the model.
base_model = setup_model()
base_model.load_weights(pretrained_weights) # optional but recommended for model accuracy
model_for_pruning = tfmot.sparsity.keras.prune_low_magnitude(base_model)
log_dir = tempfile.mkdtemp()
callbacks = [
tfmot.sparsity.keras.UpdatePruningStep(),
# Log sparsity and other metrics in Tensorboard.
tfmot.sparsity.keras.PruningSummaries(log_dir=log_dir)
]
model_for_pruning.compile(
loss=tf.keras.losses.categorical_crossentropy,
optimizer='adam',
metrics=['accuracy']
)
model_for_pruning.fit(
x_train,
y_train,
callbacks=callbacks,
epochs=2,
)
#docs_infra: no_execute
%tensorboard --logdir={log_dir}
# Define the model.
base_model = setup_model()
base_model.load_weights(pretrained_weights) # optional but recommended for model accuracy
model_for_pruning = tfmot.sparsity.keras.prune_low_magnitude(base_model)
# Boilerplate
loss = tf.keras.losses.categorical_crossentropy
optimizer = tf.keras.optimizers.Adam()
log_dir = tempfile.mkdtemp()
unused_arg = -1
epochs = 2
batches = 1 # example is hardcoded so that the number of batches cannot change.
# Non-boilerplate.
model_for_pruning.optimizer = optimizer
step_callback = tfmot.sparsity.keras.UpdatePruningStep()
step_callback.set_model(model_for_pruning)
log_callback = tfmot.sparsity.keras.PruningSummaries(log_dir=log_dir) # Log sparsity and other metrics in Tensorboard.
log_callback.set_model(model_for_pruning)
step_callback.on_train_begin() # run pruning callback
for _ in range(epochs):
log_callback.on_epoch_begin(epoch=unused_arg) # run pruning callback
for _ in range(batches):
step_callback.on_train_batch_begin(batch=unused_arg) # run pruning callback
with tf.GradientTape() as tape:
logits = model_for_pruning(x_train, training=True)
loss_value = loss(y_train, logits)
grads = tape.gradient(loss_value, model_for_pruning.trainable_variables)
optimizer.apply_gradients(zip(grads, model_for_pruning.trainable_variables))
step_callback.on_epoch_end(batch=unused_arg) # run pruning callback
#docs_infra: no_execute
%tensorboard --logdir={log_dir}
# Define the model.
base_model = setup_model()
base_model.load_weights(pretrained_weights) # optional but recommended for model accuracy
model_for_pruning = tfmot.sparsity.keras.prune_low_magnitude(base_model)
_, keras_model_file = tempfile.mkstemp('.h5')
# Checkpoint: saving the optimizer is necessary (include_optimizer=True is the default).
model_for_pruning.save(keras_model_file, include_optimizer=True)
# Deserialize model.
with tfmot.sparsity.keras.prune_scope():
loaded_model = tf.keras.models.load_model(keras_model_file)
loaded_model.summary()
# Define the model.
base_model = setup_model()
base_model.load_weights(pretrained_weights) # optional but recommended for model accuracy
model_for_pruning = tfmot.sparsity.keras.prune_low_magnitude(base_model)
# Typically you train the model here.
model_for_export = tfmot.sparsity.keras.strip_pruning(model_for_pruning)
print("final model")
model_for_export.summary()
print("\n")
print("Size of gzipped pruned model without stripping: %.2f bytes" % (get_gzipped_model_size(model_for_pruning)))
print("Size of gzipped pruned model with stripping: %.2f bytes" % (get_gzipped_model_size(model_for_export)))
base_model = setup_model()
# For using intrinsics on a CPU with 128-bit registers, together with 8-bit
# quantized weights, a 1x16 block size is nice because the block perfectly
# fits into the register.
pruning_params = {'block_size': [1, 16]}
model_for_pruning = tfmot.sparsity.keras.prune_low_magnitude(base_model, **pruning_params)
model_for_pruning.summary()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Optimising the bandwidth
Step2: Grid based algorithm
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import open_cp.scripted
import open_cp.scripted.analysis as analysis
loaded = open_cp.scripted.Loader("retro_preds.pic.xz")
times = [x[1] for x in loaded]
preds = [x[2] for x in loaded]
fig, axes = plt.subplots(ncols=2, figsize=(16,7))
for ax, i in zip(axes, [0, 60]):
analysis.plot_prediction(loaded, preds[i], ax)
ax.set_title(times[i])
betas = analysis.hit_counts_to_beta("retro.csv")
fig, ax = plt.subplots(figsize=(12,8))
analysis.plot_betas(betas, ax)
fig, ax = plt.subplots(figsize=(12,8))
analysis.plot_betas(betas, ax, range(1,21))
all_betas = analysis.hit_counts_to_beta("retro_opt.csv")
all_betas1 = analysis.hit_counts_to_beta("retro_opt1.csv")
sample = [ "RetroHotspotCtsProvider(Weight=Quartic(bandwidth={}))".format(x)
for x in [50, 100, 150, 200, 250, 300] ]
betas = {k:all_betas[k] for k in sample}
betas1 = {k:all_betas1[k] for k in sample}
fig, axes = plt.subplots(ncols=2, figsize=(16,7))
analysis.plot_betas(betas, axes[0], range(1,21))
analysis.plot_betas(betas, axes[1], range(1,21), plot_sds=False)
#for ax in axes:
# ax.legend([])
fig, axes = plt.subplots(ncols=2, figsize=(16,7))
analysis.plot_betas(betas1, axes[0], range(1,21))
analysis.plot_betas(betas1, axes[1], range(1,21), plot_sds=False)
#for ax in axes:
# ax.legend([])
all_betas = analysis.hit_counts_to_beta("retro_grid_opt.csv")
sample = [ "RetroHotspotProvider(Weight=Quartic(bandwidth={}))".format(x)
for x in [50, 100, 150, 200, 250, 300] ]
betas = {k:all_betas[k] for k in sample}
fig, axes = plt.subplots(ncols=2, figsize=(16,7))
analysis.plot_betas(betas, axes[0], range(1,21))
analysis.plot_betas(betas, axes[1], range(1,21), plot_sds=False)
#for ax in axes:
# ax.legend([])
import importlib
importlib.reload(analysis)
fig, ax = plt.subplots(figsize=(10,6))
normed = analysis.plot_betas_means_against_max(all_betas, ax, range(1,21))
#ax.legend()
None
{key for key in normed if all(x>=0.95 for x in normed[key])}
{key : min(normed[key]) for key in normed}
import re
import numpy as np
def label(x):
m = re.search("width=(\d+)", x)
return int(m.group(1))
avg = {label(key) : np.mean(normed[key]) for key in normed}
fig, ax = plt.subplots(figsize=(8,6))
x = np.sort(list(avg))
y = [avg[t] for t in x]
ax.plot(x, y)
avg
r = normed
highlight = {140, 150}
fig, ax = plt.subplots(figsize=(10,6))
x = list(range(1, 21))
for k in r:
if label(k) not in highlight:
ax.plot(x, r[k], color="black", label="_")
else:
ax.plot(x, r[k], label=label(k))
ax.legend()
None
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Review
Step2: We need to enter the descriptions for each entry in our dictionary manually...
Step3: Convert Time Columns
Step4: What's the date range of the data?
Step5: Number of crimes reported each year
Step6: Looks like most of the data is actually from 2009-2017! Let's throw the rest away...
Step7: Crime Over Time
Step8: This gives us a timeline for different types of crime reported in Atlanta.
Step9: Average number of crimes per month, for each year
Step10: Explanation of boxplot
Step11: More on pandas datetime objects
Step12: Can you pick out the seasonal variation in number of crimes per year?
Step13: Correlation In Number of Crimes Over Time
Step14: Visualizing the correlation...
Step15: Crimes By Place
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
# load data set
df = pd.read_csv('/home/data/APD/COBRA-YTD-multiyear.csv.gz')
print "Shape of table: ", df.shape
dataDict = pd.DataFrame({'DataType': df.dtypes.values, 'Description': '', }, index=df.columns.values)
dataDict.loc['MI_PRINX'].Description = '' # type: int64
dataDict.loc['offense_id'].Description = 'Unique ID in the format YYDDDNNNN with the year YY, the day of the year DDD and a counter NNNN' # type: int64
dataDict.loc['rpt_date'].Description = 'Date the crime was reported' # type: object
dataDict.loc['occur_date'].Description = 'Estimated date when the crime occured' # type: object
dataDict.loc['occur_time'].Description = 'Estimated time when the crime occured' # type: object
dataDict.loc['poss_date'].Description = '' # type: object
dataDict.loc['poss_time'].Description = '' # type: object
dataDict.loc['beat'].Description = '' # type: int64
dataDict.loc['apt_office_prefix'].Description = '' # type: object
dataDict.loc['apt_office_num'].Description = '' # type: object
dataDict.loc['location'].Description = '' # type: object
dataDict.loc['MinOfucr'].Description = '' # type: int64
dataDict.loc['MinOfibr_code'].Description = '' # type: object
dataDict.loc['dispo_code'].Description = '' # type: object
dataDict.loc['MaxOfnum_victims'].Description = '' # type: float64
dataDict.loc['Shift'].Description = 'Zones have 8 or 10 hour shifts' # type: object
dataDict.loc['Avg Day'].Description = '' # type: object
dataDict.loc['loc_type'].Description = '' # type: float64
dataDict.loc['UC2 Literal'].Description = '' # type: object
dataDict.loc['neighborhood'].Description = '' # type: object
dataDict.loc['npu'].Description = '' # type: object
dataDict.loc['x'].Description = '' # type: float64
dataDict.loc['y'].Description = '' # type: float64
dataDict.to_csv("COBRA_Data_Dictionary.csv")
dataDict
# function currying
def fixdatetime(fld):
def _fix(s):
date_col = '%s_date' % fld # "rpt_date"
time_col = '%s_time' % fld # "rpt_time"
if time_col in s.index:
return str(s[date_col])+' '+str(s[time_col])
else:
return str(s[date_col])+' 00:00:00'
return _fix
for col in ['rpt', 'occur', 'poss']:
datser = df.apply(fixdatetime(col), axis=1)
df['%s_dt'%col] = pd.to_datetime(datser, format="%m/%d/%Y %H:%M:%S", errors='coerce')
df[["MI_PRINX", "offense_id", "beat", "UC2 Literal", "neighborhood", "rpt_dt", "occur_dt", "poss_dt"]].head()
print df.occur_dt.min(), '---', df.occur_dt.max()
# resample is like "groupby" for time
df.resample('A-DEC', closed='right', on='occur_dt').offense_id.count()
# df['Year'] = df.occur_dt.map(lambda d: d.year)
# df2 = df[(df.Year>=2010) & (df.Year<=2017)]
# df2.shape, df.shape
df = df[df.occur_dt>='01/01/2009']
df[["occur_dt", "UC2 Literal", "offense_id"]].head()
# Pivoting the table:
# index = column that the new table will be indexed by
# columns = column whose unique values will form the new column names
# values = values used to fill the table (default = all columns other than those given in index and columns)
df_ct = df.pivot_table(index="occur_dt", columns="UC2 Literal", values="offense_id")
df_ct.head()
df_ct = df_ct.resample("1M", closed="right").count()
df_ct.head()
ax = df_ct.plot.box(figsize=(13,4), rot=45)
plt.ylabel("Total Reported Crimes by Month")
## In-class exercise:
# Make a boxplot of the number of reported crimes, aggregating by week.
df_wk = df.pivot_table(index="occur_dt", columns="UC2 Literal", values="offense_id")
df_wk = df_wk.resample("W-SUN", closed='right').count()
df_wk.plot.box(figsize=(13,4), rot=45)
plt.ylabel("Total Reported Crimes by Week")
ax = df_ct.plot(figsize=(10,5), style='-o')
ax.get_legend().set_bbox_to_anchor((1, 1))
plt.ylabel("Total Reported Crimes by Month")
ax.vlines(pd.date_range("12/31/2009", "12/31/2017", freq="A-JAN"), 0, 900)
ann_cr = df_ct.resample("A-DEC", closed="right").sum()
ax = ann_cr[ann_cr.index<"01/01/2017"].plot(figsize=(10,5), style='-o')
ax.get_legend().set_bbox_to_anchor((1, 1))
crime_corr = df_ct.corr()
crime_corr
plt.matshow(crime_corr);
plt.yticks(range(len(crime_corr.columns)), crime_corr.columns);
plt.xticks(range(len(crime_corr.columns)), crime_corr.columns, rotation=90);
plt.colorbar();
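# --- Illustrative addition (not part of the original notebook) ---
# Hedged follow-up: mask the diagonal and report the most strongly correlated
# pair of crime types from the matrix above.
cc = crime_corr.where(~np.eye(len(crime_corr), dtype=bool))
print cc.stack().idxmax(), cc.stack().max()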
df['Zone'] = df['beat']//100
df['Year'] = df.occur_dt.apply(lambda x: x.year)
df_cp = df.pivot_table(index="Zone",
columns="UC2 Literal",
values="offense_id",
aggfunc=lambda x: np.count_nonzero(~np.isnan(x)))
df_cp
df_cp = df.pivot_table(index=["Year","Zone"],
columns="UC2 Literal",
values="offense_id",
aggfunc=lambda x: np.count_nonzero(~np.isnan(x)))
df_cp
df_cp = df_cp[np.logical_and([x >= 1 for x in zip(*df_cp.index.values)[1]],
[x <= 6 for x in zip(*df_cp.index.values)[1]])].fillna(0)
df_cp.head(20)
# A MUCH PRETTIER way to do the same thing:
df_cp = df_cp.loc[(slice(None), slice(1,6)),:].fillna(0)
df_cp.head(20)
## slicing on a multi-index
# get data for 2009-2010, for zones 1-3
df_cp.loc[(slice(2009,2010), slice(1,5,2)),:]
## In-class exercise:
# Show all robbery data for 2011, 2013, and 2015, for zones 4-6
df_cp.loc[(slice(2011,2015,2), slice(4,6)), "ROBBERY-COMMERCIAL":"ROBBERY-RESIDENCE"]
df_cp.filter(like='ROBBERY').loc[(slice(2011,2015,2), slice(4,6)), :]
## In-class exercise:
# Count the total number of crimes in each zone
df_cp.groupby(level=1).sum()
help(df_cp.plot)
## In-class exercise:
# Plot the number of pedestrian robberies in each zone in 2016
df_cp.loc[(slice(2016,2016), slice(None)), "ROBBERY-PEDESTRIAN"].plot.bar()
plt.xticks(range(6), range(1,7), rotation=0);
plt.xlabel("Zone");
plt.ylabel("Ped. Robberies in 2016");
## In-class exercise:
# What is the average annual number of crimes in each zone (for each type of crime)?
# Hint: use "groupby" with a "level" argument.
df_cp.groupby(level=1).mean()
##### Shapefile stuff ########
import sys
try:
from osgeo import ogr, osr, gdal
except:
sys.exit('ERROR: cannot find GDAL/OGR modules')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Question 1 - Watts and Strogatz small world network model
Step2: Question 2 - Barabasi Albert Model
Step3: Tweaking the probability
Step4: Question 3
Step5: Random deletions in exp and scale-free graphs
Step6: Question 4
Step7: In the Yeast Protein-Protein Interactome visualization generated from Cytoscape, many proteins were seen to interact with a large number of other proteins. Such proteins are vital to the organism's survival and are therefore considered important: a higher degree means that the protein is more important.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import networkx as nx
import matplotlib.pyplot as plt
from IPython.display import Image
# Regular graph initialization
nodes = 1000
k = 10
adj = np.zeros([nodes, nodes])
for i in range(0, nodes):
for j in range(1, k/2 + 1):
adj[i, (i+j) % nodes] = 1
adj[(i+j) % nodes, i] = 1
edgelist = np.zeros([nodes*k/2, 2])
iterator = 0
for i in range(0, nodes-1):
for j in range(i+1, nodes):
if adj[i, j] == 1:
edgelist[iterator, 0] = i
edgelist[iterator, 1] = j
iterator += 1
G = nx.Graph()
G.add_edges_from(edgelist)
# print adj
# print sum(adj)
# G = nx.random_regular_graph(k, nodes)
G_copy = G
# plt.clf()
# pos=nx.spring_layout(G) # positions for all nodes
# nx.draw(G, pos)
# #nx.draw_networkx_labels(G, pos)
# plt.title('Initial Regular Graph')
# plt.show()
# plt.clf()
# pos=nx.spring_layout(G) # positions for all nodes
# nx.draw(G, pos)
# nx.draw_networkx_labels(G, pos)
# plt.title('Initial Regular Graph')
# plt.show()
prob = list()
prob.append(0.0001)
while prob[-1] < 1:
prob.append(prob[-1]*2.303)
prob = prob[0:-1]
L = np.zeros([1, len(prob)+1])[0]
C = np.zeros([1, len(prob)+1])[0]
G = G_copy
L[0] = nx.average_shortest_path_length(G)
C[0] = nx.average_clustering(G)
count = 5
L_temp = np.zeros([1, count])[0]
C_temp = np.zeros([1, count])[0]
iterator = 1
for p in prob:
print p
for c in range(count):
G = G_copy
for n in G.nodes():
for e in G.edges(n):
if np.random.rand() < p:
## perform rewiring
neighbours = np.array(G.edges(n))[:,1]
                    new_node = np.random.randint(0, high=nodes)  # existing node labels run 0 .. nodes-1
                    while new_node in neighbours:
                        new_node = np.random.randint(0, high=nodes)
G.remove_edge(e[0], e[1])
G.add_edge(e[0], new_node)
if not nx.is_connected(G):
G.remove_edge(e[0], new_node)
G.add_edge(e[0], e[1])
L_temp[c] = nx.average_shortest_path_length(G)
C_temp[c] = nx.average_clustering(G)
L[iterator] = np.average(L_temp)
C[iterator] = np.average(C_temp)
iterator += 1
## TODO: Capture graph disconnected condition on rewiring
# L /= L[0]
# C /= C[0]
ln = L[0:12]
ln = ln/L[0]
cn = C[0:12]
cn = cn/C[0]
plt.semilogx(prob, ln, 'bo', prob, cn, 'rx')
plt.show()
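# --- Illustrative addition (not part of the original solution) ---
# Hedged cross-check: networkx ships a reference generator for the same model;
# connected_watts_strogatz_graph retries until the rewired graph stays connected,
# so C and L for a single rewiring probability can be compared against the loop above.
G_ref = nx.connected_watts_strogatz_graph(nodes, k, 0.1)
print nx.average_clustering(G_ref), nx.average_shortest_path_length(G_ref)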
max_nodes = 500
G = nx.Graph([[0, 1]])
for x in range(2, max_nodes):
# print x
p = np.float32(np.asarray(G.degree().values()))
p = p/sum(G.degree().values())
t = True
while t:
try:
n = np.random.rand()
m = max(filter(lambda x: x< n, p))
t = False
except ValueError:
continue
# p = p*200
G.add_node(x)
try:
G.add_edge(np.where(max(filter(lambda x: x < n, p)) == p)[0][0], x)
except IndexError:
pass
except ValueError:
pass
try:
G.add_edge(np.where(max(filter(lambda x: x < m, p)) == p)[0][0], x)
except IndexError:
pass
except ValueError:
pass
print "Average clustering coefficient"
print nx.average_clustering(G)
print "Characteristic Path length"
print nx.average_shortest_path_length(G)
plt.semilogx(G.degree().values())
plt.title('Degree Distribution')
plt.show()
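# --- Illustrative addition (not part of the original solution) ---
# Hedged cross-check with the built-in Barabasi-Albert generator (m=2 edges per
# new node), as a rough comparison point for the hand-rolled preferential
# attachment loop above.
G_ba = nx.barabasi_albert_graph(max_nodes, 2)
print nx.average_clustering(G_ba), nx.average_shortest_path_length(G_ba)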
max_nodes = 500
G = nx.Graph([[0, 1]])
for x in range(2, max_nodes):
# print x
p = np.float32(np.asarray(G.degree().values()))
p = np.square(p)
p = p/sum(G.degree().values())
t = True
while t:
try:
n = np.random.rand()
m = max(filter(lambda x: x< n, p))
t = False
except ValueError:
continue
# p = p*200
G.add_node(x)
try:
G.add_edge(np.where(max(filter(lambda x: x < n, p)) == p)[0][0], x)
except IndexError:
pass
except ValueError:
pass
try:
G.add_edge(np.where(max(filter(lambda x: x < m, p)) == p)[0][0], x)
except IndexError:
pass
except ValueError:
pass
print "Average clustering coefficient"
print nx.average_clustering(G)
print "Characteristic Path length"
print nx.average_shortest_path_length(G)
plt.semilogx(G.degree().values())
plt.title('Degree Distribution')
plt.show()
nodes = 100
degree = 5
G = nx.random_regular_graph(degree, nodes)
X = np.linspace(0, 0.8, 101)
L = np.zeros_like(X)
S = np.zeros_like(X)
nodes_to_del = 0.01*G.number_of_nodes()
for x in X:
i = np.where(X == x)[0][0]
# print i
# G_copy = G
if x != 0:
G.remove_nodes_from(np.random.randint(G.number_of_nodes(), size=nodes_to_del))
try:
L[i] = nx.average_shortest_path_length(G)
except nx.NetworkXError:
L[i] = L[i-1]
S[i] = nx.number_of_nodes(max(nx.connected_component_subgraphs(G)))
# print G.number_of_nodes()
plt.hist(G.degree().values())
plt.title('Degree Sequence')
plt.show()
plt.plot(X, S/nodes)
plt.title('Size of the giant cluster')
plt.show()
plt.plot(X[1:8], L[1:8])
plt.title('Characteristic Path Length')
plt.show()
max_nodes = 100
G = nx.Graph([[0, 1]])
for x in range(2, max_nodes):
p = np.float32(np.asarray(G.degree().values()))
p = p/sum(G.degree().values())
t = True
while t:
try:
n = np.random.rand()
m = max(filter(lambda x: x< n, p))
t = False
except ValueError:
continue
G.add_node(x)
try:
G.add_edge(np.where(max(filter(lambda x: x < n, p)) == p)[0][0], x)
except IndexError:
pass
except ValueError:
pass
try:
G.add_edge(np.where(max(filter(lambda x: x < m, p)) == p)[0][0], x)
except IndexError:
pass
except ValueError:
pass
plt.hist(G.degree().values())
plt.title('Degree Sequence')
plt.show()
X = np.linspace(0, 0.8, 101)
## scale free
L_sf = np.zeros_like(X)
S_sf = np.zeros_like(X)
nodes_to_del = 0.01*G.number_of_nodes()
for x in X:
i = np.where(X == x)[0][0]
# print i
# G_copy = G
if x != 0:
G.remove_nodes_from(np.random.randint(G.number_of_nodes(), size=nodes_to_del))
try:
L_sf[i] = nx.average_shortest_path_length(G)
except nx.NetworkXError:
L_sf[i] = L_sf[i-1]
S_sf[i] = nx.number_of_nodes(max(nx.connected_component_subgraphs(G)))
# print G.number_of_nodes()
plt.plot(X, S_sf/max_nodes)
plt.title('Size of the giant cluster')
plt.show()
plt.plot(X[1:8], L_sf[1:8])
plt.title('Characteristic Path Length')
plt.show()
G = nx.random_regular_graph(degree, nodes)
nx.number_of_nodes(min(nx.connected_component_subgraphs(G)))
# Load YEAST PPI dataset
fh=open("Y2H_union.txt", 'rb')
G_yeast = nx.read_edgelist(fh)
plt.hist(G_yeast.degree().values(), bins=100)
plt.title('YEAST PPI Degree Distribution')
plt.show()
plt.hist(nx.clustering(G_yeast).values(), bins=100)
plt.title('YEAST PPI Clustering Coefficient Distribution')
plt.show()
# Cytoscape generated map
Image(filename='Y2H_union.jpeg')
# Random deletions in YPI
fh=open("Y2H_union.txt", 'rb')
G_yeast = nx.read_edgelist(fh)
X = np.linspace(0, 0.8, 101)
S_avg = np.zeros_like(X)
S = np.zeros_like(X)
nodes_to_del = int(0.01*G_yeast.number_of_nodes())
for x in X:
i = np.where(X == x)[0][0]
nodelist = G_yeast.nodes()
# print i
# G_copy = G
if x != 0:
l = np.random.randint(G_yeast.number_of_nodes(), size=nodes_to_del)
nodelist = G_yeast.nodes()
for f in range(len(l)):
try:
G_yeast.remove_node(nodelist[l[f]])
except nx.NetworkXError:
pass
# print x
# print G_yeast.number_of_nodes()
ass = nx.connected_components(G_yeast)
sss = sorted(ass, key = len, reverse=True)
S[i] = len(sss[0])
sizes = np.zeros(len(sss)-1)
for k in range(len(sizes)):
sizes[k] = len(sss[k+1])
S_avg[i] = np.mean(sizes)
# print G.number_of_nodes()
plt.plot(X, S/S[0])
plt.title('Size of the largest cluster (Failulres)')
plt.show()
plt.plot(X, S_avg)
plt.title('Average size of fragmented components (Failure)')
plt.show()
# Targeted deletions in YPI
fh=open("Y2H_union.txt", 'rb')
G_yeast = nx.read_edgelist(fh)
X = np.linspace(0, 0.8, 101)
S_avg = np.zeros_like(X)
S = np.zeros_like(X)
nodes_to_del = int(0.01*G_yeast.number_of_nodes())
for x in X:
i = np.where(X == x)[0][0]
nodelist = G_yeast.nodes()
# print i
# G_copy = G
if x != 0:
        for f in range(nodes_to_del):  # remove 1% of the original nodes per step (len(l) relied on a leftover variable from the previous cell)
nodelist = G_yeast.nodes()
try:
# max(G_yeast.degree().values())
G_yeast.remove_node(nodelist[G_yeast.degree().values().index(max(G_yeast.degree().values()))])
# print "deleted"
except nx.NetworkXError:
# print "caught"
pass
# print x
# print G_yeast.number_of_nodes()
ass = nx.connected_components(G_yeast)
sss = sorted(ass, key = len, reverse=True)
S[i] = len(sss[0])
sizes = np.zeros(len(sss)-1)
for k in range(len(sizes)):
sizes[k] = len(sss[k+1])
S_avg[i] = np.mean(sizes)
# print G.number_of_nodes()
plt.plot(X, S/S[0])
plt.title('Size of the largest cluster (Attacks)')
plt.show()
plt.plot(X, S_avg)
plt.title('Average size of fragmented components (Attacks)')
plt.show()
fh=open("Y2H_union.txt", 'rb')
G_yeast = nx.read_edgelist(fh)
G_yeast.degree().values().index(max(G_yeast.degree().values()))
?np.where
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read 3D semantic segmentation labels from Frame proto
Step5: Visualize Segmentation Labels in Range Images
Step7: Point Cloud Conversion and Visualization
Step8: Show colored point cloud
Step13: Create a dummy submission file for the validation set
Step14: Create a dummy submission file for the testing set
|
<ASSISTANT_TASK:>
Python Code:
# Data location. Please edit.
# A tfrecord containing tf.Example protos as downloaded from the Waymo dataset
# webpage.
# Replace this path with your own tfrecords.
FILENAME = '/content/waymo-od/tutorial/.../tfexample.tfrecord'
import os
import matplotlib.pyplot as plt
import tensorflow.compat.v1 as tf
import numpy as np
tf.enable_eager_execution()
from waymo_open_dataset.utils import frame_utils
from waymo_open_dataset import dataset_pb2 as open_dataset
from waymo_open_dataset.protos import segmentation_metrics_pb2
from waymo_open_dataset.protos import segmentation_submission_pb2
dataset = tf.data.TFRecordDataset(FILENAME, compression_type='')
for data in dataset:
frame = open_dataset.Frame()
frame.ParseFromString(bytearray(data.numpy()))
if frame.lasers[0].ri_return1.segmentation_label_compressed:
break
print(frame.context.name)
print(frame.context.stats)
(range_images, camera_projections, segmentation_labels,
range_image_top_pose) = frame_utils.parse_range_image_and_camera_projection(
frame)
print(segmentation_labels[open_dataset.LaserName.TOP][0].shape.dims)
plt.figure(figsize=(64, 20))
def plot_range_image_helper(data, name, layout, vmin = 0, vmax=1, cmap='gray'):
  """Plots range image.

  Args:
    data: range image data
    name: the image title
    layout: plt layout
    vmin: minimum value of the passed data
    vmax: maximum value of the passed data
    cmap: color map
  """
plt.subplot(*layout)
plt.imshow(data, cmap=cmap, vmin=vmin, vmax=vmax)
plt.title(name)
plt.grid(False)
plt.axis('off')
def get_semseg_label_image(laser_name, return_index):
  """Returns semseg label image given a laser name and its return index."""
return segmentation_labels[laser_name][return_index]
def show_semseg_label_image(semseg_label_image, layout_index_start = 1):
  """Shows range image.

  Args:
    semseg_label_image: the semseg label data of type MatrixInt32.
    layout_index_start: layout offset
  """
semseg_label_image_tensor = tf.convert_to_tensor(semseg_label_image.data)
semseg_label_image_tensor = tf.reshape(
semseg_label_image_tensor, semseg_label_image.shape.dims)
instance_id_image = semseg_label_image_tensor[...,0]
semantic_class_image = semseg_label_image_tensor[...,1]
plot_range_image_helper(instance_id_image.numpy(), 'instance id',
[8, 1, layout_index_start], vmin=-1, vmax=200, cmap='Paired')
plot_range_image_helper(semantic_class_image.numpy(), 'semantic class',
[8, 1, layout_index_start + 1], vmin=0, vmax=22, cmap='tab20')
frame.lasers.sort(key=lambda laser: laser.name)
show_semseg_label_image(get_semseg_label_image(open_dataset.LaserName.TOP, 0), 1)
show_semseg_label_image(get_semseg_label_image(open_dataset.LaserName.TOP, 1), 3)
def convert_range_image_to_point_cloud_labels(frame,
range_images,
segmentation_labels,
ri_index=0):
Convert segmentation labels from range images to point clouds.
Args:
frame: open dataset frame
range_images: A dict of {laser_name, [range_image_first_return,
range_image_second_return]}.
segmentation_labels: A dict of {laser_name, [range_image_first_return,
range_image_second_return]}.
ri_index: 0 for the first return, 1 for the second return.
Returns:
point_labels: {[N, 2]} list of 3D lidar points' segmentation labels. 0 for
points that are not labeled.
calibrations = sorted(frame.context.laser_calibrations, key=lambda c: c.name)
point_labels = []
for c in calibrations:
range_image = range_images[c.name][ri_index]
range_image_tensor = tf.reshape(
tf.convert_to_tensor(range_image.data), range_image.shape.dims)
range_image_mask = range_image_tensor[..., 0] > 0
if c.name in segmentation_labels:
sl = segmentation_labels[c.name][ri_index]
sl_tensor = tf.reshape(tf.convert_to_tensor(sl.data), sl.shape.dims)
sl_points_tensor = tf.gather_nd(sl_tensor, tf.where(range_image_mask))
else:
num_valid_point = tf.math.reduce_sum(tf.cast(range_image_mask, tf.int32))
sl_points_tensor = tf.zeros([num_valid_point, 2], dtype=tf.int32)
point_labels.append(sl_points_tensor.numpy())
return point_labels
points, cp_points = frame_utils.convert_range_image_to_point_cloud(
frame, range_images, camera_projections, range_image_top_pose)
points_ri2, cp_points_ri2 = frame_utils.convert_range_image_to_point_cloud(
frame, range_images, camera_projections, range_image_top_pose, ri_index=1)
point_labels = convert_range_image_to_point_cloud_labels(
frame, range_images, segmentation_labels)
point_labels_ri2 = convert_range_image_to_point_cloud_labels(
frame, range_images, segmentation_labels, ri_index=1)
# 3d points in vehicle frame.
points_all = np.concatenate(points, axis=0)
points_all_ri2 = np.concatenate(points_ri2, axis=0)
# point labels.
point_labels_all = np.concatenate(point_labels, axis=0)
point_labels_all_ri2 = np.concatenate(point_labels_ri2, axis=0)
# camera projection corresponding to each point.
cp_points_all = np.concatenate(cp_points, axis=0)
cp_points_all_ri2 = np.concatenate(cp_points_ri2, axis=0)
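# Hedged sketch (not part of the original tutorial): count points per semantic
# class in the first-return point cloud; assumes column 1 of point_labels_all
# holds the semantic class ids computed above.
semseg_classes, semseg_counts = np.unique(point_labels_all[:, 1], return_counts=True)
print(dict(zip(semseg_classes.tolist(), semseg_counts.tolist())))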
from IPython.display import Image, display
display(Image('/content/waymo-od/tutorial/3d_semseg_points.png'))
import zlib
def compress_array(array: np.ndarray, is_int32: bool = False):
Compress a numpy array to ZLIP compressed serialized MatrixFloat/Int32.
Args:
array: A numpy array.
is_int32: If true, use MatrixInt32, otherwise use MatrixFloat.
Returns:
The compressed bytes.
if is_int32:
m = open_dataset.MatrixInt32()
else:
m = open_dataset.MatrixFloat()
m.shape.dims.extend(list(array.shape))
m.data.extend(array.reshape([-1]).tolist())
return zlib.compress(m.SerializeToString())
def decompress_array(array_compressed: bytes, is_int32: bool = False):
Decompress bytes (of serialized MatrixFloat/Int32) to a numpy array.
Args:
array_compressed: bytes.
is_int32: If true, use MatrixInt32, otherwise use MatrixFloat.
Returns:
The decompressed numpy array.
decompressed = zlib.decompress(array_compressed)
if is_int32:
m = open_dataset.MatrixInt32()
dtype = np.int32
else:
m = open_dataset.MatrixFloat()
dtype = np.float32
m.ParseFromString(decompressed)
return np.array(m.data, dtype=dtype).reshape(m.shape.dims)
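# Hedged usage sketch (not in the original): round-trip a small array through the
# compress_array / decompress_array helpers defined above as a sanity check.
_demo = np.arange(6, dtype=np.int32).reshape(2, 3)
assert np.array_equal(decompress_array(compress_array(_demo, is_int32=True), is_int32=True), _demo)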
TOP_LIDAR_ROW_NUM = 64
TOP_LIDAR_COL_NUM = 2650
def get_range_image_point_indexing(range_images, ri_index=0):
Get the indices of the valid points (of the TOP lidar) in the range image.
The order of the points match those from convert_range_image_to_point_cloud
and convert_range_image_to_point_cloud_labels.
Args:
range_images: A dict of {laser_name, [range_image_first_return,
range_image_second_return]}.
ri_index: 0 for the first return, 1 for the second return.
Returns:
points_indexing_top: (N, 2) col and row indices of the points in the
TOP lidar.
points_indexing_top = None
xgrid, ygrid = np.meshgrid(range(TOP_LIDAR_COL_NUM), range(TOP_LIDAR_ROW_NUM))
col_row_inds_top = np.stack([xgrid, ygrid], axis=-1)
range_image = range_images[open_dataset.LaserName.TOP][ri_index]
range_image_tensor = tf.reshape(
tf.convert_to_tensor(range_image.data), range_image.shape.dims)
range_image_mask = range_image_tensor[..., 0] > 0
points_indexing_top = col_row_inds_top[np.where(range_image_mask)]
return points_indexing_top
def dummy_semseg_for_one_frame(frame, dummy_class=14):
Assign all valid points to a single dummy class.
Args:
frame: An Open Dataset Frame proto.
dummy_class: The class to assign to. Default is 14 (building).
Returns:
segmentation_frame: A SegmentationFrame proto.
(range_images, camera_projections, segmentation_labels,
range_image_top_pose) = frame_utils.parse_range_image_and_camera_projection(
frame)
# Get the col, row indices of the valid points.
points_indexing_top = get_range_image_point_indexing(range_images, ri_index=0)
points_indexing_top_ri2 = get_range_image_point_indexing(
range_images, ri_index=1)
# Assign the dummy class to all valid points (in the range image)
range_image_pred = np.zeros(
(TOP_LIDAR_ROW_NUM, TOP_LIDAR_COL_NUM, 2), dtype=np.int32)
range_image_pred[points_indexing_top[:, 1],
points_indexing_top[:, 0], 1] = dummy_class
range_image_pred_ri2 = np.zeros(
(TOP_LIDAR_ROW_NUM, TOP_LIDAR_COL_NUM, 2), dtype=np.int32)
range_image_pred_ri2[points_indexing_top_ri2[:, 1],
points_indexing_top_ri2[:, 0], 1] = dummy_class
# Construct the SegmentationFrame proto.
segmentation_frame = segmentation_metrics_pb2.SegmentationFrame()
segmentation_frame.context_name = frame.context.name
segmentation_frame.frame_timestamp_micros = frame.timestamp_micros
laser_semseg = open_dataset.Laser()
laser_semseg.name = open_dataset.LaserName.TOP
laser_semseg.ri_return1.segmentation_label_compressed = compress_array(
range_image_pred, is_int32=True)
laser_semseg.ri_return2.segmentation_label_compressed = compress_array(
range_image_pred_ri2, is_int32=True)
segmentation_frame.segmentation_labels.append(laser_semseg)
return segmentation_frame
# Create the dummy pred file for the validation set run segments.
# Replace this path with the real path to the WOD validation set folder.
folder_name = '/content/waymo-od/.../validation/'
filenames = [os.path.join(folder_name, x) for x in os.listdir(
folder_name) if 'tfrecord' in x]
assert(len(filenames) == 202)
segmentation_frame_list = segmentation_metrics_pb2.SegmentationFrameList()
for idx, filename in enumerate(filenames):
if idx % 10 == 0:
print('Processing %d/%d run segments...' % (idx, len(filenames)))
dataset = tf.data.TFRecordDataset(filename, compression_type='')
for data in dataset:
frame = open_dataset.Frame()
frame.ParseFromString(bytearray(data.numpy()))
if frame.lasers[0].ri_return1.segmentation_label_compressed:
segmentation_frame = dummy_semseg_for_one_frame(frame)
segmentation_frame_list.frames.append(segmentation_frame)
print('Total number of frames: ', len(segmentation_frame_list.frames))
# Create the submission file, which can be uploaded to the eval server.
submission = segmentation_submission_pb2.SemanticSegmentationSubmission()
submission.account_name = 'joe@gmail.com'
submission.unique_method_name = 'JoeNet'
submission.affiliation = 'Smith Inc.'
submission.authors.append('Joe Smith')
submission.description = "A dummy method by Joe (val set)."
submission.method_link = 'NA'
submission.sensor_type = 1
submission.number_past_frames_exclude_current = 2
submission.number_future_frames_exclude_current = 0
submission.inference_results.CopyFrom(segmentation_frame_list)
output_filename = '/tmp/wod_semseg_val_set_dummy_pred_submission.bin'
f = open(output_filename, 'wb')
f.write(submission.SerializeToString())
f.close()
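# Hedged check (not in the original): parse the written file back to confirm it is
# a valid SemanticSegmentationSubmission proto; output_filename is the path above.
verify = segmentation_submission_pb2.SemanticSegmentationSubmission()
verify.ParseFromString(open(output_filename, 'rb').read())
print('Frames in submission:', len(verify.inference_results.frames))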
# Create the dummy pred file for the testing set run segments.
# Replace the paths with the real paths to the WOD testing set folders.
folder_name1 = '/content/waymo-od/.../testing/'
folder_name2 = '/content/waymo-od/.../testing_location/'
filenames1 = [os.path.join(folder_name1, x) for x in os.listdir(
folder_name1) if 'tfrecord' in x]
filenames2 = [os.path.join(folder_name2, x) for x in os.listdir(
folder_name2) if 'tfrecord' in x]
filenames = filenames1 + filenames2
print(len(filenames))
assert(len(filenames) == 150)
# Replace this path with the real path. The file is under:
# /waymo-open-dataset/tutorial/ in the github repo.
# Each line of the file is the "<context_name>, <timestamp_micros>" of a frame
# with semseg labels.
testing_set_frame_file = '/path/3d_semseg_test_set_frames.txt'
context_name_timestamp_tuples = [x.rstrip().split(',') for x in (
open(testing_set_frame_file, 'r').readlines())]
segmentation_frame_list = segmentation_metrics_pb2.SegmentationFrameList()
for idx, filename in enumerate(filenames):
if idx % 10 == 0:
print('Processing %d/%d run segments...' % (idx, len(filenames)))
dataset = tf.data.TFRecordDataset(filename, compression_type='')
for data in dataset:
frame = open_dataset.Frame()
frame.ParseFromString(bytearray(data.numpy()))
context_name = frame.context.name
timestamp = frame.timestamp_micros
if (context_name, str(timestamp)) in context_name_timestamp_tuples:
print(context_name, timestamp)
segmentation_frame = dummy_semseg_for_one_frame(frame)
segmentation_frame_list.frames.append(segmentation_frame)
print('Total number of frames: ', len(segmentation_frame_list.frames))
# Create the submission file, which can be uploaded to the eval server.
submission = segmentation_submission_pb2.SemanticSegmentationSubmission()
submission.account_name = 'joe@gmail.com'
submission.unique_method_name = 'JoeNet'
submission.affiliation = 'Smith Inc.'
submission.authors.append('Joe Smith')
submission.description = "A dummy method by Joe (test set)."
submission.method_link = 'NA'
submission.sensor_type = 1
submission.number_past_frames_exclude_current = 2
submission.number_future_frames_exclude_current = 0
submission.inference_results.CopyFrom(segmentation_frame_list)
output_filename = '/tmp/wod_semseg_test_set_dummy_pred_submission.bin'
f = open(output_filename, 'wb')
f.write(submission.SerializeToString())
f.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's show the symbols data, to see how good the recommender has to be.
Step2: Let's run the trained agent on the test set
Step3: And now a "realistic" test, in which the learner continues to learn from past samples in the test set (it even makes some random moves, though very few).
Step4: What are the metrics for "holding the position"?
|
<ASSISTANT_TASK:>
Python Code:
# Basic imports
import os
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import datetime as dt
import scipy.optimize as spo
import sys
from time import time
from sklearn.metrics import r2_score, median_absolute_error
from multiprocessing import Pool
import pickle
%matplotlib inline
%pylab inline
pylab.rcParams['figure.figsize'] = (20.0, 10.0)
%load_ext autoreload
%autoreload 2
sys.path.append('../../')
import recommender.simulator as sim
from utils.analysis import value_eval
from recommender.agent_predictor import AgentPredictor
from functools import partial
from sklearn.externals import joblib
NUM_THREADS = 1
LOOKBACK = -1
STARTING_DAYS_AHEAD = 252
POSSIBLE_FRACTIONS = [0.0, 1.0]
DYNA = 20
BASE_DAYS = 112
# Get the data
SYMBOL = 'SPY'
total_data_train_df = pd.read_pickle('../../data/data_train_val_df.pkl').stack(level='feature')
data_train_df = total_data_train_df[SYMBOL].unstack()
total_data_test_df = pd.read_pickle('../../data/data_test_df.pkl').stack(level='feature')
data_test_df = total_data_test_df[SYMBOL].unstack()
if LOOKBACK == -1:
total_data_in_df = total_data_train_df
data_in_df = data_train_df
else:
data_in_df = data_train_df.iloc[-LOOKBACK:]
total_data_in_df = total_data_train_df.loc[data_in_df.index[0]:]
# Crop the final days of the test set as a workaround to make dyna work
# (the env only has the market calendar up to a certain time)
data_test_df = data_test_df.iloc[:-DYNA]
total_data_test_df = total_data_test_df.loc[:data_test_df.index[-1]]
# Create many agents
index = np.arange(NUM_THREADS).tolist()
env, num_states, num_actions = sim.initialize_env(total_data_in_df,
SYMBOL,
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS)
estimator_close = joblib.load('../../data/best_predictor.pkl')
estimator_volume = joblib.load('../../data/best_volume_predictor.pkl')
agents = [AgentPredictor(num_states=num_states,
num_actions=num_actions,
random_actions_rate=0.98,
random_actions_decrease=0.999,
dyna_iterations=DYNA,
name='Agent_{}'.format(i),
estimator_close=estimator_close,
estimator_volume=estimator_volume,
env=env,
prediction_window=BASE_DAYS) for i in index]
def show_results(results_list, data_in_df, graph=False):
for values in results_list:
total_value = values.sum(axis=1)
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(total_value))))
print('-'*100)
initial_date = total_value.index[0]
compare_results = data_in_df.loc[initial_date:, 'Close'].copy()
compare_results.name = SYMBOL
compare_results_df = pd.DataFrame(compare_results)
compare_results_df['portfolio'] = total_value
std_comp_df = compare_results_df / compare_results_df.iloc[0]
if graph:
plt.figure()
std_comp_df.plot()
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(data_in_df['Close'].iloc[STARTING_DAYS_AHEAD:]))))
# Simulate (with new envs, each time)
n_epochs = 4
for i in range(n_epochs):
tic = time()
env.reset(STARTING_DAYS_AHEAD)
results_list = sim.simulate_period(total_data_in_df,
SYMBOL,
agents[0],
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
verbose=False,
other_env=env)
toc = time()
print('Epoch: {}'.format(i))
print('Elapsed time: {} seconds.'.format((toc-tic)))
print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))
show_results([results_list], data_in_df)
env.reset(STARTING_DAYS_AHEAD)
results_list = sim.simulate_period(total_data_in_df,
SYMBOL, agents[0],
learn=False,
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
other_env=env)
show_results([results_list], data_in_df, graph=True)
import pickle
with open('../../data/dyna_q_with_predictor.pkl', 'wb') as best_agent:
pickle.dump(agents[0], best_agent)
TEST_DAYS_AHEAD = 112
env.set_test_data(total_data_test_df, TEST_DAYS_AHEAD)
tic = time()
results_list = sim.simulate_period(total_data_test_df,
SYMBOL,
agents[0],
learn=False,
starting_days_ahead=TEST_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
verbose=False,
other_env=env)
toc = time()
print('Epoch: {}'.format(i))
print('Elapsed time: {} seconds.'.format((toc-tic)))
print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))
show_results([results_list], data_test_df, graph=True)
env.set_test_data(total_data_test_df, TEST_DAYS_AHEAD)
tic = time()
results_list = sim.simulate_period(total_data_test_df,
SYMBOL,
agents[0],
learn=True,
starting_days_ahead=TEST_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
verbose=False,
other_env=env)
toc = time()
print('Epoch: {}'.format(i))
print('Elapsed time: {} seconds.'.format((toc-tic)))
print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))
show_results([results_list], data_test_df, graph=True)
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(data_test_df['Close'].iloc[TEST_DAYS_AHEAD:]))))
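# Hedged sketch (not in the original): keep the buy-and-hold metrics for comparison
# with the agent runs above; the five values are assumed to match the print format
# (Sharpe, cumulative return, avg daily return, std daily return, final value).
hold_metrics = list(value_eval(pd.DataFrame(data_test_df['Close'].iloc[TEST_DAYS_AHEAD:])))
print(hold_metrics)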
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set up input files
Step2: The genome against which generated guides are scored
Step3: Begin custom processing
Step4: Next, we want to digest this scaffold into guides. This uses the al_diggesttarget function contained in eating.py to produce a generator of scores.
Step5: In this version of the script, the output from al_digesttarget isn't especially readable, but for reference
Step6: Next, we'd like to take the 20mers extracted and score them against the entire Xenopus laevis genome. These lines score each guide variable region for specificity using the xl71 BLAST database and the xl71genomedict dict.
Step7: The format of the resulting data is (score, guide).
Step8: The scores in this object are an ordered list, with all HpaII scores first, then all BfaI scores and finally all ScrFI scores. We are interested in the distribution of scores along the DNA fragment, irrespective of the enzyme used to generate them. Thus, we want to rearrange the list with all scores from 5' to 3'.
Step9: Let's extract the scores and plot their distribution on a histogram.
Step10: So, there are ~5000 guides that are quite non-specific (score <= 4) and >14,000 guides that have a score of 100 and a further 4000 that score between 95 and 99.
Step11: Our "yield" of guides descends steadily from a cutoff of >=5 to a cutoff of >=95, then drops from 2894 guides produced at a cutoff of 95 to 1719 at 100. So, a cutoff of >=95 balances specificity and yield.
Step12: We next asked what happens if we concentrate the guides into a smaller region. To test this, we cut the input DNA into sections of 1/7 the ~21MB starting length and asked how many guides would be obtained if 144 PCR products were designed within each of those subregions.
Step13: The final 1/7 of the scaffold has the densest guide yield.
Step14: Designing primers
|
<ASSISTANT_TASK:>
Python Code:
import Bio
from Bio.Blast.Applications import NcbiblastnCommandline
from Bio import SeqIO
from Bio.Blast import NCBIXML
from Bio import Restriction
from Bio.Restriction import *
from Bio.Alphabet.IUPAC import IUPACAmbiguousDNA
from Bio.Seq import Seq
from Bio.SeqRecord import SeqRecord
import cPickle as pickle
import subprocess
import matplotlib
from eating import *
import multiprocessing as mp
from operator import itemgetter, attrgetter, methodcaller
import numpy
%pylab inline
fasta_file = SeqIO.parse("../../Genomic Data/LAEVIS_7.1.repeatMasked.fa", "fasta")
handle = open("../../Genomic Data/LAEVIS_7.1.repeatMasked.fa", 'rb')
xl71genome = SeqIO.parse(handle, "fasta", alphabet=IUPACAmbiguousDNA())
xl71genomedict = {}
for item in xl71genome:
xl71genomedict[item.id] = item
len(xl71genomedict)
longest = 0
for item in fasta_file:
if len(item) > longest:
longest = len(item)
longscaffold = [item]
print(longscaffold[0].name + " is the longest scaffold at " "{:,}".format(len(longscaffold[0])) + " bp in length.")
cutslist = al_digesttarget(longscaffold)
[item for item in al_digesttarget([longscaffold[0][0:1500]])]
def multiscore_pool(x):
score = al_scoreguide(x, "xl71", xl71genomedict)
return (score[0], score[1])
# Reference: http://sebastianraschka.com/Articles/2014_multiprocessing_intro.html#An-introduction-to-parallel-programming-using-Python's-multiprocessing-module
pool = mp.Pool(processes=2)
results = [pool.apply(multiscore_pool, args=(x,)) for x in cutslist]
pickle.dump(results, open( "finalpicklescores.pkl", "wb" ))
pool.close()
results[0:20]
import copy
a = []
for (score, details) in results:
a.append(int(details.name)) # The guide's name attribute contains its position in bp
resultssorted = zip(results, a)
resultssorted = sorted(resultssorted, key=itemgetter(1), reverse=False)
resultssorted = [item for item, null in resultssorted]
resultssorted[:5]
resultssorted[-5:]
scores = [score for score, details in resultssorted]
def plot_score_histogram(scores):
'''
Input is a list of scores only (as ints)
'''
path = '/Library/Fonts/Microsoft/Arial.ttf'
prop = matplotlib.font_manager.FontProperties(fname=path)
matplotlib.rcParams['font.family'] = prop.get_name()
bins = range(0,106,5)
figure()
hist(scores, bins, color="gray")
tick_params(axis=u'both', labelsize=18)
#savefig('Scaffold score distribution.pdf', format="pdf")
plot_score_histogram(scores)
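# Hedged sketch (not in the original): the counts behind the histogram above, using
# the thresholds discussed in the text (non-specific <= 4, specific >= 95, perfect == 100).
scores_arr = numpy.array(scores)
print((scores_arr <= 4).sum(), (scores_arr >= 95).sum(), (scores_arr == 100).sum())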
def find_clusters_by_cutoff(resultssorted, x):
starts=[]
ends=[]
previtemgood = 0
for index, (score, details) in enumerate(resultssorted):
if score >= x and previtemgood ==0 and len(details) >= 20: #this avoids guides that are shorter than 20 bp (from where an enzyme cuts twice in close proximity)
starts.append((index, score, int(details.name)))
previtemgood = 1
elif score >= x and previtemgood == 1 and len(details) >=20:
None
elif previtemgood == 1:
previtemgood =0
ends.append((index-1, resultssorted[index-1][0], int(resultssorted[index-1][1].name)))
run_positions = zip(starts, ends)
goodruns_length = sorted([end - start for (start, i, j), (end,l,m) in run_positions], reverse=True)
return (goodruns_length, run_positions)
threshold = range(0, 105, 5)
probeyield = []
for item in threshold:
probeyield.append((item, sum(find_clusters_by_cutoff(resultssorted, item)[0][0:143])))
print(probeyield)
%pylab inline
figure()
plot([b for b, c in probeyield], [c for b, c in probeyield], "o")
threshold = 95
runs = find_clusters_by_cutoff(resultssorted, threshold)[1]
#(countofguides, (startguidenumberfrom5', startscore, startpositionbp), (endguidenumberfrom5', endscore, endpositionbp))
goodruns = sorted([((i_end - i), (i, s, pos), (i_end, s_end, pos_end)) for (i, s, pos), (i_end, s_end, pos_end) in runs], reverse=True)
probeyield = []
x = 95
fraction = 7.0
overlap = 2.0
region_to_extract = len(resultssorted)/fraction
for i in [float(item)/overlap for item in range(int(overlap*fraction+2.0))]:
goodruns = find_clusters_by_cutoff(resultssorted[int(region_to_extract*i):int(region_to_extract*(i+1))], x)[0]
probeyield.append((i, int(region_to_extract*i), sum(goodruns[0:143])))
if sum(goodruns[0:143]) == 0:
break
probeyield
#Modify resultssorted to only include the 3.4MB region used. (18121076 to (21505465+786) = 21506251)
resultssorted = [item for item in resultssorted if int(item[1].name) >= 18121076 and int(item[1].name) <= 21506251]
scores = [score for score, details in resultssorted]
# Set up the input for primer3:
# Sequence available to PCR:
guide_count = []
amps_in_3MB = []
for index, item in enumerate(goodruns[0:400]):
left_outside = item[0][0].id[-1]
left_inside = item[2][0][1].id[-1]
if left_outside == "F" and left_inside == "R":
permissible_start = int(item[0][0].name) + 10
required_start_absolute = int(item[2][0][1].name) +14
elif left_outside == "R" and left_inside == "R":
permissible_start = int(item[0][0].name) + 1
required_start_absolute = int(item[2][0][1].name) +14
elif left_outside == "R" and left_inside == "F":
permissible_start = int(item[0][0].name) + 1
required_start_absolute = int(item[2][0][1].name) +18
elif left_outside == "F" and left_inside == "F":
permissible_start = int(item[0][0].name) + 10
required_start_absolute = int(item[2][0][1].name) +18
else:
print("error on left")
right_inside = item[2][-1][1].id[-1]
right_outside = item[0][1].id[-1]
if right_outside == "F" and right_inside == "R":
permissible_end = int(item[0][1].name) + 19
required_end_absolute = int(item[2][-1][1].name) + 2
elif right_outside == "R" and right_inside == "F":
permissible_end = int(item[0][1].name) + 10
required_end_absolute = int(item[2][-1][1].name) + 8
elif right_outside == "R" and right_inside == "R":
permissible_end = int(item[0][1].name) + 10
required_end_absolute = int(item[2][-1][1].name) + 2
elif right_outside == "F" and right_inside == "F":
permissible_end = int(item[0][1].name) + 19
required_end_absolute = int(item[2][-1][1].name) + 8
else:
print("error on right")
amp = longscaffold[0][permissible_start:permissible_end]
# Bounds that need to be included in the PCR product:
required_start_relative = required_start_absolute-permissible_start
required_end_relative = required_end_absolute - permissible_start
amp.dbxrefs=((required_start_relative, required_end_relative))
# Set up some other stuff:
amp.name =str(item[0][0].name)
amp.id =str(item[0][0].name)
amp.description=str(item[1])
amp.seq.alphabet = IUPACAmbiguousDNA()
if "NNNNN" in amp.seq: # Exclude if it has runs of Ns
None
#print amp.name + " contains ns " + str(item[1])
else:
amps_in_3MB.append(amp)
guide_count.append(item[1])
amps_in_3MB_gen = (i for i in amps_in_3MB)
print sum(guide_count[0:144])
with open("primerlist.txt", "w") as primerlist:
primerlist.write("Sequence_id\tforward_seq\tforward_start\tforward_length\tforward_tm\tforward_gc\treverse_seq\treverse_start\treverse_length\treverse_tm\treverse_gc\tinput_seq_length\tPCR_product_length\tGuides_Contained\n")
primerlist.close()
for item in amps_in_3MB:
current_amp = item
primerdict = al_primersearch(current_amp)
al_collect_good_primers(item, primerdict)
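# Hedged check (not in the original): peek at the first lines of the primer table
# written by al_collect_good_primers above.
print(open("primerlist.txt").readlines()[:3])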
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import required modules
Step2: Target Configuration
Step3: Workload Execution and Power Consumption Sampling
Step4: Power Measurements Data
|
<ASSISTANT_TASK:>
Python Code:
import logging
from conf import LisaLogging
LisaLogging.setup()
# Generate plots inline
%matplotlib inline
import os
# Support to access the remote target
import devlib
from env import TestEnv
# RTApp configurator for generation of PERIODIC tasks
from wlgen import RTA, Ramp
# Setup target configuration
my_conf = {
# Target platform and board
"platform" : 'linux',
"board" : 'juno',
"host" : '192.168.0.1',
# Devlib modules to load
"modules" : ["cpufreq"], # Required by rt-app calibration
# Folder where all the results will be collected
"results_dir" : "EnergyMeter_HWMON",
# Energy Meters Configuration for BayLibre's ACME Cape
"emeter" : {
"instrument" : "hwmon",
"conf" : {
# Prefixes of the HWMon labels
'sites' : ['a53', 'a57'],
# Type of hardware monitor to be used
'kinds' : ['energy']
},
'channel_map' : {
'LITTLE' : 'a53',
'big' : 'a57',
}
},
# Tools required by the experiments
"tools" : [ 'trace-cmd', 'rt-app' ],
# Comment this line to calibrate RTApp in your own platform
# "rtapp-calib" : {"0": 360, "1": 142, "2": 138, "3": 352, "4": 352, "5": 353},
}
# Initialize a test environment using the target configuration defined above (my_conf)
te = TestEnv(my_conf, wipe=False, force_new=True)
target = te.target
# Create and RTApp RAMP task
rtapp = RTA(te.target, 'ramp', calibration=te.calibration())
rtapp.conf(kind='profile',
params={
'ramp' : Ramp(
start_pct = 60,
end_pct = 20,
delta_pct = 5,
time_s = 0.5).get()
})
# EnergyMeter Start
te.emeter.reset()
rtapp.run(out_dir=te.res_dir)
# EnergyMeter Stop and samples collection
nrg_report = te.emeter.report(te.res_dir)
logging.info("Collected data:")
!tree $te.res_dir
logging.info("Measured channels energy:")
logging.info("%s", nrg_report.channels)
logging.info("Generated energy file:")
logging.info(" %s", nrg_report.report_file)
!cat $nrg_report.report_file
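# Hedged sketch (not in the original): total energy across the measured channels;
# assumes nrg_report.channels maps channel names ('LITTLE', 'big') to energy values.
total_nrg = sum(float(v) for v in nrg_report.channels.values())
logging.info("Total measured energy: %f", total_nrg)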
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: On longer functions, it's nice to be able to see an estimate of how much longer things will take!
Step2: Explore the data interactively with modin_spreadsheet
|
<ASSISTANT_TASK:>
Python Code:
import modin.pandas as pd
import numpy as np
from tqdm import tqdm
from modin.config import ProgressBar
ProgressBar.enable()
frame_data = np.random.randint(0, 100, size=(2**18, 2**8))
df = pd.DataFrame(frame_data).add_prefix("col")
df = df.applymap(lambda x: ~x)
df
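# Hedged sketch (not in the original): any other long-running Modin operation, e.g.
# a groupby-sum, would also report progress while ProgressBar is enabled.
df.groupby("col0").sum().head()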
!jupyter nbextension enable --py --sys-prefix modin_spreadsheet
ProgressBar.disable()
import modin.experimental.spreadsheet as mss
s3_path = "s3://dask-data/nyc-taxi/2015/yellow_tripdata_2015-01.csv"
modin_df = pd.read_csv(s3_path, parse_dates=["tpep_pickup_datetime", "tpep_dropoff_datetime"], quoting=3, nrows=1000)
spreadsheet = mss.from_dataframe(modin_df)
spreadsheet
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set up frequency distribution to generate samples
Step2: MLE for training data
Step3: Prep simulated losses for neural network
Step4: Set up the network architecture and train
Step5: Evaluating the neural network
Step6: Optimizing network architecture
|
<ASSISTANT_TASK:>
Python Code:
import risklearning.learning_frequency as rlf
reload(rlf)
import pandas as pd
import numpy as np
import scipy.stats as stats
import math
import matplotlib.style
matplotlib.style.use('ggplot')
import ggplot as gg
%matplotlib inline
# Read in Poisson parameters used to simulate loss counts
lambdas_df = pd.read_csv('data/lambdas_tcem_1d.csv')
lambda_start = lambdas_df['TCEM'][0]
lambda_end = lambdas_df['TCEM'].tail(1).iloc[0]
print('Lambda start value: {}, lambda end value: {}'.format(lambda_start, lambda_end))
lambda_ts = lambdas_df['TCEM']
# Read in simulated loss counts
counts_sim_df = pd.read_csv('data/tcem_1d.csv')
# EDPM: Execution, Delivery and Process Management
# TCEM: Transaction Capture, Execution and Maintenance--think fat-finger mistake
counts_sim_df.head()
#%% Do MLE (simple average for Poisson process
t_start = np.min(counts_sim_df['t'])
t_end = np.max(counts_sim_df['t'])
n_tenors_train = -t_start
n_tenors_test = t_end
counts_train = (counts_sim_df[counts_sim_df.t < 0]).groupby('L2_cat').sum()
counts_test = (counts_sim_df[counts_sim_df.t >= 0]).groupby('L2_cat').sum()
lambdas_train = counts_train['counts']/n_tenors_train
lambdas_test = counts_test['counts']/n_tenors_test
bin_tops = [1,2,3,4,5,6,7,8,9,10,15,101]
# Recall that digitize (used later) defines bins by lower <= x < upper
count_tops =[count - 1 for count in bin_tops]
# Calculate bin probabilities from MLE poisson
poi_mle = stats.poisson(lambdas_train)
poi_bins = rlf.bin_probs(poi_mle, bin_tops)
mle_probs = pd.DataFrame({'Count Top': count_tops, 'Probs': poi_bins})
# For later comparison
mle_probs_vals = list(mle_probs.Probs)
import warnings
warnings.filterwarnings('ignore') # TODO: improve slicing to avoid warnings
x_train, y_train, x_test, y_test = rlf.prep_count_data(counts_sim_df, bin_tops)
#from keras.optimizers import SGD
#sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
# rl_train_net is a wrapper for standard keras functionality that
# makes it easier below to optimize hyperparameters
rl_net = rlf.rl_train_net(x_train, y_train, x_test, y_test, [150], \
n_epoch = 300, optimizer = 'adagrad')
proba = rl_net['probs_nn']
#% Convert proba from wide to long and append to other probs
mle_probs_vals = list(mle_probs.Probs)
# TODO: Missing last tenor in nn proba (already in x_test, y_test)
probs_list = []
kl_mle_list = []
kl_nn_list = []
for t in range(proba.shape[0]):
nn_probs_t = proba[t]
true_bins_t = rlf.bin_probs(stats.poisson(lambda_ts[-t_start+t]), bin_tops)
probs_t = pd.DataFrame({'Tenor': t, 'Count Top': count_tops, \
'Probs True': true_bins_t, \
'Probs NN': nn_probs_t, \
'Probs MLE': mle_probs_vals}, \
index = range(t*len(count_tops), \
t*len(count_tops) + len(count_tops)))
probs_list.append(probs_t)
# Calculate KL divergences
kl_mle_list.append(stats.entropy(true_bins_t, mle_probs_vals))
kl_nn_list.append(stats.entropy(true_bins_t, nn_probs_t))
probs = pd.concat(probs_list)
probs_tail = probs[probs.Tenor > 360 ]
gg.ggplot(probs_tail, gg.aes(x='Count Top',weight='Probs True')) \
+ gg.facet_grid('Tenor') \
+ gg.geom_bar() \
+ gg.geom_step(gg.aes(y='Probs MLE', color = 'red')) \
+ gg.geom_step(gg.aes(y='Probs NN', color = 'blue')) \
+ gg.scale_x_continuous(limits = (0,len(count_tops)))
# KL divergences
kl_df = pd.DataFrame({'Tenor': range(0, t_end+1), \
'KL MLE': kl_mle_list, \
'KL NN': kl_nn_list})
print kl_df.head()
print kl_df.tail()
#%
# Plot KL divergences
gg.ggplot(kl_df, gg.aes(x='Tenor')) \
+ gg.geom_step(gg.aes(y='KL MLE', color = 'red')) \
+ gg.geom_step(gg.aes(y='KL NN', color = 'blue'))
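# Hedged sketch (not in the original): average divergence over the horizon gives a
# single-number comparison between the MLE baseline and the network.
print(kl_df[['KL MLE', 'KL NN']].mean())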
# More systematically with NN architecture
# Loop over different architectures, create panel plot
neurons_list = [10, 20,50,100, 150, 200]
#neurons_list = [10, 20,50]
depths_list = [1,2,3]
optimizer = 'adagrad'
#%%
kl_df_list = []
for depth in depths_list:
for n_neurons in neurons_list:
nn_arch = [n_neurons]*depth
print("Training " + str(depth) + " layer(s) of " + str(n_neurons) + " neurons")
rl_net = rlf.rl_train_net(x_train, y_train, x_test, y_test, nn_arch, \
n_epoch = 300, optimizer = optimizer)
proba = rl_net['probs_nn']
print("\nPredicting with " + str(depth) + " layer(s) of " + str(n_neurons) + " neurons")
probs_kl_dict = rlf.probs_kl(proba, lambda_ts, t_start, t_end+1, bin_tops, mle_probs_vals)
probs = probs_kl_dict['Probs']
kl_df_n = probs_kl_dict['KL df']
kl_df_n['Hidden layers'] = depth
kl_df_n['Neurons per layer'] = n_neurons
kl_df_n['Architecture'] = str(depth) + '_layers_of_' + str(n_neurons) \
+ '_neurons'
kl_df_list.append(kl_df_n)
#%%
kl_df_hyper = pd.concat(kl_df_list)
# Plot
kl_mle = kl_df_n['KL MLE'] # These values are constant over the above loops (KL between MLE and true distribution)
for depth in depths_list:
kl_df_depth = kl_df_hyper[kl_df_hyper['Hidden layers'] == depth]
kl_df_depth = kl_df_hyper[kl_df_hyper['Hidden layers'] == depth]
kl_depth_vals = kl_df_depth.pivot(index = 'Tenor', columns = 'Neurons per layer', values = 'KL NN')
kl_depth_vals['KL MLE'] = kl_mle
kl_depth_vals.plot(title = 'Kullback-Leibler divergences from true distribution \n for ' \
+ str(depth) + ' hidden layer(s)', \
figsize = (16,10))
# Try again, but now with RMSprop
neurons_list = [10, 20,50]
#neurons_list = [50]
depths_list = [2,3]
optimizer = 'RMSprop'
#%%
kl_df_list = []
for depth in depths_list:
for n_neurons in neurons_list:
nn_arch = [n_neurons]*depth
print("Training " + str(depth) + " layer(s) of " + str(n_neurons) + " neurons")
rl_net = rlf.rl_train_net(x_train, y_train, x_test, y_test, nn_arch, \
n_epoch = 300, optimizer = optimizer)
proba = rl_net['probs_nn']
print("\nPredicting with " + str(depth) + " layer(s) of " + str(n_neurons) + " neurons")
probs_kl_dict = rlf.probs_kl(proba, lambda_ts, t_start, t_end+1, bin_tops, mle_probs_vals)
probs = probs_kl_dict['Probs']
kl_df_n = probs_kl_dict['KL df']
kl_df_n['Hidden layers'] = depth
kl_df_n['Neurons per layer'] = n_neurons
kl_df_n['Architecture'] = str(depth) + '_layers_of_' + str(n_neurons) \
+ '_neurons'
kl_df_list.append(kl_df_n)
#%%
kl_df_hyper = pd.concat(kl_df_list)
# Plot
kl_mle = kl_df_n['KL MLE'] # These values are constant over the above loops (KL between MLE and true distribution)
for depth in depths_list:
kl_df_depth = kl_df_hyper[kl_df_hyper['Hidden layers'] == depth]
kl_df_depth = kl_df_hyper[kl_df_hyper['Hidden layers'] == depth]
kl_depth_vals = kl_df_depth.pivot(index = 'Tenor', columns = 'Neurons per layer', values = 'KL NN')
kl_depth_vals['KL MLE'] = kl_mle
kl_depth_vals.plot(title = 'Kullback-Leibler divergences from true distribution \n for ' \
+ str(depth) + ' hidden layer(s)', \
figsize = (16,10))
<END_TASK>
|