<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First we pull the returns of every asset in this universe across our desired time period.
Step3: Factor Returns and Exposures
Step4: Here we run our pipeline and create the return streams for high-minus-low and small-minus-big.
Step5: Calculating the Exposures
Step6: The factor exposures are shown as follows. Each individual asset in our universe will have a different exposure to the three included risk factors.
Step7: Summary of the Setup
Step8: While the B DataFrame contains point estimates of the beta exposures to MKT, SMB, and HML for every asset in our universe.
Step9: Now that we have these values, we can start to crack open the variance of any portfolio that contains these assets.
Step10: In order to actually calculate the percentage of our portfolio variance that is made up of common factor risk, we do the following
Step11: So we see that if we just take every single security in the Q3000US and equally-weight them, we will end up possessing a portfolio that effectively only contains common risk.
Step12: The variable $f$ contains the weighted factor exposures of our portfolio, with size equal to the number of factors we have. As we change $\omega$, our weights, our weighted exposures, $f$, also change.
Step13: A concrete example of this can be found here, in the docs for CVXPY.
Step14: Now we'll run the algorithm using Quantopian's built-in risk model and performance attribution tearsheet. We extend beyond the Fama-French Factors, looking into common factor risk due to sectors and due to particular styles of investment that are common in the market.
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.api as sm
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data import Fundamentals
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.factors import CustomFactor, Returns
from quantopian.pipeline.filters import QTradableStocksUS
from quantopian.research import run_pipeline
# date range for building risk model
start = "2009-01-01"
end = "2011-01-01"
def qtus_returns(start_date, end_date):
pipe = Pipeline(
columns={'Close': USEquityPricing.close.latest},
screen = QTradableStocksUS()
)
stocks = run_pipeline(pipe, start_date, end_date)
unstacked_results = stocks.unstack()
prices = (unstacked_results['Close'].fillna(method='ffill').fillna(method='bfill')
.dropna(axis=1,how='any').shift(periods=-1).dropna())
qus_returns = prices.pct_change()[1:]
return qus_returns
R = qtus_returns(start, end)
print("The universe we define includes {} assets.".format(R.shape[1]))
print('The number of timestamps is {} from {} to {}.'.format(R.shape[0], start, end))
assets = R.columns
def make_pipeline():
    """Create and return our pipeline.

    We break this piece of logic out into its own function to make it easier to
    test and modify in isolation. In particular, this function can be
    copy/pasted into the backtester and run by itself.
    """
    # Market Cap (shares outstanding times price, not divided by it)
    market_cap = Fundamentals.shares_outstanding.latest * USEquityPricing.close.latest
# Book to Price ratio
book_to_price = 1/Fundamentals.pb_ratio.latest
# Build Filters representing the top and bottom 500 stocks by our combined ranking system.
biggest = market_cap.top(500, mask=QTradableStocksUS())
smallest = market_cap.bottom(500, mask=QTradableStocksUS())
highpb = book_to_price.top(500, mask=QTradableStocksUS())
lowpb = book_to_price.bottom(500, mask=QTradableStocksUS())
universe = biggest | smallest | highpb | lowpb
pipe = Pipeline(
columns = {
'returns' : Returns(window_length=2),
'market_cap' : market_cap,
'book_to_price' : book_to_price,
'biggest' : biggest,
'smallest' : smallest,
'highpb' : highpb,
'lowpb' : lowpb
},
screen=universe
)
return pipe
pipe = make_pipeline()
# This takes a few minutes.
results = run_pipeline(pipe, start, end)
R_biggest = results[results.biggest]['returns'].groupby(level=0).mean()
R_smallest = results[results.smallest]['returns'].groupby(level=0).mean()
R_highpb = results[results.highpb]['returns'].groupby(level=0).mean()
R_lowpb = results[results.lowpb]['returns'].groupby(level=0).mean()
SMB = R_smallest - R_biggest
HML = R_highpb - R_lowpb
df = pd.DataFrame({
'SMB': SMB, # company size
'HML': HML # company PB ratio
}, columns=["SMB", "HML"]).shift(periods=-1).dropna()
MKT = get_pricing('SPY', start_date=start, end_date=end, fields='price').pct_change()[1:]
MKT = pd.DataFrame({'MKT':MKT})
F = pd.concat([MKT,df],axis = 1).dropna()
ax = ((F + 1).cumprod() - 1).plot(subplots=True, title='Cumulative Fundamental Factors')
ax[0].set(ylabel = "daily returns")
ax[1].set(ylabel = "daily returns")
ax[2].set(ylabel = "daily returns")
plt.show()
# factor exposure
B = pd.DataFrame(index=assets, dtype=np.float32)
epsilon = pd.DataFrame(index=R.index, dtype=np.float32)
x = sm.add_constant(F)
for i in assets:
y = R.loc[:,i]
y_inlier = y[np.abs(y - y.mean())<=(3*y.std())]
x_inlier = x[np.abs(y - y.mean())<=(3*y.std())]
result = sm.OLS(y_inlier, x_inlier).fit()
B.loc[i,"MKT_beta"] = result.params[1]
B.loc[i,"SMB_beta"] = result.params[2]
B.loc[i,"HML_beta"] = result.params[3]
epsilon.loc[:,i] = y - (x.iloc[:,0] * result.params[0] +
x.iloc[:,1] * result.params[1] +
x.iloc[:,2] * result.params[2] +
x.iloc[:,3] * result.params[3])
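Each per-asset regression above is an ordinary least-squares fit of returns on the three factors plus a constant. The same estimate can be sketched with NumPy's least squares on synthetic data (all data and names here are illustrative, not taken from the universe above):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 500
F_synth = rng.normal(size=(T, 3))          # stand-ins for MKT, SMB, HML returns
true_beta = np.array([1.2, 0.5, -0.3])     # assumed "true" exposures
alpha = 0.001
y = alpha + F_synth @ true_beta + 0.01 * rng.normal(size=T)

# sm.OLS with an added constant solves the same normal equations as lstsq here
X = np.column_stack([np.ones(T), F_synth])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
alpha_hat, beta_hat = coef[0], coef[1:]
```

With low noise and many observations the recovered betas land very close to the assumed ones.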
fig,axes = plt.subplots(3, 1)
ax1,ax2,ax3 =axes
B.iloc[0:10,0].plot.barh(ax=ax1, figsize=[15,15], title=B.columns[0])
B.iloc[0:10,1].plot.barh(ax=ax2, figsize=[15,15], title=B.columns[1])
B.iloc[0:10,2].plot.barh(ax=ax3, figsize=[15,15], title=B.columns[2])
ax1.set(xlabel='beta')
ax2.set(xlabel='beta')
ax3.set(xlabel='beta')
plt.show()
B.loc[symbols('AAPL'),:]
F.head(3)
B.head(3)
w = np.ones([1,R.shape[1]])/R.shape[1]
def compute_common_factor_variance(factors, factor_exposures, w):
B = np.asarray(factor_exposures)
F = np.asarray(factors)
V = np.asarray(factors.cov())
return w.dot(B.dot(V).dot(B.T)).dot(w.T)
common_factor_variance = compute_common_factor_variance(F, B, w)[0][0]
print("Common Factor Variance: {0}".format(common_factor_variance))
def compute_specific_variance(epsilon, w):
D = np.diag(np.asarray(epsilon.var())) * epsilon.shape[0] / (epsilon.shape[0]-1)
return w.dot(D).dot(w.T)
specific_variance = compute_specific_variance(epsilon, w)[0][0]
print("Specific Variance: {0}".format(specific_variance))
common_factor_pct = common_factor_variance/(common_factor_variance + specific_variance)*100.0
print("Percentage of Portfolio Variance Due to Common Factor Risk: {0:.2f}%".format(common_factor_pct))
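The two pieces computed above should approximately reconstruct total portfolio variance, i.e. $w(BVB^T + D)w^T \approx w \Sigma w^T$. A self-contained NumPy check on synthetic factor-model data (all values illustrative; the match is approximate because sample factor and specific returns are not exactly uncorrelated):

```python
import numpy as np

rng = np.random.default_rng(1)
T, N, K = 1000, 20, 3
F_t = rng.normal(size=(T, K)) * 0.01        # synthetic factor returns
B_mat = rng.normal(size=(N, K))             # synthetic exposures
eps = rng.normal(size=(T, N)) * 0.002       # synthetic specific returns
R_mat = F_t @ B_mat.T + eps                 # asset returns under the model

w = np.ones(N) / N                          # equal weights
V = np.cov(F_t, rowvar=False)
D = np.diag(np.var(eps, axis=0, ddof=1))

common = w @ B_mat @ V @ B_mat.T @ w        # common factor variance
specific = w @ D @ w                        # specific variance
total = w @ np.cov(R_mat, rowvar=False) @ w # total portfolio variance
```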
w_0 = np.random.rand(R.shape[1])
w_0 = w_0/np.sum(w_0)
f = B.T.dot(w_0)
f
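Driving $f$ to zero while keeping the weights summed to one can be sketched without a full optimizer: stack the zero-exposure conditions with the budget constraint and least-squares solve. This is only a sketch on a synthetic exposure matrix, not the CVXPY formulation referenced above:

```python
import numpy as np

rng = np.random.default_rng(2)
N, K = 10, 3
B_mat = rng.normal(size=(N, K))          # synthetic per-asset factor exposures

# K zero-exposure conditions B^T w = 0, plus the budget constraint sum(w) = 1
A = np.vstack([B_mat.T, np.ones(N)])     # shape (K+1, N)
b = np.concatenate([np.zeros(K), [1.0]])
w, *_ = np.linalg.lstsq(A, b, rcond=None)

f = B_mat.T @ w                          # weighted factor exposures, ~0
```

Because the system is underdetermined and generically consistent, `lstsq` returns the minimum-norm weights satisfying all constraints exactly.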
bt_wsj = get_backtest('59232d19c931f1619e6423c9')
bt_wsj.create_perf_attrib_tear_sheet();
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: If you'd like to turn information status messages off
Step2: But it's generally a good idea to leave them on
Step3: You can set the default level by setting the environment variable
Step4: Note that the location of the mne-python preferences file (for easier manual
Step5: By default logging messages print to the console, but look at
Step6: <div class="alert alert-info"><h4>Note</h4><p>The MNE sample dataset should be downloaded automatically but be
Step7: Look at the channels in raw
Step8: Read and plot a segment of raw data
Step9: Save a segment of 150s of raw data (MEG only)
Step10: Define and read epochs
Step11: Note that, by default, we use stim_channel='STI 014'. If you have a different
Step12: Events are stored as a 2D numpy array where the first column is the time
Step13: Exclude some channels (original bads + 2 more)
Step14: The variable raw.info['bads'] is just a python list.
Step15: Alternatively one can restrict to magnetometers or gradiometers with
Step16: Define the baseline period
Step17: Define peak-to-peak rejection parameters for gradiometers, magnetometers
Step18: Read epochs
Step19: Get single epochs for one condition
Step20: epochs_data is a 3D array of dimension (55 epochs, 365 channels, 106 time
Step21: or if you want to keep all the information about the data you can save your
Step22: and read them later with
Step23: Compute evoked responses for auditory responses by averaging and plot it
Step24: .. topic
Step25: It is also possible to read evoked data stored in a fif file
Step26: Or another one stored in the same file
Step27: Two evoked objects can be contrasted using
Step28: To do a weighted sum based on the number of averages, which will give
Step29: Instead of dealing with mismatches in the number of averages, we can use
Step30: Time-Frequency
Step31: Compute induced power and phase-locking values and plot gradiometers
Step32: Inverse modeling
Step33: Read the inverse operator
Step34: Define the inverse parameters
Step35: Compute the inverse solution
Step36: Save the source time courses to disk
Step37: Now, let's compute dSPM on a raw file within a label
Step38: Compute inverse solution during the first 15s
Step39: Save result in stc files
Step40: What else can you do?
<ASSISTANT_TASK:>
Python Code:
import mne
mne.set_log_level('WARNING')
mne.set_log_level('INFO')
mne.set_config('MNE_LOGGING_LEVEL', 'WARNING', set_env=True)
mne.get_config_path()
from mne.datasets import sample # noqa
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
print(raw_fname)
raw = mne.io.read_raw_fif(raw_fname)
print(raw)
print(raw.info)
print(raw.ch_names)
start, stop = raw.time_as_index([100, 115]) # 100 s to 115 s data segment
data, times = raw[:, start:stop]
print(data.shape)
print(times.shape)
data, times = raw[2:20:3, start:stop] # access underlying data
raw.plot()
picks = mne.pick_types(raw.info, meg=True, eeg=False, stim=True,
exclude='bads')
raw.save('sample_audvis_meg_raw.fif', tmin=0, tmax=150, picks=picks,
overwrite=True)
events = mne.find_events(raw, stim_channel='STI 014')
print(events[:5])
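Since the events array is a plain `(n_events, 3)` integer array (sample index, previous value, event id), selecting a single condition is just boolean indexing on the third column; a synthetic sketch:

```python
import numpy as np

# Synthetic events array in MNE's layout: [sample, previous value, event id]
events = np.array([[1000, 0, 1],
                   [1500, 0, 2],
                   [2100, 0, 1],
                   [2600, 0, 3]])

aud_l = events[events[:, 2] == 1]   # keep only events with id 1
```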
mne.set_config('MNE_STIM_CHANNEL', 'STI101', set_env=True)
event_id = dict(aud_l=1, aud_r=2) # event trigger and conditions
tmin = -0.2 # start of each epoch (200ms before the trigger)
tmax = 0.5 # end of each epoch (500ms after the trigger)
raw.info['bads'] += ['MEG 2443', 'EEG 053']
picks = mne.pick_types(raw.info, meg=True, eeg=True, eog=True, stim=False,
exclude='bads')
mag_picks = mne.pick_types(raw.info, meg='mag', eog=True, exclude='bads')
grad_picks = mne.pick_types(raw.info, meg='grad', eog=True, exclude='bads')
baseline = (None, 0) # means from the first instant to t = 0
reject = dict(grad=4000e-13, mag=4e-12, eog=150e-6)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True, picks=picks,
baseline=baseline, preload=False, reject=reject)
print(epochs)
epochs_data = epochs['aud_l'].get_data()
print(epochs_data.shape)
from scipy import io # noqa
io.savemat('epochs_data.mat', dict(epochs_data=epochs_data), oned_as='row')
epochs.save('sample-epo.fif')
saved_epochs = mne.read_epochs('sample-epo.fif')
evoked = epochs['aud_l'].average()
print(evoked)
evoked.plot(time_unit='s')
max_in_each_epoch = [e.max() for e in epochs['aud_l']] # doctest:+ELLIPSIS
print(max_in_each_epoch[:4]) # doctest:+ELLIPSIS
evoked_fname = data_path + '/MEG/sample/sample_audvis-ave.fif'
evoked1 = mne.read_evokeds(
evoked_fname, condition='Left Auditory', baseline=(None, 0), proj=True)
evoked2 = mne.read_evokeds(
evoked_fname, condition='Right Auditory', baseline=(None, 0), proj=True)
contrast = mne.combine_evoked([evoked1, evoked2], weights=[0.5, -0.5])
contrast = mne.combine_evoked([evoked1, -evoked2], weights='equal')
print(contrast)
average = mne.combine_evoked([evoked1, evoked2], weights='nave')
print(average)
epochs_eq = epochs.copy().equalize_event_counts(['aud_l', 'aud_r'])[0]
evoked1, evoked2 = epochs_eq['aud_l'].average(), epochs_eq['aud_r'].average()
print(evoked1)
print(evoked2)
contrast = mne.combine_evoked([evoked1, -evoked2], weights='equal')
print(contrast)
import numpy as np # noqa
n_cycles = 2 # number of cycles in Morlet wavelet
freqs = np.arange(7, 30, 3) # frequencies of interest
from mne.time_frequency import tfr_morlet # noqa
power, itc = tfr_morlet(epochs, freqs=freqs, n_cycles=n_cycles,
return_itc=True, decim=3, n_jobs=1)
power.plot([power.ch_names.index('MEG 1332')])
from mne.minimum_norm import apply_inverse, read_inverse_operator # noqa
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
inverse_operator = read_inverse_operator(fname_inv)
snr = 3.0
lambda2 = 1.0 / snr ** 2
method = "dSPM"
stc = apply_inverse(evoked, inverse_operator, lambda2, method)
stc.save('mne_dSPM_inverse')
fname_label = data_path + '/MEG/sample/labels/Aud-lh.label'
label = mne.read_label(fname_label)
from mne.minimum_norm import apply_inverse_raw # noqa
start, stop = raw.time_as_index([0, 15]) # read the first 15s of data
stc = apply_inverse_raw(raw, inverse_operator, lambda2, method, label,
start, stop)
stc.save('mne_dSPM_raw_inverse_Aud')
print("Done!")
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Specify the model
Step2: And the guide
Step3: Now create a CSIS instance
Step4: Now we 'compile' the instance to perform inference on this model
Step5: And now perform inference by importance sampling
Step6: We now plot the results and compare with importance sampling
<ASSISTANT_TASK:>
Python Code:
import torch
import torch.nn as nn
import torch.nn.functional as F
import pyro
import pyro.distributions as dist
import pyro.infer
import pyro.optim
import os
smoke_test = ('CI' in os.environ)
n_steps = 2 if smoke_test else 2000
def model(prior_mean, observations={"x1": 0, "x2": 0}):
x = pyro.sample("z", dist.Normal(prior_mean, torch.tensor(5**0.5)))
y1 = pyro.sample("x1", dist.Normal(x, torch.tensor(2**0.5)), obs=observations["x1"])
y2 = pyro.sample("x2", dist.Normal(x, torch.tensor(2**0.5)), obs=observations["x2"])
return x
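Because this Gaussian model is conjugate, the posterior over `z` has a closed form; a NumPy sketch of the precision-weighted update, which reproduces the Normal(7.25, sqrt(5/6)) analytic posterior plotted at the end:

```python
import numpy as np

prior_mean, prior_var = 1.0, 5.0                 # prior on z, as in the model
obs, obs_var = np.array([8.0, 9.0]), 2.0         # the two observations and their variance

# Conjugate normal update: precisions add, means combine precision-weighted
post_prec = 1.0 / prior_var + len(obs) / obs_var
post_var = 1.0 / post_prec
post_mean = post_var * (prior_mean / prior_var + obs.sum() / obs_var)
```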
class Guide(nn.Module):
def __init__(self):
super().__init__()
self.neural_net = nn.Sequential(
nn.Linear(2, 10),
nn.ReLU(),
nn.Linear(10, 20),
nn.ReLU(),
nn.Linear(20, 10),
nn.ReLU(),
nn.Linear(10, 5),
nn.ReLU(),
nn.Linear(5, 2))
def forward(self, prior_mean, observations={"x1": 0, "x2": 0}):
pyro.module("guide", self)
x1 = observations["x1"]
x2 = observations["x2"]
v = torch.cat((x1.view(1, 1), x2.view(1, 1)), 1)
v = self.neural_net(v)
mean = v[0, 0]
std = v[0, 1].exp()
pyro.sample("z", dist.Normal(mean, std))
guide = Guide()
optimiser = pyro.optim.Adam({'lr': 1e-3})
csis = pyro.infer.CSIS(model, guide, optimiser, num_inference_samples=50)
prior_mean = torch.tensor(1.)
for step in range(n_steps):
csis.step(prior_mean)
posterior = csis.run(prior_mean,
observations={"x1": torch.tensor(8.),
"x2": torch.tensor(9.)})
marginal = pyro.infer.EmpiricalMarginal(posterior, "z")
import numpy as np
import scipy.stats
import matplotlib.pyplot as plt
with torch.no_grad():
# Draw samples from empirical marginal for plotting
csis_samples = torch.stack([marginal() for _ in range(1000)])
# Calculate empirical marginal with importance sampling
is_posterior = pyro.infer.Importance(model, num_samples=50).run(
prior_mean, observations={"x1": torch.tensor(8.),
"x2": torch.tensor(9.)})
is_marginal = pyro.infer.EmpiricalMarginal(is_posterior, "z")
is_samples = torch.stack([is_marginal() for _ in range(1000)])
# Calculate true prior and posterior over z
true_posterior_z = torch.arange(-10, 10, 0.05)
true_posterior_p = dist.Normal(7.25, (5/6)**0.5).log_prob(true_posterior_z).exp()
prior_z = true_posterior_z
prior_p = dist.Normal(1., 5**0.5).log_prob(true_posterior_z).exp()
plt.rcParams['figure.figsize'] = [30, 15]
plt.rcParams.update({'font.size': 30})
fig, ax = plt.subplots()
plt.plot(prior_z, prior_p, 'k--', label='Prior')
plt.plot(true_posterior_z, true_posterior_p, color='k', label='Analytic Posterior')
plt.hist(csis_samples.numpy(), range=(-10, 10), bins=100, color='r', density=1,
label="Inference Compilation")
plt.hist(is_samples.numpy(), range=(-10, 10), bins=100, color='b', density=1,
label="Importance Sampling")
plt.xlim(-8, 10)
plt.ylim(0, 5)
plt.xlabel("z")
plt.ylabel("Estimated Posterior Probability Density")
plt.legend()
plt.show()
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create Some Data
Step2: Create An Operation To Execute On The Data
Step3: Traditional Approach
Step4: Parallelism Approach
<ASSISTANT_TASK:>
Python Code:
from multiprocessing import Pool
from multiprocessing.dummy import Pool as ThreadPool
# Create a list of some data
data = range(29999)
# Create a function that takes a data point
def some_function(datum):
# and returns the datum raised to the power of itself
return datum**datum
%%time
# Create an empty for the results
results = []
# For each value in the data
for datum in data:
# Append the output of the function when applied to that datum
results.append(some_function(datum))
# Create a pool of workers equaling cores on the machine
pool = ThreadPool()
%%time
# Apply (map) some_function to the data using the pool of workers
results = pool.map(some_function, data)
# Close the pool
pool.close()
# Wait for all workers to finish
pool.join()
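Pool objects are also context managers (Python 3.3+), which shuts the pool down automatically when the block exits; `pool.map` returns results in input order. A minimal sketch:

```python
from multiprocessing.dummy import Pool as ThreadPool

def square(x):
    # A cheap stand-in for some_function
    return x * x

# The context manager handles pool shutdown on exit
with ThreadPool(4) as pool:
    squares = pool.map(square, range(10))
```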
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables
Step9: 2. Key Properties --> Seawater Properties
Step10: 2.2. Eos Functional Temp
Step11: 2.3. Eos Functional Salt
Step12: 2.4. Eos Functional Depth
Step13: 2.5. Ocean Freezing Point
Step14: 2.6. Ocean Specific Heat
Step15: 2.7. Ocean Reference Density
Step16: 3. Key Properties --> Bathymetry
Step17: 3.2. Type
Step18: 3.3. Ocean Smoothing
Step19: 3.4. Source
Step20: 4. Key Properties --> Nonoceanic Waters
Step21: 4.2. River Mouth
Step22: 5. Key Properties --> Software Properties
Step23: 5.2. Code Version
Step24: 5.3. Code Languages
Step25: 6. Key Properties --> Resolution
Step26: 6.2. Canonical Horizontal Resolution
Step27: 6.3. Range Horizontal Resolution
Step28: 6.4. Number Of Horizontal Gridpoints
Step29: 6.5. Number Of Vertical Levels
Step30: 6.6. Is Adaptive Grid
Step31: 6.7. Thickness Level 1
Step32: 7. Key Properties --> Tuning Applied
Step33: 7.2. Global Mean Metrics Used
Step34: 7.3. Regional Metrics Used
Step35: 7.4. Trend Metrics Used
Step36: 8. Key Properties --> Conservation
Step37: 8.2. Scheme
Step38: 8.3. Consistency Properties
Step39: 8.4. Corrected Conserved Prognostic Variables
Step40: 8.5. Was Flux Correction Used
Step41: 9. Grid
Step42: 10. Grid --> Discretisation --> Vertical
Step43: 10.2. Partial Steps
Step44: 11. Grid --> Discretisation --> Horizontal
Step45: 11.2. Staggering
Step46: 11.3. Scheme
Step47: 12. Timestepping Framework
Step48: 12.2. Diurnal Cycle
Step49: 13. Timestepping Framework --> Tracers
Step50: 13.2. Time Step
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Step52: 14.2. Scheme
Step53: 14.3. Time Step
Step54: 15. Timestepping Framework --> Barotropic
Step55: 15.2. Time Step
Step56: 16. Timestepping Framework --> Vertical Physics
Step57: 17. Advection
Step58: 18. Advection --> Momentum
Step59: 18.2. Scheme Name
Step60: 18.3. ALE
Step61: 19. Advection --> Lateral Tracers
Step62: 19.2. Flux Limiter
Step63: 19.3. Effective Order
Step64: 19.4. Name
Step65: 19.5. Passive Tracers
Step66: 19.6. Passive Tracers Advection
Step67: 20. Advection --> Vertical Tracers
Step68: 20.2. Flux Limiter
Step69: 21. Lateral Physics
Step70: 21.2. Scheme
Step71: 22. Lateral Physics --> Momentum --> Operator
Step72: 22.2. Order
Step73: 22.3. Discretisation
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Step75: 23.2. Constant Coefficient
Step76: 23.3. Variable Coefficient
Step77: 23.4. Coeff Background
Step78: 23.5. Coeff Backscatter
Step79: 24. Lateral Physics --> Tracers
Step80: 24.2. Submesoscale Mixing
Step81: 25. Lateral Physics --> Tracers --> Operator
Step82: 25.2. Order
Step83: 25.3. Discretisation
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Step85: 26.2. Constant Coefficient
Step86: 26.3. Variable Coefficient
Step87: 26.4. Coeff Background
Step88: 26.5. Coeff Backscatter
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Step90: 27.2. Constant Val
Step91: 27.3. Flux Type
Step92: 27.4. Added Diffusivity
Step93: 28. Vertical Physics
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
Step96: 30.2. Closure Order
Step97: 30.3. Constant
Step98: 30.4. Background
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
Step100: 31.2. Closure Order
Step101: 31.3. Constant
Step102: 31.4. Background
Step103: 32. Vertical Physics --> Interior Mixing --> Details
Step104: 32.2. Tide Induced Mixing
Step105: 32.3. Double Diffusion
Step106: 32.4. Shear Mixing
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
Step108: 33.2. Constant
Step109: 33.3. Profile
Step110: 33.4. Background
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
Step112: 34.2. Constant
Step113: 34.3. Profile
Step114: 34.4. Background
Step115: 35. Uplow Boundaries --> Free Surface
Step116: 35.2. Scheme
Step117: 35.3. Embeded Seaice
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Step119: 36.2. Type Of Bbl
Step120: 36.3. Lateral Mixing Coef
Step121: 36.4. Sill Overflow
Step122: 37. Boundary Forcing
Step123: 37.2. Surface Pressure
Step124: 37.3. Momentum Flux Correction
Step125: 37.4. Tracers Flux Correction
Step126: 37.5. Wave Effects
Step127: 37.6. River Runoff Budget
Step128: 37.7. Geothermal Heating
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Step132: 40.2. Ocean Colour
Step133: 40.3. Extinction Depth
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Step135: 41.2. From Sea Ice
Step136: 41.3. Forced Mode Restoring
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'bcc', 'sandbox-1', 'ocean')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
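To make the repeated fill-in pattern above concrete, here is a minimal sketch. The `StubDoc` class is a hypothetical stand-in (the real `DOC` object is provided by the ES-DOC notebook tooling and is not defined in this excerpt); it only mimics the `set_id`/`set_value` bookkeeping so the pattern of picking a valid choice is visible.

```python
class StubDoc:
    """Minimal stand-in for the ES-DOC `DOC` helper (illustration only)."""
    def __init__(self):
        self._current_id = None
        self.values = {}          # property id -> list of entered values

    def set_id(self, property_id):
        self._current_id = property_id

    def set_value(self, value):
        self.values.setdefault(self._current_id, []).append(value)

DOC = StubDoc()

# Filling in one enumerated property with one of its listed valid choices:
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
DOC.set_value("Finite difference")

# Filling in one boolean property:
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
DOC.set_value(True)

print(DOC.values)
```

The real notebook records these values for export; the stub only shows the call pattern.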
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Prepare the pySpark Environment
Step2: Initialize Spark Context
Step3: Load and Analyse Data
Step4: Ratings Histogram
Step5: Most popular movies
Step6: Similar Movies
Step7: Let's find similar movies for Toy Story (Movie ID
Step8: Recommender using MLLib
Step9: Recommendations
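The similar-movies logic in Steps 6-7 reduces to a cosine similarity over the ratings of users who rated both movies. A self-contained sketch of that score, in plain Python and independent of Spark:

```python
from math import sqrt

def cosine_similarity(rating_pairs):
    """Cosine similarity between two movies' co-ratings.

    rating_pairs: list of (rating_x, rating_y) tuples, one per user who
    rated both movies. Returns (score, num_pairs)."""
    sum_xx = sum_yy = sum_xy = 0.0
    for x, y in rating_pairs:
        sum_xx += x * x
        sum_yy += y * y
        sum_xy += x * y
    denominator = sqrt(sum_xx) * sqrt(sum_yy)
    score = sum_xy / denominator if denominator else 0.0
    return score, len(rating_pairs)

# Identical rating profiles score 1.0
print(cosine_similarity([(4.0, 4.0), (2.0, 2.0)]))  # -> (1.0, 2)
```

The strength (number of co-rating pairs) is kept alongside the score so that low-support pairs can be filtered out, as the co-occurrence threshold does below.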
|
<ASSISTANT_TASK:>
Python Code:
import os
import sys
import urllib2
import collections
import matplotlib.pyplot as plt
import math
from time import time, sleep
%pylab inline
spark_home = os.environ.get('SPARK_HOME', None)
if not spark_home:
raise ValueError("Please set SPARK_HOME environment variable!")
# Add the py4j to the path.
sys.path.insert(0, os.path.join(spark_home, 'python'))
sys.path.insert(0, os.path.join(spark_home, 'python/lib/py4j-0.9-src.zip'))
from pyspark.mllib.recommendation import ALS, Rating
from pyspark import SparkConf, SparkContext
conf = SparkConf().setMaster("local[*]").setAppName("MovieRecommendationsALS").set("spark.executor.memory", "2g")
sc = SparkContext(conf = conf)
def loadMovieNames():
movieNames = {}
for line in urllib2.urlopen("https://raw.githubusercontent.com/psumank/DATA643/master/WK5/ml-100k/u.item"):
fields = line.split('|')
movieNames[int(fields[0])] = fields[1].decode('ascii', 'ignore')
return movieNames
print "\nLoading movie names..."
nameDict = loadMovieNames()
print "\nLoading ratings data..."
data = sc.textFile("file:///C:/Users/p_sum/.ipynb_checkpoints/ml-100k/u.data")
ratings = data.map(lambda x: x.split()[2])
#action -- just to trigger the driver [ lazy evaluation ]
rating_results = ratings.countByValue()
sortedResults = collections.OrderedDict(sorted(rating_results.items()))
for key, value in sortedResults.iteritems():
print "%s %i" % (key, value)
ratPlot = plt.bar(range(len(sortedResults)), sortedResults.values(), align='center')
plt.xticks(range(len(sortedResults)), list(sortedResults.keys()))
ratPlot[3].set_color('g')
print "Ratings Histogram"
movies = data.map(lambda x: (int(x.split()[1]), 1))
movieCounts = movies.reduceByKey(lambda x, y: x + y)
flipped = movieCounts.map( lambda (x, y) : (y, x))
sortedMovies = flipped.sortByKey(False)
sortedMoviesWithNames = sortedMovies.map(lambda (count, movie) : (nameDict[movie], count))
results = sortedMoviesWithNames.collect()
subset = results[0:10]
popular_movieNm = [str(i[0]) for i in subset]
popularity_strength = [int(i[1]) for i in subset]
popMovplot = plt.barh(range(len(subset)), popularity_strength, align='center')
plt.yticks(range(len(subset)), popular_movieNm)
popMovplot[0].set_color('g')
print "Most Popular Movies from the Dataset"
ratingsRDD = data.map(lambda l: l.split()).map(lambda l: (int(l[0]), (int(l[1]), float(l[2]))))
ratingsRDD.takeOrdered(10, key = lambda x: x[0])
ratingsRDD.take(4)
# Movies rated by same user. ==> [ user ID ==> ( (movieID, rating), (movieID, rating)) ]
userJoinedRatings = ratingsRDD.join(ratingsRDD)
userJoinedRatings.takeOrdered(10, key = lambda x: x[0])
# Remove dups
def filterDups( (userID, ratings) ):
(movie1, rating1) = ratings[0]
(movie2, rating2) = ratings[1]
return movie1 < movie2
uniqueUserJoinedRatings = userJoinedRatings.filter(filterDups)
uniqueUserJoinedRatings.takeOrdered(10, key = lambda x: x[0])
# Now key by (movie1, movie2) pairs ==> (movie1, movie2) => (rating1, rating2)
def makeMovieRatingPairs((user, ratings)):
(movie1, rating1) = ratings[0]
(movie2, rating2) = ratings[1]
return ((movie1, movie2), (rating1, rating2))
moviePairs = uniqueUserJoinedRatings.map(makeMovieRatingPairs)
moviePairs.takeOrdered(10, key = lambda x: x[0])
#collect all ratings for each movie pair and compute similarity. (movie1, movie2) = > (rating1, rating2), (rating1, rating2) ...
moviePairRatings = moviePairs.groupByKey()
moviePairRatings.takeOrdered(10, key = lambda x: x[0])
#Compute Similarity
def cosineSimilarity(ratingPairs):
numPairs = 0
sum_xx = sum_yy = sum_xy = 0
for ratingX, ratingY in ratingPairs:
sum_xx += ratingX * ratingX
sum_yy += ratingY * ratingY
sum_xy += ratingX * ratingY
numPairs += 1
numerator = sum_xy
denominator = sqrt(sum_xx) * sqrt(sum_yy)
score = 0
if (denominator):
score = (numerator / (float(denominator)))
return (score, numPairs)
moviePairSimilarities = moviePairRatings.mapValues(cosineSimilarity).cache()
moviePairSimilarities.takeOrdered(10, key = lambda x: x[0])
scoreThreshold = 0.97
coOccurenceThreshold = 50
inputMovieID = 1 #Toy Story.
# Filter for movies with this sim that are "good" as defined by our quality thresholds.
filteredResults = moviePairSimilarities.filter(lambda((pair,sim)): \
(pair[0] == inputMovieID or pair[1] == inputMovieID) and sim[0] > scoreThreshold and sim[1] > coOccurenceThreshold)
#Top 10 by quality score.
results = filteredResults.map(lambda((pair,sim)): (sim, pair)).sortByKey(ascending = False).take(10)
print "Top 10 similar movies for " + nameDict[inputMovieID]
for result in results:
(sim, pair) = result
# Display the similarity result that isn't the movie we're looking at
similarMovieID = pair[0]
if (similarMovieID == inputMovieID):
similarMovieID = pair[1]
print nameDict[similarMovieID] + "\tscore: " + str(sim[0]) + "\tstrength: " + str(sim[1])
ratings = data.map(lambda l: l.split()).map(lambda l: Rating(int(l[0]), int(l[1]), float(l[2]))).cache()
ratings.take(3)
ratings.take(1)[0]
nratings = ratings.count()
nUsers = ratings.keys().distinct().count()
nMovies = ratings.values().distinct().count()
print "We have Got %d ratings from %d users on %d movies." % (nratings, nUsers, nMovies)
# Build the recommendation model using Alternating Least Squares
#Train a matrix factorization model given an RDD of ratings given by users to items, in the form of
#(userID, itemID, rating) pairs. We approximate the ratings matrix as the product of two lower-rank matrices
#of a given rank (number of features). To solve for these features, we run a given number of iterations of ALS.
#The level of parallelism is determined automatically based on the number of partitions in ratings.
#Our ratings are in the form of ==> [userid, (movie id, rating)] ==> [ (1, (61, 4.0)), (1, (189, 3.0)) etc. ]
start = time()
seed = 5L
iterations = 10
rank = 8
model = ALS.train(ratings, rank, iterations)
duration = time() - start
print "Model trained in %s seconds" % round(duration,3)
# Let's recommend movies for user id 2
userID = 2
print "\nTop 10 recommendations:"
recommendations = model.recommendProducts(userID, 10)
for recommendation in recommendations:
print nameDict[int(recommendation[1])] + \
" score " + str(recommendation[2])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Hide Runtime warning messages to clean up output.
Step2: Now load additional libraries that will be used below.
Step3: Introducing ipt.att()
Step4: For problems where overlap is poor, the "rlgrz" regularization parameter can be an important user choice. Solving for the study/treatment and auxiliary/control "tilts" involves the solution to a convex problem over a restricted domain. When $\bar{t}_{eff}$ is near the boundary of the convex hull of the scatter of $t(W)$ points across control units, solving for the auxiliary/control tilt can be numerically challenging and numerical underflow issues may arise. The default value of "rlgrz" (which is one) is generally sufficient for quick and reliable computation of the two tilts for most problems. For problems where overlap is poor, the "rlgrz" parameter sometimes needs to be set to a smaller value to ensure that a valid solution is found. This corresponds to less regularization, but also a harder optimization problem for the solver. Validity of the solution can easily be checked by verifying that the balancing conditions are satisfied in practice (they are shown in the output). Note that in situations where overlap is very poor there will be no valid tilt of the control/auxiliary sample; in such cases the AST estimate is undefined. My general advice is not to proceed in such situations, or to use parametric estimation procedures where researcher reliance on functional form assumptions is transparent. However, there are some interesting recent alternative proposals that may be worth considering carefully when overlap is poor. See, for example, the working paper by Athey, Imbens and Wager (2016) on the arXiv.
Step5: Experimental benchmark estimate of the ATT
Step6: Observational estimate of the ATT using PSID controls
Step7: Computing the AST estimates of the ATT using the PSID controls
Step8: Observe that the test of $H_{0}
Step9: The next snippet of code plots histograms of $N_{a} \times \pi_{a}$ for both the observational and experimental estimates. Under perfect covariate balance these normalized weights should be symmetrically centered around 1 (and tightly so in larger samples). When the PSID controls are used the distribution of these weights is very right skewed
Step10: Small Monte Carlo Study
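A one-dimensional analogue of the overlap condition discussed in Step 4: a nonnegative reweighting (tilt) of the control sample can match the treated mean of a scalar $t(W)$ only if that mean lies inside the range, i.e. the 1-D convex hull, of the control values. The values below are made up purely for illustration.

```python
def tilt_feasible_1d(control_vals, target_mean, tol=1e-12):
    """A reweighting of control_vals with nonnegative weights summing to one
    can hit target_mean iff it lies within [min, max] of the data."""
    return min(control_vals) - tol <= target_mean <= max(control_vals) + tol

controls = [0.5, 1.0, 2.0, 3.5]          # hypothetical control-sample t(W) values
print(tilt_feasible_1d(controls, 1.8))   # True: target inside the hull, a tilt exists
print(tilt_feasible_1d(controls, 4.2))   # False: target outside the hull, no valid
                                         # control tilt exists (AST undefined)
```

With a vector of balancing moments the same logic applies to the convex hull of the control scatter in higher dimensions, which is exactly the situation where the solver can struggle and the balancing output should be inspected.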
|
<ASSISTANT_TASK:>
Python Code:
# Append location of ipt module base directory to system path
# NOTE: only required if permanent install not made (see comments above)
import sys
sys.path.append('/Users/bgraham/Dropbox/Sites/software/ipt/')
# Load ipt module
import ipt as ipt
import warnings
def fxn():
warnings.warn("runtime", RuntimeWarning)
with warnings.catch_warnings():
warnings.simplefilter("ignore")
fxn()
# Direct Python to plot all figures inline (i.e., not in a separate window)
%matplotlib inline
# Load additional libraries
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import statsmodels.api as sm
help(ipt.att)
# Read in LaLonde NSW data as DataFrame directly from Rajeev Dehejia's webpage
nsw=pd.read_stata("http://www.nber.org/~rdehejia/data/nsw_dw.dta")
# Read in LaLonde PSID pseudo-controls as DataFrame directly from Rajeev Dehejia's webpage
psid = pd.read_stata("http://www.nber.org/~rdehejia/data/psid_controls.dta")
# Create pseudo-observational dataset of NSW treated subjects and PSID "controls"
frames = [nsw[(nsw.treat != 0)], psid]
nsw_obs = pd.concat(frames)
# Make some adjustments to variable definitions in observational dataframe
nsw_obs['constant'] = 1 # Add constant to observational dataframe
nsw_obs['age'] = nsw_obs['age']/10 # Rescale age to be in decades
nsw_obs['re74'] = nsw_obs['re74']/1000                   # Rescale earnings to be in thousands
nsw_obs['re75'] = nsw_obs['re75']/1000                   # Rescale earnings to be in thousands
nsw_obs['re74_X_re75'] = nsw_obs['re74']*nsw_obs['re75'] # Prior earnings interaction
nsw_obs['re74_sq'] = nsw_obs['re74']**2 # Squares of prior earnings
nsw_obs['re75_sq'] = nsw_obs['re75']**2 # Squares of prior earnings
nsw_obs['ue74'] = 1*(nsw_obs['re74']==0) # Dummy for unemployed in 1974
nsw_obs['ue75'] = 1*(nsw_obs['re75']==0) # Dummy for unemployed in 1975
nsw_obs['ue74_X_ue75'] = nsw_obs['ue74']*nsw_obs['ue75'] # Prior unemployment interaction
nsw_obs.describe()
Y = nsw['re78']
D = nsw['treat']
nsw_reg1=sm.OLS(Y,sm.add_constant(D)).fit(cov_type='HC0')
print('------------------------------------------------------------------------------')
print('- NSW Experimental Benchmark -')
print('------------------------------------------------------------------------------')
print('')
print(nsw_reg1.summary())
# Directory where figures generated below are saved
workdir = '/Users/bgraham/Dropbox/Sites/software/ipt/Notebooks/'
from scipy.spatial import ConvexHull
# Extract pre-treatment earnings for NSW treated units and PSID controls
earnings_treatment = np.asarray(nsw_obs[(nsw_obs.treat == 1)][['re74','re75']])
earnings_control = np.asarray(nsw_obs[(nsw_obs.treat == 0)][['re74','re75']])
# Calculate convex hull of PSID control units' earnings realizations
hull = ConvexHull(earnings_control)
# Create a figure object to plot the convex hull
convexhull_fig = plt.figure(figsize=(6, 6))
# Scatter plot of pre-treatment earnings in 1974 and 1975 for PSID controls
ax = convexhull_fig.add_subplot(1,1,1)
sns.regplot(x="re74", y="re75", data=nsw_obs[(nsw_obs.treat == 0)], \
fit_reg=False, color='#FDB515')
plt.title('Convex Hull of Earnings for PSID Controls', fontsize=12)
plt.xlabel('Earnings in 1974')
plt.ylabel('Earnings in 1975')
# Set plot range and tick marks
plt.ylim([0,175])
plt.xlim([0,175])
plt.xticks([0, 25, 50, 75, 100, 125, 150], fontsize=10)
plt.yticks([0, 25, 50, 75, 100, 125, 150], fontsize=10)
# Plot mean earnings for NSW treated units and PSID controls
plt.plot(np.mean(earnings_control[:,0]), np.mean(earnings_control[:,1]), \
color='#00B0DA', marker='o', markersize=10)
plt.plot(np.mean(earnings_treatment[:,0]), np.mean(earnings_treatment[:,1]), \
color='#EE1F60', marker='s', markersize=10)
# Plot convex hull
for simplex in hull.simplices:
plt.plot(earnings_control[simplex, 0], earnings_control[simplex, 1], 'k-')
# Clean up the plot, add frames, remove gridlines etc.
ax = plt.gca()
ax.patch.set_facecolor('gray') # Color of background
ax.patch.set_alpha(0.15) # Translucency of background
ax.grid(False) # Remove gridlines from plot
# Add frame around plot
for spine in ['left','right','top','bottom']:
ax.spines[spine].set_visible(True)
ax.spines[spine].set_color('k')
ax.spines[spine].set_linewidth(2)
# Add legend to the plot
import matplotlib.lines as mlines
psid_patch = mlines.Line2D([], [], color='#FDB515', marker='o', linestyle='None',\
markersize=5, label='PSID controls')
psid_mean_patch = mlines.Line2D([], [], color='#00B0DA', marker='o', linestyle='None',\
markersize=10, label='PSID mean')
nsw_mean_patch = mlines.Line2D([], [], color='#EE1F60', marker='s', linestyle='None',\
markersize=10, label='NSW mean')
lgd = plt.legend(handles=[psid_patch, psid_mean_patch, nsw_mean_patch], \
loc='upper left', fontsize=12, ncol=2, numpoints = 1)
# Render & save plot
plt.tight_layout()
plt.savefig(workdir+'Fig_LaLonde_Convex_Hull.png')
# Treatment indicator
D = nsw_obs['treat']
# Propensity score model
h_W = nsw_obs[['constant','black','hispanic','age','married','nodegree',\
're74','re75','re74_X_re75','ue74','ue75','ue74_X_ue75']]
# Balancing moments
t_W = h_W
# Outcome
Y = nsw_obs['re78']
# Compute AST estimate of ATT
[gamma_ast, vcov_gamma_ast, pscore_tests, tilts, exitflag] = \
ipt.att(D, Y, h_W, t_W, study_tilt = False, rlgrz = 1/2, \
s_wgt=None, nocons=True, c_id=None, silent=False)
# Compute log of auxiliary/control sample normalized weights to construct histogram below
pi_a = tilts[:,2] # Auxiliary tilt is the last columnn in np.array tilts
b = np.where(1-D)[0] # find indices of control units
Na = len(pi_a[b])
log_wgts = np.log(Na*pi_a[b]) # log of normalized weights to undo right skewness for plotting purposes
# Make some adjustments to variable definitions in experimental dataframe
nsw['constant'] = 1 # Add constant to observational dataframe
nsw['age'] = nsw['age']/10 # Rescale age to be in decades
nsw['re74'] = nsw['re74']/1000                   # Rescale earnings to be in thousands
nsw['re75'] = nsw['re75']/1000                   # Rescale earnings to be in thousands
nsw['re74_X_re75'] = nsw['re74']*nsw['re75'] # Prior earnings interaction
nsw['re74_sq'] = nsw['re74']**2 # Squares of prior earnings
nsw['re75_sq'] = nsw['re75']**2 # Squares of prior earnings
nsw['ue74'] = 1*(nsw['re74']==0) # Dummy for unemployed in 1974
nsw['ue75'] = 1*(nsw['re75']==0) # Dummy for unemployed in 1975
nsw['ue74_X_ue75'] = nsw['ue74']*nsw['ue75'] # Prior unemployment interaction
# Treatment indicator
D = nsw['treat']
# Balancing moments
t_W = nsw[['constant','black','hispanic','age','married','nodegree',\
're74','re75','re74_X_re75','ue74','ue75','ue74_X_ue75']]
# Propensity score variables
h_W = nsw[['constant']]
# Outcome
Y = nsw['re78']
# Compute AST estimate of ATT
[gamma_ast, vcov_gamma_ast, pscore_tests, tilts, exitflag] = \
ipt.att(D, Y, h_W, t_W, study_tilt = True, rlgrz = 1/2, \
s_wgt=None, nocons=True, c_id=None, silent=False)
# Compute log of auxiliary/control sample normalized weights to construct histogram below
pi_a_nsw = tilts[:,2] # Auxiliary tilt is the last columnn in np.array tilts
b = np.where(1-D)[0] # find indices of control units
Na = len(pi_a_nsw[b])
log_wgts_nsw = np.log(Na*pi_a_nsw[b]) # log of normalized weights to undo right skewness for plotting purposes
# Create a figure object to plot histograms of auxiliary weights
auxiliary_tilt_fig = plt.figure(figsize=(8, 3.5))
# ------------------------------------- #
# - EXPERIMENTAL NSW CONTROLS SUBPLOT - #
# ------------------------------------- #
# Histogram of Na*pi_a (with no selection Na*pi_a should be close to one)
ax1 = auxiliary_tilt_fig.add_subplot(1,2,1)
sns.distplot(log_wgts_nsw, kde=False, color='#FDB515', norm_hist = True)
plt.plot((0, 0), (0, 7.55), 'k-') # plot vertical line at x = 1
# Set axis limits and tick marks (log scale with tick labels in levels)
plt.ylim([0,7.5])
plt.xlim(np.log([0.5,2]))
ax1.set_xticks(np.log([0.5, 0.75, 1.0, 1.5, 2.0]))
ax1.set_xticklabels([0.5, 0.75, 1.0, 1.5, 2.0])
# Add title and axis labels to histogram
plt.title('Experimental: NSW Controls', fontsize=12)
plt.xlabel("Normalized weights, " r"$N_{a} \times \pi_{a} $")
plt.ylabel('Density')
# Clean up the plot, add frames etc.
ax1 = plt.gca()
ax1.patch.set_facecolor('gray') # Color of background
ax1.patch.set_alpha(0.15) # Translucency of background
ax1.grid(False) # Remove gridlines from plot
# Add frame around plot
for spine in ['left','right','top','bottom']:
ax1.spines[spine].set_visible(True)
ax1.spines[spine].set_color('k')
ax1.spines[spine].set_linewidth(2)
# ---------------------------------------- #
# - OBSERVATIONAL PSID CONTROLS SUBPLOT - #
# ---------------------------------------- #
# Histogram of Na*pi_a (with no selection Na*pi_a should be close to one)
ax2 = auxiliary_tilt_fig.add_subplot(1,2,2)
sns.distplot(log_wgts, kde=False, color='#FDB515', norm_hist = True)
plt.plot((0, 0), (0, 0.126), 'k-') # plot vertical line at x = 1
# Set axis limits and tick marks (log scale with tick labels in levels)
plt.ylim([0,0.125])
plt.xlim(np.log([1e-11,1000]))
ax2.set_xticks(np.log([1e-11,1e-9,1e-7,1e-5,1e-3,1e-1,1,10,100]))
ax2.set_xticklabels([1e-11,1e-9,1e-7,1e-5,1e-3,1e-1,1,10,100])
# Add title and axis labels to histogram
plt.title('Observational: PSID Controls', fontsize=12)
plt.xlabel("Normalized weights, " r"$N_{a} \times \pi_{a} $")
plt.ylabel('Density')
# Clean up the plot, add frames etc.
ax2 = plt.gca()
ax2.patch.set_facecolor('gray') # Color of background
ax2.patch.set_alpha(0.15) # Translucency of background
ax2.grid(False) # Remove gridlines from plot
# Add frame around plot
for spine in ['left','right','top','bottom']:
ax2.spines[spine].set_visible(True)
ax2.spines[spine].set_color('k')
ax2.spines[spine].set_linewidth(2)
# Render & save plot
plt.tight_layout()
plt.savefig(workdir+'Fig_LaLonde_Weights_Hist.png')
B = 1000
N = 1000
P = 50
# Initialize matrix to store Monte Carlo results
Simulation_Results = np.zeros((B,5))
# generate parameter vectors for covariate and outcome DGPs
delta_dense = []
beta_dense = []
beta_harm = []
X_names = []
for p in range(0,P):
delta_dense.append(4/np.sqrt(N))
beta_dense.append(1/np.sqrt(p+1))
X_names.append(str(p+1))
# rescale beta vector to have length 10
beta_dense = 10*(beta_dense/np.linalg.norm(beta_dense))
beta_dense = np.reshape(beta_dense,(-1,1)) # Turn into two-dimensional array, P x 1
# ------------------------------------ #
# PERFORM B MONTE CARLO REPLICATIONS - #
# ------------------------------------ #
for b in range(0,B):
# generate treatment indicator
W = np.random.binomial(1, 0.5, (N,1))
# generate covariate vector (with 2-cluster structure)
C0 = np.random.binomial(1, 0.8, (N,1))
C1 = np.random.binomial(1, 0.2, (N,1))
mu_X = (1 - W) * (C0 * np.zeros((1,P)) + (1 - C0) * delta_dense) \
+ W * (C1 * np.zeros((1,P)) + (1 - C1) * delta_dense)
X = mu_X + np.random.multivariate_normal(np.zeros(P),np.identity(P),N)
# generate outcome vector
Y = np.dot(X,beta_dense) + W*10 + np.random.normal(0, 1, (N,1))
# add constant to X matrix for estimation purposes
X = np.concatenate((np.ones((N,1)), X), axis=1)
# Convert W, Y and X to pandas objects
W = pd.Series(np.ravel(W), name="W")
Y = pd.Series(np.ravel(Y), name="Y")
X = pd.DataFrame(X)
X.rename(columns = lambda x: str(x), inplace=True) # Ensures column names are strings
# compute AST estimate of the ATT
try:
[gamma_ast, vcov_gamma_ast, pscore_tests, tilts, exitflag] = \
ipt.att(W, Y, X, X, study_tilt=False, rlgrz=1, s_wgt=None, nocons=True, \
c_id=None, silent=True)
Simulation_Results[b,0] = (exitflag==1) # Record whether AST computation was successful
Simulation_Results[b,1] = gamma_ast # Record ATT estimate
Simulation_Results[b,2] = np.sqrt(vcov_gamma_ast) # Record standard error estimates
# Check coverage of 95 percent Wald asymptotic confidence interval
Simulation_Results[b,3] = (gamma_ast - 1.96*Simulation_Results[b,2] <= 10) & \
(gamma_ast + 1.96*Simulation_Results[b,2] >= 10)
# Check size/power properties of auxiliary tilt specification test
Simulation_Results[b,4] = (pscore_tests[1][2] <= 0.05)
except:
Simulation_Results[b,0] = False
# ------------------------------------------------ #
# - SUMMARIZE RESULTS OF MONTE CARLO SIMULATIONS - #
# ------------------------------------------------ #
print("")
print("Number of times AST estimate of the ATT was successfully computed")
print(np.sum(Simulation_Results[:,0], axis = 0))
# find simulation replicates where computation was successful
b = np.where(Simulation_Results[:,0])[0]
SimRes = Simulation_Results[b,:]
# create Pandas dataframe with AST Monte Carlo results
SR=pd.DataFrame({'ATT' : SimRes[:,1], 'StdErr' : SimRes[:,2],\
'Coverage' : SimRes[:,3], 'Size of p-test' : SimRes[:,4],})
print("")
print("Monte Carlo summary statistics for AST")
print(SR.describe())
Q = SR[['ATT']].quantile(q=[0.05,0.95])
print("")
print('Robust estimate of standard deviation of ATT')
print((Q['ATT'][0.95]-Q['ATT'][0.05])/(2*1.645))
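The interquantile trick used above — $(q_{0.95}-q_{0.05})/(2 \times 1.645)$ — recovers the standard deviation of a normally distributed sample, since $\pm 1.645$ are the 5th/95th percentiles of the standard normal. A quick sanity check on synthetic draws (NumPy only):

```python
import numpy as np

rng = np.random.RandomState(0)
draws = rng.normal(loc=0.0, scale=1.0, size=100000)

# 5th and 95th percentiles of the sample
q05, q95 = np.percentile(draws, [5, 95])

# Robust standard deviation estimate; should be close to the true sigma of 1
robust_sd = (q95 - q05) / (2 * 1.645)
print(robust_sd)
```

The advantage over the raw standard deviation is insensitivity to a few extreme Monte Carlo replicates.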
# This imports an attractive notebook style from Github
from IPython.display import HTML
from urllib.request import urlopen
html = urlopen('http://bit.ly/1Bf5Hft')
HTML(html.read().decode('utf-8'))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: With this Github object, you can retreive all kinds of Github objects, which you can then futher explore.
Step2: Users
Step3: Repositories
Step4: There are lots of properties or methods of objects that return other objects (like repos, users, organizations), and you can quickly access properties or methods of these objects with a dot.
Step5: Commits
Step6: Working with timedeltas
Step7: Issues
Step8: Organizations
Step9: We can go through all the repositories in the organization with the get_repos() function. It returns a list of repository objects, which have their own properties and methods.
Step10: Getting location data for an organization's contributors
Step11: We can plot points on a map using ipyleaflets and ipywidgets. We first set up a map object, which is created with various parameters. Then we create Marker objects, which are then appended to the map. We then display the map inline in this notebook.
Step12: Rate limiting
Step13: This value is in seconds since the UTC epoch (Jan 1st, 1970), so we have to convert it. Here is a quick function that takes a GitHub object, queries the API to find our next reset time, and converts it to minutes.
Step15: Querying GitHub for location data
Step16: With this, we can easily query an organization. The U.S. Digital Service (org name
Step17: We are going to explore this dataset, but not plot names or usernames. I'm a bit hesitant to publish location data with unique identifiers, even if people put that information in their profiles. This code iterates through the dictionary and puts location data into a list.
Step19: Now we can map this data using another function I have written.
Step20: Now show the map inline! With the leaflet widget, you can zoom in and out directly in the notebook. And we can also export it to an html widget by going to the Widgets menu in Jupyter notebooks, clicking "Embed widgets," and copy/pasting this to an html file. It will not show up in rendered Jupyter notebooks on Github, but may show up in nbviewer.
|
<ASSISTANT_TASK:>
Python Code:
!pip install pygithub
!pip install geopy
!pip install ipywidgets
from github import Github
#this is my private login credentials, stored in ghlogin.py
import ghlogin
g = Github(login_or_token=ghlogin.gh_user, password=ghlogin.gh_passwd)
def vdir(obj):
return [x for x in dir(obj) if not x.startswith('_')]
vdir(g)
user = g.get_user("staeiou")
vdir(user)
print(user.name)
print(user.created_at)
print(user.location)
repo = g.get_repo("jupyter/notebook")
vdir(repo)
print(repo.name)
print(repo.description)
print(repo.organization)
print(repo.organization.name)
print(repo.organization.location)
print(repo.language)
print(repo.get_contributors())
print(repo.get_commits())
commits = repo.get_commits()
commit = commits[0]
print("Author name: ", commit.author.name)
print("Committer name: ", commit.committer.name)
print("Lines added: ", commit.stats.additions)
print("Lines deleted: ", commit.stats.deletions)
print("Commit message:\n---------\n", commit.commit.message)
import datetime
one_month_ago = datetime.datetime.now() - datetime.timedelta(days=30)
net_lines_added = 0
num_commits = 0
for commit in repo.get_commits(since = one_month_ago):
net_lines_added += commit.stats.additions
net_lines_added -= commit.stats.deletions
num_commits += 1
print(net_lines_added, num_commits)
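The `since=` filter above works because `timedelta` objects support arithmetic and comparison directly; the same mechanics drive the staleness check in the issues loop later on. A minimal stdlib-only illustration:

```python
import datetime

one_year = datetime.timedelta(days=365)

# Subtracting two datetimes yields a timedelta
gap = datetime.datetime(2017, 6, 1) - datetime.datetime(2016, 1, 1)
print(gap.days)        # 517
print(gap > one_year)  # True: timedeltas compare directly
```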
issues = repo.get_issues()
vdir(issues[0])
for issue in issues:
last_updated_delta = datetime.datetime.now() - issue.updated_at
if last_updated_delta > datetime.timedelta(days=365):
print(issue.title, last_updated_delta.days)
org = g.get_organization("jupyter")
print(org.name)
print(org.created_at)
print(org.html_url)
repos = {}
for repo in org.get_repos():
repos[repo.name] = repo.forks_count
repos
from geopy.geocoders import Nominatim
geolocator = Nominatim()
uk_loc = geolocator.geocode("UK")
print(uk_loc.longitude,uk_loc.latitude)
us_loc = geolocator.geocode("USA")
print(us_loc.longitude,us_loc.latitude)
bids_loc = geolocator.geocode("Doe Library, Berkeley CA, 94720 USA")
print(bids_loc.longitude,bids_loc.latitude)
import ipywidgets
from ipyleaflet import (
Map,
Marker,
TileLayer, ImageOverlay,
Polyline, Polygon, Rectangle, Circle, CircleMarker,
GeoJSON,
DrawControl
)
center = [30.0, 5.0]
zoom = 2
m = Map(default_tiles=TileLayer(opacity=1.0), center=center, zoom=zoom, layout=ipywidgets.Layout(height="600px"))
uk_mark = Marker(location=[uk_loc.latitude,uk_loc.longitude])
uk_mark.visible
m += uk_mark
us_mark = Marker(location=[us_loc.latitude,us_loc.longitude])
us_mark.visible
m += us_mark
bids_mark = Marker(location=[bids_loc.latitude,bids_loc.longitude])
bids_mark.visible
m += bids_mark
g.rate_limiting
reset_time = g.rate_limiting_resettime
reset_time
import datetime
def minutes_to_reset(github):
reset_time = github.rate_limiting_resettime
timedelta_to_reset = datetime.datetime.fromtimestamp(reset_time) - datetime.datetime.now()
return timedelta_to_reset.seconds / 60
minutes_to_reset(g)
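The same timestamp conversion can drive a simple back-off helper that sleeps until the rate limit resets. A sketch (`seconds_until` is a hypothetical helper of mine, not part of PyGithub; it is written as a pure function with an injectable clock so it can run without API access):

```python
import datetime

def seconds_until(reset_epoch, now=None):
    """Seconds to wait until the given epoch timestamp (never negative)."""
    now = now or datetime.datetime.now()
    delta = datetime.datetime.fromtimestamp(reset_epoch) - now
    return max(delta.total_seconds(), 0.0)

# With a fixed "now", the wait time is deterministic:
now = datetime.datetime(2020, 1, 1, 12, 0, 0)
reset = (now + datetime.timedelta(minutes=30)).timestamp()
print(seconds_until(reset, now=now))  # 1800.0
```

In practice you would pass `g.rate_limiting_resettime` as `reset_epoch` and call `time.sleep()` on the result before retrying.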
def get_org_contributor_locations(github, org_name):
For a GitHub organization, get location for contributors to any repo in the org.
Returns a dictionary of {username URLS : geopy Locations}, then a dictionary of various metadata.
# Set up empty dictionaries and metadata variables
contributor_locs = {}
locations = []
none_count = 0
error_count = 0
user_loc_count = 0
duplicate_count = 0
geolocator = Nominatim()
# For each repo in the organization
for repo in github.get_organization(org_name).get_repos():
#print(repo.name)
# For each contributor in the repo
for contributor in repo.get_contributors():
print('.', end="")
# If the contributor_locs dictionary doesn't have an entry for this user
if contributor_locs.get(contributor.url) is None:
# Try-Except block to handle API errors
try:
# If the contributor has no location in profile
if(contributor.location is None):
#print("No Location")
none_count += 1
else:
# Get coordinates for location string from Nominatim API
location=geolocator.geocode(contributor.location)
#print(contributor.location, " | ", location)
# Add a new entry to the dictionary. Value is user's URL, key is geocoded location object
contributor_locs[contributor.url] = location
user_loc_count += 1
except Exception:
print('!', end="")
error_count += 1
else:
duplicate_count += 1
return contributor_locs,{'no_loc_count':none_count, 'user_loc_count':user_loc_count,
'duplicate_count':duplicate_count, 'error_count':error_count}
usds_locs, usds_metadata = get_org_contributor_locations(g,'usds')
usds_metadata
usds_locs_nousernames = []
for contributor, location in usds_locs.items():
usds_locs_nousernames.append(location)
usds_locs_nousernames
def map_location_dict(map_obj,org_location_dict):
Maps the locations in a dictionary of {ids : geoPy Locations}.
Must be passed a map object, then the dictionary. Returns the map object.
for username, location in org_location_dict.items():
if(location is not None):
mark = Marker(location=[location.latitude,location.longitude])
mark.visible
map_obj += mark
return map_obj
center = [30.0,5.0]
zoom = 2
usds_map = Map(default_tiles=TileLayer(opacity=1.0), center=center, zoom=zoom, layout=ipywidgets.Layout(height="600px"))
usds_map = map_location_dict(usds_map, usds_locs)
usds_map
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.
Step2: Preprocessing
Step3: And here I'm creating dictionaries to covert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words.
Step4: Subsampling
Step5: Making batches
Step6: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function by the way, helps save memory.
Step7: Building the graph
Step8: Embedding
Step9: Negative sampling
Step10: Validation
Step11: Restore the trained network if you need to
Step12: Visualizing the word vectors
|
<ASSISTANT_TASK:>
Python Code:
import time
import numpy as np
import tensorflow as tf
import utils
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import zipfile
dataset_folder_path = 'data'
dataset_filename = 'text8.zip'
dataset_name = 'Text8 Dataset'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(dataset_filename):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar:
urlretrieve(
'http://mattmahoney.net/dc/text8.zip',
dataset_filename,
pbar.hook)
if not isdir(dataset_folder_path):
with zipfile.ZipFile(dataset_filename) as zip_ref:
zip_ref.extractall(dataset_folder_path)
with open('data/text8') as f:
text = f.read()
words = utils.preprocess(text)
print(words[:30])
print("Total words: {}".format(len(words)))
print("Unique words: {}".format(len(set(words))))
vocab_to_int, int_to_vocab = utils.create_lookup_tables(words)
int_words = [vocab_to_int[word] for word in words]
from collections import Counter
import random
threshold = 1e-5
word_counts = Counter(int_words)
total_count = len(int_words)
freqs = {word: count/total_count for word, count in word_counts.items()}
p_drop = {word: 1 - np.sqrt(threshold/freqs[word]) for word in word_counts}
train_words = [word for word in int_words if random.random() < (1 - p_drop[word])]
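The drop probability $1 - \sqrt{t/f(w)}$ used above can be checked on toy counts (toy frequencies of my own, not the text8 data — note that for sufficiently rare words the expression goes negative, meaning the word is never dropped):

```python
import numpy as np

threshold = 1e-5
toy_counts = {'the': 500000, 'rare': 2}
total = sum(toy_counts.values())

# Same formula as in the cell above, on the toy frequencies
p_drop = {w: 1 - np.sqrt(threshold / (c / total)) for w, c in toy_counts.items()}
print(p_drop['the'])   # very frequent word: dropped almost always
print(p_drop['rare'])  # very rare word: negative, i.e. always kept
```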
def get_target(words, idx, window_size=5):
''' Get a list of words in a window around an index. '''
R = np.random.randint(1, window_size+1)
start = idx - R if (idx - R) > 0 else 0
stop = idx + R
target_words = set(words[start:idx] + words[idx+1:stop+1])
return list(target_words)
def get_batches(words, batch_size, window_size=5):
''' Create a generator of word batches as a tuple (inputs, targets) '''
n_batches = len(words)//batch_size
# only full batches
words = words[:n_batches*batch_size]
for idx in range(0, len(words), batch_size):
x, y = [], []
batch = words[idx:idx+batch_size]
for ii in range(len(batch)):
batch_x = batch[ii]
batch_y = get_target(batch, ii, window_size)
y.extend(batch_y)
x.extend([batch_x]*len(batch_y))
yield x, y
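To see what the windowing produces, here is the core of get_target with the random radius R pinned to a fixed value (an assumption I make purely so the output is deterministic; the real function draws R uniformly from 1 to window_size):

```python
def fixed_window_target(words, idx, R=2):
    """Deterministic variant of get_target with a fixed radius R."""
    start = max(idx - R, 0)
    return words[start:idx] + words[idx + 1:idx + R + 1]

words = list(range(10))
print(fixed_window_target(words, 0))  # [1, 2] -- window clipped at the start
print(fixed_window_target(words, 5))  # [3, 4, 6, 7] -- two words on each side
```

Each (input, target) pair that get_batches yields is one input word paired with one word from such a window.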
train_graph = tf.Graph()
with train_graph.as_default():
inputs = tf.placeholder(tf.int32, [None], name='inputs')
labels = tf.placeholder(tf.int32, [None, None], name='labels')
n_vocab = len(int_to_vocab)
n_embedding = 200 # Number of embedding features
with train_graph.as_default():
embedding = tf.Variable(tf.random_uniform((n_vocab, n_embedding), -1, 1))
embed = tf.nn.embedding_lookup(embedding, inputs)
# Number of negative labels to sample
n_sampled = 100
with train_graph.as_default():
softmax_w = tf.Variable(tf.truncated_normal((n_vocab, n_embedding), stddev=0.1))
softmax_b = tf.Variable(tf.zeros(n_vocab))
# Calculate the loss using negative sampling
loss = tf.nn.sampled_softmax_loss(softmax_w, softmax_b,
labels, embed,
n_sampled, n_vocab)
cost = tf.reduce_mean(loss)
optimizer = tf.train.AdamOptimizer().minimize(cost)
with train_graph.as_default():
## From Thushan Ganegedara's implementation
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100
# pick 8 samples from (0,100) and (1000,1100) each ranges. lower id implies more frequent
valid_examples = np.array(random.sample(range(valid_window), valid_size//2))
valid_examples = np.append(valid_examples,
random.sample(range(1000,1000+valid_window), valid_size//2))
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
# We use the cosine distance:
norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True))
normalized_embedding = embedding / norm
valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset)
similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding))
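The cosine-similarity computation above is just row-normalization followed by dot products; the same calculation in plain NumPy, on a toy embedding matrix, makes the mechanics easy to verify:

```python
import numpy as np

# Three toy 2-D "word vectors": rows 0 and 2 are parallel, row 1 is orthogonal
embed = np.array([[1.0, 0.0], [0.0, 1.0], [3.0, 0.0]])

# Normalize each row to unit length, then take all pairwise dot products
normed = embed / np.linalg.norm(embed, axis=1, keepdims=True)
sim = normed @ normed.T
print(sim[0, 2])  # parallel vectors -> 1.0
print(sim[0, 1])  # orthogonal vectors -> 0.0
```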
# If the checkpoints directory doesn't exist:
!mkdir checkpoints
epochs = 10
batch_size = 1000
window_size = 10
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
iteration = 1
loss = 0
sess.run(tf.global_variables_initializer())
for e in range(1, epochs+1):
batches = get_batches(train_words, batch_size, window_size)
start = time.time()
for x, y in batches:
feed = {inputs: x,
labels: np.array(y)[:, None]}
train_loss, _ = sess.run([cost, optimizer], feed_dict=feed)
loss += train_loss
if iteration % 100 == 0:
end = time.time()
print("Epoch {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Avg. Training loss: {:.4f}".format(loss/100),
"{:.4f} sec/batch".format((end-start)/100))
loss = 0
start = time.time()
if iteration % 1000 == 0:
# note that this is expensive (~20% slowdown if computed every 500 steps)
sim = similarity.eval()
for i in range(valid_size):
valid_word = int_to_vocab[valid_examples[i]]
top_k = 8 # number of nearest neighbors
nearest = (-sim[i, :]).argsort()[1:top_k+1]
log = 'Nearest to %s:' % valid_word
for k in range(top_k):
close_word = int_to_vocab[nearest[k]]
log = '%s %s,' % (log, close_word)
print(log)
iteration += 1
save_path = saver.save(sess, "checkpoints/text8.ckpt")
embed_mat = sess.run(normalized_embedding)
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
embed_mat = sess.run(embedding)
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
viz_words = 500
tsne = TSNE()
embed_tsne = tsne.fit_transform(embed_mat[:viz_words, :])
fig, ax = plt.subplots(figsize=(14, 14))
for idx in range(viz_words):
plt.scatter(*embed_tsne[idx, :], color='steelblue')
plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Introduction to TensorFlow
Step2: This cell runs a magic that tells Colab to use TensorFlow 2 instead of TensorFlow 1 by default. This magic needs to run before you actually load TensorFlow.
Step3: Tensors
Step4: Notice that the shape of the tensor is (), which indicates that the tensor is a scalar.
Step5: Notice now that our shape has changed to (3,).
Step6: Cubes
Step7: Higher-Order Tensors
Step8: But with variable tensors you can change the values using the assign() method.
Step9: Tensor Operations
Step10: We create two scalar constant tensors and add them together. The result is another tensor.
Step11: TensorFlow also has support for many more operations than can be represented by standard Python operators. For instance, there are a number of linear algebra operations such as the matrix transpositition
Step12: You can also calculate matrix multiplication using tf.tensordot()
Step13: This is necessary because by default * performs elementwise multiplication
Step14: Interestingly, Tensor objects can also by passed to NumPy functions in some cases
Step15: Exercise 2
Step16: Extracting Values
Step17: Exercise 3
|
<ASSISTANT_TASK:>
Python Code:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
%tensorflow_version 2.x
import tensorflow as tf
print(tf.__version__)
scalar_tensor = tf.constant("Hi Mom!")
print(scalar_tensor)
vector_tensor = tf.constant([1, 2, 3])
print(vector_tensor)
matrix_tensor = tf.constant([[1.2, 3.4, 5.6], [7.8, 9.0, 1.2]])
print(matrix_tensor)
# Your Code Goes Here
vector_tensor = tf.Variable([1, 2, 3])
print(vector_tensor)
vector_tensor = tf.Variable([1, 2, 3])
vector_tensor.assign([5, 6, 7])
print(vector_tensor)
tensor = tf.constant(3) + tf.constant(4)
print(tensor)
print(tf.constant(3) + 4)
print(3 + tf.constant(4))
matrix = tf.constant([
[11, 12, 13],
[21, 22, 23],
[31, 32, 33],
])
print(tf.transpose(matrix))
matrix_one = tf.constant([
[2, 2, 2],
[2, 2, 2],
[2, 2, 2],
])
matrix_two = tf.constant([
[2, 2, 2],
[2, 2, 2],
[2, 2, 2],
])
tf.tensordot(matrix_one, matrix_two, axes=1)
matrix_one = tf.constant([
[2, 2, 2],
[2, 2, 2],
[2, 2, 2],
])
matrix_two = tf.constant([
[2, 2, 2],
[2, 2, 2],
[2, 2, 2],
])
matrix_one * matrix_two
import tensorflow as tf
import numpy as np
matrix_one = tf.constant([
[2, 2, 2],
[2, 2, 2],
[2, 2, 2],
])
matrix_two = tf.constant([
[2, 2, 2],
[2, 2, 2],
[2, 2, 2],
])
np.dot(matrix_one, matrix_two)
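The elementwise-versus-matrix-product distinction works the same way in plain NumPy, which makes it easy to check without TensorFlow (`*` is elementwise, `@` is the matrix product):

```python
import numpy as np

a = np.full((2, 2), 2)
print(a * a)  # elementwise: every entry is 2*2 = 4
print(a @ a)  # matrix product: every entry is 2*2 + 2*2 = 8
```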
# Your code goes here
tensor = tf.constant([[1, 2], [3, 4]])
tensor.numpy()
tensor = tf.constant([[1, 2], [3, 4]])
# Extract the values inside the tensor as a Python list stored
# in a variable named tensor_values.
# Print the type of tensor_values
# Print tensor_values
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Getting a feel for it
Step2: How likely is it that we get exactly X% of $\tau_1$?
Step3: How likely is it that we get $t_1$ samples, with $t_1$ on some interval? Say, $s\tau_1 \pm$ 1%?
Step4: Standard deviation of the size of the sample
Step5: Using the standard deviation to estimate the most likely interval for $t_1$ to fall in
Step6: Now let's look at $t_1$ and $t_2$ together
Step7: Finally, the joint PMF
Step8: That was interesting, but it wasn't exactly the problem that we wanted to answer
Step9: I've heard people say the words "log-likelihood"
Step10: And finally, there's a way to do the above analytically (rather than numerically)
Step11: 3. Estimating the value of $\tau$
Step12: Finally
|
<ASSISTANT_TASK:>
Python Code:
# math
import numpy as np
from scipy.misc import comb
from scipy.stats import binom, norm
from itertools import accumulate
from math import sqrt, ceil, log
# python tools
from collections import Counter
from itertools import zip_longest
from multiprocessing import Process, Pool, Queue
from random import random
import os
# plotting
import matplotlib.pyplot as plt
%matplotlib inline
# Set up the problem. We have to fix tau_1, tau_2, and s
tau1 = 100
tau2 = 90
s = .1
# We can use the binomial PMF from scipy, which is better than coding it up myself
# because all of those factorials make for hard computations
# we're gonna use these a lot
binomial_tau1 = binom(tau1,s)
binomial_tau2 = binom(tau2,s)
# Let's show how likely it is to get exactly t_1 of T_1. Just for practice
# do the probability calculations
probabilities_t1 = []
for i in range(0,tau1):
#probabilities_t1.append((i,comb(tau1,i)*s**(i)*(1-s)**(tau1-i)))
probabilities_t1.append((i,binomial_tau1.pmf(i)))
probabilities_t2 = []
for i in range(0,tau2):
#probabilities_t2.append((i,comb(tau2,i)*s**(i)*(1-s)**(tau2-i)))
probabilities_t2.append((i,binomial_tau2.pmf(i)))
plt.plot([x[0] for x in probabilities_t1],[x[1] for x in probabilities_t1])
plt.title("Probability Mass Function for the Binomial distribution")
plt.xlabel("Number of successes in tau_1 trials (t_1)")
plt.ylabel("Probability of exactly t_1 successes")
print("The probability of getting exactly s*tau_1 samples is: {:f}".format(binomial_tau1.pmf(s*tau1)))
plus_or_minus_percent = .01
lower_bound_t1 = int((1 - plus_or_minus_percent)*s*tau1)
upper_bound_t1 = int((1 + plus_or_minus_percent)*s*tau1)
probability_t1_interval = 0
for i in range(lower_bound_t1,upper_bound_t1):
probability_t1_interval += binomial_tau1.pmf(i)
print("P({} < t_1 < {}) = {:f}".format(lower_bound_t1,upper_bound_t1,probability_t1_interval))
# standard deviation varies as the size of tau1 varies
std_dev_sizes = []
max_tau = 10**4
for i in range(0,max_tau,5):
std_dev_sizes.append((i,sqrt(i*s*(1-s))))
plt.plot([x[0] for x in std_dev_sizes],[x[1] for x in std_dev_sizes])
_ = plt.xlim([0,max_tau])
_ = plt.ylim([0,max([x[1] for x in std_dev_sizes])])
_ = plt.title("Standard deviation of the sample size, varying tau")
# Let's show how likely it is to get exactly t_1 of T_1. Just for practice
# we can grab the normal approximation from scipy.stats
normal_distribution_tau1 = norm(tau1*s,sqrt(tau1*s*(1-s)))
normal_prob_t1 = []
probabilities_t1 = []
for i in range(0,tau1):
probabilities_t1.append((i,binomial_tau1.pmf(i)))
normal_prob_t1.append((i,normal_distribution_tau1.pdf(i)))
plt.plot([x[0] for x in probabilities_t1],[x[1] for x in probabilities_t1],linewidth = 5)
plt.plot([x[0] for x in normal_prob_t1],[x[1] for x in normal_prob_t1],"--",linewidth = 3)
plt.legend(["binomial","normal"])
plt.ylabel("Probability mass at x")
plt.xlabel("t_1")
# the total probability of falling within 3 standard deviations of the mean (s*tau_1):
plus_or_minus_stdv = .01
lower_bound_t1 = max(0,int(s*tau1 - 3 * sqrt(tau1*s*(1-s))))
upper_bound_t1 = int(s*tau1 + 3 * sqrt(tau1*s*(1-s)))
probability_t1_interval = 0
for i in range(lower_bound_t1,upper_bound_t1):
probability_t1_interval += binomial_tau1.pmf(i)
print("P({} < t_1 < {}) = {:f} (very nearly 99.7%)".format(lower_bound_t1,upper_bound_t1,probability_t1_interval))
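As a standard-library cross-check of the interval above (a standalone sketch; `n` and `p` mirror the tau1 and s values fixed earlier in this notebook), the binomial mass within three standard deviations of the mean comes out close to the normal rule's 99.7%:

```python
from math import comb, sqrt

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 100, 0.1
mu, sigma = n * p, sqrt(n * p * (1 - p))
lo, hi = max(0, int(mu - 3 * sigma)), int(mu + 3 * sigma)
mass = sum(binom_pmf(k, n, p) for k in range(lo, hi + 1))
print(round(mass, 4))
```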
# t1 and t2 most likely fall within some range
# t1 and t2 are selected form binomial distributions, they are symmetric
# and their standard deviations are easily calculated
fig, ax = plt.subplots(1, 1)
ax.plot([x[0] for x in probabilities_t1],[x[1] for x in probabilities_t1])
ax.plot([x[0] for x in probabilities_t2],[x[1] for x in probabilities_t2])
ax.legend(["t_1","t_2"])
ax.set_ylabel("Probability mass at x")
# Now let's code this up so we can simulate it for various values of s, tau_1, and tau_2
probabilities_t2_and_t1 = np.zeros((tau1,tau2))
# i can seriously speed up the amount of time this takes by not calculating the very, very low probabilities
# outside of 3 stdev from the mean
bool_truncate_window = True
if bool_truncate_window:
stdv1 = ceil(sqrt(tau1*s*(1-s)))
stdv2 = ceil(sqrt(tau2*s*(1-s)))
num_stdv = 3
window_t1 = [max(ceil(s*tau1)-num_stdv*stdv1,0), ceil(s*tau1)+num_stdv*stdv1]
window_t2 = [max(ceil(s*tau2)-num_stdv*stdv2,0), ceil(s*tau2)+num_stdv*stdv2]
else:
window_t1 = [0,tau1]
window_t2 = [0,tau2]
for j in range(window_t1[0],window_t1[1]):
for i in range(window_t2[0],window_t2[1]):
probabilities_t2_and_t1[j,i] = binomial_tau1.pmf(j)*binomial_tau2.pmf(i)
figure, ax = plt.subplots(1,1,figsize = (6,12))
ax.imshow(probabilities_t2_and_t1, cmap='Blues', interpolation='nearest', aspect = 'equal', origin = 'lower')
ax.set_xlim([0,tau2])
ax.set_ylim([0,tau1])
if bool_truncate_window:
ax.fill_between(window_t2,0,tau1,alpha = .2,color = 'navajowhite')
ax.fill_between([0,tau2],window_t1[0],window_t1[1],alpha = .2,color = 'lightsteelblue')
ax.plot([0,tau2],[0,tau2],'--', color = 'gray')
# The total probabilities should sum to 1
# because I truncated, they sum to something slightly less than one--you can see how small the actual
# values are in those other areas
print("Total probability represented: {:f}".format(probabilities_t2_and_t1.sum()))
# Now, sum only the upper triangle--the region where t_2 >= t_1
print("The probability that t_2 >= t_1 given tau1 = {} and tau2 = {}: {}".format(
    tau1, tau2, np.triu(probabilities_t2_and_t1).sum()))
def bernoulli_experiment(s):
if random() < s:
return True
else:
return False
def bernoulli_pmf(x, s):
if x:
return s
else:
return 1-s
def eval_bernoulli_likelihood_for_all_x_i(x_i, s):
likelihood = 1
for x in x_i:
likelihood = likelihood * bernoulli_pmf(x,s)
return likelihood
# set up the experiment
# number of trials
n = 100
# probability, s. let's assign it randomly and check later
s_sample = random()
s_range = list(np.linspace(0,1,100))
# results, x_i of n experiments
x_i = [bernoulli_experiment(s_sample) for x in range(0,n)]
# get the likelihood results
likelihood = [eval_bernoulli_likelihood_for_all_x_i(x_i, s_) for s_ in s_range]
estimated_s = max(list(zip(s_range,likelihood)), key = lambda x:x[1])
# plot
plt.plot(s_range,likelihood)
plt.title("Max Likelihood as a function of s")
plt.legend(["actual value of s: s = {:f}".format(s_sample) + "\n" + "estimated value of s: {:f}".format(estimated_s[0])])
plt.xlabel("s")
plt.ylabel("likelihood")
def eval_bernoulli_loglikelihood_for_all_x_i(x_i, p):
likelihood = 0
for x in x_i:
likelihood += np.log(bernoulli_pmf(x,p))
return likelihood
# get the likelihood results
loglikelihood = [eval_bernoulli_loglikelihood_for_all_x_i(x_i, s_) for s_ in s_range[1:-1]]
log_estimated_s = max(list(zip(s_range[1:-1], loglikelihood)), key = lambda x:x[1])
# plot
# notice that the max is in the same place
plt.plot(s_range[1:-1],loglikelihood)
plt.title("Max Log Likelihood as a function of s")
plt.legend(["actual value of s: s = {:f}".format(s_sample) + "\n" + "estimated value of s: {:f}".format(
log_estimated_s[0])])
plt.xlabel("s")
plt.ylabel("likelihood")
# let's see what we would estimate s to be analytically
print("Estimate of s is: {} \n (The correct value is {})".format(sum(x_i)/n,s_sample))
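A quick standalone check (hypothetical coin-flip outcomes, not the notebook's `x_i`) that a grid search over the log-likelihood lands on the analytic answer, the sample mean:

```python
import math

x = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]  # hypothetical Bernoulli outcomes

def loglik(p):
    return sum(math.log(p if xi else 1 - p) for xi in x)

grid = [i / 1000 for i in range(1, 1000)]
best = max(grid, key=loglik)
print(best, sum(x) / len(x))  # both are 0.7
```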
# set up the experiment
# number of trials
n = 1000
# probability, s. I'll use the same "s" that we're using for the problem setup
print("s = {}".format(s))
# now, fix a "real" tau
# tau could really be anywhere from 0 to infinity, but I'll fix it between 0 and 100
tau_sample = ceil(random()*100)
# possible tau values
possible_tau_range = range(1,101)
print("The real tau is {}".format(tau_sample))
# the result of n experiments
binomial_tau_sample = binom(tau_sample, s)
t_i = [binomial_tau_sample.rvs() for x in range(0,n)]
#print("The results of the experiments are: {}".format(t_i))
# Now, evaluate the binomial distribution likelihood for this collection of results
def binomial_loglikelihood(t_i, s, tau):
binomial_distribution = binom(tau,s)
loglikelihood = 0
for t in t_i:
loglikelihood += np.log(binomial_distribution.pmf(t))
return loglikelihood
# there's no sense in testing tau that are lower than the max of t_i
likelihood_tau = [(x, binomial_loglikelihood(t_i, s, x)) for x in possible_tau_range if x > max(t_i)]
# estimate tau
estimated_tau = max(likelihood_tau, key = lambda x:x[1])
plt.plot([x[0] for x in likelihood_tau], [x[1] for x in likelihood_tau])
plt.title("Max Likelihood as a function of tau")
plt.legend(["actual value of tau: tau = {:f}".format(tau_sample)
+ "\n" + "estimated value of tau: {:f}".format(estimated_tau[0])])
plt.xlabel("tau")
plt.ylabel("likelihood")
%%time
# Now let's code this up so we can simulate it for various values of s, tau_1, and tau_2
# fix t1 and t2--the values that we observed IRL
t1 = 100
t2 = 95
# we aren't going to evaluate this over every possible tau, because that would be wayyyyy too large
# let's estimate a window
stdv1 = ceil(sqrt(t1*(1-s)))
stdv2 = ceil(sqrt(t2*(1-s)))
num_stdv = 3
window_est_tau1 = [int(max(ceil(t1)-num_stdv*stdv1,0)*(1/s)), int((ceil(t1)+num_stdv*stdv1)*(1/s))]
window_est_tau2 = [int(max(ceil(t2)-num_stdv*stdv2,0)*(1/s)), int((ceil(t2)+num_stdv*stdv2)*(1/s))]
# calculate the probability that binomial distributions with tau1 = j and tau2 = i resulted in samples t1 and t2
# use multiprocessing bc this is kinda slow
# function that we are evaluating
def evaluate_binomial(j_tau1_i_tau2):
binomial_tau1 = binom(j_tau1_i_tau2[0],s)
binomial_tau2 = binom(j_tau1_i_tau2[1],s)
return (j_tau1_i_tau2, binomial_tau1.pmf(t1)*binomial_tau2.pmf(t2))
# create the processes
pool = Pool(processes=12)
tau2_and_tau1_list = []
probabilities_tau2_and_tau1_list = []
for j_tau1 in range(window_est_tau1[0],window_est_tau1[1]):
for i_tau2 in range(window_est_tau2[0],window_est_tau2[1]):
tau2_and_tau1_list.append((j_tau1,i_tau2))
# get the results
probabilities_tau2_and_tau1_list = pool.map(evaluate_binomial, tau2_and_tau1_list)
probabilities_tau2_and_tau1 = np.zeros((window_est_tau1[1],window_est_tau2[1]))
for probabilty_calcd in probabilities_tau2_and_tau1_list:
probabilities_tau2_and_tau1[probabilty_calcd[0]] = probabilty_calcd[1]
figure, ax = plt.subplots(1,1,figsize = (6,12))
plt.imshow(probabilities_tau2_and_tau1, cmap='Blues', interpolation='nearest', aspect = 'equal', origin = 'lower')
plt.plot([0,window_est_tau2[1]],[0,window_est_tau2[1]],'--',color = 'gray')
ax.fill_between([0,window_est_tau2[1]],0,window_est_tau1[0],alpha = 1,color = 'white')
ax.fill_between([0,window_est_tau2[0]],window_est_tau1[0],window_est_tau1[1],alpha = 1,color = 'white')
_ = plt.xlim([0,window_est_tau2[1]])
_ = plt.ylim([0,window_est_tau1[1]])
print("Estimate of the probability that tau_1 < tau_2 given that t_1 = {} and t_2 = {}: {}".format(
    t1,t2,np.triu(probabilities_tau2_and_tau1).sum()/probabilities_tau2_and_tau1.sum()))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Display verbose output
Step2: Explicitly specify a project
Step3: Assign the query results to a variable
Step4: Run a parameterized query
|
<ASSISTANT_TASK:>
Python Code:
%%bigquery
SELECT name, SUM(number) as count
FROM `bigquery-public-data.usa_names.usa_1910_current`
GROUP BY name
ORDER BY count DESC
LIMIT 10
%%bigquery --verbose
SELECT name, SUM(number) as count
FROM `bigquery-public-data.usa_names.usa_1910_current`
GROUP BY name
ORDER BY count DESC
LIMIT 10
project_id = 'your-project-id'
%%bigquery --project $project_id
SELECT name, SUM(number) as count
FROM `bigquery-public-data.usa_names.usa_1910_current`
GROUP BY name
ORDER BY count DESC
LIMIT 10
%%bigquery df
SELECT name, SUM(number) as count
FROM `bigquery-public-data.usa_names.usa_1910_current`
GROUP BY name
ORDER BY count DESC
LIMIT 10
df
params = {"limit": 10}
%%bigquery --params $params
SELECT name, SUM(number) as count
FROM `bigquery-public-data.usa_names.usa_1910_current`
GROUP BY name
ORDER BY count DESC
LIMIT @limit
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: DDSP Processor Demo
Step2: Example
Step3: get_controls()
Step4: Consider the plots above as outputs of a neural network. These outputs violate the synthesizer's expectations
Step5: Notice that
Step6: get_signal()
Step7: __call__()
Step8: Example
|
<ASSISTANT_TASK:>
Python Code:
# Copyright 2021 Google LLC. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
#@title Install and import dependencies
%tensorflow_version 2.x
!pip install -qU ddsp
# Ignore a bunch of deprecation warnings
import warnings
warnings.filterwarnings("ignore")
import ddsp
import ddsp.training
from ddsp.colab.colab_utils import play, specplot, DEFAULT_SAMPLE_RATE
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
sample_rate = DEFAULT_SAMPLE_RATE # 16000
n_frames = 1000
hop_size = 64
n_samples = n_frames * hop_size
# Create a synthesizer object.
harmonic_synth = ddsp.synths.Harmonic(n_samples=n_samples,
sample_rate=sample_rate)
# Generate some arbitrary inputs.
# Amplitude [batch, n_frames, 1].
# Make amplitude linearly decay over time.
amps = np.linspace(1.0, -3.0, n_frames)
amps = amps[np.newaxis, :, np.newaxis]
# Harmonic Distribution [batch, n_frames, n_harmonics].
# Make harmonics decrease linearly with frequency.
n_harmonics = 30
harmonic_distribution = (np.linspace(-2.0, 2.0, n_frames)[:, np.newaxis] +
np.linspace(3.0, -3.0, n_harmonics)[np.newaxis, :])
harmonic_distribution = harmonic_distribution[np.newaxis, :, :]
# Fundamental frequency in Hz [batch, n_frames, 1].
f0_hz = 440.0 * np.ones([1, n_frames, 1], dtype=np.float32)
# Plot it!
time = np.linspace(0, n_samples / sample_rate, n_frames)
plt.figure(figsize=(18, 4))
plt.subplot(131)
plt.plot(time, amps[0, :, 0])
plt.xticks([0, 1, 2, 3, 4])
plt.title('Amplitude')
plt.subplot(132)
plt.plot(time, harmonic_distribution[0, :, :])
plt.xticks([0, 1, 2, 3, 4])
plt.title('Harmonic Distribution')
plt.subplot(133)
plt.plot(time, f0_hz[0, :, 0])
plt.xticks([0, 1, 2, 3, 4])
_ = plt.title('Fundamental Frequency')
controls = harmonic_synth.get_controls(amps, harmonic_distribution, f0_hz)
print(controls.keys())
# Now let's see what they look like...
time = np.linspace(0, n_samples / sample_rate, n_frames)
plt.figure(figsize=(18, 4))
plt.subplot(131)
plt.plot(time, controls['amplitudes'][0, :, 0])
plt.xticks([0, 1, 2, 3, 4])
plt.title('Amplitude')
plt.subplot(132)
plt.plot(time, controls['harmonic_distribution'][0, :, :])
plt.xticks([0, 1, 2, 3, 4])
plt.title('Harmonic Distribution')
plt.subplot(133)
plt.plot(time, controls['f0_hz'][0, :, 0])
plt.xticks([0, 1, 2, 3, 4])
_ = plt.title('Fundamental Frequency')
x = tf.linspace(-10.0, 10.0, 1000)
y = ddsp.core.exp_sigmoid(x)
plt.figure(figsize=(18, 4))
plt.subplot(121)
plt.plot(x, y)
plt.subplot(122)
_ = plt.semilogy(x, y)
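For intuition, here is a minimal stdlib re-implementation of the exponentiated sigmoid plotted above (a sketch: the constants `exponent=10.0`, `max_value=2.0`, `threshold=1e-7` are assumptions about `ddsp.core.exp_sigmoid`'s defaults, so treat them as illustrative rather than authoritative):

```python
import math

def exp_sigmoid(x, exponent=10.0, max_value=2.0, threshold=1e-7):
    # assumed form: max_value * sigmoid(x)**log(exponent) + threshold
    sig = 1.0 / (1.0 + math.exp(-x))
    return max_value * sig ** math.log(exponent) + threshold

print(exp_sigmoid(-10.0), exp_sigmoid(0.0), exp_sigmoid(10.0))
```

The key property either way: the output is strictly positive, monotonically increasing, and bounded above, which is exactly what the amplitude controls need.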
audio = harmonic_synth.get_signal(**controls)
play(audio)
specplot(audio)
audio = harmonic_synth(amps, harmonic_distribution, f0_hz)
play(audio)
specplot(audio)
## Some weird control envelopes...
# Amplitude [batch, n_frames, 1].
amps = np.ones([n_frames]) * -5.0
amps[:50] += np.linspace(0, 7.0, 50)
amps[50:200] += 7.0
amps[200:900] += (7.0 - np.linspace(0.0, 7.0, 700))
amps *= np.abs(np.cos(np.linspace(0, 2*np.pi * 10.0, n_frames)))
amps = amps[np.newaxis, :, np.newaxis]
# Harmonic Distribution [batch, n_frames, n_harmonics].
n_harmonics = 20
harmonic_distribution = np.ones([n_frames, 1]) * np.linspace(1.0, -1.0, n_harmonics)[np.newaxis, :]
for i in range(n_harmonics):
harmonic_distribution[:, i] = 1.0 - np.linspace(i * 0.09, 2.0, 1000)
harmonic_distribution[:, i] *= 5.0 * np.abs(np.cos(np.linspace(0, 2*np.pi * 0.1 * i, n_frames)))
if i % 2 != 0:
harmonic_distribution[:, i] = -3
harmonic_distribution = harmonic_distribution[np.newaxis, :, :]
# Fundamental frequency in Hz [batch, n_frames, 1].
f0_hz = np.ones([n_frames]) * 200.0
f0_hz[:100] *= np.linspace(2, 1, 100)**2
f0_hz[200:1000] += 20 * np.sin(np.linspace(0, 8.0, 800) * 2 * np.pi * np.linspace(0, 1.0, 800)) * np.linspace(0, 1.0, 800)
f0_hz = f0_hz[np.newaxis, :, np.newaxis]
# Get valid controls
controls = harmonic_synth.get_controls(amps, harmonic_distribution, f0_hz)
# Plot!
time = np.linspace(0, n_samples / sample_rate, n_frames)
plt.figure(figsize=(18, 4))
plt.subplot(131)
plt.plot(time, controls['amplitudes'][0, :, 0])
plt.xticks([0, 1, 2, 3, 4])
plt.title('Amplitude')
plt.subplot(132)
plt.plot(time, controls['harmonic_distribution'][0, :, :])
plt.xticks([0, 1, 2, 3, 4])
plt.title('Harmonic Distribution')
plt.subplot(133)
plt.plot(time, controls['f0_hz'][0, :, 0])
plt.xticks([0, 1, 2, 3, 4])
_ = plt.title('Fundamental Frequency')
audio = harmonic_synth.get_signal(**controls)
play(audio)
specplot(audio)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: OK. All good. It matches the charts shown by G1, just presented in a different way.
Step2: The Cantareira system has a total capacity of almost 1 trillion liters, according to the G1 article.
Step3: There. Now it makes sense, without that absurd variation.
Step4: G1 got it catastrophically wrong. ~~badly, rudely wrong.~~
Step5: See?
|
<ASSISTANT_TASK:>
Python Code:
from sabesPy import getData
import pandas as pd
df = pd.DataFrame([getData('2014-03-14'), getData('2015-03-14')])
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns ## just to give matplotlib seaborn's good-looking style ;)
sns.set_context("talk")
sns.set_style("darkgrid", {"grid.linewidth": .5, "axes.facecolor": ".9"})
#pd.options.display.mpl_style = 'default' ## estilo ggplot
datasG1 = ['2014-03-14','2015-03-14']
df.ix[datasG1,:].T.plot(kind='bar', rot=0, figsize=(8,4))
plt.show()
datas = ['2014-03-14',
         '2014-05-15', # pre-dead-volume
         '2014-05-16', # debut of the "first technical reserve", a.k.a. dead volume
         '2014-07-12', # date on which the "normal reserve" was exhausted
         '2014-10-23',
         '2014-10-24', # "second technical reserve", or "DEAD VOLUME 2: ELECTRIC BOOGALOO"
         '2015-01-01', # happy new year?
         '2015-03-14']
import numpy as np
df = pd.DataFrame(pd.concat(map(getData, datas), axis=1))
df = df.T
from sabesPy import plotSideBySide
dados = df.ix[['2014-05-15','2014-05-16']], df.ix[['2014-10-23','2014-10-24']]
plotSideBySide(dados, titles=['$1^a$ Reserva técnica', '$2^a$ Reserva técnica'])
from sabesPy import fixPerc
dFixed = df.copy()
dFixed.Cantareira = ([fixPerc(p, dia) for p, dia in zip(df.Cantareira, df.index)])
dados = dFixed.ix[['2014-05-15','2014-05-16']], dFixed.ix[['2014-10-23','2014-10-24']]
plotSideBySide(dados,
               titles=['1$^a$ reserva técnica\n(percentuais corrigidos)',
                       '2$^a$ reserva técnica\n(percentuais corrigidos)'])
dados = df.ix[datasG1], dFixed.ix[datasG1]
plotSideBySide(dados,cm=[None,None],
titles=["S/ CORRIGIR as %'s", "%'s CORRIGIDAS"])
def percFixer2(p,volMax, volumeMorto=0):
volAtual = (volMax + volumeMorto)*(p/100) - volumeMorto
q = 100*volAtual/volMax
#import numpy as np
q = round(q,1)
return q
dFixed2 = df.copy()
dFixed2.Cantareira = [fixPerc(p, dia, fixFunc=percFixer2) for p, dia in zip(df.Cantareira, df.index)]
dados = [[dFixed2.ix[['2014-05-15','2014-05-16']], dFixed.ix[['2014-05-15','2014-05-16']]],
[dFixed2.ix[['2014-10-23','2014-10-24']], dFixed.ix[['2014-10-23','2014-10-24']]]]
plotSideBySide(dados[0], cm=['Spectral', 'Spectral'],
               titles=['1$^a$ reserva técnica\n$ Vol_1 = p \\times (Vol_0 + Vol_{morto})$',
                       '1$^a$ reserva técnica\n$ Vol_1 = p \\times Vol_0$'])
plotSideBySide(dados[1], cm=['coolwarm', 'coolwarm'],
               titles=['2$^a$ reserva técnica\n$ Vol_1 = p \\times (Vol_0 + Vol_{morto})$',
                       '2$^a$ reserva técnica\n$ Vol_1 = p \\times Vol_0$'])
dadosCantareira = pd.DataFrame(pd.concat([df.Cantareira, dFixed2.Cantareira, dFixed.Cantareira], axis=1))
titles = ['$\%$ divulgada pela Sabesp ($p$)',
'$ Vol_{atual} = p \\times (Vol_{max} +Vol_{morto})$',
'$ Vol_{atual} = p \\times Vol_{max}$']
dadosCantareira.columns = titles
#dadosCantareira.index = pd.to_datetime(dadosCantareira.index)
from sabesPy import reverseDate
dadosCantareira.index = map(reverseDate, dadosCantareira.index)
sns.set_context("poster")
ax = dadosCantareira.plot(kind='bar', ylim=(-25,30), yticks=range(-25,30,5), rot=20, title='Sistema Cantareira\n')
ax.set_ylabel("$\%$", fontsize=30)
ax.text(2.5, -10, r" $ \% =\frac{Vol_{atual} - Vol_{morto}}{Vol_{max}} $", fontsize=30, ha='right')
plt.show()
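The two readings of the reported percentage can be captured in a couple of lines (a standalone sketch with made-up numbers; the real `fixPerc`/`percFixer2` helpers above come from the project's `sabesPy` module):

```python
def corrected_pct_v1(p, vol_max, vol_morto):
    # reading 1: the reported p is a fraction of vol_max only
    vol_atual = vol_max * (p / 100.0) - vol_morto
    return 100.0 * vol_atual / vol_max

def corrected_pct_v2(p, vol_max, vol_morto):
    # reading 2: the reported p is a fraction of vol_max + vol_morto
    vol_atual = (vol_max + vol_morto) * (p / 100.0) - vol_morto
    return 100.0 * vol_atual / vol_max

# illustrative numbers only
print(corrected_pct_v1(10, 1000, 200))  # -10.0
print(corrected_pct_v2(10, 1000, 200))  # -8.0
```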
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: You can see, adding two lists just results in a longer list, a concatenation of the two.
|
<ASSISTANT_TASK:>
Python Code:
a = [1,2,3]
b = [4,5,6]
c = a+b
print(c)
a.append(b)
print(a)
def sum(data):
    """sum the elements of an array"""
    asum = 0.0
    for i in data:
        asum = asum + i
    return asum
# the length of the array is defined here, and re-used below
# to test performance, we can make this number very large
# 1000, 1000000 etc.
n = 10
%%time
a = list(range(n))
%%time
print(sum(a))
import numpy as np
%%time
a=np.arange(n)
%%time
print(sum(a))
#%%time
%time print(a.sum())
%time print(a.sum())
a=np.arange(10)
b=np.arange(10)
c = a + b
d = 3*a*a + b + 2.0
print(c)
print(d)
c.shape
c2=c.reshape(5,2)
c3=c.reshape(2,5)
print(c)
print(c2)
print(c3)
type(c)
c[0]=999
print(c2)
d2=c.reshape(5,2)[1:3,:]
print(d2)
d2[1,1]=888
print(c)
print(c2)
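The aliasing above is the main gotcha: `reshape` returns a view that shares memory with the original array. A short sketch contrasting a view with an explicit `.copy()`:

```python
import numpy as np

base = np.arange(10)
view = base.reshape(5, 2)              # shares memory with base
snapshot = base.reshape(5, 2).copy()   # independent buffer

base[0] = 999
print(view[0, 0], snapshot[0, 0])  # 999 0
```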
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2.1 Read a number N and print the numbers from 1 to N.
Step2: 3.2 Read numbers until 0 and collect them in a list. Print the elements backwards (zero not included).
Step3: 4.2 Write a function that takes an integer parameter (N) and returns the Nth Fibonacci number.
Step4: 4.3 Write a function that takes an integer parameter (N) and returns a list of the first N Fibonacci numbers.
Step5: *4.4 Write a function that takes two integers, A and B and returns B-A Fibonacci numbers starting from the Ath Fibonacci number to the (B-1)th number. See the example below.
Step6: 5. Default and keyword arguments
Step7: 5.2 Write a function that takes two numbers and an arithmetic operator as a string ("+", "-", "*" or "/") and returns the result of the arithmetic operation on the two numbers. The default operation should be addition.
Step8: *5.3 Write a function that takes three parameters. The third is a callable (function) that takes two parameters. The function should call its third parameter with the first two as parameters. If the third parameter is not specified the function should add the two arguments.
Step9: 6. Extra exercises
Step10: *6.2 Reduce is a function that applies a two argument function against an accumulator and each element in the sequence (from left to right) to reduce it to a single value. If no initial value is provided, the accumulator is initialized with the return value of the function run on the first two elements of the sequence.
Step11: *6.3 Use your reduce function for the following operations
Step12: *6.5 Implement bubble sort (in place).
|
<ASSISTANT_TASK:>
Python Code:
n = input()
n = int(n)
for i in range(1, n + 1):
    print(i)
l = [2, 3, -2, -7, 0, 2, 3]
def get_n_primes(N):
    primes = []
    candidate = 2
    while len(primes) < N:
        if all(candidate % p != 0 for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes
assert(get_n_primes(3) == [2, 3, 5])
def get_nth_fibonacci(N):
    a, b = 1, 1
    for _ in range(N - 1):
        a, b = b, a + b
    return a
assert(get_nth_fibonacci(3) == 2)
def get_n_fibonacci(N):
    fibs = []
    a, b = 1, 1
    for _ in range(N):
        fibs.append(a)
        a, b = b, a + b
    return fibs
assert(get_n_fibonacci(4) == [1, 1, 2, 3])
def get_range_fibonacci(A, B):
    return get_n_fibonacci(B - 1)[A - 1:]
assert(get_range_fibonacci(2, 5) == [1, 2, 3])
def sum_list(numbers, start=0):
    total = start
    for n in numbers:
        total += n
    return total
assert(sum_list([1, 2, 3]) == 6)
assert(sum_list([1, 2, 3], 5) == 11)
def arithmetic(a, b, operator="+"):
    if operator == "+":
        return a + b
    if operator == "-":
        return a - b
    if operator == "*":
        return a * b
    if operator == "/":
        return a / b
    raise ValueError("unknown operator: " + operator)
assert(arithmetic(2, 3) == 5)
assert(arithmetic(2, 3, "-") == -1)
assert(arithmetic(2, 3, "*") == 6)
assert(arithmetic(2, 3, "/") == 2/3)
def call_func(a, b, func=None):
    if func is None:
        return a + b
    return func(a, b)
def product(x, y):
    return x * y
assert(call_func(3, 4, product) == 12)
assert(call_func("foo", "bar") == "foobar")
def filter_list(input_list, predicate):
    output_list = []
    for element in input_list:
        if predicate(element):
            output_list.append(element)
    return output_list
def is_prime(n):
    if n < 2:
        return False
    for divisor in range(2, int(n ** 0.5) + 1):
        if n % divisor == 0:
            return False
    return True
def is_odd(n):
    return n % 2 == 1
l1 = [1, 2, 3, 4, 19, 35, 11]
assert(filter_list(l1, is_odd) == [1, 3, 19, 35, 11])
assert(filter_list(l1, is_prime) == [2, 3, 19, 11])
def reduce(l, acc_func, accumulator=None):
    items = iter(l)
    if accumulator is None:
        accumulator = acc_func(next(items), next(items))
    for item in items:
        accumulator = acc_func(accumulator, item)
    return accumulator
def add(x, y):
    return x + y
l1 = [1, 2, -1, 5]
assert(reduce(l1, add) == 7)
assert(reduce(l1, add, 10) == 17)
def string_len_add(acc, s):
    return acc + len(s)
l2 = ["foo", "bar", "hello"]
assert(reduce(l2, string_len_add, 0) == 11)
def qsort(l, lo=0, hi=None):
    # in-place quicksort with Lomuto partitioning
    if hi is None:
        hi = len(l) - 1
    if lo >= hi:
        return
    pivot = l[hi]
    i = lo
    for j in range(lo, hi):
        if l[j] < pivot:
            l[i], l[j] = l[j], l[i]
            i += 1
    l[i], l[hi] = l[hi], l[i]
    qsort(l, lo, i - 1)
    qsort(l, i + 1, hi)
l = [10, -1, 2, 11, 0]
qsort(l)
assert(l == [-1, 0, 2, 10, 11])
def bubblesort(l):
    for end in range(len(l) - 1, 0, -1):
        for i in range(end):
            if l[i] > l[i + 1]:
                l[i], l[i + 1] = l[i + 1], l[i]
l = [10, -1, 2, 11, 0]
bubblesort(l)
assert(l == [-1, 0, 2, 10, 11])
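For reference, the standard library ships the same accumulator pattern used in the reduce exercise above as `functools.reduce` (a stdlib sketch, independent of the exercise solutions):

```python
import functools
import operator

values = [1, 2, -1, 5]
print(functools.reduce(operator.add, values))      # 7
print(functools.reduce(operator.add, values, 10))  # 17
```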
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import data from Google Cloud Storage
Step2: Prepare data for ARIMA
Step3: Let's create a column for weekly returns. Take the log of the returns to normalize large fluctuations.
Step4: Test for stationarity of the udiff series
Step5: With a p-value < 0.05, we can reject the null hypothesis. This data set is stationary.
Step6: The ACF and PACF patterns follow the usual identification rules: for an AR(p) process the ACF tails off while the PACF cuts off after lag p; for an MA(q) process the ACF cuts off after lag q while the PACF tails off; for an ARMA(p, q) process both tail off.
Step7: Our model doesn't do a good job predicting variance in the original data (peaks and valleys).
Step8: Let's make a forecast 2 weeks ahead
|
<ASSISTANT_TASK:>
Python Code:
!pip install --user statsmodels
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import datetime
%config InlineBackend.figure_format = 'retina'
df = pd.read_csv('gs://cloud-training/ai4f/AAPL10Y.csv')
df['date'] = pd.to_datetime(df['date'])
df.sort_values('date', inplace=True)
df.set_index('date', inplace=True)
print(df.shape)
df.head()
df_week = df.resample('W').mean()  # resample to weekly granularity with the mean as aggregator
df_week = df_week[['close']]
df_week.head()
df_week['weekly_ret'] = np.log(df_week['close']).diff()
df_week.head()
# drop null rows
df_week.dropna(inplace=True)
df_week.weekly_ret.plot(kind='line', figsize=(12, 6));
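Why log returns: for small moves the log return is close to the simple return, and log returns add across periods. A stdlib sketch with hypothetical weekly closes:

```python
import math

prices = [100.0, 101.0, 99.5]  # hypothetical weekly closes
log_returns = [math.log(b / a) for a, b in zip(prices, prices[1:])]
simple_returns = [b / a - 1 for a, b in zip(prices, prices[1:])]

# summing log returns recovers the log of the total price relative
print(sum(log_returns), math.log(prices[-1] / prices[0]))
```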
udiff = df_week.drop(['close'], axis=1)
udiff.head()
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller
rolmean = udiff.rolling(20).mean()
rolstd = udiff.rolling(20).std()
plt.figure(figsize=(12, 6))
orig = plt.plot(udiff, color='blue', label='Original')
mean = plt.plot(rolmean, color='red', label='Rolling Mean')
std = plt.plot(rolstd, color='black', label = 'Rolling Std Deviation')
plt.title('Rolling Mean & Standard Deviation')
plt.legend(loc='best')
plt.show(block=False)
# Perform Dickey-Fuller test
dftest = sm.tsa.adfuller(udiff.weekly_ret, autolag='AIC')
dfoutput = pd.Series(dftest[0:4], index=['Test Statistic', 'p-value', '#Lags Used', 'Number of Observations Used'])
for key, value in dftest[4].items():
dfoutput['Critical Value ({0})'.format(key)] = value
dfoutput
from statsmodels.graphics.tsaplots import plot_acf
# the autocorrelation chart provides just the correlation at increasing lags
fig, ax = plt.subplots(figsize=(12,5))
plot_acf(udiff.values, lags=10, ax=ax)
plt.show()
from statsmodels.graphics.tsaplots import plot_pacf
fig, ax = plt.subplots(figsize=(12,5))
plot_pacf(udiff.values, lags=10, ax=ax)
plt.show()
from statsmodels.tsa.arima_model import ARMA
# Notice that you have to use udiff - the differenced data rather than the original data.
ar1 = ARMA(udiff.values, (3, 1)).fit()  # ARMA order (3, 1), guided by the ACF/PACF plots above
ar1.summary()
# Plot the ARMA fitted values on the same plot as the differenced time series
plt.figure(figsize=(12, 8))
plt.plot(udiff.values, color='blue')
plt.plot(ar1.fittedvalues, color='red')
plt.title('ARMA fitted values vs. differenced returns')
plt.show()
# forecast two weekly steps ahead; ARMA.forecast returns (forecast, stderr, conf_int)
steps = 2
forecast = ar1.forecast(steps=steps)[0]
plt.figure(figsize=(12, 8))
plt.plot(udiff.values, color='blue')
preds = ar1.fittedvalues
plt.plot(preds, color='red')
plt.plot(pd.DataFrame(np.array([preds[-1],forecast[0]]).T,index=range(len(udiff.values)+1, len(udiff.values)+3)), color='green')
plt.plot(pd.DataFrame(forecast,index=range(len(udiff.values)+1, len(udiff.values)+1+steps)), color='green')
plt.title('Display the predictions with the ARIMA model')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Real data
Step2: Synthetic data
|
<ASSISTANT_TASK:>
Python Code:
import zarr; print('zarr', zarr.__version__)
import dask; print('dask', dask.__version__)
import dask.array as da
import numpy as np
# here's the real data
callset = zarr.open_group('/kwiat/2/coluzzi/ag1000g/data/phase1/release/AR3.1/variation/main/zarr2/zstd/ag1000g.phase1.ar3',
mode='r')
callset
# here's the array we're going to work with
g = callset['3R/calldata/genotype']
g
# wrap as dask array with very simple chunking of first dim only
%time gd = da.from_array(g, chunks=(g.chunks[0], None, None))
gd
# load condition used to make selection on first axis
dim0_condition = callset['3R/variants/FILTER_PASS'][:]
dim0_condition.shape, dim0_condition.dtype, np.count_nonzero(dim0_condition)
# invent a random selection for second axis
dim1_indices = sorted(np.random.choice(765, size=100, replace=False))
# setup the 2D selection - this is the slow bit
%time gd_sel = gd[dim0_condition][:, dim1_indices]
gd_sel
# now load a slice from this new selection - quick!
%time gd_sel[1000000:1100000].compute(optimize_graph=False)
# what's taking so long?
import cProfile
cProfile.run('gd[dim0_condition][:, dim1_indices]', sort='time')
cProfile.run('gd[dim0_condition][:, dim1_indices]', sort='cumtime')
# create a synthetic dataset for profiling
a = zarr.array(np.random.randint(-1, 4, size=(20000000, 200, 2), dtype='i1'),
chunks=(10000, 100, 2), compressor=zarr.Blosc(cname='zstd', clevel=1, shuffle=2))
a
# create a synthetic selection for first axis
c = np.random.randint(0, 2, size=a.shape[0], dtype=bool)
# create a synthetic selection for second axis
s = sorted(np.random.choice(a.shape[1], size=100, replace=False))
%time d = da.from_array(a, chunks=(a.chunks[0], None, None))
d
%time ds = d[c][:, s]
cProfile.run('d[c][:, s]', sort='time')
%time ds[1000000:1100000].compute(optimize_graph=False)
# problem is in fact just the dim0 selection
cProfile.run('d[c]', sort='time')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code::
from sklearn.svm import SVC
from sklearn.metrics import classification_report
from sklearn.model_selection import GridSearchCV
# declare parameter ranges to try
params = {'C':[1, 2, 3],
'kernel':['linear', 'poly', 'rbf']}
# initialise estimator
svm_classifier = SVC(class_weight='balanced')
# initialise grid search model
model = GridSearchCV(estimator=svm_classifier,
param_grid=params,
scoring='accuracy',
n_jobs=-1)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
print(model.best_params_)
print(classification_report(y_test, y_pred))
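When the grid grows, exhaustive search gets expensive fast; `RandomizedSearchCV` samples the parameter space instead. A self-contained sketch on synthetic data (`make_classification` stands in for the real `X_train`/`y_train`, which this snippet never defines):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import RandomizedSearchCV, train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

search = RandomizedSearchCV(
    estimator=SVC(class_weight='balanced'),
    param_distributions={'C': [0.1, 1, 10], 'kernel': ['linear', 'rbf']},
    n_iter=4, scoring='accuracy', random_state=0)
search.fit(X_train, y_train)
print(search.best_params_)
```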
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Shape Constraints for Ethics with TensorFlow Lattice
Step2: Import the required packages:
Step3: Default values used in this tutorial:
Step4: Case study 1: law school admissions
Step5: Preprocess the dataset:
Step7: Split the data into train/validation/test sets
Step8: Visualize the data distribution
Step11: Train a calibrated linear model to predict bar exam passage
Step14: Helper functions for configuring the law school dataset features
Step15: Helper functions for visualizing trained model outputs
Step16: Train an unconstrained (non-monotonic) calibrated linear model
Step17: Train a monotonic calibrated linear model
Step18: Train other unconstrained models
Step19: Train an unconstrained gradient boosted trees (GBT) model
Step20: Case study 2: credit default
Step21: Split the data into train/validation/test sets
Step22: Visualize the data distribution
Step25: Train a calibrated linear model to predict credit default rate
Step26: Helper functions for visualizing trained model outputs
Step27: Train an unconstrained (non-monotonic) calibrated linear model
Step28: Train a monotonic calibrated linear model
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@test {"skip": true}
!pip install tensorflow-lattice seaborn
import tensorflow as tf
import logging
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
import seaborn as sns
from sklearn.model_selection import train_test_split
import sys
import tensorflow_lattice as tfl
logging.disable(sys.maxsize)
# List of learning rate hyperparameters to try.
# For a longer list of reasonable hyperparameters, try [0.001, 0.01, 0.1].
LEARNING_RATES = [0.01]
# Default number of training epochs and batch sizes.
NUM_EPOCHS = 1000
BATCH_SIZE = 1000
# Directory containing dataset files.
DATA_DIR = 'https://raw.githubusercontent.com/serenalwang/shape_constraints_for_ethics/master'
# Load data file.
law_file_name = 'lsac.csv'
law_file_path = os.path.join(DATA_DIR, law_file_name)
raw_law_df = pd.read_csv(law_file_path, delimiter=',')
# Define label column name.
LAW_LABEL = 'pass_bar'
def preprocess_law_data(input_df):
  # Drop rows where the label or features of interest are missing.
output_df = input_df[~input_df[LAW_LABEL].isna() & ~input_df['ugpa'].isna() &
(input_df['ugpa'] > 0) & ~input_df['lsat'].isna()]
return output_df
law_df = preprocess_law_data(raw_law_df)
def split_dataset(input_df, random_state=888):
  """Splits an input dataset into train, val, and test sets."""
train_df, test_val_df = train_test_split(
input_df, test_size=0.3, random_state=random_state)
val_df, test_df = train_test_split(
test_val_df, test_size=0.66, random_state=random_state)
return train_df, val_df, test_df
law_train_df, law_val_df, law_test_df = split_dataset(law_df)
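The two-stage split above holds out 30% first, then sends 66% of that to the test set, yielding roughly 70/10/20 train/val/test. An illustrative check on a synthetic frame (the data below is made up; only the split fractions match the code above):

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# 30% held out first, then 66% of that becomes the test set
df = pd.DataFrame({'x': range(1000)})
train, test_val = train_test_split(df, test_size=0.3, random_state=888)
val, test = train_test_split(test_val, test_size=0.66, random_state=888)
print(len(train), len(val), len(test))  # roughly 700 / 100 / 200
```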
def plot_dataset_contour(input_df, title):
plt.rcParams['font.family'] = ['serif']
g = sns.jointplot(
x='ugpa',
y='lsat',
data=input_df,
kind='kde',
xlim=[1.4, 4],
ylim=[0, 50])
g.plot_joint(plt.scatter, c='b', s=10, linewidth=1, marker='+')
g.ax_joint.collections[0].set_alpha(0)
g.set_axis_labels('Undergraduate GPA', 'LSAT score', fontsize=14)
g.fig.suptitle(title, fontsize=14)
  # Adjust the plot so that the title fits.
plt.subplots_adjust(top=0.9)
plt.show()
law_df_pos = law_df[law_df[LAW_LABEL] == 1]
plot_dataset_contour(
law_df_pos, title='Distribution of students that passed the bar')
law_df_neg = law_df[law_df[LAW_LABEL] == 0]
plot_dataset_contour(
law_df_neg, title='Distribution of students that failed the bar')
def train_tfl_estimator(train_df, monotonicity, learning_rate, num_epochs,
batch_size, get_input_fn,
get_feature_columns_and_configs):
  """Trains a TFL calibrated linear estimator.

  Args:
    train_df: pandas dataframe containing training data.
    monotonicity: if 0, then no monotonicity constraints. If 1, then all
      features are constrained to be monotonically increasing.
    learning_rate: learning rate of Adam optimizer for gradient descent.
    num_epochs: number of training epochs.
    batch_size: batch size for each epoch. None means the batch size is the
      full dataset size.
    get_input_fn: function that returns the input_fn for a TF estimator.
    get_feature_columns_and_configs: function that returns TFL feature columns
      and configs.

  Returns:
    estimator: a trained TFL calibrated linear estimator.
  """
feature_columns, feature_configs = get_feature_columns_and_configs(
monotonicity)
model_config = tfl.configs.CalibratedLinearConfig(
feature_configs=feature_configs, use_bias=False)
estimator = tfl.estimators.CannedClassifier(
feature_columns=feature_columns,
model_config=model_config,
feature_analysis_input_fn=get_input_fn(input_df=train_df, num_epochs=1),
optimizer=tf.keras.optimizers.Adam(learning_rate))
estimator.train(
input_fn=get_input_fn(
input_df=train_df, num_epochs=num_epochs, batch_size=batch_size))
return estimator
def optimize_learning_rates(
train_df,
val_df,
test_df,
monotonicity,
learning_rates,
num_epochs,
batch_size,
get_input_fn,
get_feature_columns_and_configs,
):
  """Optimizes learning rates for TFL estimators.

  Args:
    train_df: pandas dataframe containing training data.
    val_df: pandas dataframe containing validation data.
    test_df: pandas dataframe containing test data.
    monotonicity: if 0, then no monotonicity constraints. If 1, then all
      features are constrained to be monotonically increasing.
    learning_rates: list of learning rates to try.
    num_epochs: number of training epochs.
    batch_size: batch size for each epoch. None means the batch size is the
      full dataset size.
    get_input_fn: function that returns the input_fn for a TF estimator.
    get_feature_columns_and_configs: function that returns TFL feature columns
      and configs.

  Returns:
    A single TFL estimator that achieved the best validation accuracy.
  """
estimators = []
train_accuracies = []
val_accuracies = []
test_accuracies = []
for lr in learning_rates:
estimator = train_tfl_estimator(
train_df=train_df,
monotonicity=monotonicity,
learning_rate=lr,
num_epochs=num_epochs,
batch_size=batch_size,
get_input_fn=get_input_fn,
get_feature_columns_and_configs=get_feature_columns_and_configs)
estimators.append(estimator)
train_acc = estimator.evaluate(
input_fn=get_input_fn(train_df, num_epochs=1))['accuracy']
val_acc = estimator.evaluate(
input_fn=get_input_fn(val_df, num_epochs=1))['accuracy']
test_acc = estimator.evaluate(
input_fn=get_input_fn(test_df, num_epochs=1))['accuracy']
print('accuracies for learning rate %f: train: %f, val: %f, test: %f' %
(lr, train_acc, val_acc, test_acc))
train_accuracies.append(train_acc)
val_accuracies.append(val_acc)
test_accuracies.append(test_acc)
max_index = val_accuracies.index(max(val_accuracies))
return estimators[max_index]
def get_input_fn_law(input_df, num_epochs, batch_size=None):
  """Gets TF input_fn for law school models."""
return tf.compat.v1.estimator.inputs.pandas_input_fn(
x=input_df[['ugpa', 'lsat']],
y=input_df['pass_bar'],
num_epochs=num_epochs,
batch_size=batch_size or len(input_df),
shuffle=False)
def get_feature_columns_and_configs_law(monotonicity):
  """Gets TFL feature configs for law school models."""
feature_columns = [
tf.feature_column.numeric_column('ugpa'),
tf.feature_column.numeric_column('lsat'),
]
feature_configs = [
tfl.configs.FeatureConfig(
name='ugpa',
lattice_size=2,
pwl_calibration_num_keypoints=20,
monotonicity=monotonicity,
pwl_calibration_always_monotonic=False),
tfl.configs.FeatureConfig(
name='lsat',
lattice_size=2,
pwl_calibration_num_keypoints=20,
monotonicity=monotonicity,
pwl_calibration_always_monotonic=False),
]
return feature_columns, feature_configs
def get_predicted_probabilities(estimator, input_df, get_input_fn):
predictions = estimator.predict(
input_fn=get_input_fn(input_df=input_df, num_epochs=1))
return [prediction['probabilities'][1] for prediction in predictions]
def plot_model_contour(estimator, input_df, num_keypoints=20):
x = np.linspace(min(input_df['ugpa']), max(input_df['ugpa']), num_keypoints)
y = np.linspace(min(input_df['lsat']), max(input_df['lsat']), num_keypoints)
x_grid, y_grid = np.meshgrid(x, y)
positions = np.vstack([x_grid.ravel(), y_grid.ravel()])
plot_df = pd.DataFrame(positions.T, columns=['ugpa', 'lsat'])
plot_df[LAW_LABEL] = np.ones(len(plot_df))
predictions = get_predicted_probabilities(
estimator=estimator, input_df=plot_df, get_input_fn=get_input_fn_law)
grid_predictions = np.reshape(predictions, x_grid.shape)
plt.rcParams['font.family'] = ['serif']
plt.contour(
x_grid,
y_grid,
grid_predictions,
colors=('k',),
levels=np.linspace(0, 1, 11))
plt.contourf(
x_grid,
y_grid,
grid_predictions,
cmap=plt.cm.bone,
levels=np.linspace(0, 1, 11)) # levels=np.linspace(0,1,8));
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
cbar = plt.colorbar()
cbar.ax.set_ylabel('Model score', fontsize=20)
cbar.ax.tick_params(labelsize=20)
plt.xlabel('Undergraduate GPA', fontsize=20)
plt.ylabel('LSAT score', fontsize=20)
nomon_linear_estimator = optimize_learning_rates(
train_df=law_train_df,
val_df=law_val_df,
test_df=law_test_df,
monotonicity=0,
learning_rates=LEARNING_RATES,
batch_size=BATCH_SIZE,
num_epochs=NUM_EPOCHS,
get_input_fn=get_input_fn_law,
get_feature_columns_and_configs=get_feature_columns_and_configs_law)
plot_model_contour(nomon_linear_estimator, input_df=law_df)
mon_linear_estimator = optimize_learning_rates(
train_df=law_train_df,
val_df=law_val_df,
test_df=law_test_df,
monotonicity=1,
learning_rates=LEARNING_RATES,
batch_size=BATCH_SIZE,
num_epochs=NUM_EPOCHS,
get_input_fn=get_input_fn_law,
get_feature_columns_and_configs=get_feature_columns_and_configs_law)
plot_model_contour(mon_linear_estimator, input_df=law_df)
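Conceptually, the monotonicity constraint above forces each feature's calibrator to be non-decreasing, so the predicted score can only go up as, say, the LSAT score goes up. A plain-NumPy sketch of what a monotone piecewise-linear calibrator looks like (the keypoints below are made up, not taken from the trained model):

```python
import numpy as np

# np.interp over non-decreasing keypoint outputs is one way to picture a
# monotone piecewise-linear calibrator like the one TFL learns
keypoints_in = np.array([0.0, 10.0, 25.0, 50.0])
keypoints_out = np.array([0.1, 0.3, 0.6, 0.9])   # non-decreasing outputs
lsat = np.linspace(0, 50, 101)
calibrated = np.interp(lsat, keypoints_in, keypoints_out)
print(calibrated[:3])
```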
feature_names = ['ugpa', 'lsat']
dnn_estimator = tf.estimator.DNNClassifier(
feature_columns=[
tf.feature_column.numeric_column(feature) for feature in feature_names
],
hidden_units=[100, 100],
optimizer=tf.keras.optimizers.Adam(learning_rate=0.008),
activation_fn=tf.nn.relu)
dnn_estimator.train(
input_fn=get_input_fn_law(
law_train_df, batch_size=BATCH_SIZE, num_epochs=NUM_EPOCHS))
dnn_train_acc = dnn_estimator.evaluate(
input_fn=get_input_fn_law(law_train_df, num_epochs=1))['accuracy']
dnn_val_acc = dnn_estimator.evaluate(
input_fn=get_input_fn_law(law_val_df, num_epochs=1))['accuracy']
dnn_test_acc = dnn_estimator.evaluate(
input_fn=get_input_fn_law(law_test_df, num_epochs=1))['accuracy']
print('accuracies for DNN: train: %f, val: %f, test: %f' %
(dnn_train_acc, dnn_val_acc, dnn_test_acc))
plot_model_contour(dnn_estimator, input_df=law_df)
tree_estimator = tf.estimator.BoostedTreesClassifier(
feature_columns=[
tf.feature_column.numeric_column(feature) for feature in feature_names
],
n_batches_per_layer=2,
n_trees=20,
max_depth=4)
tree_estimator.train(
input_fn=get_input_fn_law(
law_train_df, num_epochs=NUM_EPOCHS, batch_size=BATCH_SIZE))
tree_train_acc = tree_estimator.evaluate(
input_fn=get_input_fn_law(law_train_df, num_epochs=1))['accuracy']
tree_val_acc = tree_estimator.evaluate(
input_fn=get_input_fn_law(law_val_df, num_epochs=1))['accuracy']
tree_test_acc = tree_estimator.evaluate(
input_fn=get_input_fn_law(law_test_df, num_epochs=1))['accuracy']
print('accuracies for GBT: train: %f, val: %f, test: %f' %
(tree_train_acc, tree_val_acc, tree_test_acc))
plot_model_contour(tree_estimator, input_df=law_df)
# Load data file.
credit_file_name = 'credit_default.csv'
credit_file_path = os.path.join(DATA_DIR, credit_file_name)
credit_df = pd.read_csv(credit_file_path, delimiter=',')
# Define label column name.
CREDIT_LABEL = 'default'
credit_train_df, credit_val_df, credit_test_df = split_dataset(credit_df)
def get_agg_data(df, x_col, y_col, bins=11):
xbins = pd.cut(df[x_col], bins=bins)
data = df[[x_col, y_col]].groupby(xbins).agg(['mean', 'sem'])
return data
def plot_2d_means_credit(input_df, x_col, y_col, x_label, y_label):
plt.rcParams['font.family'] = ['serif']
_, ax = plt.subplots(nrows=1, ncols=1)
plt.setp(ax.spines.values(), color='black', linewidth=1)
ax.tick_params(
direction='in', length=6, width=1, top=False, right=False, labelsize=18)
df_single = get_agg_data(input_df[input_df['MARRIAGE'] == 1], x_col, y_col)
df_married = get_agg_data(input_df[input_df['MARRIAGE'] == 2], x_col, y_col)
ax.errorbar(
df_single[(x_col, 'mean')],
df_single[(y_col, 'mean')],
xerr=df_single[(x_col, 'sem')],
yerr=df_single[(y_col, 'sem')],
color='orange',
marker='s',
capsize=3,
capthick=1,
label='Single',
markersize=10,
linestyle='')
ax.errorbar(
df_married[(x_col, 'mean')],
df_married[(y_col, 'mean')],
xerr=df_married[(x_col, 'sem')],
yerr=df_married[(y_col, 'sem')],
color='b',
marker='^',
capsize=3,
capthick=1,
label='Married',
markersize=10,
linestyle='')
leg = ax.legend(loc='upper left', fontsize=18, frameon=True, numpoints=1)
ax.set_xlabel(x_label, fontsize=18)
ax.set_ylabel(y_label, fontsize=18)
ax.set_ylim(0, 1.1)
ax.set_xlim(-2, 8.5)
ax.patch.set_facecolor('white')
leg.get_frame().set_edgecolor('black')
leg.get_frame().set_facecolor('white')
leg.get_frame().set_linewidth(1)
plt.show()
plot_2d_means_credit(credit_train_df, 'PAY_0', 'default',
'Repayment Status (April)', 'Observed default rate')
def get_input_fn_credit(input_df, num_epochs, batch_size=None):
  """Gets TF input_fn for credit default models."""
return tf.compat.v1.estimator.inputs.pandas_input_fn(
x=input_df[['MARRIAGE', 'PAY_0']],
y=input_df['default'],
num_epochs=num_epochs,
batch_size=batch_size or len(input_df),
shuffle=False)
def get_feature_columns_and_configs_credit(monotonicity):
  """Gets TFL feature configs for credit default models."""
feature_columns = [
tf.feature_column.numeric_column('MARRIAGE'),
tf.feature_column.numeric_column('PAY_0'),
]
feature_configs = [
tfl.configs.FeatureConfig(
name='MARRIAGE',
lattice_size=2,
pwl_calibration_num_keypoints=3,
monotonicity=monotonicity,
pwl_calibration_always_monotonic=False),
tfl.configs.FeatureConfig(
name='PAY_0',
lattice_size=2,
pwl_calibration_num_keypoints=10,
monotonicity=monotonicity,
pwl_calibration_always_monotonic=False),
]
return feature_columns, feature_configs
def plot_predictions_credit(input_df,
estimator,
x_col,
x_label='Repayment Status (April)',
y_label='Predicted default probability'):
predictions = get_predicted_probabilities(
estimator=estimator, input_df=input_df, get_input_fn=get_input_fn_credit)
new_df = input_df.copy()
new_df.loc[:, 'predictions'] = predictions
plot_2d_means_credit(new_df, x_col, 'predictions', x_label, y_label)
nomon_linear_estimator = optimize_learning_rates(
train_df=credit_train_df,
val_df=credit_val_df,
test_df=credit_test_df,
monotonicity=0,
learning_rates=LEARNING_RATES,
batch_size=BATCH_SIZE,
num_epochs=NUM_EPOCHS,
get_input_fn=get_input_fn_credit,
get_feature_columns_and_configs=get_feature_columns_and_configs_credit)
plot_predictions_credit(credit_train_df, nomon_linear_estimator, 'PAY_0')
mon_linear_estimator = optimize_learning_rates(
train_df=credit_train_df,
val_df=credit_val_df,
test_df=credit_test_df,
monotonicity=1,
learning_rates=LEARNING_RATES,
batch_size=BATCH_SIZE,
num_epochs=NUM_EPOCHS,
get_input_fn=get_input_fn_credit,
get_feature_columns_and_configs=get_feature_columns_and_configs_credit)
plot_predictions_credit(credit_train_df, mon_linear_estimator, 'PAY_0')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Attribute Information
Step2: There are over 2000 samples with the pm2.5 value missing
Step3: 2 - Suppose our data became corrupted after we downloaded it and values were missing. Randomly insert 5000 NaNs into the dataset across all the columns.
Step4: 3 - Which variables lend themselves to being used in a regression model? Select those variables, and then fit a regression model for each of the following imputation strategies, commenting on your results.
Step5: Dropping all rows with at least 1 NA
Step6: Dropping all rows with at least 3 NAs gets me an error because I have NaNs in some rows
Step7: Imputing 0
Step8: Imputing the mean
Step9: The median
Step10: And the mode
Step11: The best result I get is from simply dropping all rows with NAs; mean and median give similar performance, while the mode is the worst imputation (surprisingly worse than imputing 0, which is quite random).
Step12: The result is slightly better than simply imputing mean or median, but still worse than dropping all NAs.
Step13: The variable is nominal, so I'm going to use one-hot encoding
Step14: 2 - Perform a multilinear regression, using the classified data, removing the NA values. Comment on your results.
Step15: The results are a bit better than before, but the performances are still very bad.
Step16: Wow, now the fit is perfect!
Step17: The accuracy has gone down again after adding this new column; this may be because it adds useless noise to the data, or because the binning is too coarse.
Step18: Feature Scaling
Step19: 2 - Fit a Nearest Neighbors model to the data, using a normalized data set, a standardized data set, and the original. Split into test and train sets and compute the accuracy of the classifications and comment on your results.
Step20: Original dataset
Step21: Normalized dataset
Step22: Standardized dataset
Step23: The accuracy is much better for a normalized or standardized dataset, with the latter generalizing slightly better
Step24: Original dataset
Step25: Normalized dataset
Step26: Standardized dataset
Step27: For this algorithm there is no difference at all, so scaling the data isn't necessary.
Step28: 2 - Fit a series of least squares multilinear regression models to the data, and use the F-Statistic to select the K best features for values of k ranging from 1 to the total number of features. Plot the MSE for each model against the test set and print the best features for each iteration. Comment on your results.
Step29: The MSE keeps going down adding features, there is a great gain after the 11th feature is added.
Step30: The MSE keeps going down adding features but after the sixth feature is added there isn't much improvement.
Step31: After the fourth feature there is no improvement.
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
pm2 = pd.read_csv('http://archive.ics.uci.edu/ml/machine-learning-databases/00381/PRSA_data_2010.1.1-2014.12.31.csv',
na_values='NA')
pm2.columns = ['id', 'year', 'month', 'day', 'hour', 'pm2', 'dew_point', 'temperature',
'pressure', 'wind_dir', 'wind_speed', 'hours_snow', 'hours_rain']
pm2.head()
pm2.info()
pm2.dropna(inplace=True)
pm2.describe().T
pm2.describe(include=['O'])
pm2.wind_dir.value_counts()
# setting the seed
np.random.seed(0)
# creating an array of dimension equal to the number of cells of the dataframe and with exactly 5000 ones
dim = pm2.shape[0]*pm2.shape[1]
arr = np.array([0]*(dim-5000) + [1]*5000)
# shuffling and reshaping the array
np.random.shuffle(arr)
arr = arr.reshape(pm2.shape[0], pm2.shape[1])
# looping through all the values and setting the corresponding position in the dataframe to nan
it = np.nditer(arr, flags=['multi_index'])
while not it.finished:
if it[0] == 1:
pm2.iloc[it.multi_index[0], it.multi_index[1]] = np.nan
it.iternext()
# solution: inserted nans on all columns at random
data_na = pm2.copy()
nrow = data_na.shape[0]
for col in data_na:
rows = np.random.randint(0, nrow, 5000)
data_na[col].iloc[rows] = np.nan
pm2.info()
# I'm dropping wind_dir and id
regr_cols = ['year', 'month', 'day', 'hour', 'dew_point', 'temperature',
'pressure', 'wind_speed', 'hours_snow', 'hours_rain', 'pm2']
pm2_regr = pm2.loc[:, regr_cols]
# in the solution there is no year, month, day and hour
# also, he discards hours_snow and hours_rain (though they aren't binary or categorical)
# from sklearn.model_selection import train_test_split
from sklearn.preprocessing import Imputer
from sklearn.linear_model import LinearRegression
lr = LinearRegression()
# X = pm2_regr.iloc[:, :-1]
# y = pm2_regr.iloc[:, -1]
# Xtrain, Xtest, ytrain, ytest = train_test_split(pm2_regr.iloc[:, :-1], pm2_regr.iloc[:, -1], test_size=0.2, random_state=0)
#just a note to self
pm2_regr1 = pm2_regr.dropna(thresh=7) # same as dropna without thresh
# thresh is the number of non nan columns required to mantain the rows
pm2_regr1 = pm2_regr.dropna(thresh=5)
pm2_regr1.info()
lr.fit(pm2_regr.dropna().iloc[:, :-1], pm2_regr.dropna().iloc[:, -1])
lr.score(pm2_regr.dropna().iloc[:, :-1], pm2_regr.dropna().iloc[:, -1])
lr.fit(pm2_regr.dropna(thresh=5).iloc[:, :-1], pm2_regr.dropna(thresh=5).iloc[:, -1])
lr.score(pm2_regr.dropna(thresh=5).iloc[:, :-1], pm2_regr.dropna(thresh=5).iloc[:, -1])
lr.fit(pm2_regr.fillna(0).iloc[:, :-1], pm2_regr.fillna(0).iloc[:, -1])
lr.score(pm2_regr.fillna(0).iloc[:, :-1], pm2_regr.fillna(0).iloc[:, -1])
imp = Imputer(strategy='mean')
pm2_regr_mean = imp.fit_transform(pm2_regr)
lr.fit(pm2_regr_mean[:, :-1], pm2_regr_mean[:, -1])
lr.score(pm2_regr_mean[:, :-1], pm2_regr_mean[:, -1])
imp = Imputer(strategy='median')
pm2_regr_median = imp.fit_transform(pm2_regr)
lr.fit(pm2_regr_median[:, :-1], pm2_regr_median[:, -1])
lr.score(pm2_regr_median[:, :-1], pm2_regr_median[:, -1])
imp = Imputer(strategy='most_frequent')
pm2_regr_mode = imp.fit_transform(pm2_regr)
lr.fit(pm2_regr_mode[:, :-1], pm2_regr_mode[:, -1])
lr.score(pm2_regr_mode[:, :-1], pm2_regr_mode[:, -1])
pm2_regr_imp = pm2_regr.dropna(subset=['year', 'month', 'day', 'hour', 'pm2'])
imp = Imputer(strategy = 'median')
pm2_regr_imp = imp.fit_transform(pm2_regr_imp)
lr.fit(pm2_regr_imp[:, :-1], pm2_regr_imp[:, -1])
lr.score(pm2_regr_imp[:, :-1], pm2_regr_imp[:, -1])
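Note that in modern scikit-learn, `Imputer` (used above) was replaced by `SimpleImputer`. A toy sketch of how the three strategies compared above fill a missing value, on a made-up column:

```python
import numpy as np
from sklearn.impute import SimpleImputer

# one column with a single missing value (illustrative data)
col = np.array([[1.0], [2.0], [np.nan], [4.0], [2.0]])
mean_f = SimpleImputer(strategy='mean').fit_transform(col)
median_f = SimpleImputer(strategy='median').fit_transform(col)
mode_f = SimpleImputer(strategy='most_frequent').fit_transform(col)
print(mean_f[2, 0], median_f[2, 0], mode_f[2, 0])  # 2.25 2.0 2.0
```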
pm2.describe(include=['O'])
# for simplicity I'm using pandas function
pm2_enc = pd.get_dummies(pm2)
pm2_enc = pm2_enc.loc[:, regr_cols[:-1] + ['wind_dir_NE', 'wind_dir_NW', 'wind_dir_SE', 'wind_dir_cv'] + regr_cols[-1:]].dropna()
# from solutions using sklearn:
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
l_enc = LabelEncoder()
oh_enc = OneHotEncoder(sparse=False)
# change categorical data labels to integers
data_sub = pm2.copy()
data_sub.wind_dir = l_enc.fit_transform(data_sub.wind_dir)
# one-hot encode
dummies = pd.DataFrame(oh_enc.fit_transform(data_sub.wind_dir.values.reshape(-1, 1)), columns=l_enc.classes_)
# join with original df
data_sub = data_sub.drop('wind_dir', axis=1)
data_sub = data_sub.join(dummies)
data_sub.head()
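Both routes above (pandas `get_dummies` and sklearn's `LabelEncoder` + `OneHotEncoder`) produce the same 0/1 indicator matrix. A toy illustration of the pandas route alone (the values below are made up; only the column name matches the dataset above):

```python
import pandas as pd

# get_dummies expands one nominal column into 0/1 indicator columns
toy = pd.DataFrame({'wind_dir': ['NE', 'NW', 'SE', 'NE']})
dummies = pd.get_dummies(toy, columns=['wind_dir'])
print(dummies)
```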
lr.fit(pm2_enc.iloc[:, :-1], pm2_enc.iloc[:, -1])
lr.score(pm2_enc.iloc[:, :-1], pm2_enc.iloc[:, -1])
# hours_snow and hours_rain are cumulative across days, so I'm taking the max for each day to see if it snowed
days = pm2_enc.groupby(['year', 'month', 'day'])['hours_snow', 'hours_rain'].max()
# creating columns for the encodings
days['snow'] = pd.Series(days['hours_snow'] > 0, dtype='int')
days['rain'] = pd.Series(days['hours_rain'] > 0, dtype='int')
days['rain_snow'] = pd.Series((days['hours_rain'] > 0) & (days['hours_snow'] > 0), dtype='int')
days['no_rain_snow'] = pd.Series((days['hours_rain'] == 0) & (days['hours_snow'] == 0), dtype='int')
# resetting index and dropping hours_snow and hours_rain
days.reset_index(inplace=True)
days.drop(['hours_snow', 'hours_rain'], inplace=True, axis=1)
# joining the dataframe with the new columns to the original one
pm2_enc = pm2_enc.merge(days, left_on=['year', 'month', 'day'], right_on=['year', 'month', 'day'])
pm2_enc.info()
lr.fit(pm2_enc.iloc[:, :-1], pm2_enc.iloc[:, -1])
lr.score(pm2_enc.iloc[:, :-1], pm2_enc.iloc[:, -1])
# using pandas cut and subtracting 0.1 to include the min values
pm2_enc['wind_speed_quartile'] = pd.cut(pm2_enc.wind_speed,
bins=list(pm2_enc.wind_speed.quantile([0])-0.1) + list(pm2_enc.wind_speed.quantile([0.25, 0.5, 0.75, 1])),
labels=[0.25, 0.5, 0.75, 1])
# from solutions: using np.percentile:
quartile = np.percentile(data_sub['wind_speed'], [25, 50, 75, 100])
cat = []
for row in range(len(data_sub)):
    wind_speed = data_sub['wind_speed'].iloc[row]
    if wind_speed <= quartile[0]:
        cat.append('1st')
    elif wind_speed <= quartile[1]:
        cat.append('2nd')
    elif wind_speed <= quartile[2]:
        cat.append('3rd')
    else:
        cat.append('4th')
data_sub['wind_quart'] = cat
# and then create dummies...
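The manual percentile loop above can also be done in a single call with `pd.qcut`, which bins a series directly by quantile. A sketch on synthetic data (the speeds below are random, not the pm2.5 dataset):

```python
import numpy as np
import pandas as pd

# qcut assigns each value to its quartile in one call
speeds = pd.Series(np.random.RandomState(0).rand(100) * 10)
wind_quart = pd.qcut(speeds, q=4, labels=['1st', '2nd', '3rd', '4th'])
print(wind_quart.value_counts())
```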
# transforming the column in numeric
pm2_enc.wind_speed_quartile = pd.to_numeric(pm2_enc.wind_speed_quartile)
lr.fit(pm2_enc.iloc[:, :-1], pm2_enc.iloc[:, -1])
lr.score(pm2_enc.iloc[:, :-1], pm2_enc.iloc[:, -1])
# using pandas cut and subtracting 0.1 to include the min values
pm2_enc['dew_point_decile'] = pd.cut(pm2_enc.dew_point,
bins=list(pm2_enc.dew_point.quantile([0])-0.1) + list(pm2_enc.dew_point.quantile([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1])))
# from solutions: not using cut but creating new column and then:
data_sub.dew_dec = pd.Categorical(data_sub.dew_dec, categories=data_sub.dew_dec.unique(), ordered=True)
decile = pm2_enc.iloc[pm2_enc.temperature.argmax()].dew_point_decile
print(decile)
pm2_enc.loc[pm2_enc.dew_point_decile < decile]
wine = pd.read_csv('http://archive.ics.uci.edu/ml/machine-learning-databases/wine/wine.data', header=None)
wine.columns = ['class', 'alcohol', 'malic_acid', 'ash', 'alcalinity_ash', 'magnesium', 'total_phenols',
'flavanoids', 'nonflavanoid_phenols', 'proanthocyanins', 'color_intensity',
'hue', 'OD280_OD315', 'proline']
wine.head()
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import MinMaxScaler, StandardScaler
from sklearn.model_selection import train_test_split
Xtrain, Xtest, ytrain, ytest = train_test_split(wine.iloc[:, 1:], wine.iloc[:, 0], test_size=0.3, random_state=0)
knn = KNeighborsClassifier()
knn.fit(Xtrain, ytrain)
print(knn.score(Xtrain, ytrain))
print(knn.score(Xtest, ytest))
mms = MinMaxScaler()
Xtrain_norm = mms.fit_transform(Xtrain)
Xtest_norm = mms.transform(Xtest)
knn.fit(Xtrain_norm, ytrain)
print(knn.score(Xtrain_norm, ytrain))
print(knn.score(Xtest_norm, ytest))
ssc = StandardScaler()
Xtrain_std = ssc.fit_transform(Xtrain)
Xtest_std = ssc.transform(Xtest)
knn.fit(Xtrain_std, ytrain)
print(knn.score(Xtrain_std, ytrain))
print(knn.score(Xtest_std, ytest))
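The two scalers used above implement different mappings: min-max rescales each column to [0, 1], while standardization centers to zero mean and unit variance. A quick sketch on a toy column:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

x = np.array([[1.0], [2.0], [3.0], [4.0]])
mm = MinMaxScaler().fit_transform(x)    # (x - min) / (max - min) -> [0, 1]
ss = StandardScaler().fit_transform(x)  # (x - mean) / std -> mean 0, var 1
print(mm.ravel())
print(ss.ravel())
```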
from sklearn.naive_bayes import GaussianNB
gnb = GaussianNB()
gnb.fit(Xtrain, ytrain)
print(gnb.score(Xtrain, ytrain))
print(gnb.score(Xtest, ytest))
gnb.fit(Xtrain_norm, ytrain)
print(gnb.score(Xtrain_norm, ytrain))
print(gnb.score(Xtest_norm, ytest))
gnb.fit(Xtrain_std, ytrain)
print(gnb.score(Xtrain_std, ytrain))
print(gnb.score(Xtest_std, ytest))
from sklearn.datasets import load_boston
boston = load_boston()
boston_df = pd.DataFrame(boston.data, columns=boston.feature_names)
boston_target = boston.target
from sklearn.model_selection import train_test_split
Xtrain, Xtest, ytrain, ytest = train_test_split(boston_df, boston_target, test_size=0.3, random_state=0)
from sklearn.feature_selection import f_classif, SelectKBest
from sklearn.metrics import mean_squared_error
# in the solutions he uses f_regression and not f_classif
# also, best features are obtained by cols[sel.get_support()] with cols = Xtrain.columns
# and lr is instantiated with normalize=True
from sklearn.feature_selection import f_regression
from sklearn.linear_model import LinearRegression
mse = []
cols = Xtrain.columns
lr = LinearRegression(normalize=True)
# looping through the number of features desired and storing the results in mse
for k in range(1, boston_df.shape[1]+1):
# using SelectKBest with the F-statistic as the score
sel = SelectKBest(score_func=f_regression, k=k)
# fitting the selector
sel.fit(Xtrain, ytrain)
# transforming train and test sets
Xtrain_k = sel.transform(Xtrain)
Xtest_k = sel.transform(Xtest)
# fitting linear regression model and printing out the k best features
lr.fit(Xtrain_k, ytrain)
print('Top {} features {}'.format(sel.k, cols[sel.get_support()]))
mse.append(mean_squared_error(lr.predict(Xtest_k), ytest))
mse
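The F-statistic used by `SelectKBest` scores how strongly each feature relates to the target, so informative features get large scores and pure noise gets small ones. A sketch on synthetic data (everything below is made up for illustration):

```python
import numpy as np
from sklearn.feature_selection import f_regression

rng = np.random.RandomState(0)
informative = rng.rand(200)
noise = rng.rand(200)
y = 3 * informative + 0.1 * rng.randn(200)  # only one feature drives y

F, pval = f_regression(np.column_stack([informative, noise]), y)
print(F)  # the informative feature gets a far larger F-score
```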
mse = []
# looping through the number of features desired and storing the results in mse
for k in range(1, boston_df.shape[1]+1):
# using SelectKBest with the F-statistic as the score
sel = SelectKBest(score_func=f_classif, k=k)
# fitting the selector
sel.fit(Xtrain, ytrain)
# transforming train and test sets
Xtrain_k = sel.transform(Xtrain)
Xtest_k = sel.transform(Xtest)
# fitting linear regression model and printing out the k best features
lr.fit(Xtrain_k, ytrain)
print('Top {} features {}'.format(k, pd.Series(sel.scores_, index=Xtrain.columns).\
sort_values(ascending=False).\
head(k).index.values))
mse.append(mean_squared_error(lr.predict(Xtest_k), ytest))
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure(figsize=(16, 8))
plt.plot(range(1, len(mse)+1), mse)
plt.title('MSE for models with different number of features')
plt.xlabel('Number of Features')
plt.ylabel('MSE');
from sklearn.feature_selection import RFE
mse = []
# looping through the number of features desired and storing the results in mse
for k in range(1, boston_df.shape[1]+1):
# using Recursive Feature Selection with linear regression as estimator
sel = RFE(estimator=lr, n_features_to_select=k)
# fitting the selector
sel.fit(Xtrain, ytrain)
# transforming train and test sets
Xtrain_k = sel.transform(Xtrain)
Xtest_k = sel.transform(Xtest)
# fitting linear regression model and printing out the k best features
lr.fit(Xtrain_k, ytrain)
print('Top {} features {}'.format(k, pd.Series(sel.support_, index=Xtrain.columns).\
sort_values(ascending=False).\
head(k).index.values))
mse.append(mean_squared_error(lr.predict(Xtest_k), ytest))
plt.figure(figsize=(16, 8))
plt.plot(range(1, len(mse)+1), mse)
plt.title('MSE for models with different number of features')
plt.xlabel('Number of Features')
plt.ylabel('MSE');
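RFE works by refitting the estimator and dropping the weakest feature (smallest coefficient magnitude) each round until the requested number remain. A sketch on synthetic data where only the first column matters (all data below is illustrative):

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import LinearRegression

rng = np.random.RandomState(0)
X = rng.rand(100, 3)
y = 5 * X[:, 0] + 0.01 * rng.randn(100)  # columns 1 and 2 are noise

sel = RFE(estimator=LinearRegression(), n_features_to_select=1).fit(X, y)
print(sel.support_)  # only the informative column survives
```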
# in solutions he doesn't use select from model but repeats the previous exercise only using ridge instead
# ok no, it does both
# in selectfrommodel it uses c_vals = np.arange(0.1, 2.1, 0.1) to loop through and the threshold is set to
# str(c) + '*mean' for c in c_vals
# also, he always fits the ridge model
from sklearn.linear_model import Ridge
from sklearn.feature_selection import SelectFromModel
# fitting ridge regression
ridge = Ridge()
c_vals = np.arange(0.1, 2.1, 0.1)
cols = Xtrain.columns
mse = []
# looping through the possible threshholds from above and storing the results in mse
for c in c_vals:
# using SelectFromModel with the ridge scores from above
selfrmod = SelectFromModel(ridge, threshold=str(c) + '*mean')
# fitting the selector
selfrmod.fit(Xtrain, ytrain)
# transforming train and test sets
Xtrain_k = selfrmod.transform(Xtrain)
Xtest_k = selfrmod.transform(Xtest)
# fitting linear regression model and printing out the k best features
ridge.fit(Xtrain_k, ytrain)
print('c={} features {}'.format(c, cols[selfrmod.get_support()]))
mse.append(mean_squared_error(ridge.predict(Xtest_k), ytest))
mse
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure(figsize=(16, 8))
plt.plot(c_vals, mse)
plt.title('MSE for different thresholds')
plt.xlabel('c')
plt.ylabel('MSE');
from sklearn.linear_model import Ridge
# from sklearn.feature_selection import RFECV
from sklearn.feature_selection import SelectFromModel
# fitting ridge regression
ridge = Ridge()
ridge.fit(Xtrain, ytrain)
# storing features importance
coef = ridge.coef_
mse = []
# looping through the possible threshholds from above and storing the results in mse
for k, thresh in enumerate(sorted(coef, reverse=True)):
# using SelectFromModel with the ridge scores from above
selfrmod = SelectFromModel(ridge, threshold=thresh)
# fitting the selector
selfrmod.fit(Xtrain, ytrain)
# transforming train and test sets
Xtrain_k = selfrmod.transform(Xtrain)
Xtest_k = selfrmod.transform(Xtest)
# fitting linear regression model and printing out the k best features
lr.fit(Xtrain_k, ytrain)
print('Top {} features {}'.format(k+1, pd.Series(ridge.coef_, index=Xtrain.columns).\
sort_values(ascending=False).\
head(k+1).index.values))
mse.append(mean_squared_error(lr.predict(Xtest_k), ytest))
plt.figure(figsize=(16, 8))
plt.plot(range(1, len(mse)+1), mse)
plt.title('MSE for models with different number of features')
plt.xlabel('Number of Features')
plt.ylabel('MSE');
# again, in solutions he uses the c_vals as before and he fits the lasso
from sklearn.linear_model import LassoCV
from sklearn.feature_selection import SelectFromModel
# fitting ridge regression
lasso = LassoCV()
c_vals = np.arange(0.1, 2.1, 0.1)
cols = Xtrain.columns
mse = []
# looping through the possible threshholds from above and storing the results in mse
for c in c_vals:
# using SelectFromModel with the ridge scores from above
selfrmod = SelectFromModel(lasso, threshold=str(c) + '*mean')
# fitting the selector
selfrmod.fit(Xtrain, ytrain)
# transforming train and test sets
Xtrain_k = selfrmod.transform(Xtrain)
Xtest_k = selfrmod.transform(Xtest)
# fitting the lasso model on the selected features and printing them out
lasso.fit(Xtrain_k, ytrain)
print('c={} features {}'.format(c, cols[selfrmod.get_support()]))
mse.append(mean_squared_error(lasso.predict(Xtest_k), ytest))
mse
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure(figsize=(16, 8))
plt.plot(c_vals, mse)
plt.title('MSE for different thresholds')
plt.xlabel('c')
plt.ylabel('MSE');
from sklearn.linear_model import LassoCV
# fitting lasso regression
lasso = LassoCV()
lasso.fit(Xtrain, ytrain)
# storing features importance
coef = lasso.coef_
mse = []
# looping through the possible thresholds from above and storing the results in mse
for k, thresh in enumerate(sorted(coef, reverse=True)):
# using SelectFromModel with the lasso scores from above
selfrmod = SelectFromModel(lasso, threshold=thresh)
# fitting the selector
selfrmod.fit(Xtrain, ytrain)
# transforming train and test sets
Xtrain_k = selfrmod.transform(Xtrain)
Xtest_k = selfrmod.transform(Xtest)
# fitting linear regression model and printing out the k best features
lr.fit(Xtrain_k, ytrain)
print('Top {} features {}'.format(k+1, pd.Series(lasso.coef_, index=Xtrain.columns).\
sort_values(ascending=False).\
head(k+1).index.values))
mse.append(mean_squared_error(lr.predict(Xtest_k), ytest))
plt.figure(figsize=(16, 8))
plt.plot(range(1, len(mse)+1), mse)
plt.title('MSE for models with different number of features')
plt.xlabel('Number of Features')
plt.ylabel('MSE');
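For comparison, newer scikit-learn versions (>= 0.20) let you cap the number of selected features directly with `max_features`, which replaces the manual threshold sweep above. A minimal sketch on synthetic data (the dataset and alpha here are illustrative stand-ins, not the notebook's own data):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso

# Synthetic data standing in for Xtrain/ytrain above.
X, y = make_regression(n_samples=200, n_features=10, n_informative=3,
                       noise=0.1, random_state=0)

# threshold=-np.inf disables the importance cutoff so that exactly
# max_features columns (the top ones by |coef_|) are kept.
selector = SelectFromModel(Lasso(alpha=1.0), max_features=3, threshold=-np.inf)
X_sel = selector.fit_transform(X, y)
print(X_sel.shape)  # (200, 3)
```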
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Generate Random Routes
Step2: Save or load a specific set of routes
Step3: 2. Grid Search for Optimal Parameter Values
Step4: 3. Plot the Curves
Step5: 5. Visualize Routes
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import division
from matplotlib import pyplot as plt
import numpy as np
import os
import urllib
import json
import pandas as pd
from random import shuffle, choice
import pickle
import sys; sys.path.insert(0, os.path.abspath('..'));
import validator.validator as val
%matplotlib inline
mapzenKey = os.environ.get('MAPZEN_API')
gmapsKey = os.environ.get('GOOGLE_MAPS')
# routeList = val.get_POI_routes_by_length('Paris', 1, 5, 20, gmapsKey)
routeList = val.get_routes_by_length('San Francisco', 1, 5, 20, mapzenKey)
# routeList = pickle.load(open('sf_routes.pkl','rb'))
# pickle.dump(routeList, open('saf_routes.pkl','wb'))
df = pd.DataFrame(columns=['beta', 'sigma_z', 'score'])
outDfRow = -1
saveResults = False
noiseLevels = np.linspace(0, 100, 21)
noiseLevels = [5]
sampleRates = [1, 5, 10, 20, 30]
sampleRates = [5]
betas = np.linspace(1,10,19)
sigmaZs = np.linspace(1,10,19)
for i, rteCoords in enumerate(routeList):
print("Processing route {0} of {1}".format(i, len(routeList)))
shape, routeUrl = val.get_route_shape(rteCoords)
for beta in betas:
for sigmaZ in sigmaZs:
print("Computing score for sigma_z: {0}, beta: {1}".format(
sigmaZ, beta))
edges, shapeCoords, traceAttrUrl = val.get_trace_attrs(
shape, beta=beta, sigmaZ=sigmaZ)
edges = val.get_coords_per_second(shapeCoords, edges, '2768')
for noise in noiseLevels:
noise = round(noise,3)
for sampleRate in sampleRates:
outDfRow += 1
df.loc[outDfRow, ['beta','sigma_z']] = [beta, sigmaZ]
dfEdges = val.format_edge_df(edges)
dfEdges, jsonDict, geojson = val.synthesize_gps(
dfEdges, shapeCoords, '2768', noise=noise, sampleRate=sampleRate,
beta=beta, sigmaZ=sigmaZ)
segments, reportUrl = val.get_reporter_segments(jsonDict)
matches, score = val.get_matches(segments, dfEdges)
df.loc[outDfRow, 'score'] = score
if saveResults:
# label output files by route index, noise level, and sample rate
matches.to_csv(
'../data/matches_route{0}_w_{1}_m_noise_at_{2}_Hz.csv'.format(
i, str(noise), str(sampleRate)), index=False)
with open('../data/trace_route{0}_w_{1}_m_noise_at_{2}_Hz.geojson'.format(
i, str(noise), str(sampleRate)), 'w+') as fp:
json.dump(geojson, fp)
df['score'] = df['score'].astype(float)
df['beta'] = df['beta'].astype(float)
df['sigma_z'] = df['sigma_z'].astype(float)
df.groupby('beta').agg('mean').reset_index().plot('beta','score', figsize=(12,8))
df.groupby('sigma_z').agg('mean').reset_index().plot('sigma_z','score', figsize=(12,8))
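Beyond the two marginal curves above, the joint effect of beta and sigma_z can be summarized with a pivot table, which feeds directly into a heatmap via `plt.imshow(pivot.values)` or `seaborn.heatmap(pivot)`. A sketch using a toy DataFrame standing in for the grid-search results:

```python
import pandas as pd

# Toy stand-in for the grid-search results frame built above.
df = pd.DataFrame({
    'beta':    [1.0, 1.0, 5.0, 5.0],
    'sigma_z': [1.0, 5.0, 1.0, 5.0],
    'score':   [0.60, 0.75, 0.80, 0.90],
})

# (sigma_z x beta) grid of mean scores, ready for a heatmap.
pivot = df.pivot_table(index='sigma_z', columns='beta',
                       values='score', aggfunc='mean')
print(pivot)
```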
geojsonList = [trace for trace in os.listdir('../data/') if trace.endswith('json')]
fname = '../data/' + choice(geojsonList)
val.generate_route_map(fname, 14)
fname
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code:
def testSomeNumbers(limit , n ) :
if(n < 3 ) :
return
for a in range(1 , limit + 1 ) :
for b in range(a , limit + 1 ) :
pow_sum = pow(a , n ) + pow(b , n )
c = pow(pow_sum , 1.0 / n )
c_pow = pow(int(c ) , n )
if(c_pow == pow_sum ) :
print("Counter example found")
return
print("No counter example within given range and data")
testSomeNumbers(10 , 3 )
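As a sanity check on the brute-force search above, the same approach with n=2 does find a solution (a Pythagorean triple), while n=3 finds none in a small range, consistent with Fermat's Last Theorem. A self-contained sketch, using round() instead of int() to avoid float truncation near a perfect power:

```python
def find_power_sum(limit, n):
    # Return the first (a, b, c) with a**n + b**n == c**n, or None.
    for a in range(1, limit + 1):
        for b in range(a, limit + 1):
            pow_sum = a ** n + b ** n
            c = round(pow_sum ** (1.0 / n))
            if c ** n == pow_sum:
                return (a, b, c)
    return None

print(find_power_sum(10, 2))  # (3, 4, 5)
print(find_power_sum(10, 3))  # None
```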
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The Marvin Query object allows you to specify a string search condition with which you want to look for results. It will construct the necessary SQL syntax for you, send it to the database at Utah using the Marvin API, and return the results. The Query accepts as a keyword argument searchfilter.
Step2: The above string search condition is a pseudo-natural language format. Natural language in that you type what you mean to say, and pseudo because it still must be formatted in the standard SQL where condition syntax. This syntax generally takes the form of parameter_name operand value.
Step3: Running the query produces a Marvin Results object (r)
Step4: For number of results < 1000, Marvin will return the entire set of results. For queries that return > 1000, Marvin will paginate the results and only return the first 100, by default. (This can be modified with the limit keyword).
Step5: It can be useful for informational and debugging purposes to see the raw SQL of your query, and your query runtime. If your query times out or crashes, the Marvin team will need these pieces of info to assess anything.
Step6: Query results are stored in r.results. This is a Python list object, and be indexed like an array. Since we have 100 results, let's only look at 10 for brevity.
Step7: We will learn how to use the features of our Results object a little bit later, but first let's revise our search to see how more complex search queries work.
Step8: Compound Search Statements
Step9: Returning Additional Parameters
Step10: Local (Sub-Spaxel) Queries (... or DAP Zonal Queries)
Step11: Spaxel queries will return a list of all spaxels satisfying your criteria. By default spaxel queries will return the galaxy information, and spaxel x and y.
Step12: Once you have a set of query Results, you can easily convert your results into Marvin objects in your workflow. Depending on your result parameters, you can convert to Marvin Cubes, Maps, Spaxels, ModelCubes, or RSS. Let's convert our Results to Marvin Cubes. Note
Step13: or since our results are from a spaxel query, we can convert to Marvin Spaxels
Step14: You can also convert your query results into other formats like an Astropy Table, or FITS
Step15: A note on Table and Name shortcuts
|
<ASSISTANT_TASK:>
Python Code:
# Python 2/3 compatibility
from __future__ import print_function, division, absolute_import
# import matplolib just in case
import matplotlib.pyplot as plt
# this line tells the notebook to plot matplotlib static plots in the notebook itself
%matplotlib inline
# this line does the same thing but makes the plots interactive
#%matplotlib notebook
# Import the config and set to remote. Let's query MPL-5 data
from marvin import config
# by default the mode is set to 'auto', but let's set it explicitly to remote.
config.mode = 'remote'
# by default, Marvin uses the latest MPL but let's set it explicitly to MPL-5
config.setRelease('MPL-5')
# By default the API will query using the Utah server, at api.sdss.org/marvin2. See the config.sasurl attribute.
config.sasurl
# If you are using one of the two local ngrok Marvins, you need to switch the SAS Url to one of our ngrok ids.
# Uncomment out the following lines and replace the ngrokid with the provided string
#ngrokid = 'ngrok_number_string'
#config.switchSasUrl('local', ngrokid=ngrokid)
#print(config.sasurl)
# this is the Query tool
from marvin.tools.query import Query
# the string search condition
my_search = 'z < 0.1'
# the search condition using the full parameter name
my_search = 'nsa.z < 0.1'
# Let's setup the query. This will not run it automatically.
q = Query(searchfilter=my_search)
print(q)
# To run the query
r = q.run()
# Print result counts
print('total', r.totalcount)
print('returned', r.count)
# See the raw SQL
print(r.showQuery())
# See the runtime of your query. This produces a Python datetime.timedelta object showing days, seconds, microseconds
print('timedelta', r.query_runtime)
# See the total time in seconds
print('query time in seconds:', r.query_runtime.total_seconds())
# Show the results.
r.results[0:10]
# my new search
new_search = 'nsa.z < 0.1 and nsa.sersic_mass > 3e11'
config.setRelease('MPL-5')
q2 = Query(searchfilter=new_search)
r2 = q2.run()
print(r2.totalcount)
r2.results
# new search
new_search = '(z<0.1 and nsa.sersic_logmass > 11.47) or (ifu.name=19* and nsa.sersic_n < 2)'
q3 = Query(searchfilter=new_search)
r3 = q3.run()
r3.results[0:5]
my_search = 'nsa.z < 0.1'
q = Query(searchfilter=my_search, returnparams=['cube.ra', 'cube.dec'])
r = q.run()
r.results[0:5]
spax_search = 'nsa.z < 0.1 and emline_gflux_ha_6564 > 30'
q4 = Query(searchfilter=spax_search, returnparams=['emline_sew_ha_6564', 'emline_gflux_hb_4862', 'stellar_vel'])
r4 = q4.run()
r4.totalcount
r4.query_runtime.total_seconds()
r4.results[0:5]
# We have a large number of spaxel query results, but from how many actual galaxies?
plateifu = r4.getListOf('plateifu')
print('# unique galaxies', len(set(plateifu)))
print(set(plateifu))
# Convert to Cubes. For brevity, let's only convert only the first object.
r4.convertToTool('cube', limit=1)
print(r4.objects)
cube = r4.objects[0]
# From a cube, now we can do all things from Marvin Tools, like get a MaNGA MAPS object
maps = cube.getMaps()
print(maps)
# get an emission line EW (sew) map
em=maps.getMap('emline_sew', channel='ha_6564')
# plot it
em.plot()
# .. and a stellar velocity map
st=maps.getMap('stellar_vel')
# plot it
st.plot()
# let's convert to Marvin Spaxels. Again, for brevity, let's only convert the first two.
r4.convertToTool('spaxel', limit=2)
print(r4.objects)
# Now we can do all the Spaxel things, like plot
spaxel = r4.objects[0]
spaxel.spectrum.plot()
r4.toTable()
r4.toFits('my_r4_results_2.fits')
# retrieve the list
allparams = q.get_available_params()
allparams
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Defining Materials
Step2: On the XML side, you have no choice but to supply an ID. However, in the Python API, if you don't give an ID, one will be automatically generated for you
Step3: We see that an ID of 2 was automatically assigned. Let's now move on to adding nuclides to our uo2 material. The Material object has a method add_nuclide() whose first argument is the name of the nuclide and second argument is the atom or weight fraction.
Step4: We see that by default it assumes we want an atom fraction.
Step5: Now we need to assign a total density to the material. We'll use the set_density for this.
Step6: You may sometimes be given a material specification where all the nuclide densities are in units of atom/b-cm. In this case, you just want the density to be the sum of the constituents. In that case, you can simply run mat.set_density('sum').
Step7: An astute observer might now point out that this water material we just created will only use free-atom cross sections. We need to tell it to use an $S(\alpha,\beta)$ table so that the bound atom cross section is used at thermal energies. To do this, there's an add_s_alpha_beta() method. Note the use of the GND-style name "c_H_in_H2O".
Step8: When we go to run the transport solver in OpenMC, it is going to look for a materials.xml file. Thus far, we have only created objects in memory. To actually create a materials.xml file, we need to instantiate a Materials collection and export it to XML.
Step9: Note that Materials is actually a subclass of Python's built-in list, so we can use methods like append(), insert(), pop(), etc.
Step10: Finally, we can create the XML file with the export_to_xml() method. In a Jupyter notebook, we can run a shell command by putting ! before it, so in this case we are going to display the materials.xml file that we created.
Step11: Element Expansion
Step12: We see that now O16 and O17 were automatically added. O18 is missing because our cross sections file (which is based on ENDF/B-VII.1) doesn't have O18. If OpenMC didn't know about the cross sections file, it would have assumed that all isotopes exist.
Step13: Enrichment
Step14: Mixtures
Step15: The 'wo' argument in the mix_materials() method specifies that the fractions are weight fractions. Materials can also be mixed by atomic and volume fractions with 'ao' and 'vo', respectively. For 'ao' and 'wo' the fractions must sum to one. For 'vo', if fractions do not sum to one, the remaining fraction is set as void.
Step16: Note that by default the sphere is centered at the origin so we didn't have to supply x0, y0, or z0 arguments. Strictly speaking, we could have omitted R as well since it defaults to one. To get the negative or positive half-space, we simply need to apply the - or + unary operators, respectively.
Step17: Now let's see if inside_sphere actually contains points inside the sphere
Step18: Everything works as expected! Now that we understand how to create half-spaces, we can create more complex volumes by combining half-spaces using Boolean operators
Step19: For many regions, OpenMC can automatically determine a bounding box. To get the bounding box, we use the bounding_box property of a region, which returns a tuple of the lower-left and upper-right Cartesian coordinates for the bounding box
Step20: Now that we see how to create volumes, we can use them to create a cell.
Step21: By default, the cell is not filled by any material (void). In order to assign a material, we set the fill property of a Cell.
Step22: Universes and in-line plotting
Step23: The Universe object has a plot method that will display our the universe as current constructed
Step24: By default, the plot will appear in the $x$-$y$ plane. We can change that with the basis argument.
Step25: If we have particular fondness for, say, fuchsia, we can tell the plot() method to make our cell that color.
Step26: Pin cell geometry
Step27: With the surfaces created, we can now take advantage of the built-in operators on surfaces to create regions for the fuel, the gap, and the clad
Step28: Now we can create corresponding cells that assign materials to these regions. As with materials, cells have unique IDs that are assigned either manually or automatically. Note that the gap cell doesn't have any material assigned (it is void by default).
Step29: Finally, we need to handle the coolant outside of our fuel pin. To do this, we create x- and y-planes that bound the geometry.
Step30: The water region is going to be everything outside of the clad outer radius and within the box formed as the intersection of four half-spaces.
Step31: OpenMC also includes a factory function that generates a rectangular prism that could have made our lives easier.
Step32: Pay attention here -- the object that was returned is NOT a surface. It is actually the intersection of four surface half-spaces, just like we created manually before. Thus, we don't need to apply the unary operator (-box). Instead, we can directly combine it with +clad_or.
Step33: The final step is to assign the cells we created to a universe and tell OpenMC that this universe is the "root" universe in our geometry. The Geometry is the final object that is actually exported to XML.
Step34: Starting source and settings
Step35: Now let's create a Settings object and give it the source we created along with specifying how many batches and particles we want to run.
Step36: User-defined tallies
Step37: The what is the total, fission, absorption, and (n,$\gamma$) reaction rates in $^{235}$U. By default, if we only specify what reactions, it will gives us tallies over all nuclides. We can use the nuclides attribute to name specific nuclides we're interested in.
Step38: Similar to the other files, we need to create a Tallies collection and export it to XML.
Step39: Running OpenMC
Step40: Great! OpenMC already told us our k-effective. It also spit out a file called tallies.out that shows our tallies. This is a very basic method to look at tally data; for more sophisticated methods, see other example notebooks.
Step41: Geometry plotting
Step42: With our plot created, we need to add it to a Plots collection which can be exported to XML.
Step43: Now we can run OpenMC in plotting mode by calling the plot_geometry() function. Under the hood this is calling openmc --plot.
Step44: OpenMC writes out a peculiar image with a .ppm extension. If you have ImageMagick installed, this can be converted into a more normal .png file.
Step45: We can use functionality from IPython to display the image inline in our notebook
Step46: That was a little bit cumbersome. Thankfully, OpenMC provides us with a method on the Plot class that does all that "boilerplate" work.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import openmc
uo2 = openmc.Material(1, "uo2")
print(uo2)
mat = openmc.Material()
print(mat)
help(uo2.add_nuclide)
# Add nuclides to uo2
uo2.add_nuclide('U235', 0.03)
uo2.add_nuclide('U238', 0.97)
uo2.add_nuclide('O16', 2.0)
uo2.set_density('g/cm3', 10.0)
zirconium = openmc.Material(name="zirconium")
zirconium.add_element('Zr', 1.0)
zirconium.set_density('g/cm3', 6.6)
water = openmc.Material(name="h2o")
water.add_nuclide('H1', 2.0)
water.add_nuclide('O16', 1.0)
water.set_density('g/cm3', 1.0)
water.add_s_alpha_beta('c_H_in_H2O')
materials = openmc.Materials([uo2, zirconium, water])
materials = openmc.Materials()
materials.append(uo2)
materials += [zirconium, water]
isinstance(materials, list)
materials.export_to_xml()
!cat materials.xml
water.remove_nuclide('O16')
water.add_element('O', 1.0)
materials.export_to_xml()
!cat materials.xml
!cat $OPENMC_CROSS_SECTIONS | head -n 10
print(' ...')
!cat $OPENMC_CROSS_SECTIONS | tail -n 10
uo2_three = openmc.Material()
uo2_three.add_element('U', 1.0, enrichment=3.0)
uo2_three.add_element('O', 2.0)
uo2_three.set_density('g/cc', 10.0)
# Create PuO2 material
puo2 = openmc.Material()
puo2.add_nuclide('Pu239', 0.94)
puo2.add_nuclide('Pu240', 0.06)
puo2.add_nuclide('O16', 2.0)
puo2.set_density('g/cm3', 11.5)
# Create the mixture
mox = openmc.Material.mix_materials([uo2, puo2], [0.97, 0.03], 'wo')
sphere = openmc.Sphere(r=1.0)
inside_sphere = -sphere
outside_sphere = +sphere
print((0,0,0) in inside_sphere, (0,0,2) in inside_sphere)
print((0,0,0) in outside_sphere, (0,0,2) in outside_sphere)
z_plane = openmc.ZPlane(z0=0)
northern_hemisphere = -sphere & +z_plane
northern_hemisphere.bounding_box
cell = openmc.Cell()
cell.region = northern_hemisphere
# or...
cell = openmc.Cell(region=northern_hemisphere)
cell.fill = water
universe = openmc.Universe()
universe.add_cell(cell)
# this also works
universe = openmc.Universe(cells=[cell])
universe.plot(width=(2.0, 2.0))
universe.plot(width=(2.0, 2.0), basis='xz')
universe.plot(width=(2.0, 2.0), basis='xz',
colors={cell: 'fuchsia'})
fuel_outer_radius = openmc.ZCylinder(r=0.39)
clad_inner_radius = openmc.ZCylinder(r=0.40)
clad_outer_radius = openmc.ZCylinder(r=0.46)
fuel_region = -fuel_outer_radius
gap_region = +fuel_outer_radius & -clad_inner_radius
clad_region = +clad_inner_radius & -clad_outer_radius
fuel = openmc.Cell(name='fuel')
fuel.fill = uo2
fuel.region = fuel_region
gap = openmc.Cell(name='air gap')
gap.region = gap_region
clad = openmc.Cell(name='clad')
clad.fill = zirconium
clad.region = clad_region
pitch = 1.26
left = openmc.XPlane(x0=-pitch/2, boundary_type='reflective')
right = openmc.XPlane(x0=pitch/2, boundary_type='reflective')
bottom = openmc.YPlane(y0=-pitch/2, boundary_type='reflective')
top = openmc.YPlane(y0=pitch/2, boundary_type='reflective')
water_region = +left & -right & +bottom & -top & +clad_outer_radius
moderator = openmc.Cell(name='moderator')
moderator.fill = water
moderator.region = water_region
box = openmc.rectangular_prism(width=pitch, height=pitch,
boundary_type='reflective')
type(box)
water_region = box & +clad_outer_radius
root_universe = openmc.Universe(cells=(fuel, gap, clad, moderator))
geometry = openmc.Geometry()
geometry.root_universe = root_universe
# or...
geometry = openmc.Geometry(root_universe)
geometry.export_to_xml()
!cat geometry.xml
# Create a point source
point = openmc.stats.Point((0, 0, 0))
source = openmc.Source(space=point)
settings = openmc.Settings()
settings.source = source
settings.batches = 100
settings.inactive = 10
settings.particles = 1000
settings.export_to_xml()
!cat settings.xml
cell_filter = openmc.CellFilter(fuel)
tally = openmc.Tally(1)
tally.filters = [cell_filter]
tally.nuclides = ['U235']
tally.scores = ['total', 'fission', 'absorption', '(n,gamma)']
tallies = openmc.Tallies([tally])
tallies.export_to_xml()
!cat tallies.xml
openmc.run()
!cat tallies.out
plot = openmc.Plot()
plot.filename = 'pinplot'
plot.width = (pitch, pitch)
plot.pixels = (200, 200)
plot.color_by = 'material'
plot.colors = {uo2: 'yellow', water: 'blue'}
plots = openmc.Plots([plot])
plots.export_to_xml()
!cat plots.xml
openmc.plot_geometry()
!convert pinplot.ppm pinplot.png
from IPython.display import Image
Image("pinplot.png")
plot.to_ipython_image()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To print multiple strings, import print_function to prevent Py2 from interpreting it as a tuple
Step2: Raising exceptions
Step3: Raising exceptions with a traceback
Step4: Exception chaining (PEP 3134)
Step5: Catching exceptions
Step6: Division
Step7: "True division" (float division)
Step8: "Old division" (i.e. compatible with Py2 behaviour)
Step9: Long integers
Step10: To test whether a value is an integer (of any kind)
Step11: Octal constants
Step12: Backtick repr
Step13: Metaclasses
Step14: Strings and bytes
Step15: The futurize and python-modernize tools do not currently offer an option to do this automatically.
Step16: See http
Step17: To loop over a byte-string with possible high-bit characters, obtaining each character as a byte-string of length 1
Step18: As an alternative, chr() and .encode('latin-1') can be used to convert an int into a 1-char byte string
Step19: basestring
Step20: unicode
Step21: StringIO
Step22: Imports relative to a package
Step23: Dictionaries
Step24: Iterating through dict keys/values/items
Step25: Iterable dict values
Step26: Iterable dict items
Step27: dict keys/values/items as a list
Step28: dict values as a list
Step29: dict items as a list
Step30: Custom class behaviour
Step31: Custom __str__ methods
Step32: Custom __nonzero__ vs __bool__ method
Step33: Lists versus iterators
Step34: range
Step35: map
Step36: imap
Step37: zip, izip
Step38: reduce()
Step39: raw_input()
Step40: input()
Step41: Warning
Step42: exec
Step43: But note that Py3's exec() is less powerful (and less dangerous) than Py2's exec statement.
Step44: unichr()
Step45: intern()
Step46: apply()
Step47: chr()
Step48: cmp()
Step49: reload()
Step50: Standard library
Step51: commands / subprocess modules
Step52: subprocess.check_output()
Step53: collections
Step54: StringIO module
Step55: http module
Step56: xmlrpc module
Step57: html escaping and entities
Step58: html parsing
Step59: urllib module
Step60: Tkinter
Step61: socketserver
Step62: copy_reg, copyreg
Step63: configparser
Step64: queue
Step65: repr, reprlib
Step66: UserDict, UserList, UserString
Step67: itertools
|
<ASSISTANT_TASK:>
Python Code:
# Python 2 only:
print 'Hello'
# Python 2 and 3:
print('Hello')
# Python 2 only:
print 'Hello', 'Guido'
# Python 2 and 3:
from __future__ import print_function # (at top of module)
print('Hello', 'Guido')
# Python 2 only:
print >> sys.stderr, 'Hello'
# Python 2 and 3:
from __future__ import print_function
print('Hello', file=sys.stderr)
# Python 2 only:
print 'Hello',
# Python 2 and 3:
from __future__ import print_function
print('Hello', end='')
# Python 2 only:
raise ValueError, "dodgy value"
# Python 2 and 3:
raise ValueError("dodgy value")
# Python 2 only:
traceback = sys.exc_info()[2]
raise ValueError, "dodgy value", traceback
# Python 3 only:
traceback = sys.exc_info()[2]
raise ValueError("dodgy value").with_traceback(traceback)
# Python 2 and 3: option 1
from six import reraise as raise_
# or
from future.utils import raise_
traceback = sys.exc_info()[2]
raise_(ValueError, "dodgy value", traceback)
# Python 2 and 3: option 2
from future.utils import raise_with_traceback
raise_with_traceback(ValueError("dodgy value"))
# Setup:
class DatabaseError(Exception):
pass
# Python 3 only
class FileDatabase:
def __init__(self, filename):
try:
self.file = open(filename)
except IOError as exc:
raise DatabaseError('failed to open') from exc
# Python 2 and 3:
from future.utils import raise_from
class FileDatabase:
def __init__(self, filename):
try:
self.file = open(filename)
except IOError as exc:
raise_from(DatabaseError('failed to open'), exc)
# Testing the above:
try:
fd = FileDatabase('non_existent_file.txt')
except Exception as e:
assert isinstance(e.__cause__, IOError) # FileNotFoundError on Py3.3+ inherits from IOError
# Python 2 only:
try:
...
except ValueError, e:
...
# Python 2 and 3:
try:
...
except ValueError as e:
...
# Python 2 only:
assert 2 / 3 == 0
# Python 2 and 3:
assert 2 // 3 == 0
# Python 3 only:
assert 3 / 2 == 1.5
# Python 2 and 3:
from __future__ import division # (at top of module)
assert 3 / 2 == 1.5
# Python 2 only:
a = b / c # with any types
# Python 2 and 3:
from past.utils import old_div
a = old_div(b, c) # always same as / on Py2
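What old_div does, roughly: floor division when both operands are integers, true division otherwise — mirroring Py2's `/`. A sketch of those semantics (a simplification; the real helper checks `numbers.Integral`):

```python
def old_div_sketch(a, b):
    # Py2 `/` semantics: floor division for two ints, else true division.
    if isinstance(a, int) and isinstance(b, int):
        return a // b
    return a / b

assert old_div_sketch(3, 2) == 1
assert old_div_sketch(-3, 2) == -2   # Py2 floors toward negative infinity
assert old_div_sketch(3.0, 2) == 1.5
```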
# Python 2 only
k = 9223372036854775808L
# Python 2 and 3:
k = 9223372036854775808
# Python 2 only
bigint = 1L
# Python 2 and 3
from builtins import int
bigint = int(1)
# Python 2 only:
if isinstance(x, (int, long)):
...
# Python 3 only:
if isinstance(x, int):
...
# Python 2 and 3: option 1
from builtins import int # subclass of long on Py2
if isinstance(x, int): # matches both int and long on Py2
...
# Python 2 and 3: option 2
from past.builtins import long
if isinstance(x, (int, long)):
...
0644 # Python 2 only
0o644 # Python 2 and 3
`x` # Python 2 only
repr(x) # Python 2 and 3
class BaseForm(object):
pass
class FormType(type):
pass
# Python 2 only:
class Form(BaseForm):
__metaclass__ = FormType
pass
# Python 3 only:
class Form(BaseForm, metaclass=FormType):
pass
# Python 2 and 3:
from six import with_metaclass
# or
from future.utils import with_metaclass
class Form(with_metaclass(FormType, BaseForm)):
pass
# Python 2 only
s1 = 'The Zen of Python'
s2 = u'ใใใชใใฎใใใใใใชๆนใใใ\n'
# Python 2 and 3
s1 = u'The Zen of Python'
s2 = u'ใใใชใใฎใใใใใใชๆนใใใ\n'
# Python 2 and 3
from __future__ import unicode_literals # at top of module
s1 = 'The Zen of Python'
s2 = 'ใใใชใใฎใใใใใใชๆนใใใ\n'
# Python 2 only
s = 'This must be a byte-string'
# Python 2 and 3
s = b'This must be a byte-string'
# Python 2 only:
for bytechar in 'byte-string with high-bit chars like \xf9':
...
# Python 3 only:
for myint in b'byte-string with high-bit chars like \xf9':
bytechar = bytes([myint])
# Python 2 and 3:
from builtins import bytes
for myint in bytes(b'byte-string with high-bit chars like \xf9'):
bytechar = bytes([myint])
# Python 3 only:
for myint in b'byte-string with high-bit chars like \xf9':
char = chr(myint) # returns a unicode string
bytechar = char.encode('latin-1')
# Python 2 and 3:
from builtins import bytes, chr
for myint in bytes(b'byte-string with high-bit chars like \xf9'):
char = chr(myint) # returns a unicode string
bytechar = char.encode('latin-1') # forces returning a byte str
# Python 2 only:
a = u'abc'
b = 'def'
assert (isinstance(a, basestring) and isinstance(b, basestring))
# Python 2 and 3: alternative 1
from past.builtins import basestring # pip install future
a = u'abc'
b = b'def'
assert (isinstance(a, basestring) and isinstance(b, basestring))
# Python 2 and 3: alternative 2: refactor the code to avoid considering
# byte-strings as strings.
from builtins import str
a = u'abc'
b = b'def'
c = b.decode()
assert isinstance(a, str) and isinstance(c, str)
# ...
# Python 2 only:
templates = [u"blog/blog_post_detail_%s.html" % unicode(slug)]
# Python 2 and 3: alternative 1
from builtins import str
templates = [u"blog/blog_post_detail_%s.html" % str(slug)]
# Python 2 and 3: alternative 2
from builtins import str as text
templates = [u"blog/blog_post_detail_%s.html" % text(slug)]
# Python 2 only:
from StringIO import StringIO
# or:
from cStringIO import StringIO
# Python 2 and 3:
from io import BytesIO # for handling byte strings
from io import StringIO # for handling unicode strings
# Python 2 only:
import submodule2
# Python 2 and 3:
from . import submodule2
# Python 2 and 3:
# To make Py2 code safer (more like Py3) by preventing
# implicit relative imports, you can also add this to the top:
from __future__ import absolute_import
heights = {'Fred': 175, 'Anne': 166, 'Joe': 192}
# Python 2 only:
for key in heights.iterkeys():
...
# Python 2 and 3:
for key in heights:
...
# Python 2 only:
for value in heights.itervalues():
...
# Idiomatic Python 3
for value in heights.values(): # extra memory overhead on Py2
...
# Python 2 and 3: option 1
from builtins import dict
heights = dict(Fred=175, Anne=166, Joe=192)
for value in heights.values(): # efficient on Py2 and Py3
...
# Python 2 and 3: option 2
from future.utils import itervalues
# or
from six import itervalues
for value in itervalues(heights):
...
# Python 2 only:
for (key, value) in heights.iteritems():
...
# Python 2 and 3: option 1
for (key, value) in heights.items(): # inefficient on Py2
...
# Python 2 and 3: option 2
from future.utils import viewitems
for (key, value) in viewitems(heights): # also behaves like a set
...
# Python 2 and 3: option 3
from future.utils import iteritems
# or
from six import iteritems
for (key, value) in iteritems(heights):
...
# Python 2 only:
keylist = heights.keys()
assert isinstance(keylist, list)
# Python 2 and 3:
keylist = list(heights)
assert isinstance(keylist, list)
# Python 2 only:
heights = {'Fred': 175, 'Anne': 166, 'Joe': 192}
valuelist = heights.values()
assert isinstance(valuelist, list)
# Python 2 and 3: option 1
valuelist = list(heights.values()) # inefficient on Py2
# Python 2 and 3: option 2
from builtins import dict
heights = dict(Fred=175, Anne=166, Joe=192)
valuelist = list(heights.values())
# Python 2 and 3: option 3
from future.utils import listvalues
valuelist = listvalues(heights)
# Python 2 and 3: option 4
from future.utils import itervalues
# or
from six import itervalues
valuelist = list(itervalues(heights))
# Python 2 and 3: option 1
itemlist = list(heights.items()) # inefficient on Py2
# Python 2 and 3: option 2
from future.utils import listitems
itemlist = listitems(heights)
# Python 2 and 3: option 3
from future.utils import iteritems
# or
from six import iteritems
itemlist = list(iteritems(heights))
# Python 2 only
class Upper(object):
def __init__(self, iterable):
self._iter = iter(iterable)
def next(self): # Py2-style
return self._iter.next().upper()
def __iter__(self):
return self
itr = Upper('hello')
assert itr.next() == 'H' # Py2-style
assert list(itr) == list('ELLO')
# Python 2 and 3: option 1
from builtins import object
class Upper(object):
def __init__(self, iterable):
self._iter = iter(iterable)
def __next__(self): # Py3-style iterator interface
return next(self._iter).upper() # builtin next() function calls
def __iter__(self):
return self
itr = Upper('hello')
assert next(itr) == 'H' # compatible style
assert list(itr) == list('ELLO')
# Python 2 and 3: option 2
from future.utils import implements_iterator
@implements_iterator
class Upper(object):
def __init__(self, iterable):
self._iter = iter(iterable)
def __next__(self): # Py3-style iterator interface
return next(self._iter).upper() # builtin next() function calls
def __iter__(self):
return self
itr = Upper('hello')
assert next(itr) == 'H'
assert list(itr) == list('ELLO')
# Python 2 only:
class MyClass(object):
def __unicode__(self):
return 'Unicode string: \u5b54\u5b50'
def __str__(self):
return unicode(self).encode('utf-8')
a = MyClass()
print(a) # prints encoded string
# Python 2 and 3:
from future.utils import python_2_unicode_compatible
@python_2_unicode_compatible
class MyClass(object):
def __str__(self):
return u'Unicode string: \u5b54\u5b50'
a = MyClass()
print(a) # prints string encoded as utf-8 on Py2
# Python 2 only:
class AllOrNothing(object):
def __init__(self, l):
self.l = l
def __nonzero__(self):
return all(self.l)
container = AllOrNothing([0, 100, 200])
assert not bool(container)
# Python 2 and 3:
from builtins import object
class AllOrNothing(object):
def __init__(self, l):
self.l = l
def __bool__(self):
return all(self.l)
container = AllOrNothing([0, 100, 200])
assert not bool(container)
# Python 2 only:
for i in xrange(10**8):
...
# Python 2 and 3: forward-compatible
from builtins import range
for i in range(10**8):
...
# Python 2 and 3: backward-compatible
from past.builtins import xrange
for i in xrange(10**8):
...
# Python 2 only
mylist = range(5)
assert mylist == [0, 1, 2, 3, 4]
# Python 2 and 3: forward-compatible: option 1
mylist = list(range(5)) # copies memory on Py2
assert mylist == [0, 1, 2, 3, 4]
# Python 2 and 3: forward-compatible: option 2
from builtins import range
mylist = list(range(5))
assert mylist == [0, 1, 2, 3, 4]
# Python 2 and 3: option 3
from future.utils import lrange
mylist = lrange(5)
assert mylist == [0, 1, 2, 3, 4]
# Python 2 and 3: backward compatible
from past.builtins import range
mylist = range(5)
assert mylist == [0, 1, 2, 3, 4]
# Python 2 only:
mynewlist = map(f, myoldlist)
assert mynewlist == [f(x) for x in myoldlist]
# Python 2 and 3: option 1
# Idiomatic Py3, but inefficient on Py2
mynewlist = list(map(f, myoldlist))
assert mynewlist == [f(x) for x in myoldlist]
# Python 2 and 3: option 2
from builtins import map
mynewlist = list(map(f, myoldlist))
assert mynewlist == [f(x) for x in myoldlist]
# Python 2 and 3: option 3
try:
from itertools import imap as map
except ImportError:
pass
mynewlist = list(map(f, myoldlist)) # inefficient on Py2
assert mynewlist == [f(x) for x in myoldlist]
# Python 2 and 3: option 4
from future.utils import lmap
mynewlist = lmap(f, myoldlist)
assert mynewlist == [f(x) for x in myoldlist]
# Python 2 and 3: option 5
from past.builtins import map
mynewlist = map(f, myoldlist)
assert mynewlist == [f(x) for x in myoldlist]
# Python 2 only:
from itertools import imap
myiter = imap(func, myoldlist)
assert iter(myiter) is myiter    # it is a lazy iterator
# Python 3 only:
myiter = map(func, myoldlist)
assert iter(myiter) is myiter    # it is a lazy iterator
# Python 2 and 3: option 1
from builtins import map
myiter = map(func, myoldlist)
assert iter(myiter) is myiter    # it is a lazy iterator
# Python 2 and 3: option 2
try:
from itertools import imap as map
except ImportError:
pass
myiter = map(func, myoldlist)
assert iter(myiter) is myiter    # it is a lazy iterator
# Python 2 only
f = open('myfile.txt')
data = f.read() # as a byte string
text = data.decode('utf-8')
# Python 2 and 3: alternative 1
from io import open
f = open('myfile.txt', 'rb')
data = f.read() # as bytes
text = data.decode('utf-8') # unicode, not bytes
# Python 2 and 3: alternative 2
from io import open
f = open('myfile.txt', encoding='utf-8')
text = f.read() # unicode, not bytes
# Python 2 only:
assert reduce(lambda x, y: x+y, [1, 2, 3, 4, 5]) == 1+2+3+4+5
# Python 2 and 3:
from functools import reduce
assert reduce(lambda x, y: x+y, [1, 2, 3, 4, 5]) == 1+2+3+4+5
# Python 2 only:
name = raw_input('What is your name? ')
assert isinstance(name, str) # native str
# Python 2 and 3:
from builtins import input
name = input('What is your name? ')
assert isinstance(name, str) # native str on Py2 and Py3
# Python 2 only:
input("Type something safe please: ")
# Python 2 and 3
from builtins import input
eval(input("Type something safe please: "))
# Python 2 only:
f = file(pathname)
# Python 2 and 3:
f = open(pathname)
# But preferably, use this:
from io import open
f = open(pathname, 'rb') # if f.read() should return bytes
# or
f = open(pathname, 'rt') # if f.read() should return unicode text
# Python 2 only:
exec 'x = 10'
# Python 2 and 3:
exec('x = 10')
# Python 2 only:
g = globals()
exec 'x = 10' in g
# Python 2 and 3:
g = globals()
exec('x = 10', g)
# Python 2 only:
l = locals()
exec 'x = 10' in g, l
# Python 2 and 3:
exec('x = 10', g, l)
# Python 2 only:
execfile('myfile.py')
# Python 2 and 3: alternative 1
from past.builtins import execfile
execfile('myfile.py')
# Python 2 and 3: alternative 2
exec(compile(open('myfile.py').read(), 'myfile.py', 'exec'))
# This can sometimes cause this:
# SyntaxError: function ... uses import * and bare exec ...
# See https://github.com/PythonCharmers/python-future/issues/37
# Python 2 only:
assert unichr(8364) == '€'
# Python 3 only:
assert chr(8364) == '€'
# Python 2 and 3:
from builtins import chr
assert chr(8364) == '€'
# Python 2 only:
intern('mystring')
# Python 3 only:
from sys import intern
intern('mystring')
# Python 2 and 3: alternative 1
from past.builtins import intern
intern('mystring')
# Python 2 and 3: alternative 2
from six.moves import intern
intern('mystring')
# Python 2 and 3: alternative 3
from future.standard_library import install_aliases
install_aliases()
from sys import intern
intern('mystring')
# Python 2 and 3: alternative 2
try:
from sys import intern
except ImportError:
pass
intern('mystring')
args = ('a', 'b')
kwargs = {'kwarg1': True}
# Python 2 only:
apply(f, args, kwargs)
# Python 2 and 3: alternative 1
f(*args, **kwargs)
# Python 2 and 3: alternative 2
from past.builtins import apply
apply(f, args, kwargs)
# Python 2 only:
assert chr(64) == b'@'
assert chr(200) == b'\xc8'
# Python 3 only: option 1
assert chr(64).encode('latin-1') == b'@'
assert chr(0xc8).encode('latin-1') == b'\xc8'
# Python 2 and 3: option 1
from builtins import chr
assert chr(64).encode('latin-1') == b'@'
assert chr(0xc8).encode('latin-1') == b'\xc8'
# Python 3 only: option 2
assert bytes([64]) == b'@'
assert bytes([0xc8]) == b'\xc8'
# Python 2 and 3: option 2
from builtins import bytes
assert bytes([64]) == b'@'
assert bytes([0xc8]) == b'\xc8'
# Python 2 only:
assert cmp('a', 'b') < 0 and cmp('b', 'a') > 0 and cmp('c', 'c') == 0
# Python 2 and 3: alternative 1
from past.builtins import cmp
assert cmp('a', 'b') < 0 and cmp('b', 'a') > 0 and cmp('c', 'c') == 0
# Python 2 and 3: alternative 2
cmp = lambda x, y: (x > y) - (x < y)
assert cmp('a', 'b') < 0 and cmp('b', 'a') > 0 and cmp('c', 'c') == 0
# Python 2 only:
reload(mymodule)
# Python 2 and 3
from imp import reload
reload(mymodule)
# Python 2 only
import anydbm
import whichdb
import dbm
import dumbdbm
import gdbm
# Python 2 and 3: alternative 1
from future import standard_library
standard_library.install_aliases()
import dbm
import dbm.ndbm
import dbm.dumb
import dbm.gnu
# Python 2 and 3: alternative 2
from future.moves import dbm
from future.moves.dbm import dumb
from future.moves.dbm import ndbm
from future.moves.dbm import gnu
# Python 2 and 3: alternative 3
from six.moves import dbm_gnu
# (others not supported)
# Python 2 only
from commands import getoutput, getstatusoutput
# Python 2 and 3
from future import standard_library
standard_library.install_aliases()
from subprocess import getoutput, getstatusoutput
# Python 2.7 and above
from subprocess import check_output
# Python 2.6 and above: alternative 1
from future.moves.subprocess import check_output
# Python 2.6 and above: alternative 2
from future import standard_library
standard_library.install_aliases()
from subprocess import check_output
# Python 2.7 and above (ChainMap needs Python 3.3+ or the backports below)
from collections import Counter, OrderedDict, ChainMap
# Python 2.6 and above: alternative 1
from future.backports import Counter, OrderedDict, ChainMap
# Python 2.6 and above: alternative 2
from future import standard_library
standard_library.install_aliases()
from collections import Counter, OrderedDict, ChainMap
# Python 2 only
from StringIO import StringIO
from cStringIO import StringIO
# Python 2 and 3
from io import BytesIO
# and refactor StringIO() calls to BytesIO() if passing byte-strings
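# A minimal round-trip illustrating the refactor above (stdlib only, an aside):
# BytesIO stores bytes, so text must be encoded on the way in and decoded on
# the way out.

```python
import io

buf = io.BytesIO()
buf.write(u'caf\xe9'.encode('utf-8'))    # text -> bytes explicitly
assert buf.getvalue().decode('utf-8') == u'caf\xe9'
```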
# Python 2 only:
import httplib
import Cookie
import cookielib
import BaseHTTPServer
import SimpleHTTPServer
import CGIHttpServer
# Python 2 and 3 (after ``pip install future``):
import http.client
import http.cookies
import http.cookiejar
import http.server
# Python 2 only:
import DocXMLRPCServer
import SimpleXMLRPCServer
# Python 2 and 3 (after ``pip install future``):
import xmlrpc.server
# Python 2 only:
import xmlrpclib
# Python 2 and 3 (after ``pip install future``):
import xmlrpc.client
# Python 2 and 3:
from cgi import escape
# Safer (Python 2 and 3, after ``pip install future``):
from html import escape
# Python 2 only:
from htmlentitydefs import codepoint2name, entitydefs, name2codepoint
# Python 2 and 3 (after ``pip install future``):
from html.entities import codepoint2name, entitydefs, name2codepoint
# Python 2 only:
from HTMLParser import HTMLParser
# Python 2 and 3 (after ``pip install future``)
from html.parser import HTMLParser
# Python 2 and 3 (alternative 2):
from future.moves.html.parser import HTMLParser
# Python 2 only:
from urlparse import urlparse
from urllib import urlencode
from urllib2 import urlopen, Request, HTTPError
# Python 3 only:
from urllib.parse import urlparse, urlencode
from urllib.request import urlopen, Request
from urllib.error import HTTPError
# Python 2 and 3: easiest option
from future.standard_library import install_aliases
install_aliases()
from urllib.parse import urlparse, urlencode
from urllib.request import urlopen, Request
from urllib.error import HTTPError
# Python 2 and 3: alternative 2
from future.standard_library import hooks
with hooks():
from urllib.parse import urlparse, urlencode
from urllib.request import urlopen, Request
from urllib.error import HTTPError
# Python 2 and 3: alternative 3
from future.moves.urllib.parse import urlparse, urlencode
from future.moves.urllib.request import urlopen, Request
from future.moves.urllib.error import HTTPError
# or
from six.moves.urllib.parse import urlparse, urlencode
from six.moves.urllib.request import urlopen
from six.moves.urllib.error import HTTPError
# Python 2 and 3: alternative 4
try:
from urllib.parse import urlparse, urlencode
from urllib.request import urlopen, Request
from urllib.error import HTTPError
except ImportError:
from urlparse import urlparse
from urllib import urlencode
from urllib2 import urlopen, Request, HTTPError
# Python 2 only:
import Tkinter
import Dialog
import FileDialog
import ScrolledText
import SimpleDialog
import Tix
import Tkconstants
import Tkdnd
import tkColorChooser
import tkCommonDialog
import tkFileDialog
import tkFont
import tkMessageBox
import tkSimpleDialog
import ttk
# Python 2 and 3 (after ``pip install future``):
import tkinter
import tkinter.dialog
import tkinter.filedialog
import tkinter.scrolledtext
import tkinter.simpledialog
import tkinter.tix
import tkinter.constants
import tkinter.dnd
import tkinter.colorchooser
import tkinter.commondialog
import tkinter.filedialog
import tkinter.font
import tkinter.messagebox
import tkinter.simpledialog
import tkinter.ttk
# Python 2 only:
import SocketServer
# Python 2 and 3 (after ``pip install future``):
import socketserver
# Python 2 only:
import copy_reg
# Python 2 and 3 (after ``pip install future``):
import copyreg
# Python 2 only:
from ConfigParser import ConfigParser
# Python 2 and 3 (after ``pip install future``):
from configparser import ConfigParser
# Python 2 only:
from Queue import Queue
# Python 2 and 3 (after ``pip install future``):
from queue import Queue
# Python 2 only:
from repr import aRepr, repr
# Python 2 and 3 (after ``pip install future``):
from reprlib import aRepr, repr
# Python 2 only:
from UserDict import UserDict
from UserList import UserList
from UserString import UserString
# Python 3 only:
from collections import UserDict, UserList, UserString
# Python 2 and 3: alternative 1
from future.moves.collections import UserDict, UserList, UserString
# Python 2 and 3: alternative 2
from six.moves import UserDict, UserList, UserString
# Python 2 and 3: alternative 3
from future.standard_library import install_aliases
install_aliases()
from collections import UserDict, UserList, UserString
# Python 2 only:
from itertools import ifilterfalse, izip_longest
# Python 3 only:
from itertools import filterfalse, zip_longest
# Python 2 and 3: alternative 1
from future.moves.itertools import filterfalse, zip_longest
# Python 2 and 3: alternative 2
from six.moves import filterfalse, zip_longest
# Python 2 and 3: alternative 3
from future.standard_library import install_aliases
install_aliases()
from itertools import filterfalse, zip_longest
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Linearizing An Exponential
Step2: NO, we cannot linearize and ignore the impact on noise
Step3: The Shapiro-Wilk test says absolutely not are they normal, as we can imagine from looking at the graph.
Step4: Error Analysis in Nonlinear Least Squares
Step5: Now to actually compute the standard error
Step6: Of course, we could continue on and do hypothesis tests
Step7: You can see that there are probably 3 peaks in here. What we'd like to find out is what percentage each peak contributes. For example, this would tell us the amount of absorbance contributed by each of these three bonds or perhaps the amount of each compound in chromatography.
Step8: It's always good to test your functions, so let's do that
Step9: Ok! Now let's do the regression
Step10: Looks like there is a correlation, as indicated by the $p$-value.
Step11: Optimizing
Step12: Let's see if 100 iterations gave us good data!
Step13: What a bad fit! Let's try plotting the individual peaks
Step14: Wow, that is really wrong! We can hit on particular peaks, but they usually have no meaning. Let's try to add more info. What info do we have? The peak centers. Let's try adding some constraints describing this info
Step15: Checking residuals
Step16: Looks like they are normal
Step17: We have to decide how we want to build the ${\mathbf F}$ matrix. I want to build it as
Step18: Now we compute all the confidence intervals
Step19: The relative populations, the integrated peaks, are just the $a$ values. I'll normalize them into percent
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import random
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats
import scipy.linalg as linalg
import scipy.optimize   # used below by scipy.optimize.minimize
import matplotlib
#NOT PART OF REGRESSION!
#Make up an equation and create data from it
x = np.linspace(0, 10, 20)
y = 2 * np.exp(-x**2 * 0.1) + scipy.stats.norm.rvs(size=len(x), loc=0, scale=0.2)
#END
plt.plot(x,y, 'o', label='data')
plt.plot(x, 2 * np.exp(-x**2 * 0.1), '-', label='exact solution')
plt.legend(loc='upper right')
plt.show()
#Now we compute the least squares solution
x_mat = np.column_stack( (np.ones(len(x)), -x**2) )
#Any negative y-values will not work, since log of a negative number is undefined
y_clean = []
for yi in y:
if yi < 0:
y_clean.append(0.0000001)
else:
y_clean.append(yi)
lin_y = np.log(y_clean)
#recall that the *_ means put all the other
#return values into the _ variable, which we
#don't need
lin_beta, *_ = linalg.lstsq(x_mat, lin_y)
print(lin_beta)
beta_0 = np.exp(lin_beta[0])
beta_1 = lin_beta[1]
print(beta_0, 2)    # estimated beta_0 vs. true value 2
print(beta_1, 0.1)  # estimated beta_1 vs. true value 0.1
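# Aside (sketch, not part of the original notebook): with *noise-free* data the
# log-linearization is exact -- the bias seen above comes purely from the way
# the log transform distorts the additive noise, not from the fitting step.

```python
import numpy as np

x_chk = np.linspace(0.1, 3.0, 30)
b0_true, b1_true = 2.0, 0.1
y_chk = b0_true * np.exp(-b1_true * x_chk**2)             # no noise added
X_chk = np.column_stack((np.ones_like(x_chk), -x_chk**2))
coef_chk, *_ = np.linalg.lstsq(X_chk, np.log(y_chk), rcond=None)
b0_hat, b1_hat = np.exp(coef_chk[0]), coef_chk[1]
# b0_hat and b1_hat recover 2.0 and 0.1 to machine precision
```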
plt.plot(x,y, 'o', label='data')
plt.plot(x, 2 * np.exp(-x**2 * 0.1), '-', label='exact solution')
plt.plot(x, beta_0 * np.exp(-x**2 * beta_1), '-', label='linearized least squares')
plt.legend(loc='upper right')
plt.show()
resids = y - beta_0 * np.exp(-x**2 * beta_1)
scipy.stats.shapiro(resids)
#Create an objective function, that takes in 1 D-dimensional argument and outputs a measure of the goodness of fit (SSR)
def obj(beta, x, y):
beta_0 = beta[0] #<- extract the elements of the beta vector
beta_1 = beta[1]
yhat = beta_0 * np.exp(-beta_1 * x**2) # <- This is our model equation
resids = yhat - y #<- compute residuals
SSR = np.sum(resids**2) #<- square and sum them
return SSR
#Use the minimize (BGFS) function, with starting points
result = scipy.optimize.minimize(obj, x0=[1,1], args=(x, y))
beta_opt = result.x #<- remember, we get out a whole bunch of extra stuff from optimization
print(result)
plt.plot(x,y, 'o', label='data')
plt.plot(x, 2 * np.exp(-x**2 * 0.1), '-', label='exact solution')
plt.plot(x, beta_0 * np.exp(-x**2 * beta_1), '-', label='linearized least squares')
plt.plot(x, beta_opt[0] * np.exp(-x**2 * beta_opt[1]), '-', label='Nonlinear least squares')
plt.legend(loc='upper right')
plt.show()
def build_F(beta, x):
#Compute the individual partials for each data point
beta_0_vec = np.exp(-beta[1] * x**2)
beta_1_vec = -beta[0] * x**2 * np.exp(-beta[1] * x**2)
#Now stack them together
return np.column_stack( (beta_0_vec, beta_1_vec) )
print(build_F(beta_opt, x))
#The code below is our normal way of computing the standard error in the noise
resids = y - beta_opt[0] * np.exp(-x**2 * beta_opt[1])
SSR = np.sum(resids**2)
s2_epsilon = SSR / (len(x) - len(beta_opt))
print(s2_epsilon)
#Using our F, compute the standard error in beta
F = build_F(beta_opt, x)
s2_beta = s2_epsilon * linalg.inv(F.transpose() @ F)
print(s2_beta)
#We have standard error and can now compute a confidence interval
T = scipy.stats.t.ppf(0.975, len(x) - len(beta_opt))
c0_width = T * np.sqrt(s2_beta[0,0])
print('95% confidence interval for beta_0 is {:.3} +/- {:.2f}'.format(beta_opt[0], c0_width))
c1_width = T * np.sqrt(s2_beta[1,1])
print('95% confidence interval for beta_1 is {:.3} +/- {:.2}'.format(beta_opt[1], c1_width))
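# Sanity sketch (an aside): for a model that is *linear* in beta, the Jacobian
# F is just the design matrix, so the s2 * inv(F'F) formula must reduce to the
# textbook OLS covariance -- e.g. its (1,1) entry equals the classic
# 1/sum((x - xbar)^2) slope-variance factor.  A cheap consistency check for
# build_F-style code:

```python
import numpy as np

xs_lin = np.linspace(0.0, 1.0, 50)
F_lin = np.column_stack((np.ones_like(xs_lin), xs_lin))   # model y = b0 + b1*x
cov_unit = np.linalg.inv(F_lin.T @ F_lin)                 # covariance per unit noise variance
slope_factor = 1.0 / np.sum((xs_lin - xs_lin.mean())**2)  # textbook slope-variance factor
# cov_unit[1, 1] and slope_factor agree to machine precision
```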
data = np.genfromtxt('spectrum.txt')
plt.plot(data[:,0], data[:,1])
plt.show()
def peak(x, a, b, c):
'''computes the peak equation for given parameters'''
return a / np.sqrt(2 * np.pi * c) * np.exp(-(x - b)**2 / c)
def spectrum(x, a_array, b_array, c_array):
'''Takes in the x data and parameters for a set of peaks. Computes spectrum'''
yhat = np.zeros(np.shape(x))
for i in range(len(a_array)):
yhat += peak(x, a_array[i], b_array[i], c_array[i])
return yhat
x = np.linspace(0, 10, 100)
y = peak(x, 1, 5, 1)
plt.plot(x,y)
plt.show()
y = spectrum(x, [1, 2], [3, 5], [1,1])
plt.plot(x,y)
plt.show()
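# Quick numeric aside: the exponent in peak() is -(x-b)^2/c rather than the
# usual Gaussian -(x-b)^2/(2c), so each peak integrates to a/sqrt(2) instead
# of a.  Relative areas are still proportional to a, which is all the final
# percentage normalization relies on.  A self-contained check of that claim:

```python
import numpy as np

a_p, b_p, c_p = 1.5, 5.0, 1.0
grid = np.linspace(b_p - 30.0, b_p + 30.0, 200001)
vals = a_p / np.sqrt(2*np.pi*c_p) * np.exp(-(grid - b_p)**2 / c_p)
area = vals.sum() * (grid[1] - grid[0])   # Riemann sum; the tails are ~0
# area comes out as a_p / sqrt(2), not a_p
```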
spec_x = data[:,0]
spec_y = data[:,1]
scipy.stats.spearmanr(spec_x, spec_y)
def spec_ssr(params, data, M):
'''Compute SSR given the parameters, data, and number of desired peaks.'''
x = data[:,0]
y = data[:,1]
a_array = params[:M]
b_array = params[M:2*M]
c_array = params[2*M:3 * M]
yhat = spectrum(x, a_array, b_array, c_array)
return np.sum((yhat - y)**2)
def obj(params):
return spec_ssr(params, data=data, M=3)
import scipy.optimize as opt
result = opt.basinhopping(obj, x0=[100, 100, 100, 600, 650, 700, 100, 100, 100], niter=100)
print(result.x)
def spec_yhat(params, data, M):
'''compute the yhats for the spectrum problem'''
x = data[:,0]
a_array = params[:M]
b_array = params[M:2*M]
c_array = params[2*M:3 * M]
return spectrum(x, a_array, b_array, c_array)
plt.plot(spec_x, spec_y, label='data')
plt.plot(spec_x, spec_yhat(result.x, data, 3), label='fit')
plt.legend()
plt.show()
for i in range(3):
plt.plot(spec_x, peak(spec_x, result.x[i], result.x[i + 3], result.x[i + 6]))
#constraints follow the order above:
constraints = [{'type': 'ineq', 'fun': lambda params: params[3] - 600},
{'type': 'ineq', 'fun': lambda params: 630 - params[3]},
{'type': 'ineq', 'fun': lambda params: params[4] - 630},
{'type': 'ineq', 'fun': lambda params: 650 - params[4]},
{'type': 'ineq', 'fun': lambda params: params[5] - 650},
{'type': 'ineq', 'fun': lambda params: 690 - params[5]}]
minimizer_kwargs = {'constraints': constraints}
result = opt.basinhopping(obj, x0=[100, 100, 100, 600, 650, 700, 100, 100, 100], niter=350, minimizer_kwargs=minimizer_kwargs)
print(result.x)
plt.plot(spec_x, spec_y, label='data')
plt.plot(spec_x, spec_yhat(result.x, data, 3), label='fit')
for i in range(3):
plt.plot(spec_x, peak(spec_x, result.x[i], result.x[i + 3], result.x[i + 6]), label='peak {}'.format(i))
plt.legend()
plt.show()
resids = spec_y - spec_yhat(result.x, data, 3)
plt.hist(resids)
plt.show()
scipy.stats.shapiro(resids)
def peak_partials(x, a, b, c):
'''Returns partial derivatives of peak functions with respect to parameters as a tuple'''
return (1 / (np.sqrt(2 * np.pi * c)) * np.exp(-(x - b)**2 / c), \
2 * a * (x - b) / c / np.sqrt(2 * np.pi * c) * np.exp(-(x - b)**2 / c),\
a / np.sqrt( 2 * np.pi) * np.exp(-(x - b)**2 / c) * ((x - b)**2 / c**(5 / 2) - 1 / 2 / c**(3/2)))
def spectrum_partials(x, a_array, b_array, c_array):
'''Takes in the x data and parameters for a set of peaks. Computes partial derivatives and returns as matrix'''
result = np.empty( (len(x), len(a_array) * 3 ) )
for i in range(len(a_array)):
a_p, b_p, c_p = peak_partials(x, a_array[i], b_array[i], c_array[i])
result[:, i] = a_p
result[:, i + 3] = b_p
result[:, i + 6] = c_p
return result
M = 3
F = spectrum_partials(spec_x, result.x[:M], result.x[M:2*M], result.x[2*M:3*M])
print(F)
SSR = np.sum(resids**2)
s2_epsilon = SSR / (len(spec_x) - len(result.x))
s2_beta = np.diag(s2_epsilon * linalg.inv(F.transpose() @ F))
ci = np.sqrt(s2_beta) * scipy.stats.norm.ppf(0.975)
for pi, c in zip(result.x, ci):
print('{} +/- {}'.format(pi, c))
for pi, c in zip(result.x[:3], ci[:3]):
print('{:%} +/- {:%}'.format(pi / np.sum(result.x[:3]), c / np.sum(result.x[:3])))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Building the model
Step2: Now we need to define the model object itself. The main thing the model needs is the grid the trees are placed on. But since the model is dynamic, it also needs to include time -- it needs a schedule to manage the trees' activation as they spread the fire from one to another.
Step3: Running the model
Step4: To run the model until it's done (that is, until it sets its running property to False) just use the run_model() method. This is implemented in the Model parent object, so we didn't need to implement it above.
Step5: That's all there is to it!
Step6: And chart it, to see the dynamics.
Step7: In this case, the fire burned itself out after about 90 steps, with many trees left unburned.
Step8: ... But to really understand how the final outcome varies with density, we can't just tweak the parameter by hand over and over again. We need to do a batch run.
Step9: Now the BatchRunner, which we've named param_run, is ready to go. To run the model at every combination of parameters (in this case, every density value), just use the run_all() method.
Step10: Like with the data collector, we can extract the data the batch runner collected into a dataframe
Step11: As you can see, each row here is a run of the model, identified by its parameter values (and given a unique index by the Run column). To view how the BurnedOut fraction varies with density, we can easily just plot them
Step12: And we see the very clear emergence of a critical value around 0.5, where the model quickly shifts from almost no trees being burned, to almost all of them.
|
<ASSISTANT_TASK:>
Python Code:
import random
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from mesa import Model, Agent
from mesa.time import RandomActivation
from mesa.space import Grid
from mesa.datacollection import DataCollector
from mesa.batchrunner import BatchRunner
class TreeCell(Agent):
'''
A tree cell.
Attributes:
x, y: Grid coordinates
condition: Can be "Fine", "On Fire", or "Burned Out"
unique_id: (x,y) tuple.
unique_id isn't strictly necessary here, but it's good practice to give one to each
agent anyway.
'''
def __init__(self, model, pos):
'''
Create a new tree.
Args:
pos: The tree's coordinates on the grid. Used as the unique_id
'''
super().__init__(pos, model)
self.pos = pos
self.unique_id = pos
self.condition = "Fine"
def step(self):
'''
If the tree is on fire, spread it to fine trees nearby.
'''
if self.condition == "On Fire":
neighbors = self.model.grid.get_neighbors(self.pos, moore=False)
for neighbor in neighbors:
if neighbor.condition == "Fine":
neighbor.condition = "On Fire"
self.condition = "Burned Out"
class ForestFire(Model):
'''
Simple Forest Fire model.
'''
def __init__(self, height, width, density):
'''
Create a new forest fire model.
Args:
height, width: The size of the grid to model
density: What fraction of grid cells have a tree in them.
'''
# Initialize model parameters
self.height = height
self.width = width
self.density = density
# Set up model objects
self.schedule = RandomActivation(self)
self.grid = Grid(height, width, torus=False)
self.dc = DataCollector({"Fine": lambda m: self.count_type(m, "Fine"),
"On Fire": lambda m: self.count_type(m, "On Fire"),
"Burned Out": lambda m: self.count_type(m, "Burned Out")})
# Place a tree in each cell with Prob = density
for x in range(self.width):
for y in range(self.height):
if random.random() < self.density:
# Create a tree
new_tree = TreeCell(self, (x, y))
# Set all trees in the first column on fire.
if x == 0:
new_tree.condition = "On Fire"
self.grid[y][x] = new_tree
self.schedule.add(new_tree)
self.running = True
def step(self):
'''
Advance the model by one step.
'''
self.schedule.step()
self.dc.collect(self)
# Halt if no more fire
if self.count_type(self, "On Fire") == 0:
self.running = False
@staticmethod
def count_type(model, tree_condition):
'''
Helper method to count trees in a given condition in a given model.
'''
count = 0
for tree in model.schedule.agents:
if tree.condition == tree_condition:
count += 1
return count
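# Aside (sketch, not part of the original model): stripped of the Mesa
# machinery, the spreading rule above is a plain cellular-automaton update on
# a numpy array -- states: 0 = fine, 1 = on fire, 2 = burned out.  Note the
# real model activates trees in *random order*, whereas this toy update is
# synchronous.

```python
import numpy as np

def spread_once(grid):
    """One synchronous step of the Fine / On Fire / Burned Out update."""
    new = grid.copy()
    for i, j in np.argwhere(grid == 1):
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):   # von Neumann neighbors
            ni, nj = i + di, j + dj
            if 0 <= ni < grid.shape[0] and 0 <= nj < grid.shape[1] \
                    and grid[ni, nj] == 0:
                new[ni, nj] = 1
        new[i, j] = 2
    return new

demo = np.zeros((3, 3), dtype=int)
demo[0, 0] = 1                  # one burning tree in the corner
demo = spread_once(demo)
# the corner is now burned out (2) and its two neighbors are on fire (1)
```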
fire = ForestFire(100, 100, 0.6)
fire.run_model()
results = fire.dc.get_model_vars_dataframe()
results.plot()
fire = ForestFire(100, 100, 0.8)
fire.run_model()
results = fire.dc.get_model_vars_dataframe()
results.plot()
param_set = dict(height=50, # Height and width are constant
width=50,
# Vary density from 0.01 to 1, in 0.01 increments:
density=np.linspace(0,1,101)[1:])
# At the end of each model run, calculate the fraction of trees which are Burned Out
model_reporter = {"BurnedOut": lambda m: (ForestFire.count_type(m, "Burned Out") /
m.schedule.get_agent_count()) }
# Create the batch runner
param_run = BatchRunner(ForestFire, param_set, model_reporters=model_reporter)
param_run.run_all()
df = param_run.get_model_vars_dataframe()
df.head()
plt.scatter(df.density, df.BurnedOut)
plt.xlim(0,1)
param_run = BatchRunner(ForestFire, param_set, iterations=5, model_reporters=model_reporter)
param_run.run_all()
df = param_run.get_model_vars_dataframe()
plt.scatter(df.density, df.BurnedOut)
plt.xlim(0,1)
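# Aside (sketch with made-up numbers): with repeated iterations, averaging
# BurnedOut per density value smooths the scatter around the phase transition;
# plain numpy is enough once the two columns are pulled out of df.

```python
import numpy as np

dens = np.array([0.4, 0.4, 0.6, 0.6])          # hypothetical df['density'] values
burned = np.array([0.10, 0.20, 0.90, 1.00])    # hypothetical df['BurnedOut'] values
levels = np.unique(dens)
means = np.array([burned[dens == d].mean() for d in levels])
# means ~ [0.15, 0.95]: one smoothed point per density level
```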
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: HW10.1.1
Step2: HW10.2
Step3: HW10.3
Step4: HW10.4
Step5: HW10.4.1
Step6: HW10.5
Step7: HW10.6
Step8: HW10.7
|
<ASSISTANT_TASK:>
Python Code:
## Code goes here
## Drivers & Runners
## Run Scripts, S3 Sync
## Code goes here
## Drivers & Runners
## Run Scripts, S3 Sync
## Code goes here
## Drivers & Runners
## Run Scripts, S3 Sync
## Code goes here
## Drivers & Runners
## Run Scripts, S3 Sync
## Code goes here
## Drivers & Runners
## Run Scripts, S3 Sync
## Code goes here
## Drivers & Runners
## Run Scripts, S3 Sync
## Code goes here
## Drivers & Runners
## Run Scripts, S3 Sync
## Code goes here
## Drivers & Runners
## Run Scripts, S3 Sync
## Code goes here
## Drivers & Runners
## Run Scripts, S3 Sync
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Since we are not interested in the trivial solution $\beta_nL=0$, it is apparent that the roots are $\beta_nL\simeq(n+1/4)\pi$, but we can compute more precise numerical approximations to the roots of the determinant and, eventually, the frequencies of vibration, $\omega_n^2 = (\beta_nL)^4\omega_0^2$.
Step2: The eigenfunctions
Step3: Static equilibrium position
Step4: Modal expansion of static displacement
Step5: Plotting the approximation to the static displacements obtained using a few eigenfunctions is not really useful, but it's so easy... We plot the approximant and the difference between the static displacements and the approximant.
Step6: The Modal Responses
Step7: The Bending Moment $M(0, t)$
Step8: Static bending moment
Step9: As you can see, we are in trouble inasmuch as the bending moment at $x=L$ cannot be approximated correctly,
Step10: Let's see what happens if we take into account a very small value of viscous damping.
Step11: As you can see, the effects of viscous damping are really important for the high frequency modal components, that is because the decrement
Step12: The Initialization Cells
Step13: Configuration statements and function definitions
|
<ASSISTANT_TASK:>
Python Code:
BetaL = linspace(0,5,501)
det_exact = sin(pi*BetaL)-cos(pi*BetaL)*tanh(pi*BetaL)
det_approx = sin(pi*BetaL)-cos(pi*BetaL)
plt.plot(BetaL, det_exact, label='exact')
plt.plot(BetaL, det_approx, label='approx')
plt.xticks([0]+[n+0.25 for n in (1,2,3,4)])
plt.title('The determinant of the homogeneous system as a function of $\\beta L$')
plt.xticks((0.00, 1.00, 1.25, 2.00, 2.25, 3.00, 3.25, 4.00, 4.25, 5.00),
('0', '1', '1.25', '2', '2.25', '3', '3.25', '4', '4.25', '5'))
plt.legend(); plt.xlabel(r'$\beta L/\pi$');
N = 3
roots = array([newton(lambda x: sin(x)-cos(x)*tanh(x), (r+1.25)*pi)
for r in range(N)])
BetaL = roots
w1 = BetaL**2
w2 = w1**2
dL(r'\noindent The adimensional wavenumbers',
groupalign(roots, 3, variable='\\beta_{%d}L'),
r'$\text{The adimensional frequencies}$',
groupalign(w1, 3, variable=r'\frac{\omega_{%d}}{\omega_0}'),
r'$\text{The adimensional squared frequencies}$',
groupalign(w2, 3, variable=r'\frac{\omega^2_{%d}}{\omega^2_0}'))
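# Cross-check (aside): sin(bL) - cos(bL)*tanh(bL) = 0 is equivalent to
# tan(bL) = tanh(bL), and a dependency-free bisection reproduces the Newton
# roots computed above; each root sits just below (n + 1.25)*pi.

```python
import math

def bisect(f, a, b, tol=1e-12):
    """Plain bisection on a sign-changing bracket [a, b]."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

det = lambda z: math.sin(z) - math.cos(z) * math.tanh(z)
roots_chk = [bisect(det, (n + 1.0)*math.pi + 0.5, (n + 1.5)*math.pi - 0.5)
             for n in range(3)]
# e.g. roots_chk[0] ~ 3.9266, slightly below 1.25*pi ~ 3.9270
```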
CA, CB = sin(roots)-sinh(roots), cos(roots)-cosh(roots)
A = 1.0 ; B = -CA*A/CB
xi = linspace(0, 1, 1001)
b_xi = outer(xi, roots)
phi = A*(sin(b_xi)-sinh(b_xi)) + B*(cos(b_xi)-cosh(b_xi))
tlbl = '$\\tau=\\omega_0t$'
xlbl = '$\\xi=x/L$'
plt.legend(plt.plot(xi, phi), [r'$\phi_{%d}$'%n for n in range(1,7)], loc=0, ncol=3)
plt.xlabel(xlbl)
plt.title('The first %d eigenfunctions'%N )
rev_y()
y = (1-xi)*xi*xi/4
plt.plot(xi, y, label='$y_\mathrm{stat}/wL$')
plt.title('Normalized static displacement')
plt.legend() ; plt.xlabel(xlbl)
rev_y()
modal_mass = array([trapz(phi_i**2, dx=0.001) for phi_i in phi.T])
modmass_eta = array([trapz(phi_i*y, dx=0.001) for phi_i in phi.T])
eta = modmass_eta/modal_mass
dL(groupalign(eta, 3, variable='\\eta_{%d}', fmt='%+g'))
y3 = phi@eta
plt.plot(xi, y3); plt.xlabel(xlbl)
plt.title(r'Approximant to $y_\mathrm{stat}$ using %d terms'%N)
rev_y()
plt.show()
plt.plot(xi, y-y3) ; plt.xlabel(xlbl)
plt.title(r'Difference $y_\mathrm{stat} - \sum_1^{%d} \eta_n\phi_n(\beta_n x)$'%N)
rev_y()
responses = [r'\frac{q_{%d}}{wL} &= %+g \cos(%g\omega_0t)'%(n, eta_n, w_n)
for n, eta_n, w_n in zip(count(1), eta, w1)]
align_res = r',\\\\'.join(',&'.join(res for res in row if res) for row in grouper(responses, 2))
dL(r'\begin{align}', align_res, r'.\end{align}')
dL(r'$$\frac{M(0)}{W} =', ' '.join(
r'%+.6g\cos(%.2f\omega_0t)'%(coef, w) for coef, w in zip(w1*eta*B*2, w1)), r'+\ldots$$')
# d2y is the NEGATIVE of the second spatial derivative of \phi
d2y = A*(sin(b_xi)+sinh(b_xi)) + B*(cos(b_xi)+cosh(b_xi))
M_stat = d2y@(eta*BetaL**2)
# plot the derivatives
plt.legend(plt.plot(xi, d2y), (r"$-\phi''_{%d}$"%(n) for n, _ in enumerate(d2y, 1)), ncol=3)
plt.xlabel(xlbl)
plt.ylabel(r'$-d^2\phi/d\xi^2$')
plt.title('The second spatial derivatives of the mode shapes')
rev_y()
plt.show()
# plot the static M(x)/W (from -0.5 to +1) and the approximation using N modes
plt.plot(xi, M_stat, label=r'$\sum_1^{%d}M_{\mathrm{stat, }n}$'%N)
plt.plot(xi, 0.5*(xi-1)+xi, label=r'$M_\mathrm{stat}$')
plt.xlabel(xlbl)
plt.ylabel(r'$M\, /\, W$')
plt.title('Comparison of the static bending moment and its approximation')
plt.legend(ncol=3)
rev_y()
M_stat[0]
t = linspace(0, 5, 5001)
wt = outer(t, BetaL**2)
cwt = cos(wt)
M = cwt@(2*B*eta*BetaL**2)
if 1:
plt.plot(t, M)
plt.title('Bending moment @ x=0, with $\\zeta=0.0\\%$')
plt.xlabel(r'$\tau=\omega_0t$')
plt.ylabel(r'$M(0)/W$')
plt.ylim((-1.5, 1.5))
plt.yticks((-1, -0.5, 0, 1))
rev_y()
zeta = 0.005
swt = sin(wt)
Md = (exp(-zeta*wt)*(zeta*swt+cwt))@(2*B*eta*BetaL**2)
if 1:
plt.plot(t, Md)
plt.title('Bending moment @ x=0, with $\\zeta=0.5\\%$')
plt.xlabel(r'$\tau=\omega_0t$')
plt.ylabel(r'$M(0)/W$')
plt.ylim((-1.5, 1.5))
plt.yticks((-1, -0.5, 0, 1))
rev_y()
def p(z=0.5):
zeta = z/100.0
M = (exp(-zeta*wt)*(zeta*swt+cwt))@(2*B*eta*BetaL**2)
plt.plot(t, M)
plt.title('Bending moment @ x=0, with $\\zeta=%.1f\\%%$'%z)
plt.xlabel(r'$\tau=\omega_0t$')
plt.ylabel(r'$M(0)/W$')
plt.ylim((-1.5, 1.5))
plt.yticks((-1, -0.5, 0, 1))
rev_y()
interact(p, z=(0, 2.0)) ;
%matplotlib inline
import numpy as np
from numpy import array, cos, cosh, exp, linspace, outer
from numpy import pi, sin, sinh, sqrt, tanh, trapz
import matplotlib.pyplot as plt
from scipy.optimize import newton
from jupyterthemes import jtplot
from itertools import count, zip_longest
from ipywidgets import interact
jtplot.style(context='paper', fscale=1.5, figsize=(15, 3))
def rev_y(): plt.ylim(plt.ylim()[::-1])
def dL(*l):
from IPython.display import Latex
display(Latex(' '.join(l)))
def grouper( iterable, N):
return zip_longest(*([iter(iterable)]*N))
def groupalign(seq, N, variable='x_{%d}', fmt='%g', closing='.'):
beg = r'\begin{align*}'
end = closing + r'\end{align*}'
xfmt = variable + '&=' + fmt
g = grouper(seq, N)
body = r',\\'.join(
',&'.join(
xfmt%(1+i+N*j, x) for i, x in enumerate(items) if x != None)
for j, items in enumerate(g))
return ''.join((beg, body, end))
<END_TASK>
|
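The beam example above finds its natural frequencies as roots of the characteristic equation sin(βL) − cos(βL)·tanh(βL) = 0, i.e. tan(βL) = tanh(βL), using the asymptotic values (n + 1/4)π as starting guesses. A dependency-free sketch of that root search (with a minimal hand-rolled Newton iteration in place of `scipy.optimize.newton`) looks like this:

```python
import math

def char_eq(x):
    # Characteristic equation from the example: tan(x) = tanh(x),
    # written as sin(x) - cos(x)*tanh(x) = 0.
    return math.sin(x) - math.cos(x) * math.tanh(x)

def newton(f, x0, tol=1e-12, h=1e-7):
    # Minimal Newton iteration with a numerical derivative.
    x = x0
    for _ in range(50):
        fx = f(x)
        x -= fx / ((f(x + h) - fx) / h)
        if abs(f(x)) < tol:
            break
    return x

# Roots approach (n + 1/4)*pi, which makes good initial guesses.
roots = [newton(char_eq, (n + 1.25) * math.pi) for n in range(3)]
freqs = [r**2 for r in roots]  # adimensional frequencies omega_n / omega_0
```

The first root is close to 3.9266, only about 0.01% below its asymptote 1.25π, which is why the simple starting guesses converge in a few iterations.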
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Introduction
Step2: Stem and leaf
Step3: Boxplot
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import division
import matplotlib.pyplot as plt
import matplotlib as mpl
import palettable
import numpy as np
import math
import seaborn as sns
from collections import defaultdict
%matplotlib inline
# Here, we customize the various matplotlib parameters for font sizes and define a color scheme.
# As mentioned in the lecture, the typical defaults in most software are not optimal from a
# data presentation point of view. You need to work hard at selecting these parameters to ensure
# an effective data presentation.
colors = palettable.colorbrewer.qualitative.Set1_4.mpl_colors
mpl.rcParams['lines.linewidth'] = 2
mpl.rcParams['lines.color'] = 'r'
mpl.rcParams['axes.titlesize'] = 32
mpl.rcParams['axes.labelsize'] = 30
mpl.rcParams['axes.labelsize'] = 30
mpl.rcParams['xtick.labelsize'] = 24
mpl.rcParams['ytick.labelsize'] = 24
data = """105 221 183 186 121 181 180 143
97 154 153 174 120 168 167 141
245 228 174 199 181 158 176 110
163 131 154 115 160 208 158 133
207 180 190 193 194 133 156 123
134 178 76 167 184 135 229 146
218 157 101 171 165 172 158 169
199 151 142 163 145 171 148 158
160 175 149 87 160 237 150 135
196 201 200 176 150 170 118 149"""
data = [[int(x) for x in d.split()] for d in data.split("\n")]
d = np.array(data).flatten()
min_val = d.min()
max_val = d.max()
start_val = math.floor(min_val / 10) * 10
mean = np.average(d)
median = np.median(d)
print("Min value = %d, Max value = %d." % (min_val, max_val))
print("Mean = %.1f" % mean)
print("Median = %.1f" % median)
print("Standard deviation = %.1f" % np.sqrt(np.var(d)))
freq, bins = np.histogram(d, bins=np.arange(70, 260, 20))
plt.figure(figsize=(12,8))
bins = np.arange(70, 260, 20)
plt.hist(np.array(d), bins=bins)
plt.xticks(bins + 10, ["%d-%d" % (bins[i], bins[i+1]) for i in range(len(bins) - 1)], rotation=-45)
ylabel = plt.ylabel("f")
xlabel = plt.xlabel("Compressive strength (psi)")
def generate_stem_and_leaf(data):
stem_and_leaf = defaultdict(list)
for i in data:
k = int(math.floor(i / 10))
v = int(i % 10)
stem_and_leaf[k].append(v)
for k in sorted(stem_and_leaf.keys()):
print("%02d | %s" % (k, " ".join(["%d" % i for i in stem_and_leaf[k]])))
generate_stem_and_leaf(d)
plt.figure(figsize=(12,8))
ax = sns.boxplot(y=d, color="c")
ax = sns.swarmplot(y=d, color=".25") # We add the swarm plot as well to show all data points.
ylabel = plt.ylabel("Compressive Strength (psi)")
<END_TASK>
|
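One detail worth flagging in the summary statistics above: `np.sqrt(np.var(d))` uses NumPy's default `ddof=0`, i.e. the population standard deviation. For the unbiased sample estimate you would pass `ddof=1`. A quick sketch of the difference, using the first row of the data:

```python
import numpy as np

d = np.array([105, 221, 183, 186, 121, 181, 180, 143])
pop_sd = np.sqrt(np.var(d))             # ddof=0: divide by n (what the example prints)
sample_sd = np.sqrt(np.var(d, ddof=1))  # ddof=1: divide by n-1 (unbiased estimate)
assert np.isclose(pop_sd, np.std(d))    # np.std shares the same ddof=0 default
```

For n = 80 observations, as in the full dataset, the two differ by well under 1%, so the choice does not change the conclusions here — but it matters for small samples.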
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC
from sklearn.decomposition import PCA
from sklearn.preprocessing import PolynomialFeatures
estimators = [('reduce_dim', PCA()), ('poly', PolynomialFeatures()), ('svm', SVC())]
clf = Pipeline(estimators)
clf.steps.pop(1)
<END_TASK>
|
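The `clf.steps.pop(1)` call above works because `Pipeline.steps` is a plain Python list of `(name, estimator)` tuples, so `pop` both removes the middle step in place and returns it. A short sketch of the behaviour:

```python
from sklearn.decomposition import PCA
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.svm import SVC

clf = Pipeline([('reduce_dim', PCA()),
                ('poly', PolynomialFeatures()),
                ('svm', SVC())])
removed = clf.steps.pop(1)           # drop PolynomialFeatures, keep a handle to it
names = [name for name, _ in clf.steps]
print(names)   # ['reduce_dim', 'svm']
```

Mutating `steps` like this is handy for quick experiments; for anything reusable, rebuilding the pipeline (or using `set_params`) is usually the cleaner option.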
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Categories
Step2: Clusters
Step3: Counts
Step4: Entities
Step5: Sentiment
Step6: Topics
Step7: Wordcloud
|
<ASSISTANT_TASK:>
Python Code:
# Standard-library helpers used throughout this example
import json
from pprint import pprint

# A few sample short texts with user comments on two Facebook pages
texts_telia = json.load(open('texts_telia.json', 'r'))
texts_tele2 = json.load(open('texts_tele2.json', 'r'))
res_anomalies = bw.anomalies(texts_telia, texts_tele2, max_words=10, min_score=1)
pprint(res_anomalies)
category_texts = [
'Denna text handlar om pengar och dollar och ekonomi',
'Denna text handlar om hamburgare, vitlรถk och grรถnsaker',
'Hรคr รคr en till text om mat och tomater'
]
categories = [
['ekonomi', 'valuta'],
['mat', 'matvaror', 'frukt', 'grรถnsaker'],
['djur', 'hund']
]
res_cat = bw.categories(category_texts, categories)
pprint(res_cat)
res_clusters = bw.clusters(texts_tele2, min_cluster_size=10)
pprint(res_clusters)
# Print a description of first cluster and all docs that belong to it
print(res_clusters['descriptions'][0])
first_cluster = [doc for doc, cluster in zip(texts_tele2, res_clusters['clusters']) if cluster == 0]
for doc in first_cluster:
print(doc)
res_counts = bw.counts(texts_tele2)
pprint(res_counts)
docs = ['De tvรฅ stรถrsta stรคderna i Sverige รคr Stockholm och Gรถteborg',
'Donald Trump รคr Barack Obamas eftertrรคdare',
'En stad i Frankrike รคr Lyon']
bw.entities(docs)
docs_sentiment = [
'Nej, det hรคr var inte alls bra',
'Detta รคr en bra mening, som visar pรฅ en helt fantastisk fantasi hos fรถrfattaren'
]
res_sentiment = bw.sentiment(docs_sentiment)
pprint(res_sentiment)
docs_topics = [
'De tvรฅ stรถrsta stรคderna i Sverige รคr Stockholm och Gรถteborg, men Rรคvlanda รคr mest creddig',
'Donald Trump รคr Barack Obamas eftertrรคdare',
'Jag gillar รคpplen och pรคron och jag kan jรคmfรถra dem']
res_topics = bw.topics(docs_topics)
pprint(res_topics)
cloud = bw.wordcloud(bw.counts(texts_tele2))
cloud
from IPython.display import Image
from IPython.core.display import HTML
Image(url= cloud['url'])
<END_TASK>
|
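The `bw.counts` call above goes to a remote text-analysis service. For intuition only, here is a rough local stand-in built from the standard library — the tokenization rule (lowercase, runs of letters) is my assumption, not necessarily what the service does:

```python
import re
from collections import Counter

def counts_sketch(texts):
    # Naive word-frequency tally: lowercase, keep runs of letters
    # (including non-ASCII ones such as Swedish å/ä/ö).
    tokens = []
    for text in texts:
        tokens.extend(re.findall(r"[^\W\d_]+", text.lower()))
    return Counter(tokens)

c = counts_sketch(["Detta är en bra mening", "en bra mening, bra fantasi"])
print(c.most_common(3))
```

Such a tally is also a quick sanity check on what the remote endpoint returns for the same documents.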
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Assume that each step is done in 10 seconds and define time stamps
Step2: Add displacements to initial latitude and longitude
Step3: We also append 20 minutes of INS being at rest
Step4: Set pitch and roll angles to zeros
Step5: Sensor sampling period is set to 0.1
Step6: Run the simulation routine which will interpolate the trajectory and generate inertial readings
Step7: The final trajectory is drawn below; the initial point is marked with a cross.
Step8: Integrating ideal data
Step9: First we apply coning and sculling corrections
Step10: And the run the integration.
Step11: Compute integration error using a convenience function
Step12: We see that attitude and velocity errors are vanishingly small. The position errors are less than 3 meters during 3 hours of operations, which is completely negligible compared to errors of even the most accurate INS.
Step13: Integrating "real" data
Step14: Compute biases as a random constants. To avoid a "bad case" in this example we generated biases uniformly within $[-2 \sigma, 2 \sigma]$.
Step15: Now we apply errors to inertial readings
Step16: Compute coning and sculling corrections
Step17: An INS operation has to start with self-alignment. We devote 15 minutes of the initial rest period to it
Step18: Split the readings into alignment and navigation parts
Step19: Compare estimated attitude angles with the true angles.
Step20: Assume that the initial position is known with the accuracy typical of GPS receivers
Step21: Assume that it is known that the navigation starts at rest, and set the initial velocities to 0
Step22: Now we can run the integration
Step23: We see that even with very accurate gyros pure INS performance is not that good.
Step24: Aiding from GPS
Step25: We will use an idealized model that GPS observations contain only additive normal errors with a standard deviation of 10 meters (note that in reality errors in outputs from GPS receivers behave much worse).
Step26: To use GPS measurements in a navigation Kalman filter we wrap this data into a special object
Step27: Also define gyro and accelerometer models using parameters defined above
Step28: Now we can run a navigation Kalman filter which will blend INS and GPS data. In this example INS errors didn't grow very large, thus we can use a feedforward filter.
Step29: We create a filter by passing sampling period and computed trajectory. To initialize the covariance matrix we pass standard deviations of the initial errors.
Step30: We run the filter and pass available measurements to it. The return value is the INS trajectory corrected by estimated errors.
Step31: Now we want to investigate errors in the filtered trajectory.
Step32: Obviously performance in terms of position and velocity accuracy is very good, but this is sort of expected because GPS provides coordinates directly.
Step33: The return value of FeedforwardFilter contains attributes err, sd, gyro_err, gyro_sd, accel_err, accel_sd for estimated trajectory errors and inertial sensor states and their standard deviations. Below we plot true errors for heading, pitch and roll with their 1-sigma bounds provided by the filter.
Step34: Also it is interesting to assess the filter's sensor bias estimation. Plots below show $\pm \sigma$ bands of gyro bias estimates, the straight line depicts the true value. We see that estimation of gyro biases is quite successful.
Step35: Below, the same is done for the accelerometer biases. Horizontal accelerometer biases are less observable on this trajectory than the gyro biases, and the vertical bias is not observable at all because pitch and roll are held at zero.
|
<ASSISTANT_TASK:>
Python Code:
from pyins import sim
from pyins.coord import perturb_ll
def generate_trajectory(n_points, min_step, max_step, angle_spread, random_state=0):
rng = np.random.RandomState(random_state)
xy = [np.zeros(2)]
angle = rng.uniform(2 * np.pi)
heading = [90 - angle]
angle_spread = np.deg2rad(angle_spread)
for i in range(n_points - 1):
step = rng.uniform(min_step, max_step)
xy.append(xy[-1] + step * np.array([np.cos(angle), np.sin(angle)]))
angle += rng.uniform(-angle_spread, angle_spread)
heading.append(90 - angle)
return np.asarray(xy), np.asarray(heading)
xy, h = generate_trajectory(1000, 70, 100, 20, random_state=1)
t = np.arange(1000) * 10
lat0 = 58
lon0 = 56
lat, lon = perturb_ll(lat0, lon0, xy[:, 1], xy[:, 0])
t = np.hstack((-1200, t))
lat = np.hstack((lat[0], lat))
lon = np.hstack((lon[0], lon))
h = np.hstack((h[0], h))
p = np.zeros_like(h)
r = np.zeros_like(h)
dt = 0.1
traj_ref, gyro, accel = sim.from_position(dt, lat, lon, t, h=h, p=p, r=r)
plt.plot(traj_ref.lon, traj_ref.lat)
plt.plot(traj_ref.lon[0], traj_ref.lat[0], 'kx', markersize=12)
plt.xlabel("lon, deg")
plt.ylabel("lat, deg")
from pyins.integrate import coning_sculling, integrate
from pyins.filt import traj_diff
theta, dv = coning_sculling(gyro, accel)
traj_ideal = integrate(dt, *traj_ref.iloc[0], theta, dv)
err_ideal = traj_diff(traj_ideal, traj_ref)
def plot_errors(dt, err, step=1000):
plt.figure(figsize=(15, 10))
plt.subplot(331)
err = err.iloc[::step]
t = err.index * dt / 3600
plt.plot(t, err.lat, label='lat')
plt.xlabel("time, h")
plt.ylabel("m")
plt.legend(loc='best')
plt.subplot(334)
plt.plot(t, err.lon, label='lon')
plt.xlabel("time, h")
plt.ylabel("m")
plt.legend(loc='best')
plt.subplot(332)
plt.plot(t, err.VE, label='VE')
plt.xlabel("time, h")
plt.ylabel("m/s")
plt.legend(loc='best')
plt.subplot(335)
plt.plot(t, err.VN, label='VN')
plt.xlabel("time, h")
plt.ylabel("m/s")
plt.legend(loc='best')
plt.subplot(333)
plt.plot(t, err.h, label='heading')
plt.xlabel("time, h")
plt.ylabel("deg")
plt.legend(loc='best')
plt.subplot(336)
plt.plot(t, err.p, label='pitch')
plt.xlabel("time, h")
plt.ylabel("deg")
plt.legend(loc='best')
plt.subplot(339)
plt.plot(t, err.r, label='roll')
plt.xlabel("time, h")
plt.ylabel("deg")
plt.legend(loc='best')
plt.tight_layout()
plot_errors(dt, err_ideal)
gyro_bias_sd = np.deg2rad(0.05) / 3600 # 0.05 d/h
accel_bias_sd = 5e-3
gyro_bias_sd
gyro_noise = 1e-6 # rad / s^0.5
accel_noise = 3e-4 # m / s^1.5
np.random.seed(1)
gyro_bias = gyro_bias_sd * np.random.uniform(-2, 2, 3)
accel_bias = accel_bias_sd * np.random.uniform(-2, 2, 3)
gyro_bias, accel_bias
from pyins import earth
gyro_e = gyro + gyro_bias * dt + gyro_noise * np.random.randn(*gyro.shape) * dt**0.5
accel_e = accel + accel_bias * dt + accel_noise * np.random.randn(*accel.shape) * dt**0.5
theta, dv = coning_sculling(gyro_e, accel_e)
t_align = 15 * 60
align_samples = int(t_align / dt)
theta_align = theta[:align_samples]
theta_nav = theta[align_samples:]
dv_align = dv[:align_samples]
dv_nav = dv[align_samples:]
from pyins.align import align_wahba
(h0, p0, r0), P_align = align_wahba(dt, theta_align, dv_align, 58)
h0 - traj_ref.h.loc[align_samples], p0 - traj_ref.p.loc[align_samples], r0 - traj_ref.r.loc[align_samples]
lat0, lon0 = perturb_ll(traj_ref.lat.loc[align_samples], traj_ref.lon.loc[align_samples],
10 * np.random.randn(1), 10 * np.random.randn(1))
VE0 = 0
VN0 = 0
traj_real = integrate(dt, lat0, lon0, VE0, VN0, h0, p0, r0, theta_nav, dv_nav, stamp=align_samples)
traj_error = traj_diff(traj_real, traj_ref)
plot_errors(dt, traj_error)
gps_data = pd.DataFrame(index=traj_ref.index[::10])
gps_data['lat'] = traj_ref.lat[::10]
gps_data['lon'] = traj_ref.lon[::10]
gps_pos_sd = 10
gps_data['lat'], gps_data['lon'] = perturb_ll(gps_data.lat, gps_data.lon,
gps_pos_sd * np.random.randn(*gps_data.lat.shape),
gps_pos_sd * np.random.randn(*gps_data.lon.shape))
from pyins.filt import LatLonObs
gps_obs = LatLonObs(gps_data, gps_pos_sd)
from pyins.filt import InertialSensor
gyro_model = InertialSensor(bias=gyro_bias_sd, noise=gyro_noise)
accel_model = InertialSensor(bias=accel_bias_sd, noise=accel_noise)
from pyins.filt import FeedforwardFilter
ff_filt = FeedforwardFilter(dt, traj_real,
pos_sd=10, vel_sd=0.1, azimuth_sd=0.5, level_sd=0.05,
gyro_model=gyro_model, accel_model=accel_model)
ff_res = ff_filt.run(observations=[gps_obs])
filt_error = traj_diff(ff_res.traj, traj_ref)
plot_errors(dt, filt_error, step=10)
plt.figure(figsize=(15, 5))
t_plot = filt_error.index * dt / 3600
plt.subplot(131)
plt.plot(t_plot, filt_error.h, 'b')
plt.plot(t_plot, ff_res.sd.h, 'b--')
plt.plot(t_plot, -ff_res.sd.h, 'b--')
plt.xlabel("time, h")
plt.ylabel("deg")
plt.title("heading error")
plt.subplot(132)
plt.plot(t_plot, filt_error.p, 'b')
plt.plot(t_plot, ff_res.sd.p, 'b--')
plt.plot(t_plot, -ff_res.sd.p, 'b--')
plt.xlabel("time, h")
plt.ylabel("deg")
plt.title("pitch error")
plt.subplot(133)
plt.plot(t_plot, filt_error.r, 'b')
plt.plot(t_plot, ff_res.sd.r, 'b--')
plt.plot(t_plot, -ff_res.sd.r, 'b--')
plt.xlabel("time, h")
plt.ylabel("deg")
plt.title("roll error")
plt.tight_layout()
plt.figure(figsize=(15, 5))
t_plot = filt_error.index[::10] * dt / 3600
gyro_err = ff_res.gyro_err.iloc[::10]
gyro_sd = ff_res.gyro_sd.iloc[::10]
plt.subplot(131)
plt.plot(t_plot, gyro_err.BIAS_1 + gyro_sd.BIAS_1, 'b')
plt.plot(t_plot, gyro_err.BIAS_1 - gyro_sd.BIAS_1, 'b')
plt.hlines(gyro_bias[0], *plt.xlim())
plt.xlabel("time, h")
plt.ylabel("rad/s")
plt.title("Gyro 1 bias")
plt.subplot(132)
plt.plot(t_plot, gyro_err.BIAS_2 + gyro_sd.BIAS_2, 'b')
plt.plot(t_plot, gyro_err.BIAS_2 - gyro_sd.BIAS_2, 'b')
plt.hlines(gyro_bias[1], *plt.xlim())
plt.xlabel("time, h")
plt.ylabel("rad/s")
plt.title("Gyro 2 bias")
plt.subplot(133)
plt.plot(t_plot, gyro_err.BIAS_3 + gyro_sd.BIAS_3, 'b')
plt.plot(t_plot, gyro_err.BIAS_3 - gyro_sd.BIAS_3, 'b')
plt.hlines(gyro_bias[2], *plt.xlim())
plt.xlabel("time, h")
plt.ylabel("rad/s")
plt.title("Gyro 3 bias")
plt.tight_layout()
plt.figure(figsize=(15, 5))
t_plot = filt_error.index[::10] * dt / 3600
accel_err = ff_res.accel_err.iloc[::10]
accel_sd = ff_res.accel_sd.iloc[::10]
plt.subplot(131)
plt.plot(t_plot, accel_err.BIAS_1 + accel_sd.BIAS_1, 'b')
plt.plot(t_plot, accel_err.BIAS_1 - accel_sd.BIAS_1, 'b')
plt.hlines(accel_bias[0], *plt.xlim())
plt.xlabel("time, h")
plt.ylabel("rad/s")
plt.title("Accel 1 bias")
plt.subplot(132)
plt.plot(t_plot, accel_err.BIAS_2 + accel_sd.BIAS_2, 'b')
plt.plot(t_plot, accel_err.BIAS_2 - accel_sd.BIAS_2, 'b')
plt.hlines(accel_bias[1], *plt.xlim())
plt.xlabel("time, h")
plt.ylabel("rad/s")
plt.title("Accel 2 bias")
plt.subplot(133)
plt.plot(t_plot, accel_err.BIAS_3 + accel_sd.BIAS_3, 'b')
plt.plot(t_plot, accel_err.BIAS_3 - accel_sd.BIAS_3, 'b')
plt.hlines(accel_bias[2], *plt.xlim())
plt.xlabel("time, h")
plt.ylabel("rad/s")
plt.title("Accel 3 bias")
plt.tight_layout()
<END_TASK>
|
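Throughout the INS example, `pyins.coord.perturb_ll` shifts a latitude/longitude point by displacements given in metres. A hypothetical spherical-Earth equivalent — the function name and the use of the mean Earth radius are my assumptions; pyins uses its own Earth model — is just:

```python
import math

R_EARTH = 6371000.0  # mean Earth radius, metres (spherical approximation)

def perturb_ll_sketch(lat_deg, lon_deg, d_north_m, d_east_m):
    # Convert metre displacements into degree offsets: one radian of
    # latitude spans R metres; longitude circles shrink by cos(latitude).
    dlat = math.degrees(d_north_m / R_EARTH)
    dlon = math.degrees(d_east_m / (R_EARTH * math.cos(math.radians(lat_deg))))
    return lat_deg + dlat, lon_deg + dlon

lat, lon = perturb_ll_sketch(58.0, 56.0, 100.0, 100.0)
```

At the example's latitude of 58°, one metre north is roughly 9e-6 degrees of latitude, while one metre east is almost twice that in longitude because of the cos(58°) factor.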
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2.1 Eigenvalues and singular values of the unscaled PCA
Step2: The singular values associated with the PCA are those of $(\bar{X}, I_p, \frac{1}{n-1}I_n)$
Step3: To recover the PCA eigenvalues from the singular values of the centered matrix
Step4: 2.2 Eigenvectors of the unscaled PCA
Step5: 2.3 Principal components of the unscaled PCA
Step6: Q Compare with the results obtained in R.
Step7: Here is an overview of the stacks of images to be described and then, in principle, discriminated
Step8: 3.2 Principal component analysis
Step9: Box plot of the first principal components.
Step10: Q Which dimension should in principle be retained?
Step11: The same plot with a legend but fewer colors.
Step12: Three-dimensional plot.
Step13: 4. OECD "cubic" data
Step14: 4.2 Reading the data
Step15: 4.3 Basic statistics
Step16: Q What does the plot below show? What do the diagonal blocks represent? What can be said about the correlation structures?
Step17: Q What does the plot above show? What can be said about the first component? Which dimension should be chosen?
Step18: Q Interpret each of the first two axes.
Step19: A representation suited to these data
|
<ASSISTANT_TASK:>
Python Code:
# Build the matrix of grades
import pandas as pd
note=[[6,6,5,5.5],[8,8,8,8],[6,7,11,9.5],[14.5,14.5,15.5,15],
[14,14,12,12.5],[11,10,5.5,7],[5.5,7,14,11.5],[13,12.5,8.5,9.5],
[9,9.5,12.5,12]]
dat=pd.DataFrame(note,index=["jean","alai","anni","moni","didi","andr","pier","brig","evel"],
columns=["Math","Phys","Fran","Angl"])
dat
# Import the functions
from sklearn.decomposition import PCA
from sklearn.preprocessing import scale
import numpy as np
pca = PCA()
pca.fit(dat).explained_variance_
pca.singular_values_
pca.singular_values_/np.sqrt(8)
(pca.singular_values_/np.sqrt(8))**2
pca.components_.T
pca.transform(dat)
# Imports
import matplotlib.pyplot as plt
from sklearn import datasets
%matplotlib inline
# the data available in the library
digits = datasets.load_digits()
# Contents and how they were obtained
print(digits)
# Dimensions
digits.images.shape
# Sous forme d'un cube d'images 1797 x 8x8
print(digits.images)
# Sous forme d'une matrice 1797 x 64
print(digits.data)
# Label rรฉel de chaque caractรจre
print(digits.target)
images_and_labels = list(zip(digits.images,
digits.target))
for index, (image, label) in enumerate(images_and_labels[:8]):
plt.subplot(2, 4, index + 1)
plt.axis('off')
plt.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest')
plt.title('Chiffres: %i' % label)
from sklearn.decomposition import PCA
X=digits.data
y=digits.target
target_name=[0,1,2,3,4,5,6,7,8,9]
# dรฉfinition de la commande
pca = PCA()
# Fit and compute the principal components
C = pca.fit(X).transform(X)
# Dรฉcroissance de la variance expliquรฉe
plt.plot(pca.explained_variance_ratio_)
plt.show()
plt.boxplot(C[:,0:20])
plt.show()
plt.scatter(C[:,0], C[:,1], c=y, label=target_name)
plt.show()
# mind the indentation
plt.figure()
for c, i, target_name in zip("rgbcmykrgb",[0,1,2,3,4,5,6,7,8,9], target_name):
plt.scatter(C[y == i,0], C[y == i,1], c=c, label=target_name)
plt.legend()
plt.title("ACP Digits")
plt.show()
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure(1, figsize=(8, 6))
ax = Axes3D(fig, elev=-150, azim=110)
ax.scatter(C[:, 0], C[:, 1], C[:, 2], c=y, cmap=plt.cm.Paired)
ax.set_title("ACP: trois premieres composantes")
ax.set_xlabel("Comp1")
ax.w_xaxis.set_ticklabels([])
ax.set_ylabel("Comp2")
ax.w_yaxis.set_ticklabels([])
ax.set_zlabel("Comp3")
ax.w_zaxis.set_ticklabels([])
plt.show()
# Import the main libraries and
# display the plots inline in the notebook
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
ocde=pd.read_table("Data/ocdeR.dat",sep=r'\s+',index_col=0)
ocde.head()
ocde.mean()
ocde["CNRJ"].hist(bins=20)
plt.show()
from pandas.plotting import scatter_matrix
scatter_matrix(ocde, alpha=0.2, figsize=(15, 15), diagonal='kde')
plt.show()
from sklearn.decomposition import PCA
from sklearn.preprocessing import scale
# rรฉduction
ocdeS=scale(ocde)
pca = PCA()
cpOcde = pca.fit_transform(ocdeS)
# Scree plot
plt.plot(pca.explained_variance_ratio_)
plt.show()
plt.boxplot(cpOcde)
plt.show()
coord1=pca.components_[0]*np.sqrt(pca.explained_variance_[0])
coord2=pca.components_[1]*np.sqrt(pca.explained_variance_[1])
fig = plt.figure(figsize=(5,5))
ax = fig.add_subplot(1, 1, 1)
for i, j, nom in zip(coord1,coord2, ocde.columns):
plt.text(i, j, nom)
plt.arrow(0,0,i,j,color='black')
plt.axis((-1.2,1.2,-1.2,1.2))
# circle
c=plt.Circle((0,0), radius=1, color='gray', fill=False)
ax.add_patch(c)
plt.show()
plt.figure(figsize=(10,6))
for i, j, nom in zip(cpOcde[:,0], cpOcde[:,1], ocde.index):
# color = int(i/4)
plt.text(i, j, nom ,color="blue")
plt.axis((-5,7,-4,4))
plt.show()
import matplotlib.patheffects as PathEffects
comp_0 = 0
comp_1 = 1
cmap = plt.get_cmap("tab20")
fig = plt.figure(figsize=(16,8))
ax = fig.add_subplot(1,1,1)
for i,k in enumerate(np.arange(0,cpOcde.shape[0],4)):
country =ocde.index[k]
xs = cpOcde[k:k+4,comp_0]
ys = cpOcde[k:k+4, comp_1]
ax.plot(xs,ys, color=cmap(i), marker=".", markersize=15)
txt = ax.text(xs[-4], ys[-4], country, horizontalalignment="left", verticalalignment="top",
color=cmap(i), fontweight="bold", fontsize=15)
# Add black line around text
#txt.set_path_effects([PathEffects.withStroke(linewidth=1, foreground='black')])
ax.set_xlabel("PC%d" %comp_0, fontsize=20)
ax.set_ylabel("PC%d" %comp_1, fontsize=20)
plt.tight_layout()
plt.show()
<END_TASK>
|
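The grades example above recovers PCA eigenvalues by dividing the singular values by √(n−1) and squaring. That identity — for centered data, explained variance equals s²/(n−1) — can be checked numerically with NumPy alone, comparing the SVD route against the eigenvalues of the covariance matrix:

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(9, 4)
Xc = X - X.mean(axis=0)            # centre the columns, as PCA does
n = X.shape[0]

s = np.linalg.svd(Xc, compute_uv=False)                    # singular values, descending
eig = np.linalg.eigvalsh(np.cov(Xc, rowvar=False))[::-1]   # covariance eigenvalues, descending

assert np.allclose(s**2 / (n - 1), eig)  # same numbers by two routes
```

This is exactly why, in the example, `pca.singular_values_ / np.sqrt(8)` squared reproduces `pca.explained_variance_` for the 9-student matrix.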
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Characteristics used in the simulations
Step2: Simulation in NGSPICE
Step3: Fig-20-15-LEVEL-3.cir — transient from 0 to 1 ns, initial conditions v(gateM1)=0 v(drainM2)=0
Step4: Fig-20-15-LEVEL-3.cir — transient from 0 to 1 ns, initial conditions v(gateM1)=5 v(drainM2)=0
Step5: Design in Electric
Step6: Schematic circuit for export to Spice
Step7: Editing the preferences to select level 3
Step8: Simulation in LTSpice, file
Step9: Circuit in layout
Step10: Layout circuit for export to Spice
Step11: Simulation in LTSpice, file
Step12: Final library, file
Step13: Simulation of the circuit in figure 20.22 (improved current reference)
Step14: Characteristics used in the simulations
|
<ASSISTANT_TASK:>
Python Code:
from IPython.core.display import Image, display
display(Image(url='images/taller-oct-6/fig-20-15.png'))
from IPython.core.display import Image, display
display(Image(url='images/taller-oct-6/long-channel-mosfet.png'))
from IPython.core.display import Image, display
display(Image(url='images/taller-oct-6/table-9-1.png'))
from IPython.core.display import Image, display
display(Image(url='images/taller-oct-6/Fig_20_15_ngspice_Level_3.png'))
from IPython.core.display import Image, display
display(Image(url='images/taller-oct-6/Fig_20_15_ngspice_Level_3_transitory.png'))
from IPython.core.display import Image, display
display(Image(url='images/taller-oct-6/Fig_20_15_ngspice_Level_3_transitory_2.png'))
from IPython.core.display import Image, display
display(Image(url='images/taller-oct-6/Fig_20_15_electric_sch.png'))
from IPython.core.display import Image, display
display(Image(url='images/taller-oct-6/Fig_20_15_electric_sch_2.png'))
from IPython.core.display import Image, display
display(Image(url='images/taller-oct-6/Fig_20_15_electric_preferences.png'))
from IPython.core.display import Image, display
display(Image(url='images/taller-oct-6/Fig_20_15_ltspice_sch.png'))
from IPython.core.display import Image, display
display(Image(url='images/taller-oct-6/Fig_20_15_electric_layout.png'))
from IPython.core.display import Image, display
display(Image(url='images/taller-oct-6/Fig_20_15_electric_layout_sim.png'))
from IPython.core.display import Image, display
display(Image(url='images/taller-oct-6/Fig_20_15_electric_layout_2.png'))
from IPython.core.display import Image, display
display(Image(url='images/taller-oct-6/Fig_20_15_ltspice_layout.png'))
from IPython.core.display import Image, display
display(Image(url='images/taller-oct-6/Fig_20_15_jelib.png'))
from IPython.core.display import Image, display
display(Image(url='images/taller-oct-6/Fig_20_22.png'))
from IPython.core.display import Image, display
display(Image(url='images/taller-oct-6/short_channel_model.png'))
from IPython.core.display import Image, display
display(Image(url='images/taller-oct-6/Table_9_2.png'))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Creating an experiment object
Step2: Some basic information about the model can be obtained with
Step3: We can have a quick look at the model in a section view (note that Noddy is now executed in the background when required - and the output automatically generated in the required resolution)
Step4: The base plot is not very useful - but we can create a section plot with a define vertical exaggeration (keyword ve) and plot the colorbar in horizontal orientation
Step5: Note
Step6: Generating random perturbations of the model
Step7: For a reproducible experiment, we can also set the random seed
Step8: And now, let's perturb the model
Step9: Let's see what happened
Step10: ...and another perturbation
|
<ASSISTANT_TASK:>
Python Code:
from IPython.core.display import HTML
css_file = 'pynoddy.css'
HTML(open(css_file, "r").read())
%matplotlib inline
# here the usual imports. If any of the imports fails, make sure that pynoddy is installed
# properly, ideally with 'python setup.py develop' or 'python setup.py install'
import sys, os
import matplotlib.pyplot as plt
import numpy as np
# adjust some settings for matplotlib
from matplotlib import rcParams
# print rcParams
rcParams['font.size'] = 15
# determine path of repository to set paths corretly below
repo_path = os.path.realpath('/Users/flow/git/pynoddy/')
sys.path.append(repo_path)
import pynoddy.history
import pynoddy.experiment
rcParams.update({'font.size': 20})
import importlib
importlib.reload(pynoddy.history)
importlib.reload(pynoddy.output)
importlib.reload(pynoddy.experiment)
# the model itself is now part of the repository, in the examples directory:
history_file = os.path.join(repo_path, "examples/GBasin_Ve1_V4_b.his")
gipps_topo_ex = pynoddy.experiment.Experiment(history = history_file)
print(gipps_topo_ex)
gipps_topo_ex.plot_section('y')
# gipps_topo_ex.determine_model_stratigraphy()
gipps_topo_ex.plot_section('x', ve = 5, position = 'centre',
cmap = 'YlOrRd',
title = '',
colorbar = False)
gipps_topo_ex.plot_section('y', position = 100, ve = 5.,
cmap = 'YlOrRd',
title = '',
colorbar_orientation = 'horizontal')
importlib.reload(pynoddy.experiment)
# the model itself is now part of the repository, in the examples directory:
history_file = os.path.join(repo_path, "examples/GBasin_Ve1_V4_b.his")
gipps_topo_ex = pynoddy.experiment.Experiment(history = history_file)
gipps_topo_ex.load_parameter_file(os.path.join(repo_path, "examples/gipps_params_2.csv"))
gipps_topo_ex.freeze()
gipps_topo_ex.set_random_seed(12345)
gipps_topo_ex.random_perturbation()
fig = plt.figure(figsize = (12,6))
ax1 = fig.add_subplot(211)
ax2 = fig.add_subplot(212)
gipps_topo_ex.plot_section(ax = ax1, direction = 'x', model_type = "base",
colorbar = False, title = "", ve = 5.)
gipps_topo_ex.plot_section(ax = ax2, direction = 'x', colorbar = False,
title = "", ve = 5.)
#
# Note: keep these lines only for debugging!
#
importlib.reload(pynoddy.output)
importlib.reload(pynoddy.history)
importlib.reload(pynoddy.experiment)
# the model itself is now part of the repository, in the examples directory:
history_file = os.path.join(repo_path, "examples/GBasin_Ve1_V4_b.his")
gipps_topo_ex = pynoddy.experiment.Experiment(history = history_file)
gipps_topo_ex.load_parameter_file(os.path.join(repo_path, "examples/gipps_params.csv"))
# freeze base state
gipps_topo_ex.freeze()
# set seed
gipps_topo_ex.set_random_seed(12345)
# randomize
gipps_topo_ex.random_perturbation()
b1 = gipps_topo_ex.get_section('x', resolution = 50, model_type = 'base')
# b1.plot_section(direction = 'x', colorbar = False, title = "", ve = 5.)
b2 = gipps_topo_ex.get_section('x', resolution = 50, model_type = 'current')
# b1.plot_section(direction = 'x', colorbar = True, title = "", ve = 5.)
b1 -= b2
# b1.plot_section(direction = 'x', colorbar = True, title = "", ve = 5.)
print(np.min(b1.block), np.max(b1.block))
type(b1)
gipps_topo_ex.random_perturbation()
fig = plt.figure(figsize = (12,6))
ax1 = fig.add_subplot(311)
ax2 = fig.add_subplot(312)
ax3 = fig.add_subplot(313)
gipps_topo_ex.plot_section(ax = ax1, direction = 'x', model_type = "base",
colorbar = False, title = "", ve = 5.)
gipps_topo_ex.plot_section(ax = ax2, direction = 'x', colorbar = False,
title = "", ve = 5.)
# plot difference
fig = plt.figure(figsize = (12,6))
ax1 = fig.add_subplot(211)
ax2 = fig.add_subplot(212)
gipps_topo_ex.plot_section(ax = ax1, direction = 'x', model_type = "base",
colorbar = False, title = "", ve = 5.)
gipps_topo_ex.plot_section(ax = ax2, direction = 'x', colorbar = False,
title = "", ve = 5.)
gipps_topo_ex.param_stats
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Defining a custom Dataset
Step2: Defining a custom Task
Step4: Meta-training on multiple tasks
Step5: With this task family defined, we can create instances by sampling a configuration and creating a task. This task acts like any other task in that it has an init and a loss function.
Step6: To achive speedups, we can now leverage jax.vmap to train multiple task instances in parallel! Depending on the task, this can be considerably faster than serially executing them.
Step7: Because of this ability to apply vmap over task families, this is the main building block for a number of the high level libraries in this package. Single tasks can always be converted to a task family with
Step8: This wrapper task family has no configuable value and always returns the base task.
|
<ASSISTANT_TASK:>
Python Code:
!pip install git+https://github.com/google/learned_optimization.git
import numpy as np
import jax.numpy as jnp
import jax
from matplotlib import pylab as plt
from learned_optimization.outer_trainers import full_es
from learned_optimization.outer_trainers import truncated_pes
from learned_optimization.outer_trainers import gradient_learner
from learned_optimization.outer_trainers import truncation_schedule
from learned_optimization.tasks import quadratics
from learned_optimization.tasks.fixed import image_mlp
from learned_optimization.tasks import base as tasks_base
from learned_optimization.tasks.datasets import base as datasets_base
from learned_optimization.learned_optimizers import base as lopt_base
from learned_optimization.learned_optimizers import mlp_lopt
from learned_optimization.optimizers import base as opt_base
from learned_optimization import optimizers
from learned_optimization import eval_training
import haiku as hk
import tqdm
import numpy as np
def data_iterator():
bs = 3
while True:
batch = {"data": np.zeros([bs, 5])}
yield batch
@datasets_base.dataset_lru_cache
def get_datasets():
return datasets_base.Datasets(
train=data_iterator(),
inner_valid=data_iterator(),
outer_valid=data_iterator(),
test=data_iterator())
ds = get_datasets()
next(ds.train)
# First we construct data iterators.
def noise_datasets():
def _fn():
while True:
yield np.random.normal(size=[4, 2]).astype(dtype=np.float32)
return datasets_base.Datasets(
train=_fn(), inner_valid=_fn(), outer_valid=_fn(), test=_fn())
class MyTask(tasks_base.Task):
datasets = noise_datasets()
def loss(self, params, rng, data):
return jnp.sum(jnp.square(params - data))
def init(self, key):
return jax.random.normal(key, shape=(4, 2))
task = MyTask()
key = jax.random.PRNGKey(0)
key1, key = jax.random.split(key)
params = task.init(key)
task.loss(params, key1, next(task.datasets.train))
PRNGKey = jnp.ndarray
TaskParams = jnp.ndarray
class FixedDimQuadraticFamily(tasks_base.TaskFamily):
"""A simple TaskFamily with a fixed dimensionality but sampled target."""
def __init__(self, dim: int):
super().__init__()
self._dim = dim
self.datasets = None
def sample(self, key: PRNGKey) -> TaskParams:
# Sample the target for the quadratic task.
return jax.random.normal(key, shape=(self._dim,))
def task_fn(self, task_params: TaskParams) -> tasks_base.Task:
dim = self._dim
class _Task(tasks_base.Task):
def loss(self, params, rng, _):
# Compute MSE to the target task.
return jnp.sum(jnp.square(task_params - params))
def init(self, key):
return jax.random.normal(key, shape=(dim,))
return _Task()
task_family = FixedDimQuadraticFamily(10)
key = jax.random.PRNGKey(0)
task_cfg = task_family.sample(key)
task = task_family.task_fn(task_cfg)
key1, key = jax.random.split(key)
params = task.init(key)
batch = None
task.loss(params, key, batch)
def train_task(cfg, key):
task = task_family.task_fn(cfg)
key1, key = jax.random.split(key)
params = task.init(key1)
opt = opt_base.Adam()
opt_state = opt.init(params)
for i in range(4):
params = opt.get_params(opt_state)
loss, grad = jax.value_and_grad(task.loss)(params, key, None)
opt_state = opt.update(opt_state, grad, loss=loss)
loss = task.loss(params, key, None)
return loss
task_cfg = task_family.sample(key)
print("single loss", train_task(task_cfg, key))
keys = jax.random.split(key, 32)
task_cfgs = jax.vmap(task_family.sample)(keys)
losses = jax.vmap(train_task)(task_cfgs, keys)
print("multiple losses", losses)
single_task = image_mlp.ImageMLP_FashionMnist8_Relu32()
task_family = tasks_base.single_task_to_family(single_task)
cfg = task_family.sample(key)
print("config only contains a dummy value:", cfg)
task = task_family.task_fn(cfg)
# Tasks are the same
assert task == single_task
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: We could instead leave the default backend as 'matplotlib', and then switch only some specific cells to use bokeh using a cell magic
Step3: Here's an example of HoloViews Elements using a Bokeh backend, with bokeh's style options
Step4: Notice that because the first two plots use the same underlying data, they become linked, such that zooming or panning one of the plots makes the corresponding change on the other.
Step5: Controlling axes
Step6: Datetime axes
Step7: Matplotlib/Seaborn conversions
Step8: Containers
Step9: Another reason to use tabs is that some Layout combinations may not be able to be displayed directly using HoloViews. For example, it is not currently possible to display a GridSpace as part of a Layout in any backend, and this combination will automatically switch to a tab representation for the bokeh backend.
Step10: When the histogram represents a quantity that is mapped to a value dimension with a corresponding colormap, it will automatically share the colormap, making it useful as a colorbar for that dimension as well as a histogram.
Step11: HoloMaps
Step12: Tools
Step13: Selection tools
Step14: Selection widget with shared axes and linked brushing
Step15: By creating two Points Elements, which both draw their data from the same pandas DataFrame, the two plots become automatically linked. This linking behavior can be toggled with the shared_datasource plot option on a Layout or GridSpace. You can try selecting data in one plot, and see how the corresponding data (those on the same rows of the DataFrame, even if the plots show different data, will be highlighted in each.
Step16: A gridmatrix is a clear use case for linked plotting. This operation plots any combination of numeric variables against each other, in a grid, and selecting datapoints in any plot will highlight them in all of them. Such linking can thus reveal how values in a particular range (e.g. very large outliers along one dimension) relate to each of the other dimensions.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import holoviews as hv
hv.notebook_extension('bokeh')
from holoviews.plotting.bokeh.element import (line_properties, fill_properties,
text_properties)
print("Line properties: %s\nFill properties: %s\nText properties: %s"
      % (line_properties, fill_properties, text_properties))
curve_opts = dict(line_width=10, line_color='indianred', line_dash='dotted', line_alpha=0.5)
point_opts = dict(fill_color='#00AA00', fill_alpha=0.5, line_width=1, line_color='black', size=5)
text_opts = dict(text_align='center', text_baseline='middle', text_color='gray', text_font='Arial')
xs = np.linspace(0, np.pi*4, 100)
data = (xs, np.sin(xs))
(hv.Curve(data)(style=curve_opts) +
hv.Points(data)(style=point_opts) +
hv.Text(6, 0, 'Here is some text')(style=text_opts))
%%opts Points.A [width=300 height=300] Points.B [width=600 height=300]
hv.Points(data, group='A') + hv.Points(data, group='B')
points = hv.Points(np.exp(xs))
axes_opts = [('Plain', {}),
('Log', {'logy': True}),
('None', {'yaxis': None}),
('Rotate', {'xrotation': 90}),
('N Ticks', {'xticks': 3}),
('List Ticks', {'xticks': [0, 100, 300, 500]})]
hv.Layout([points.relabel(group=group)(plot=opts)
for group, opts in axes_opts]).cols(3).display('all')
%%opts Overlay [width=600 legend_position='top_left']
try:
import bokeh.sampledata.stocks
except:
import bokeh.sampledata
bokeh.sampledata.download()
from bokeh.sampledata.stocks import GOOG, AAPL
goog_dates = np.array(GOOG['date'], dtype=np.datetime64)
aapl_dates = np.array(AAPL['date'], dtype=np.datetime64)
hv.Curve((goog_dates, GOOG['adj_close']), kdims=['Date'], vdims=['Stock Index'], label='Google') *\
hv.Curve((aapl_dates, AAPL['adj_close']), kdims=['Date'], vdims=['Stock Index'], label='Apple')
%%opts Distribution (kde_kws=dict(shade=True))
d1 = np.random.randn(500) + 450
d2 = np.random.randn(500) + 540
sines = np.array([np.sin(np.linspace(0, np.pi*2, 100)) + np.random.normal(0, 1, 100) for _ in range(20)])
hv.Distribution(d1) + hv.Bivariate((d1, d2)) + hv.TimeSeries(sines)
%%opts Overlay [tabs=True width=600 height=600] RGB [width=600 height=600]
x,y = np.mgrid[-50:51, -50:51] * 0.1
img = hv.Image(np.sin(x**2+y**2), bounds=(-1,-1,1,1))
img.relabel('Low') * img.clone(img.data*2).relabel('High') + img
points = hv.Points(np.random.randn(500,2))
points.hist(num_bins=51, dimension=['x','y'])
img.hist(num_bins=100, dimension=['x', 'y'], weight_dimension='z', mean_weighted=True) +\
img.hist(dimension='z')
hmap = hv.HoloMap({phase: img.clone(np.sin(x**2+y**2+phase))
for phase in np.linspace(0, np.pi*2, 6)}, kdims=['Phase'])
hmap.hist(num_bins=100, dimension=['x', 'y'], weight_dimension='z', mean_weighted=True)
%%opts Points [tools=['hover']] (size=5) HeatMap [tools=['hover']] Histogram [tools=['hover']] Layout [shared_axes=False]
error = np.random.rand(100, 3)
heatmap_data = {(chr(65+i),chr(97+j)):i*j for i in range(5) for j in range(5) if i!=j}
data = [np.random.normal() for i in range(10000)]
hist = np.histogram(data, 20)
hv.Points(error) + hv.HeatMap(heatmap_data).sort() + hv.Histogram(hist)
%%opts Points [tools=['box_select', 'lasso_select', 'poly_select']] (size=10 unselected_color='red' color='blue')
hv.Points(error)
macro_df = pd.read_csv('http://assets.holoviews.org/macro.csv', '\t')
%%opts Scatter [tools=['box_select', 'lasso_select']] Layout [shared_axes=True shared_datasource=True]
hv.Scatter(macro_df, kdims=['year'], vdims=['gdp']) +\
hv.Scatter(macro_df, kdims=['gdp'], vdims=['unem'])
%%opts Scatter [tools=['box_select', 'lasso_select', 'hover'] border=0] Histogram {+axiswise}
table = hv.Table(macro_df, kdims=['year', 'country'])
matrix = hv.operation.gridmatrix(table.groupby('country'))
matrix.select(country=['West Germany', 'United Kingdom', 'United States'])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Joining
Step2: We can concatenate the two DataFrames.
Step3: Alternatively, the above can be obtained with the self-describing append() method.
Step4: By default, concat() concatenates along the rows (axis 0). What we did previously was 'concatenating' along the columns (axis 1). It is actually a join operation, on (index or key) Date.
Step5: Indeed, we could alternatively use the join() method.
Step6: Wait! Which frame's index is used? JOIN can be left, right, outer, or inner (picture unions and intersections).
Step7: Hands-on exercise
Step8: The function in question is an aggregation (for each group, return the minimum value). The length of the result is the number of different groups (here, number of unique x values). Aggregation reduces the size of the data structure.
Step9: The aggregation function can be applied directly, if it is available.
Step10: With a hierarchical index, the level parameter can even be passed directly to certain aggregation functions.
Step11: Counting the number of records in each group is also an aggregation.
Step12: For each unique values of x (which are 0 and 1), we have two entries.
Step13: Filtering is another kind of operation which can be applied to each group. Filtering may reduce the size of the data structure, since some groups might get filtered out.
Step14: The third kind of operation is transformation, where the size of the data structure is preserved.
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
mlo, gl = pd.read_csv('../data/co2-mm-mlo.csv', na_values=-99.99, index_col='Date', parse_dates=True), \
pd.read_csv('../data/co2-mm-gl.csv', na_values=-99.99, index_col='Date', parse_dates=True)
# pd.read_csv('https://python.g-node.org/wiki/_media/co2-mm-mlo.csv')
ml, gl = mlo[['Average']], gl[['Average']]
ml.columns, gl.columns = ['Average_mlo'], ['Average_gl']
ml = ml[ml.index >= '1980-01']
ml, gl = ml.head(), gl.head()
pd.concat([ml, gl])
ml.append(gl)
pd.concat([ml, gl], axis=1)
ml.join(gl)
mlo = pd.read_csv('../data/co2-mm-mlo.csv', na_values=-99.99, index_col='Date', parse_dates=True)
mlo = mlo[['Average']]
mlo.head()
mlo.join(gl)
mlo.join(gl, how='inner')
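A tiny, self-contained illustration (with made-up frames, not the CO2 data) of how each `how` value changes which index survives the join:

```python
import pandas as pd

left = pd.DataFrame({'a': [1, 2]}, index=['x', 'y'])
right = pd.DataFrame({'b': [10, 30]}, index=['x', 'z'])

print(len(left.join(right, how='left')))   # 2: keeps the left frame's index
print(len(left.join(right, how='inner')))  # 1: intersection, only 'x'
print(len(left.join(right, how='outer')))  # 3: union of both indexes
```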
z = pd.Series([0.5, 0.8, 0.6, 0.3], index=pd.MultiIndex.from_product([[0, 1], [0, 1]], names=['x', 'y']))
z
z.groupby('x')
z.groupby('x').apply(lambda u: u.min())
z.groupby(level=0).apply(lambda u: u.min())
z.groupby('x').min()
z.min(level='x')
z.groupby('x').size()
z.groupby(['x', 'y']).size()
z.groupby('y').describe()
import numpy as np
z.groupby('y').apply(lambda u: np.std(u, ddof=1))
z.groupby('y').apply(lambda u: np.std(u))
z.groupby('y').apply(np.std)
z
z.groupby('y').apply(lambda u: u.min() > 0.4)
z.groupby('y').filter(lambda u: u.min() > 0.4)
z.groupby('y').transform(lambda u: u.min())
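To make the size behaviour of the three kinds of group operations concrete, here is a small self-contained sketch (on a toy Series built the same way as `z`) comparing aggregation, filtering and transformation:

```python
import pandas as pd

toy = pd.Series([0.5, 0.8, 0.6, 0.3],
                index=pd.MultiIndex.from_product([[0, 1], [0, 1]], names=['x', 'y']))

# Aggregation reduces the length to the number of groups.
agg = toy.groupby('x').min()
# Filtering may drop whole groups, so the length is <= the original.
filt = toy.groupby('y').filter(lambda u: u.min() > 0.4)
# Transformation always preserves the original length.
trans = toy.groupby('x').transform('min')

print(len(agg), len(filt), len(trans))  # 2 2 4
```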
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: I am really enjoying having this weather station. I say weather
Step3: Temperature
Step4: The latitude is almost in phase with the phase of the moon, at least at the moment.
|
<ASSISTANT_TASK:>
Python Code:
# Tell matplotlib to plot in line
%matplotlib inline
# import pandas
import pandas
# seaborn magically adds a layer of goodness on top of Matplotlib
# mostly this is just changing matplotlib defaults, but it does also
# provide some higher level plotting methods.
import seaborn
# Tell seaborn to set things up
seaborn.set()
def smooth(data, thresh=None):
means = data.mean()
if thresh is None:
sds = data.std()
else:
sds = thresh
delta = data - data.shift()
good = abs(delta) < sds  # boolean mask; modern pandas requires a boolean condition here
print(delta[good].describe())
return delta.where(good, 0.0)
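To see the despiking idea in isolation — threshold the first differences, zero the implausible jumps, then re-integrate with `cumsum` — here is a toy sketch on made-up sensor readings:

```python
import pandas as pd

readings = pd.Series([10.0, 10.5, 50.0, 11.0, 11.5])  # 50.0 is a glitch
delta = readings - readings.shift()
good = abs(delta) < 5.0              # boolean mask; the NaN first diff compares False
clean = delta.where(good, 0.0).cumsum() + readings.iloc[0]
print(clean.tolist())  # [10.0, 10.5, 10.5, 10.5, 11.0]
```

Note that both the glitch and its rebound jump are zeroed, so the cleaned series stays near the baseline rather than returning exactly to 11.0 — a known trade-off of this approach.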
# set the path to the file we are going to analyse
infile = "../files/light.csv"
!scp 192.168.0.127:Adafruit_Python_BMP/light.csv ../files
"""
assume it is csv and let pandas do magic
index_col tells it to use the 'date' column in the data
as the row index, plotting picks up on this and uses the
date on the x-axis
The *parse_dates* bit just tells it to try and figure out
the date/time in the column labeled 'date'.
"""
data = pandas.read_csv(infile, index_col='date', parse_dates=['date'])
# incantation to extract the first record in the data
start = data[['temp', 'altitude']].iloc[0]  # .irow() was removed from pandas
# smooth the data to filter out bad temps and pressures
sdata = (smooth(data, 5.0).cumsum() + start)
# now use smooth to throw away dodgy data, and plot the temp and altitude fieldsa
sdata[['temp', 'altitude']].plot(subplots=True)
import astropy
from astropy import units
from astropy import find_api_page
# find_api_page is handy, even opens a browser windo.
#find_api_page(units)
# astropy is cool, but I need data for the moon. Let's try pyephem
# uncommented the line below and run this cell if you need to install using
# pip. This will install into the environment that is being used to run your
# ipython notebook server.
#!pip install pyephem
import ephem
# Create a Moon
moon = ephem.Moon()
# Tell it to figure out where it is
moon.compute()
# pring out the phase
moon.phase
def moon_orbitals(index):
"""Given an index of times create a DataFrame of moon orbitals.

For now, just Phase, geocentric latitude and geocentric longitude.
"""
# Create dataframe with index as the index
df = pandas.DataFrame(index=index)
# Add three series
df['phase'] = pandas.Series()
df['glat'] = pandas.Series()
df['glon'] = pandas.Series()
# Now generate the data
# NB this is slow, solpy might work out faster
moon = ephem.Moon()
for ix, timestamp in enumerate(index):
# Compute the moon position
moon.compute(timestamp.strftime("%Y/%m/%d %H:%M:%S"))
df.phase[ix] = moon.phase
df.glat[ix] = moon.hlat
df.glon[ix] = moon.hlon
return df
# See what we got
moon = moon_orbitals(data.index)
moon.describe()
moon.plot(subplots=True)
# Try feeding in a longer time series
days = pandas.date_range('7/7/2015', periods=560, freq='D')
moon_orbitals(days).plot(subplots=True)
sdata['moon_phase'] = moon.phase
sdata['moon_glat'] = moon.glat
sdata['moon_glon'] = moon.glon
# FIXME -- must be a pandas one liner eg data += moon ?
sdata[['temp', 'altitude', 'moon_phase', 'moon_glon', 'moon_glat']].plot(subplots=True)
print(sdata.index[0])
sdata.index[0].hour + (sdata.index[0].minute / 60.)
sdata.describe()
def tide_proxy(df):
# Create dataframe with index as the index
series = pandas.Series(index=df.index)
for ix, timestamp in enumerate(df.index):
hour_min = timestamp.hour + (timestamp.minute / 60.)
hour_min += df.moon_glat[ix]
series[ix] = hour_min
return series
xx = tide_proxy(sdata)
xx.plot()
xx.describe()
# See what we got
moon = moon_orbitals(data.index)
moon.describe()
sdata['tide'] = tide_proxy(sdata)
fields = ['altitude', 'temp', 'tide']
sdata[fields].plot()
sdata.temp.plot()
with open("../files/moon_weather.csv", 'w') as outfile:
sdata.to_csv(outfile)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set parameters
Step2: Temporal decoding
Step3: Generalization Across Time
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import roc_auc_score
from sklearn.cross_validation import StratifiedKFold
import mne
from mne.datasets import sample
from mne.decoding import TimeDecoding, GeneralizationAcrossTime
data_path = sample.data_path()
plt.close('all')
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
tmin, tmax = -0.2, 0.5
event_id = dict(aud_l=1, vis_l=3)
# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname, preload=True, add_eeg_ref=False)
raw.set_eeg_reference() # set EEG average reference
raw.filter(2, None, method='iir') # replace baselining with high-pass
events = mne.read_events(event_fname)
# Set up pick list: EEG + MEG - bad channels (modify to your needs)
raw.info['bads'] += ['MEG 2443', 'EEG 053'] # bads + 2 more
picks = mne.pick_types(raw.info, meg='grad', eeg=False, stim=True, eog=True,
exclude='bads')
# Read epochs
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,
picks=picks, baseline=None, preload=True,
reject=dict(grad=4000e-13, eog=150e-6), add_eeg_ref=False)
epochs_list = [epochs[k] for k in event_id]
mne.epochs.equalize_epoch_counts(epochs_list)
data_picks = mne.pick_types(epochs.info, meg=True, exclude='bads')
td = TimeDecoding(predict_mode='cross-validation', n_jobs=1)
# Fit
td.fit(epochs)
# Compute accuracy
td.score(epochs)
# Plot scores across time
td.plot(title='Sensor space decoding')
# make response vector
y = np.zeros(len(epochs.events), dtype=int)
y[epochs.events[:, 2] == 3] = 1
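The label construction above is just a boolean-mask assignment on the event codes; a tiny self-contained sketch with made-up MNE-style event rows:

```python
import numpy as np

events = np.array([[0, 0, 1], [0, 0, 3], [0, 0, 1], [0, 0, 3]])  # fake events
y = np.zeros(len(events), dtype=int)
y[events[:, 2] == 3] = 1   # trials with code 3 -> class 1
print(y)  # [0 1 0 1]
```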
cv = StratifiedKFold(y=y) # do a stratified cross-validation
# define the GeneralizationAcrossTime object
gat = GeneralizationAcrossTime(predict_mode='cross-validation', n_jobs=1,
cv=cv, scorer=roc_auc_score)
# fit and score
gat.fit(epochs, y=y)
gat.score(epochs)
# let's visualize now
gat.plot()
gat.plot_diagonal()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Loading The Data
Step2: Building The Cross Validated Predictions
Step3: Plotting The Cross-Validated Predictions
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
from sklearn import datasets
from sklearn import model_selection
import seaborn as sns
sns.set_style('whitegrid')
sns.despine()
from ibex import trans
from ibex.sklearn import linear_model as pd_linear_model
from ibex.sklearn import decomposition as pd_decomposition
from ibex.sklearn import preprocessing as pd_preprocessing
from ibex.sklearn import ensemble as pd_ensemble
from ibex import xgboost as pd_xgboost
from ibex.sklearn import model_selection as pd_model_selection
%pylab inline
dataset = datasets.load_boston()
boston = pd.DataFrame(dataset.data, columns=dataset.feature_names)
features = dataset.feature_names
boston['price'] = dataset.target
boston.head()
linear_y_hat = pd_model_selection.cross_val_predict(
pd_linear_model.LinearRegression(),
boston[features],
boston.price)
linear_y_hat.head()
linear_cv= pd.concat([linear_y_hat, boston.price], axis=1)
linear_cv['type'] = 'linear'
linear_cv.columns = ['y_hat', 'y', 'regressor']
linear_cv.head()
rf_y_hat = pd_model_selection.cross_val_predict(
pd_ensemble.RandomForestRegressor(),
boston[features],
boston.price)
rf_cv= pd.concat([rf_y_hat, boston.price], axis=1)
rf_cv['type'] = 'rf'
rf_cv.columns = ['y_hat', 'y', 'regressor']
xgb_rf_y_hat = pd_model_selection.cross_val_predict(
pd_xgboost.XGBRegressor(),
boston[features],
boston.price)
xgb_rf_cv= pd.concat([xgb_rf_y_hat, boston.price], axis=1)
xgb_rf_cv['type'] = 'xgb_rf'
xgb_rf_cv.columns = ['y_hat', 'y', 'regressor']
cvs = pd.concat([linear_cv, rf_cv, xgb_rf_cv])
cvs.regressor.unique()
min_, max_ = cvs[['y_hat', 'y']].min().min(), cvs[['y_hat', 'y']].max().max()
sns.lmplot(
x='y',
y='y_hat',
hue='regressor',
data=cvs,
palette={'linear': 'grey', 'rf': 'brown', 'xgb_rf': 'green'});
plot(np.linspace(min_, max_, 100), np.linspace(min_, max_, 100), '--', color='darkgrey');
tick_params(colors='0.6')
xlim((min_, max_))
ylim((min_, max_))
figtext(
0,
-0.1,
'Cross-validated predictions for linear and random-forest regressor on the price in the Boston dataset;\n'
'the linear regressor has inferior performance here, in particular for lower prices');
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Pikov Classes
Step2: Gamekitty
Step3: Create frames for each "clip"
|
<ASSISTANT_TASK:>
Python Code:
names = {}
for node in graph:
for edge in node:
if edge.guid == "169a81aefca74e92b45e3fa03c7021df":
value = node[edge].value
if value in names:
raise ValueError('name: "{}" defined twice'.format(value))
names[value] = node
names["ctor"]
def name_to_guid(name):
if name not in names:
return None
node = names[name]
if not hasattr(node, "guid"):
return None
return node.guid
from pikov.sprite import Bitmap, Clip, Frame, FrameList, Resource, Transition
resource = Resource(graph, guid=name_to_guid("spritesheet"))
spritesheet = []
for row in range(16):
for column in range(16):
sprite_number = row * 16 + column
bitmap_name = "bitmap[{}]".format(sprite_number)
bitmap = Bitmap(graph, guid=name_to_guid(bitmap_name))
spritesheet.append(bitmap)
def find_nodes(graph, ctor, cls):
nodes = set()
# TODO: With graph formats that have indexes, there should be a faster way.
for node in graph:
if node[names["ctor"]] == ctor:
node = cls(graph, guid=node.guid)
nodes.add(node)
return nodes
def find_frames(graph):
return find_nodes(graph, names["frame"], Frame)
def find_transitions(graph):
return find_nodes(graph, names["transition"], Transition)
def find_absorbing_frames(graph):
transitions = find_transitions(graph)
target_frames = set()
source_frames = set()
for transition in transitions:
target_frames.add(transition.target.guid)
source_frames.add(transition.source.guid)
return target_frames - source_frames # In but not out. Dead end!
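The absorbing-frame check boils down to a set difference between transition targets and transition sources; a hypothetical stand-in using plain `(source, target)` name pairs instead of the pikov graph objects:

```python
# Made-up transitions, not the animation graph above.
transitions = [('sit', 'stand'), ('stand', 'sleep')]
target_frames = {target for _, target in transitions}
source_frames = {source for source, _ in transitions}
absorbing = target_frames - source_frames  # in but never out: a dead end
print(absorbing)  # {'sleep'}
```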
MICROS_12_FPS = int(1e6 / 12) # 12 frames per second
MICROS_24_FPS = int(1e6 / 24)
def connect_frames(graph, transition_name, source, target):
transition = Transition(graph, guid=name_to_guid(transition_name))
transition.name = transition_name
transition.source = source
transition.target = target
return transition
sit = Clip(graph, guid=name_to_guid("clip[sit]"))
sit
sit_to_stand = Clip(graph, guid=name_to_guid("clip[sit_to_stand]"))
sit_to_stand
stand_waggle = Clip(graph, guid=name_to_guid("clip[stand_waggle]"))
stand_waggle
connect_frames(
graph,
"transitions[sit_to_stand, stand_waggle]",
sit_to_stand[-1],
stand_waggle[0])
stand_to_sit = Clip(graph, guid=name_to_guid("clip[stand_to_sit]"))
stand_to_sit
connect_frames(
graph,
"transitions[stand_waggle, stand_to_sit]",
stand_waggle[-1],
stand_to_sit[0])
connect_frames(
graph,
"transitions[stand_to_sit, sit]",
stand_to_sit[-1],
sit[0])
sit_paw = Clip(graph, guid=name_to_guid("clip[sit_paw]"))
sit_paw
connect_frames(
graph,
"transitions[sit_paw, sit]",
sit_paw[-1],
sit[0])
connect_frames(
graph,
"transitions[sit, sit_paw]",
sit[-1],
sit_paw[0])
sit_to_crouch = Clip(graph, guid=name_to_guid("clip[sit_to_crouch]"))
connect_frames(
graph,
"transitions[sit, sit_to_crouch]",
sit[-1],
sit_to_crouch[0])
crouch = Clip(graph, guid=name_to_guid("clip[crouch]"))
connect_frames(
graph,
"transitions[sit_to_crouch, crouch]",
sit_to_crouch[-1],
crouch[0])
crouch_to_sit = Clip(graph, guid=name_to_guid("clip[crouch_to_sit]"))
connect_frames(
graph,
"transitions[crouch_to_sit, sit]",
crouch[-1],
crouch_to_sit[0])
connect_frames(
graph,
"transitions[crouch_to_sit, sit]",
crouch_to_sit[-1],
sit[0])
find_absorbing_frames(graph)
graph.save()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <a id='wrangling'></a>
Step3: Data Cleaning
Step4: Convert cast, genres, director and production_companies columns to array
Step5: <a id='eda'></a>
Step8: Research Question 2
Step9: Research Question 3
Step10: Find common genres in highest grossing movies
Step11: Popularity of highest grossing movies
Step13: Directors of highest grossing movies
Step14: Cast of highest grossing movies
Step15: Production companies of highest grossing movies
Step18: Highest grossing budget
Step19: Research Question 5
|
<ASSISTANT_TASK:>
Python Code:
# import necessary libraries
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import seaborn as sns
# Load TMDb data and print out a few lines. Perform operations to inspect data
# types and look for instances of missing or possibly errant data.
tmdb_movies = pd.read_csv('tmdb-movies.csv')
tmdb_movies.head()
tmdb_movies.describe()
# Pandas read empty string value as nan, make it empty string
tmdb_movies.cast.fillna('', inplace=True)
tmdb_movies.genres.fillna('', inplace=True)
tmdb_movies.director.fillna('', inplace=True)
tmdb_movies.production_companies.fillna('', inplace=True)
def string_to_array(data):
"""Split the data string by the separator `|` and return the result as a list."""
return data.split('|')
tmdb_movies.cast = tmdb_movies.cast.apply(string_to_array)
tmdb_movies.genres = tmdb_movies.genres.apply(string_to_array)
tmdb_movies.director = tmdb_movies.director.apply(string_to_array)
tmdb_movies.production_companies = tmdb_movies.production_companies.apply(string_to_array)
def yearly_growth(mean_revenue):
return mean_revenue - mean_revenue.shift(1).fillna(0)
# Show change in mean revenue over years, considering only movies for which we have revenue data
movies_with_budget = tmdb_movies[tmdb_movies.budget_adj > 0]
movies_with_revenue = movies_with_budget[movies_with_budget.revenue_adj > 0]
revenues_over_years = movies_with_revenue.groupby('release_year').sum()
revenues_over_years.apply(yearly_growth)['revenue'].plot()
revenues_over_years[['budget_adj', 'revenue_adj']].plot()
def log(data):
return np.log(data)
movies_with_revenue[['budget_adj', 'revenue_adj']].apply(log) \
.sort_values(by='budget_adj').set_index('budget_adj')['revenue_adj'].plot(figsize=(20,6))
def popular_movies(movies):
return movies[movies['vote_average']>=7]
def group_by_genre(data):
"""Take genres grouped by release_year and return a dictionary keyed by
release_year, whose values are dictionaries mapping each movie genre to
the frequency of that genre in that year.
"""
genres_by_year = {}
for (year, position), genres in data.items():
for genre in genres:
if year in genres_by_year:
if genre in genres_by_year[year]:
genres_by_year[year][genre] += 1
else:
genres_by_year[year][genre] = 1
else:
genres_by_year[year] = {}
genres_by_year[year][genre] = 1
return genres_by_year
def plot(genres_by_year):
"""Iterate over the grouped genres and plot only years divisible by 5,
to avoid plotting too many graphs.
"""
for year, genres in genres_by_year.items():
if year%5 == 0:
pd.DataFrame(grouped_genres[year], index=[year]).plot(kind='bar', figsize=(20, 6))
# Group movies by genre for each year and try to find the correlations
# of genres over years.
grouped_genres = group_by_genre(tmdb_movies.groupby('release_year').apply(popular_movies).genres)
plot(grouped_genres)
highest_grossing_movies = tmdb_movies[tmdb_movies['revenue_adj'] >= 1000000000]\
.sort_values(by='revenue_adj', ascending=False)
highest_grossing_movies.head()
def count_frequency(data):
frequency_count = {}
for items in data:
for item in items:
if item in frequency_count:
frequency_count[item] += 1
else:
frequency_count[item] = 1
return frequency_count
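A quick, self-contained sanity check of the counting logic (re-implemented here with `dict.get` so the block runs on its own; the data is made up, not TMDb):

```python
def count_frequency(data):
    # Count how often each item occurs across all the inner lists.
    frequency_count = {}
    for items in data:
        for item in items:
            frequency_count[item] = frequency_count.get(item, 0) + 1
    return frequency_count

sample = [['Action', 'Drama'], ['Drama'], ['Action', 'Comedy']]
print(count_frequency(sample))  # {'Action': 2, 'Drama': 2, 'Comedy': 1}
```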
highest_grossing_genres = count_frequency(highest_grossing_movies.genres)
print(highest_grossing_genres)
pd.DataFrame(highest_grossing_genres, index=['Genres']).plot(kind='bar', figsize=(20, 8))
highest_grossing_movies.vote_average.hist()
def list_to_dict(data, label):
"""Build the statistics dict and index list for a DataFrame
from a list of (index, value) pairs.
"""
statistics = {label: []}
index = []
for item in data:
statistics[label].append(item[1])
index.append(item[0])
return statistics, index
import operator
high_grossing_dirs = count_frequency(highest_grossing_movies.director)
revenues, indexes = list_to_dict(sorted(high_grossing_dirs.items(), key=operator.itemgetter(1), reverse=True)[:20], 'revenue')
pd.DataFrame(revenues, index=indexes).plot(kind='bar', figsize=(20, 5))
high_grossing_cast = count_frequency(highest_grossing_movies.cast)
revenues, index = list_to_dict(sorted(high_grossing_cast.items(), key=operator.itemgetter(1), reverse=True)[:30], 'number of movies')
pd.DataFrame(revenues, index=index).plot(kind='bar', figsize=(20, 5))
high_grossing_prod_comps = count_frequency(highest_grossing_movies.production_companies)
revenues, index = list_to_dict(sorted(high_grossing_prod_comps.items(), key=operator.itemgetter(1), reverse=True)[:30]\
, 'number of movies')
pd.DataFrame(revenues, index=index).plot(kind='bar', figsize=(20, 5))
def grossing(movies, by):
"""Return the movies' revenues grouped by the key passed in the `by` argument."""
revenues = {}
for id, movie in movies.iterrows():
for key in movie[by]:
if key in revenues:
revenues[key].append(movie.revenue_adj)
else:
revenues[key] = [movie.revenue_adj]
return revenues
def gross_revenue(data):
"""This function computes the sum of the values of the dictionary and
returns a new dictionary with the same keys but cumulative values."""
gross = {}
for key, revenues in data.items():
gross[key] = np.sum(revenues)
return gross
gross_by_dirs = grossing(movies=movies_with_revenue, by='director')
director_gross_revenue = gross_revenue(gross_by_dirs)
top_15_directors = sorted(director_gross_revenue.items(), key=operator.itemgetter(1), reverse=True)[:15]
revenues, index = list_to_dict(top_15_directors, 'director')
pd.DataFrame(data=revenues, index=index).plot(kind='bar', figsize=(15, 9))
gross_by_actors = grossing(movies=tmdb_movies, by='cast')
actors_gross_revenue = gross_revenue(gross_by_actors)
top_15_actors = sorted(actors_gross_revenue.items(), key=operator.itemgetter(1), reverse=True)[:15]
revenues, indexes = list_to_dict(top_15_actors, 'actors')
pd.DataFrame(data=revenues, index=indexes).plot(kind='bar', figsize=(15, 9))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Have a look at the downloaded images
Step2: The same dataset saved as an HDF5 file can be downloaded
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from scipy import ndimage as nd
from matplotlib import pyplot as plt
!wget https://github.com/jeanpat/DeepFISH/blob/master/dataset/Cleaned_FullRes_2164_overlapping_pairs.npz?raw=true
!mv Cleaned_FullRes_2164_overlapping_pairs.npz?raw=true Clean2164.npz
# There's a trick to load and uncompress a numpy .npz array
# https://stackoverflow.com/questions/18231135/load-compressed-data-npz-from-file-using-numpy-load/44693995
#
dataset = np.load('Clean2164.npz')
data = dataset.f.arr_0
data.shape
N=203
plt.figure(figsize=(10,8))
plt.subplot(121)
plt.imshow(data[N,:,:,0], cmap=plt.cm.gray)
plt.subplot(122)
plt.imshow(data[N,:,:,1], cmap=plt.cm.flag_r)
!wget https://github.com/jeanpat/DeepFISH/blob/master/dataset/Cleaned_FullRes_2164_overlapping_pairs.h5?raw=true
!mv Cleaned_FullRes_2164_overlapping_pairs.h5?raw=true Clean2164.h5
import h5py
filename = './Clean2164.h5'
h5f = h5py.File(filename,'r')
pairs = h5f['chroms_data'][:]
h5f.close()
print('dataset is a numpy array of shape:', pairs.shape)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Notebook Provenance
Step2: Local Path Definitions
Step3: PyBEL Resources
Step4: Data
Step5: Error Analysis
Step6: The types of errors in a graph and their frequencies can be calculated using pbt.summary.count_error_types.
Step7: A common type of error is to use names that aren't contained within a namespace. These are thrown with pybel.parser.parse_exceptions.MissingNamespaceNameWarning. The PyBEL Tools function pbt.summary.calculate_incorrect_name_dict creates a dictionary that maps each namespace to the incorrect names used with it and their frequencies.
Step8: Using pbt.utils.count_dict_values, the number of unique incorrect names for each namespace is extracted and plotted.
Step9: Another common error is to write identifiers without a namespace, or for short, a naked name. The function pbt.summary.count_naked_names returns a counter of how many times each naked name appeared.
Step10: The number of unique naked names can be directly calculated with len() on the resulting counter from pbt.summary.count_naked_names.
Step11: The 25 most common naked names are output below.
Step12: Overall, identical errors made multiple times are grouped together to identify the most frequent errors.
Step13: Error Analysis by Annotation
Step14: The list of all errors for each subgraph can be calculated with pbt.summary.calculate_error_by_annotation.
Step15: These data are aggregated to a count of the number of items in each list with pbt.utils.count_dict_values. The top 30 most error-prone subgraphs are shown below.
Step16: Finally, the error to size ratio is calculated for each subgraph below. The 25 subgraphs with the highest error to size ratio are shown.
Step17: The overall distribution of subgraph sizes and error counts is shown below. It indicates that they trend with a positive correlation, but there are clear outliers showing careful curation for large subgraphs, and sloppy curation for smaller ones.
|
<ASSISTANT_TASK:>
Python Code:
import logging
import os
import re
import time
from collections import Counter, defaultdict
from operator import itemgetter
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
from fuzzywuzzy import process, fuzz
from matplotlib_venn import venn2
import pybel
from pybel.constants import PYBEL_DATA_DIR
from pybel.manager.cache import CacheManager
from pybel.parser import MetadataParser
import pybel_tools as pbt
from pybel_tools.utils import barh, barv
%config InlineBackend.figure_format = 'svg'
%matplotlib inline
logging.getLogger('pybel.cache').setLevel(logging.CRITICAL)
time.asctime()
pybel.__version__
pbt.__version__
bms_base = os.environ['BMS_BASE']
pybel_resources_base = os.environ['PYBEL_RESOURCES_BASE']
pickle_path = os.path.join(bms_base, 'aetionomy', 'alzheimers', 'alzheimers.gpickle')
graph = pybel.from_pickle(pickle_path)
graph.version
len(graph.warnings)
error_counter = pbt.summary.count_error_types(graph)
barh(error_counter, plt)
incorrect_name_dict = pbt.summary.calculate_incorrect_name_dict(graph)
barh(pbt.utils.count_dict_values(incorrect_name_dict), plt)
naked_names = pbt.summary.count_naked_names(graph)
len(naked_names)
naked_names.most_common(25)
error_groups = pbt.summary.group_errors(graph)
error_group_counts = Counter({k:len(v) for k,v in error_groups.items()})
error_group_counts.most_common(24)
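The grouping and counting done with pbt.summary.group_errors above follows a standard pattern; a minimal sketch with the standard library (the warning names and line numbers here are dummy stand-ins, not real PyBEL warnings):

```python
from collections import defaultdict, Counter

# Hypothetical (warning name, line number) pairs standing in for graph.warnings
warnings = [("MissingNamespaceNameWarning", 10),
            ("NakedNameWarning", 12),
            ("MissingNamespaceNameWarning", 30)]

groups = defaultdict(list)
for name, line in warnings:
    groups[name].append(line)

counts = Counter({name: len(lines) for name, lines in groups.items()})
print(counts.most_common(1))  # [('MissingNamespaceNameWarning', 2)]
```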
size_by_subgraph = pbt.summary.count_annotation_values(graph, 'Subgraph')
plt.figure(figsize=(10, 3))
barv(dict(size_by_subgraph.most_common(30)), plt)
plt.yscale('log')
plt.title('Top 30 Subgraph Sizes')
plt.show()
error_by_subgraph = pbt.summary.calculate_error_by_annotation(graph, 'Subgraph')
error_by_subgraph_count = pbt.utils.count_dict_values(error_by_subgraph)
plt.figure(figsize=(10, 3))
barv(dict(error_by_subgraph_count.most_common(30)), plt)
plt.yscale('log')
plt.ylabel('Errors')
plt.title('Top 30 Most Error-Prone Subgraphs')
plt.show()
subgraphs = sorted(size_by_subgraph)
df_data = [(size_by_subgraph[k], error_by_subgraph_count[k], error_by_subgraph_count[k] / size_by_subgraph[k]) for k in subgraphs]
df = pd.DataFrame(df_data, index=subgraphs, columns=['Size', 'Errors', 'E/S Ratio'])
df.to_csv('~/Desktop/errors.tsv', sep='\t')
df.sort_values('E/S Ratio', ascending=False).head(25)
sns.lmplot('Size', 'Errors', data=df)
plt.title('BEL Errors as a function of Subgraph Size')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Image Classification
Step2: Explore the Data
Step5: Implement Preprocess Functions
Step8: One-hot encode
Step10: Randomize Data
Step12: Check Point
Step17: Build the network
Step20: Convolution and Max Pooling Layer
Step23: Flatten Layer
Step26: Fully-Connected Layer
Step29: Output Layer
Step32: Create Convolutional Model
Step35: Train the Neural Network
Step37: Show Stats
Step38: Hyperparameters
Step40: Train on a Single CIFAR-10 Batch
Step42: Fully Train the Model
Step45: Checkpoint
|
<ASSISTANT_TASK:>
Python Code:
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
def normalize(x):
"""Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalized data
"""
# TODO: Implement Function
normalized_x = x / 255.
return normalized_x
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_normalize(normalize)
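As a quick sanity check on the normalization idiom above (dividing 8-bit pixel intensities by 255), this standalone NumPy snippet confirms the result lands in [0, 1]:

```python
import numpy as np

def normalize(x):
    # Scale 8-bit pixel intensities (0-255) into the range [0, 1]
    return x / 255.

pixels = np.array([[0, 127, 255]], dtype=np.float64)
scaled = normalize(pixels)
print(scaled.min(), scaled.max())  # 0.0 1.0
```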
def one_hot_encode(x):
"""One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
"""
# TODO: Implement Function
nb_classes = 10
return np.eye(nb_classes)[x]
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_one_hot_encode(one_hot_encode)
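The `np.eye(nb_classes)[x]` trick used above works because indexing the identity matrix with a label array selects one identity row per label; a minimal standalone illustration:

```python
import numpy as np

labels = np.array([0, 2, 1])
one_hot = np.eye(3)[labels]  # each label picks the matching identity row
print(one_hot)
# [[1. 0. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]]
```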
"""DON'T MODIFY ANYTHING IN THIS CELL"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
"""DON'T MODIFY ANYTHING IN THIS CELL"""
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
import tensorflow as tf
def neural_net_image_input(image_shape):
"""Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
"""
# TODO: Implement Function
return tf.placeholder(tf.float32, shape=[None, *image_shape], name='x')
def neural_net_label_input(n_classes):
"""Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
"""
# TODO: Implement Function
return tf.placeholder(tf.float32, shape=[None, n_classes], name='y')
def neural_net_keep_prob_input():
"""Return a Tensor for keep probability
: return: Tensor for keep probability.
"""
# TODO: Implement Function
return tf.placeholder(tf.float32, name='keep_prob')
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
"""Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernel size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernel size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
"""
# TODO: Implement Function
weights = tf.Variable(tf.truncated_normal([*conv_ksize, x_tensor.get_shape().as_list()[3], conv_num_outputs],
stddev = 0.1))
bias = tf.Variable(tf.zeros(conv_num_outputs))
padding = 'SAME'
convex_layer = tf.nn.bias_add(tf.nn.conv2d(x_tensor, weights, [1, *conv_strides, 1], padding), bias)
convex_layer = tf.nn.relu(convex_layer)
max_pool_layer = tf.nn.max_pool(convex_layer, [1, *pool_ksize, 1], [1, *pool_strides, 1], padding)
return max_pool_layer
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_con_pool(conv2d_maxpool)
def flatten(x_tensor):
"""Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
"""
# TODO: Implement Function
shape = x_tensor.get_shape().as_list()
dim = np.prod(shape[1:])
flatten_x_tensor = tf.reshape(x_tensor, [-1, dim])
return flatten_x_tensor
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_flatten(flatten)
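The flattened size computed with `np.prod(shape[1:])` above is just the product of the non-batch dimensions; the same logic can be checked with plain NumPy:

```python
import numpy as np

batch = np.zeros((4, 32, 32, 3))          # (batch, height, width, channels)
flat_dim = int(np.prod(batch.shape[1:]))  # 32 * 32 * 3 = 3072
flattened = batch.reshape(-1, flat_dim)
print(flattened.shape)  # (4, 3072)
```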
def fully_conn(x_tensor, num_outputs):
"""Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of outputs that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
"""
# TODO: Implement Function
input_dim = x_tensor.get_shape().as_list()[1]
weight = tf.Variable(tf.truncated_normal([input_dim, num_outputs], stddev = 0.1))
bias = tf.Variable(tf.zeros(num_outputs))
fully_conn_layer = tf.add(tf.matmul(x_tensor, weight), bias)
fully_conn_layer = tf.nn.relu(fully_conn_layer)
return fully_conn_layer
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_fully_conn(fully_conn)
def output(x_tensor, num_outputs):
"""Apply an output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of outputs that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
"""
# TODO: Implement Function
input_dim = x_tensor.get_shape().as_list()[1]
weight = tf.Variable(tf.truncated_normal([input_dim, num_outputs], stddev = 0.1))
bias = tf.Variable(tf.zeros(num_outputs))
output_layer = tf.add(tf.matmul(x_tensor, weight), bias)
return output_layer
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_output(output)
def conv_net(x, keep_prob):
"""Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that holds dropout keep probability.
: return: Tensor that represents logits
"""
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
conv_ksize = (5, 5)
conv_stride = (1, 1)
pool_ksize = (2, 2)
pool_strides = (2, 2)
conv2d_maxpool_layer = conv2d_maxpool(x, 64, conv_ksize, conv_stride, pool_ksize, pool_strides)
conv2d_maxpool_layer = conv2d_maxpool(conv2d_maxpool_layer, 128,
conv_ksize, conv_stride, pool_ksize, pool_strides)
conv2d_maxpool_layer = conv2d_maxpool(conv2d_maxpool_layer, 256,
conv_ksize, conv_stride, pool_ksize, pool_strides)
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
flatten_layer = flatten(conv2d_maxpool_layer)
# flatten_layer = tf.nn.dropout(flatten_layer, keep_prob)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
fully_conn_layer = fully_conn(flatten_layer, 1024)
fully_conn_layer = tf.nn.dropout(fully_conn_layer, keep_prob)
# fully_conn_layer = fully_conn(flatten_layer, 128)
# fully_conn_layer = tf.nn.dropout(fully_conn_layer, keep_prob)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
output_layer = output(fully_conn_layer, 10)
# TODO: return output
return output_layer
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
"""Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
"""
# TODO: Implement Function
feed_dict = {x: feature_batch, y: label_batch, keep_prob: keep_probability}
session.run(optimizer, feed_dict)
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_train_nn(train_neural_network)
def print_stats(session, feature_batch, label_batch, cost, accuracy):
"""Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
"""
# TODO: Implement Function
loss = session.run(cost, feed_dict = {x: feature_batch,
y: label_batch,
keep_prob: 1.0})
valid_acc = session.run(accuracy, feed_dict = {x: valid_features,
y: valid_labels,
keep_prob: 1.0})
print('Loss: {:>10.4f} Validation Accuracy: {:.6f}'.format(loss, valid_acc))
# TODO: Tune Parameters
epochs = 10
batch_size = 128
keep_probability = 0.75
"""DON'T MODIFY ANYTHING IN THIS CELL"""
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
"""DON'T MODIFY ANYTHING IN THIS CELL"""
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
"""DON'T MODIFY ANYTHING IN THIS CELL"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
"""Test the saved model against the test dataset"""
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for train_feature_batch, train_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: train_feature_batch, loaded_y: train_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create the core, menus and pipeline tree
Step2: Create a new project and tree
Step3: Set the device type
Step4: Initiate the pipeline
Step5: Discover available modules
Step6: Activate some modules
Step7: Activate the Economics and Reliability themes
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from IPython.display import display, HTML
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (14.0, 8.0)
import numpy as np
from datetime import datetime
from dtocean_core import start_logging
from dtocean_core.core import Core
from dtocean_core.menu import DataMenu, ModuleMenu, ProjectMenu, ThemeMenu
from dtocean_core.pipeline import Tree, _get_connector
from dtocean_core.extensions import StrategyManager
# Bring up the logger
start_logging()
def html_list(x):
message = "<ul>"
for name in x:
message += "<li>{}</li>".format(name)
message += "</ul>"
return message
def html_dict(x):
message = "<ul>"
for name, status in x.items():
message += "<li>{}: <b>{}</b></li>".format(name, status)
message += "</ul>"
return message
def html_variable(core, project, variable):
value = variable.get_value(core, project)
metadata = variable.get_metadata(core)
name = metadata.title
units = metadata.units
message = "<b>{}:</b> {}".format(name, value)
if units:
message += " ({})".format(units[0])
return message
new_core = Core()
project_menu = ProjectMenu()
module_menu = ModuleMenu()
theme_menu = ThemeMenu()
data_menu = DataMenu()
pipe_tree = Tree()
project_title = "DTOcean"
new_project = project_menu.new_project(new_core, project_title)
options_branch = pipe_tree.get_branch(new_core, new_project, "System Type Selection")
variable_id = "device.system_type"
my_var = options_branch.get_input_variable(new_core, new_project, variable_id)
my_var.set_raw_interface(new_core, "Wave Floating")
my_var.read(new_core, new_project)
project_menu.initiate_pipeline(new_core, new_project)
names = module_menu.get_available(new_core, new_project)
message = html_list(names)
HTML(message)
module_menu.activate(new_core, new_project, 'Hydrodynamics')
module_menu.activate(new_core, new_project, 'Electrical Sub-Systems')
module_menu.activate(new_core, new_project, 'Mooring and Foundations')
names = theme_menu.get_available(new_core, new_project)
message = html_list(names)
HTML(message)
theme_menu.activate(new_core, new_project, "Economics")
# Here we are expecting Hydrodynamics
assert _get_connector(new_project, "modules").get_current_interface_name(new_core, new_project) == "Hydrodynamics"
from aneris.utilities.analysis import get_variable_network, count_atomic_variables
req_inputs, opt_inputs, outputs, req_inter, opt_inter = get_variable_network(new_core.control,
new_project.get_pool(),
new_project.get_simulation(),
"modules")
req_inputs[req_inputs.Type=="Shared"].reset_index()
shared_req_inputs = req_inputs[req_inputs.Type=="Shared"]
len(shared_req_inputs["Identifier"].unique())
count_atomic_variables(shared_req_inputs["Identifier"].unique(),
new_core.data_catalog,
"labels",
["TableData",
"TableDataColumn",
"IndexTable",
"LineTable",
"LineTableColumn",
"TimeTable",
"TimeTableColumn"])
opt_inputs[opt_inputs.Type=="Shared"].reset_index()
shared_opt_inputs = opt_inputs[opt_inputs.Type=="Shared"]
len(shared_opt_inputs["Identifier"].unique())
count_atomic_variables(shared_opt_inputs["Identifier"].unique(),
new_core.data_catalog,
"labels",
["TableData",
"TableDataColumn",
"IndexTable",
"LineTable",
"LineTableColumn",
"TimeTable",
"TimeTableColumn"])
req_inter
len(req_inter["Identifier"].unique())
count_atomic_variables(req_inter["Identifier"].unique(),
new_core.data_catalog,
"labels",
["TableData",
"TableDataColumn",
"IndexTable",
"LineTable",
"LineTableColumn",
"TimeTable",
"TimeTableColumn"])
opt_inter
len(opt_inter["Identifier"].unique())
count_atomic_variables(opt_inter["Identifier"].unique(),
new_core.data_catalog,
"labels",
["TableData",
"TableDataColumn",
"IndexTable",
"LineTable",
"LineTableColumn",
"TimeTable",
"TimeTableColumn"])
hyrdo_req_inputs = req_inputs.loc[req_inputs['Interface'] == 'Hydrodynamics']
len(hyrdo_req_inputs["Identifier"].unique())
count_atomic_variables(hyrdo_req_inputs["Identifier"].unique(),
new_core.data_catalog,
"labels",
["TableData",
"TableDataColumn",
"IndexTable",
"LineTable",
"LineTableColumn",
"TimeTable",
"TimeTableColumn"])
hyrdo_opt_inputs = opt_inputs.loc[opt_inputs['Interface'] == 'Hydrodynamics']
len(hyrdo_opt_inputs["Identifier"].unique())
count_atomic_variables(hyrdo_opt_inputs["Identifier"].unique(),
new_core.data_catalog,
"labels",
["TableData",
"TableDataColumn",
"IndexTable",
"LineTable",
"LineTableColumn",
"TimeTable",
"TimeTableColumn"])
electro_req_inputs = req_inputs.loc[req_inputs['Interface'] == 'Electrical Sub-Systems']
len(electro_req_inputs["Identifier"].unique())
count_atomic_variables(electro_req_inputs["Identifier"].unique(),
new_core.data_catalog,
"labels",
["TableData",
"TableDataColumn",
"IndexTable",
"LineTable",
"LineTableColumn",
"TimeTable",
"TimeTableColumn"])
electro_opt_inputs = opt_inputs.loc[opt_inputs['Interface'] == 'Electrical Sub-Systems']
len(electro_opt_inputs["Identifier"].unique())
count_atomic_variables(electro_opt_inputs["Identifier"].unique(),
new_core.data_catalog,
"labels",
["TableData",
"TableDataColumn",
"IndexTable",
"LineTable",
"LineTableColumn",
"TimeTable",
"TimeTableColumn"])
moorings_req_inputs = req_inputs.loc[req_inputs['Interface'] == 'Mooring and Foundations']
len(moorings_req_inputs["Identifier"].unique())
count_atomic_variables(moorings_req_inputs["Identifier"].unique(),
new_core.data_catalog,
"labels",
["TableData",
"TableDataColumn",
"IndexTable",
"LineTable",
"LineTableColumn",
"TimeTable",
"TimeTableColumn"])
moorings_opt_inputs = opt_inputs.loc[opt_inputs['Interface'] == 'Mooring and Foundations']
len(moorings_opt_inputs["Identifier"].unique())
count_atomic_variables(moorings_opt_inputs["Identifier"].unique(),
new_core.data_catalog,
"labels",
["TableData",
"TableDataColumn",
"IndexTable",
"LineTable",
"LineTableColumn",
"TimeTable",
"TimeTableColumn"])
total_req_inputs = req_inputs.loc[req_inputs['Interface'] != 'Shared']
len(total_req_inputs["Identifier"].unique())
count_atomic_variables(total_req_inputs["Identifier"].unique(),
new_core.data_catalog,
"labels",
["TableData",
"TableDataColumn",
"IndexTable",
"LineTable",
"LineTableColumn",
"TimeTable",
"TimeTableColumn"])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Defining required functions
Step2: Comparing the Gamma and the Four Parameter response function
Step3: Create Pastas model
Step4: The results of the Pastas simulation show that Pastas is able to simulate the synthetic groundwater head. The parameters calculated with Pastas are equal to the parameters used to generate the synthetic groundwater series: Atrue, ntrue, atrue and dtrue.
Step5: The results of the Pastas simulation show that the groundwater head series can be simulated using the Four Parameter response function. The parameters calculated using Pastas only slightly deviate from the parameters Atrue, ntrue, atrue and dtrue defined above. The parameter recharge_b is almost equal to 0 (meaning that the Four Parameter response function is almost equal to the Gamma response function, as can be seen above).
Step6: Create Pastas model
Step7: The results of the Pastas simulation show that the observed head can be simulated using the Hantush response function. The parameters calibrated with Pastas are very close to the true parameters.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
from scipy.special import gammainc, gammaincinv
from scipy.integrate import quad
import pandas as pd
import pastas as ps
%matplotlib inline
rain = ps.read.read_knmi('data_notebook_5/etmgeg_260.txt', variables='RH').series
evap = ps.read.read_knmi('data_notebook_5/etmgeg_260.txt', variables='EV24').series
extraction = pd.read_csv('data_notebook_5/extraction.csv',
index_col=0, parse_dates=True, dayfirst=True)['Extraction']
rain = rain['1980':'1999']
evap = evap['1980':'1999']
extraction = extraction['1980':'2000']
def gamma_tmax(A, n, a, cutoff=0.99):
return gammaincinv(n, cutoff) * a
def gamma_step(A, n, a, cutoff=0.99):
tmax = gamma_tmax(A, n, a, cutoff)
t = np.arange(0, tmax, 1)
s = A * gammainc(n, t / a)
return s
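For n = 1 the regularized lower incomplete gamma function used in gamma_step reduces to 1 − e^(−t/a), i.e. an exponential step response — a handy closed-form check on the formula above:

```python
import numpy as np
from scipy.special import gammainc

t = np.arange(0.0, 10.0, 1.0)
a = 2.0
# P(1, t/a) = 1 - exp(-t/a): the Gamma step with n=1 is an exponential step
assert np.allclose(gammainc(1.0, t / a), 1.0 - np.exp(-t / a))
```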
def gamma_block(A, n, a, cutoff=0.99):
# returns the gamma block response starting at t=0 with intervals of delt = 1
s = gamma_step(A, n, a, cutoff)
return np.append(s[0], s[1:] - s[:-1])
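The block response built above is just the first difference of the step response (the head change caused by one day of unit recharge); cumulatively summing the blocks recovers the step, as this small check with a toy step response shows:

```python
import numpy as np

step = np.array([0.0, 0.5, 0.8, 0.95, 1.0])        # a toy step response
block = np.append(step[0], step[1:] - step[:-1])   # same differencing as gamma_block
assert np.allclose(np.cumsum(block), step)         # summing blocks recovers the step
print(block)
```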
def hantush_func(t, a, b):
return (t ** -1) * np.exp(-(a / t) - (t / b))
def hantush_step(A, a, b, tmax=1000, cutoff=0.99):
t = np.arange(0, tmax)
f = np.zeros(tmax)
for i in range(1,tmax):
f[i] = quad(hantush_func, i-1, i, args=(a, b))[0]
F = np.cumsum(f)
return (A / quad(hantush_func, 0, np.inf, args=(a, b))[0]) * F
def hantush_block(A, a, b, tmax=1000, cutoff=0.99):
s = hantush_step(A, a, b, tmax=tmax, cutoff=cutoff)
return s[1:] - s[:-1]
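hantush_step integrates the Hantush kernel numerically with scipy.integrate.quad; a quick sanity check of that machinery on a kernel with a known analytic integral:

```python
import numpy as np
from scipy.integrate import quad

# The integral of e^(-t) from 0 to infinity is exactly 1
value, abs_err = quad(lambda t: np.exp(-t), 0, np.inf)
print(round(value, 6))  # 1.0
```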
Atrue = 800
ntrue = 1.1
atrue = 200
dtrue = 20
h = gamma_block(Atrue, ntrue, atrue) * 0.001
tmax = gamma_tmax(Atrue, ntrue, atrue)
plt.plot(h)
plt.xlabel('Time (days)')
plt.ylabel('Head response (m) due to 1 mm of rain in day 1')
plt.title('Gamma block response with tmax=' + str(int(tmax)));
step = gamma_block(Atrue, ntrue, atrue)[1:]
lenstep = len(step)
h = dtrue * np.ones(len(rain) + lenstep)
for i in range(len(rain)):
h[i:i + lenstep] += rain[i] * step
head = pd.DataFrame(index=rain.index, data=h[:len(rain)],)
head = head['1990':'1999']
plt.figure(figsize=(12,5))
plt.plot(head,'k.', label='head')
plt.legend(loc=0)
plt.ylabel('Head (m)')
plt.xlabel('Time (years)')
ml = ps.Model(head)
sm = ps.StressModel(rain, ps.Gamma, name='recharge', settings='prec')
ml.add_stressmodel(sm)
ml.solve(noise=False)
ml.plots.results()
ml2 = ps.Model(head)
sm2 = ps.StressModel(rain, ps.FourParam, name='recharge', settings='prec')
ml2.add_stressmodel(sm2)
ml2.solve(noise=False)
ml2.plots.results()
Atrue_hantush = -0.01 # Atrue is negative since a positive extraction results in a drop in groundwater head.
atrue_hantush = 100 # the parameter a is equal to cS in the hantush equation.
rho = 2
btrue_hantush = atrue_hantush * rho ** 2 / 4
dtrue_hantush = 20
h_hantush = hantush_block(Atrue_hantush, atrue_hantush, btrue_hantush)
plt.plot(h_hantush)
plt.xlabel('Time (days)')
plt.ylabel('Head response (m) due to 1 m3 of extraction in day 1')
plt.title('Hantush block response with tmax=' + str(1000));
step_hantush = hantush_block(Atrue_hantush, atrue_hantush, btrue_hantush)[1:]
lenstep = len(step_hantush)
h_hantush = dtrue * np.ones(len(extraction) + lenstep)
for i in range(len(extraction)):
h_hantush[i:i + lenstep] += extraction[i] * step_hantush
head_hantush = pd.DataFrame(index=extraction.index, data=h_hantush[:len(extraction)],)
head_hantush = head_hantush['1990':'1999']
plt.figure(figsize=(12,5))
plt.plot(head_hantush,'k.', label='head')
plt.legend(loc=0)
plt.ylabel('Head (m)')
plt.xlabel('Time (years)')
ml3 = ps.Model(head_hantush)
sm3 = ps.StressModel(extraction, ps.Hantush, name='extraction', settings='well', up=False)
ml3.add_stressmodel(sm3)
ml3.solve(noise=False)
ml3.plots.results()
ml4 = ps.Model(head_hantush)
sm4 = ps.StressModel(extraction, ps.FourParam, name='extraction', settings='well', up=False)
ml4.add_stressmodel(sm4)
ml4.solve(noise=False)
ml4.plots.results()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Lab 2
Step2: Dataframe manipulation
Step3: Normality
Step4: Kurtosis
Step5: Kolmogorov-Smirnov
Step6: Shapiro-Wilk
Step7: Transformations
Step8: Logarithmic
Step9: Centering and scaling (standardization)
Step10: Descriptive statistics
Step11: Lab 3
Step12: Pearson
Step13: Satterthwaite method
|
<ASSISTANT_TASK:>
Python Code:
# jupyter magic
%matplotlib inline
# python scientific stack
import numpy as np
import pandas as pd
import scipy.stats as scs
import statsmodels
import statsmodels.api as sm
import statsmodels.formula.api as smf
import statsmodels.stats as sms
# fileformat
from simpledbf import Dbf5
from sas7bdat import SAS7BDAT
# excel
#df = pd.read_excel('data/labo2/SR_Data.xls')
# DBF (Dbase)
dbf = Dbf5('data/labo2/SR_Data.dbf')
df = dbf.to_dataframe()
# SPSS
# savReaderWriter error with pip install
# SAS
sas = SAS7BDAT('data/labo2/tableau1.sas7bdat')
# show vars
df.columns
# delete var
df = df.drop('Shape_Leng', 1) # 1 = column axis
# df.drop('Shape_Leng', 1, inplace=True) # same as previous, inplace impacts this dataframe instead of the returned one
# rename var
df = df.rename(columns={'POPTOT_FR':'POPTOT'})
# create var
df['km'] = df['Shape_Area'] / 1000000
df['HabKm2'] = df['POPTOT'] / df['km']
# show data head
df.head()
#scs.skew(df)
df.skew()
df.kurt() # or df.kurtosis()
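To make concrete what `.skew()` measures, here is a hand-rolled population skewness (a sketch only: pandas applies a sample-size adjustment, so its numbers differ slightly from this formula):

```python
def skewness(xs):
    # Third standardized moment: 0 for symmetric data,
    # positive when the right tail is longer.
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

# skewness([1, 2, 3]) -> 0.0 (symmetric data)
```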
# scs.kstest(df['HabKm2'], 'norm')
statsmodels.stats.diagnostic.kstest_normal(df['HabKm2'])
scs.shapiro(df['HabKm2'])
df['SqrtDens'] = np.sqrt(df['HabKm2'])
df['SqrtImg'] = np.sqrt(df['IMMREC_PCT'])
# log(0) = error
df['LogDens'] = np.log(df['HabKm2'])
df['LogImg'] = np.log(df['IMMREC_PCT'] + 1)
#df[['INDICE_PAU', 'Dist_Min', 'N_1000', 'Dist_Moy_3']]
zscores = df.iloc[:, 11:15]  # .ix is deprecated; use positional .iloc
# scaling from machine learning
from sklearn.preprocessing import scale
zscores = pd.DataFrame(scale(zscores), index=zscores.index, columns=zscores.columns)
zscores.mean()
zscores.std()
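The same centering and scaling can be written out by hand; a minimal sketch (using the population standard deviation, which is also the default behaviour of sklearn's `scale`) is:

```python
def standardize(xs):
    # Center to mean 0, then divide by the (population) std.
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    return [(x - mean) / var ** 0.5 for x in xs]

# standardize([1, 2, 3]) has mean 0 and population std 1
```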
df.describe()
df.mean()
df.std()
df.min()
df.max()
df.median()
#df.range() : min, max
df.quantile(0.75) # param : 0.25, 0.75... default 0.5
df.corr()
scs.ttest_ind?
df.cov()
#statsmodels.stats.anova.anova_lm
statsmodels.stats.anova.anova_lm?
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Chemistry Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 1.8. Coupling With Chemical Reactivity
Step12: 2. Key Properties --> Software Properties
Step13: 2.2. Code Version
Step14: 2.3. Code Languages
Step15: 3. Key Properties --> Timestep Framework
Step16: 3.2. Split Operator Advection Timestep
Step17: 3.3. Split Operator Physical Timestep
Step18: 3.4. Split Operator Chemistry Timestep
Step19: 3.5. Split Operator Alternate Order
Step20: 3.6. Integrated Timestep
Step21: 3.7. Integrated Scheme Type
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
Step23: 4.2. Convection
Step24: 4.3. Precipitation
Step25: 4.4. Emissions
Step26: 4.5. Deposition
Step27: 4.6. Gas Phase Chemistry
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Step30: 4.9. Photo Chemistry
Step31: 4.10. Aerosols
Step32: 5. Key Properties --> Tuning Applied
Step33: 5.2. Global Mean Metrics Used
Step34: 5.3. Regional Metrics Used
Step35: 5.4. Trend Metrics Used
Step36: 6. Grid
Step37: 6.2. Matches Atmosphere Grid
Step38: 7. Grid --> Resolution
Step39: 7.2. Canonical Horizontal Resolution
Step40: 7.3. Number Of Horizontal Gridpoints
Step41: 7.4. Number Of Vertical Levels
Step42: 7.5. Is Adaptive Grid
Step43: 8. Transport
Step44: 8.2. Use Atmospheric Transport
Step45: 8.3. Transport Details
Step46: 9. Emissions Concentrations
Step47: 10. Emissions Concentrations --> Surface Emissions
Step48: 10.2. Method
Step49: 10.3. Prescribed Climatology Emitted Species
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Step51: 10.5. Interactive Emitted Species
Step52: 10.6. Other Emitted Species
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
Step54: 11.2. Method
Step55: 11.3. Prescribed Climatology Emitted Species
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Step57: 11.5. Interactive Emitted Species
Step58: 11.6. Other Emitted Species
Step59: 12. Emissions Concentrations --> Concentrations
Step60: 12.2. Prescribed Upper Boundary
Step61: 13. Gas Phase Chemistry
Step62: 13.2. Species
Step63: 13.3. Number Of Bimolecular Reactions
Step64: 13.4. Number Of Termolecular Reactions
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Step67: 13.7. Number Of Advected Species
Step68: 13.8. Number Of Steady State Species
Step69: 13.9. Interactive Dry Deposition
Step70: 13.10. Wet Deposition
Step71: 13.11. Wet Oxidation
Step72: 14. Stratospheric Heterogeneous Chemistry
Step73: 14.2. Gas Phase Species
Step74: 14.3. Aerosol Species
Step75: 14.4. Number Of Steady State Species
Step76: 14.5. Sedimentation
Step77: 14.6. Coagulation
Step78: 15. Tropospheric Heterogeneous Chemistry
Step79: 15.2. Gas Phase Species
Step80: 15.3. Aerosol Species
Step81: 15.4. Number Of Steady State Species
Step82: 15.5. Interactive Dry Deposition
Step83: 15.6. Coagulation
Step84: 16. Photo Chemistry
Step85: 16.2. Number Of Reactions
Step86: 17. Photo Chemistry --> Photolysis
Step87: 17.2. Environmental Conditions
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'pcmdi', 'sandbox-2', 'atmoschem')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Events
Step2: Note how the reaction is connected using a "connection string", which (in this case) indicates we connect to the "foo" event type of the object. The connection string allows some powerful mechanics, as we will see later in this tutorial. Here, we prefixed the connection string with "!" to suppress a warning that Flexx would otherwise give, because it does not know about the "foo" event.
Step3: A reaction can also connect to multiple events
Step4: Properties
Step5: Properties can also be set during initialization.
Step6: Properties are readonly. This may seem like a limitation at first, but it helps make apps more predictable, especially as they become larger. Properties can be mutated using actions. In the above example, a setter action was created automatically because we specified setter=True in the definition of the property.
Step7: The action above mutates the foo property. Properties can only be mutated by actions. This ensures that the state of a component (and of the whole app) is consistent during the handling of reactions.
Step8: Implicit reactions
Step9: Labels
Step10: Dynamism
Step11: Updating the count property on any of its children will invoke the callback
Step12: We also connected to the children property, so that the handler is also invoked when the children are added/removed
Step13: Naturally, when the count on new children changes ...
|
<ASSISTANT_TASK:>
Python Code:
%gui asyncio
from flexx import event
class MyObject(event.Component):
@event.reaction('!foo')
def on_foo(self, *events):
print('received the foo event %i times' % len(events))
ob = MyObject()
for i in range(3):
ob.emit('foo', {})
ob.on_foo()
class MyObject(event.Component):
@event.reaction('!foo', '!bar')
def on_foo_or_bar(self, *events):
for ev in events:
print('received the %s event' % ev.type)
ob = MyObject()
ob.emit('foo', {}); ob.emit('foo', {}); ob.emit('bar', {})
class MyObject(event.Component):
foo = event.IntProp(2, settable=True)
@event.reaction('foo')
def on_foo(self, *events):
print('foo changed from', events[0].old_value, 'to', events[-1].new_value)
ob = MyObject()
ob.set_foo(7)
print(ob.foo)
ob = MyObject(foo=12)
class MyObject(event.Component):
foo = event.IntProp(2, settable=True)
@event.action
def increase_foo(self):
self._mutate_foo(self.foo + 1)
@event.reaction('foo')
def on_foo(self, *events):
print('foo changed from', events[0].old_value, 'to', events[-1].new_value)
ob = MyObject()
ob.increase_foo()
class MyObject(event.Component):
@event.emitter
def mouse_down(self, js_event):
''' Event emitted when the mouse is pressed down. '''
return dict(button=js_event['button'])
@event.reaction('mouse_down')
def on_bar(self, *events):
for ev in events:
print('detected mouse_down, button', ev.button)
ob = MyObject()
ob.mouse_down({'button': 1})
ob.mouse_down({'button': 2})
class MyObject(event.Component):
foo = event.IntProp(2, settable=True)
@event.reaction
def on_foo(self):
print('foo changed is now', self.foo)
ob = MyObject()
ob.set_foo(99)
class MyObject(event.Component):
@event.reaction('!foo:bb')
def foo_handler1(self, *events):
print('foo B')
@event.reaction('!foo:cc')
def foo_handler2(self, *events):
print('foo C')
@event.reaction('!foo:aa')
def foo_handler3(self, *events):
print('foo A')
ob = MyObject()
ob.emit('foo', {})
ob.disconnect('foo:bb')
ob.emit('foo', {})
class Root(event.Component):
children = event.TupleProp([], settable=True)
@event.reaction('children', 'children*.count')
def update_total_count(self, *events):
total_count = sum([child.count for child in self.children])
print('total count is', total_count)
class Sub(event.Component):
count = event.IntProp(0, settable=True)
root = Root()
sub1, sub2, sub3 = Sub(count=1), Sub(count=2), Sub(count=3)
root.set_children([sub1, sub2, sub3])
sub1.set_count(100)
root.set_children([sub2, sub3])
sub4 = Sub()
root.set_children([sub3, sub4])
sub4.set_count(10)
sub1.set_count(1000) # no update, sub1 is not part of root's children
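The property/action/reaction triad used above can be mimicked in a few lines of plain Python. This is a toy sketch for illustration only — flexx's real `Component` additionally batches events, supports connection strings, dynamism, and more:

```python
class Observable:
    def __init__(self, foo=0):
        self._foo = foo            # private storage: property is readonly
        self._reactions = []
    @property
    def foo(self):
        return self._foo
    def reaction(self, fn):        # register a callback on changes
        self._reactions.append(fn)
    def set_foo(self, value):      # an "action": the only mutation path
        old, self._foo = self._foo, value
        for fn in self._reactions:
            fn(old, value)

seen = []
ob = Observable(2)
ob.reaction(lambda old, new: seen.append((old, new)))
ob.set_foo(7)
# seen == [(2, 7)] and ob.foo == 7
```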
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Configuring the library
Step2: Loading the first graph
Step3: You can use this graph to debug your implementation of the edge betweenness computation.
Step4: Loading the second graph
Step5: Building an animation of Girvan-Newman
|
<ASSISTANT_TASK:>
Python Code:
import sys
sys.path.append('..')
import socnet as sn
sn.graph_width = 400
sn.graph_height = 225
sn.edge_width = 1
g = sn.load_graph('teste.gml', has_pos=True)
sn.show_graph(g)
def calculate_partial_betweenness(g, s):
    # This function should compute the partial betweenness of each
    # edge and store that value in its `partial_betweenness` attribute.
    # "Partial" betweenness means the flow value when a specific node
    # is the source; in this function, that node is the parameter s.
    pass  # exercise: implement the computation described above
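For reference, a self-contained Brandes/Girvan-Newman-style sketch of this computation on a plain adjacency dict (a hypothetical helper, independent of the socnet graph API used in this lab) could look like:

```python
from collections import deque

def partial_edge_betweenness(adj, s):
    # adj: dict mapping node -> list of neighbours (undirected graph).
    # Returns {frozenset({u, v}): flow} for source s.
    # Phase 1: BFS from s, recording distances, shortest-path counts
    # (sigma) and the order in which nodes are reached.
    dist, sigma, order = {s: 0}, {s: 1}, [s]
    q = deque([s])
    while q:
        n = q.popleft()
        for m in adj[n]:
            if m not in dist:
                dist[m], sigma[m] = dist[n] + 1, 0
                order.append(m)
                q.append(m)
            if dist[m] == dist[n] + 1:
                sigma[m] += sigma[n]
    # Phase 2: walk nodes in reverse BFS order; each node's credit
    # (1 + flow arriving from below) is split over its predecessor
    # edges in proportion to their shortest-path counts.
    credit = {n: 1.0 for n in order}
    flow = {}
    for n in reversed(order):
        for m in adj[n]:
            if dist.get(m) == dist[n] - 1:  # m precedes n on shortest paths
                f = credit[n] * sigma[m] / sigma[n]
                e = frozenset((m, n))
                flow[e] = flow.get(e, 0.0) + f
                credit[m] += f
    return flow
```

On a path graph a-b-c with source a, edge (a, b) carries the two paths a→b and a→c (flow 2) and edge (b, c) carries one (flow 1).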
sn.graph_width = 800
sn.graph_height = 450
g = sn.load_graph('karate.gml', has_pos=True)
sn.show_graph(g)
from random import randrange
from queue import Queue
def snapshot(g, frames):
frame = sn.generate_frame(g)
frames.append(frame)
frames = []
prev_num_components = 1
gc = g.copy()
snapshot(gc, frames)
while g.edges():
    # Identify and remove the edge with the highest betweenness.
for n, m in g.edges():
g.edge[n][m]['betweenness'] = 0
for n in g.nodes():
calculate_partial_betweenness(g, n)
for n, m in g.edges():
g.edge[n][m]['betweenness'] += g.edge[n][m]['partial_betweenness']
for n, m in g.edges():
g.edge[n][m]['betweenness'] /= 2
n, m = max(g.edges(), key=lambda e: g.edge[e[0]][e[1]]['betweenness'])
g.remove_edge(n, m)
gc.edge[n][m]['color'] = (255, 255, 255)
    # Compute the number of components after the removal.
for n in g.nodes():
g.node[n]['label'] = 0
label = 0
q = Queue()
for s in g.nodes():
if g.node[s]['label'] == 0:
label += 1
g.node[s]['label'] = label
q.put(s)
while not q.empty():
n = q.get()
for m in g.neighbors(n):
if g.node[m]['label'] == 0:
g.node[m]['label'] = label
q.put(m)
num_components = label
    # If the number of components increased, identify the components with random colors.
if prev_num_components < num_components:
colors = {}
for label in range(1, num_components + 1):
colors[label] = (randrange(256), randrange(256), randrange(256))
for n in gc.nodes():
gc.node[n]['color'] = colors[g.node[n]['label']]
prev_num_components = num_components
snapshot(gc, frames)
sn.show_animation(frames)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Search Feature Sets
Step2: Get Feature Set by ID
Step3: Search Features
Step4: Note
|
<ASSISTANT_TASK:>
Python Code:
from ga4gh.client import client
c = client.HttpClient("http://1kgenomes.ga4gh.org")
#Obtain dataSet id REF: -> `1kg_metadata_service`
dataset = c.search_datasets().next()
for feature_set in c.search_feature_sets(dataset_id=dataset.id):
print feature_set
if feature_set.name == "gencode_v24lift37":
gencode = feature_set
feature_set = c.get_feature_set(feature_set_id=gencode.id)
print feature_set
counter = 0
for features in c.search_features(feature_set_id=feature_set.id):
if counter > 3:
break
counter += 1
    print "Id: {},".format(features.id)
    print " Name: {},".format(features.name)
    print " Gene Symbol: {},".format(features.gene_symbol)
    print " Parent Id: {},".format(features.parent_id)
    if features.child_ids:
        for i in features.child_ids:
            print " Child Ids: {}".format(i)
    print " Feature Set Id: {},".format(features.feature_set_id)
    print " Reference Name: {},".format(features.reference_name)
    print " Start: {},\tEnd: {},".format(features.start, features.end)
    print " Strand: {},".format(features.strand)
    print " Feature Type Id: {},".format(features.feature_type.id)
    print " Feature Type Term: {},".format(features.feature_type.term)
    print " Feature Type Source Name: {},".format(features.feature_type.source_name)
    print " Feature Type Source Version: {}\n".format(features.feature_type.source_version)
for feature in c.search_features(feature_set_id=feature_set.id, reference_name="chr17", start=42000000, end=42001000):
print feature.name, feature.start, feature.end
feature = c.get_feature(feature_id=features.id)
print "Id: {},".format(feature.id)
print " Name: {},".format(feature.name)
print " Gene Symbol: {},".format(feature.gene_symbol)
print " Parent Id: {},".format(feature.parent_id)
if feature.child_ids:
    for i in feature.child_ids:
        print " Child Ids: {}".format(i)
print " Feature Set Id: {},".format(feature.feature_set_id)
print " Reference Name: {},".format(feature.reference_name)
print " Start: {},\tEnd: {},".format(feature.start, feature.end)
print " Strand: {},".format(feature.strand)
print " Feature Type Id: {},".format(feature.feature_type.id)
print " Feature Type Term: {},".format(feature.feature_type.term)
print " Feature Type Source Name: {},".format(feature.feature_type.source_name)
print " Feature Type Source Version: {}\n".format(feature.feature_type.source_version)
for vals in feature.attributes.vals:
    print "{}: {}".format(vals, feature.attributes.vals[vals].values[0].string_value)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Combining the two dataframes into a single dataframe called 'combined.'
Step2: None of the above should be changed
Step3: So let's see how big of a difference there is between TFQMR and P_TFQMR
Step4: Shows how much difference there is between the two solvers
|
<ASSISTANT_TASK:>
Python Code:
timings.columns= ['np', 'matrix', 'solver', 'prec', 'status', 'time', 'iters', 'resid']
properties.columns = ['rows', 'cols', 'min_nnz_row', 'row_var', 'col_var', 'diag_var', 'nnz', 'frob_norm', 'symm_frob_norm', 'antisymm_frob_norm', 'one_norm', 'inf_norm', 'symm_inf_norm', 'antisymm_inf_norm', 'max_nnz_row', 'trace', 'abs_trace', 'min_nnz_row', 'avg_nnz_row', 'dummy_rows', 'dummy_rows_kind', 'num_value_symm_1', 'nnz_pattern_symm_1', 'num_value_symm_2', 'nnz_pattern_symm_2', 'row_diag_dom', 'col_diag_dom', 'diag_avg', 'diag_sign', 'diag_nnz', 'lower_bw', 'upper_bw', 'row_log_val_spread', 'col_log_val_spread', 'symm', 'matrix']
combined = pd.merge(properties, timings)
combined.info()
combined = combined.dropna()
combined['solver_num'] = combined.solver.map({'FIXED_POINT': 0, 'BICGSTAB': 1, 'MINRES': 2, 'PSEUDOBLOCK_CG': 3, 'PSEUDOBLOCK_STOCHASTIC_CG': 4, 'PSEUDOBLOCK_TFQMR': 5, 'TFQMR': 6, 'LSQR': 7, 'PSEUDOBLOCK_GMRES': 8}).astype(int)
combined['prec_num'] = combined.prec.map({'ILUT': 0, 'RILUK': 1, 'RELAXATION': 2, 'CHEBYSHEV': 3, 'NONE': 4}).astype(int)
combined['status_num'] = combined.status.map({'error': -1, 'unconverged': 0, 'converged': 1}).astype(int)
good = combined[combined.status == 'converged']
good.groupby('solver').size()
values = {"TFQMR", "PSEUDOBLOCK_TFQMR"}
tfqmr = good.loc[good.solver.isin(values)]
tfqmr.solver.unique()
tfqmr = tfqmr.drop(tfqmr.columns[:36], axis=1)
tfqmr = tfqmr.drop(tfqmr.columns[-3:], axis=1)
tfqmr.info()
tfqmr = tfqmr.groupby('solver')
tfqmr.describe()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now we'll pivot this table to construct a nice matrix of users and the movies they rated. NaN indicates missing data, or movies that a given user did not watch
Step2: Now the magic happens - pandas has a built-in corr() method that will compute a correlation score for every column pair in the matrix! This gives us a correlation score between every pair of movies (where at least one user rated both movies - otherwise NaN's will show up.) That's amazing!
Step3: However, we want to avoid spurious results that happened from just a handful of users that happened to rate the same pair of movies. In order to restrict our results to movies that lots of people rated together - and also give us more popular results that are more easily recognizable - we'll use the min_periods argument to throw out results where fewer than 100 users rated a given movie pair
Step4: Now let's produce some movie recommendations for user ID 0, who I manually added to the data set as a test case. This guy really likes Star Wars and The Empire Strikes Back, but hated Gone with the Wind. I'll extract his ratings from the userRatings DataFrame, and use dropna() to get rid of missing data (leaving me only with a Series of the movies I actually rated)
Step5: Now, let's go through each movie I rated one at a time, and build up a list of possible recommendations based on the movies similar to the ones I rated.
Step6: This is starting to look like something useful! Note that some of the same movies came up more than once, because they were similar to more than one movie I rated. We'll use groupby() to add together the scores from movies that show up more than once, so they'll count more
Step7: The last thing we have to do is filter out movies I've already rated, as recommending a movie I've already watched isn't helpful
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
r_cols = ['user_id', 'movie_id', 'rating']
ratings = pd.read_csv('e:/sundog-consult/udemy/datascience/ml-100k/u.data', sep='\t', names=r_cols, usecols=range(3), encoding="ISO-8859-1")
m_cols = ['movie_id', 'title']
movies = pd.read_csv('e:/sundog-consult/udemy/datascience/ml-100k/u.item', sep='|', names=m_cols, usecols=range(2), encoding="ISO-8859-1")
ratings = pd.merge(movies, ratings)
ratings.head()
userRatings = ratings.pivot_table(index=['user_id'],columns=['title'],values='rating')
userRatings.head()
corrMatrix = userRatings.corr()
corrMatrix.head()
corrMatrix = userRatings.corr(method='pearson', min_periods=100)
corrMatrix.head()
myRatings = userRatings.loc[0].dropna()
myRatings
simCandidates = pd.Series()
for i in range(0, len(myRatings.index)):
print ("Adding sims for " + myRatings.index[i] + "...")
# Retrieve similar movies to this one that I rated
sims = corrMatrix[myRatings.index[i]].dropna()
# Now scale its similarity by how well I rated this movie
sims = sims.map(lambda x: x * myRatings[i])
# Add the score to the list of similarity candidates
simCandidates = simCandidates.append(sims)
#Glance at our results so far:
print ("sorting...")
simCandidates.sort_values(inplace = True, ascending = False)
print (simCandidates.head(10))
simCandidates = simCandidates.groupby(simCandidates.index).sum()
simCandidates.sort_values(inplace = True, ascending = False)
simCandidates.head(10)
filteredSims = simCandidates.drop(myRatings.index)
filteredSims.head(10)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data Input
Step2: Data Selection
Step3: Visualization
|
<ASSISTANT_TASK:>
Python Code:
%less ../datasets/vmstat_loadtest.log
from ozapfdis.linux import vmstat
stats = vmstat.read_logfile("../datasets/vmstat_loadtest.log")
stats.head()
cpu_data = stats.iloc[:, -5:]
cpu_data.head()
%matplotlib inline
cpu_data.plot.area();
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load LendingClub dataset
Step2: Let's quickly explore what the dataset looks like. First, let's print out the column names to see what features we have in this dataset. We have done this in previous assignments, so we won't belabor this here.
Step3: Modifying the target column
Step4: Selecting features
Step5: Skipping observations with missing values
Step6: Fortunately, there are not too many missing values. We are retaining most of the data.
Step7: Checkpoint
Step8: Gradient boosted tree classifier
Step9: Making predictions
Step10: Predicting on sample validation data
Step11: Quiz question
Step12: Quiz Question
Step13: Calculate the number of false positives made by the model.
Step14: Quiz question
Step15: Comparison with decision trees
Step16: Reminder
Step17: Checkpoint
Step18: Now, we are ready to go to Step 3. You can now use the prediction column to sort the loans in validation_data (in descending order) by prediction probability. Find the top 5 loans with the highest probability of being predicted as a safe loan.
Step19: Quiz question
Step20: Checkpoint
Step21: Now, train 4 models with max_iterations to be
Step22: Compare accuracy on entire validation set
Step23: Quiz Question
Step24: Plot the training and validation error vs. number of trees
Step25: In order to plot the classification errors (on the train_data and validation_data) versus the number of trees, we will need lists of these accuracies, which we get by applying the method .evaluate.
Steps to follow
Step26: Now, let us run Step 2. Save the training errors into a list called training_errors
Step27: Now, onto Step 3. Write code to compute the classification error on the validation_data for models model_10, model_50, model_100, model_200, and model_500.
Step28: Now, let us run Step 4. Save the training errors into a list called validation_errors
Step29: Now, we will plot the training_errors and validation_errors versus the number of trees. We will compare the 10, 50, 100, 200, and 500 tree models. We provide some plotting code to visualize the plots within this notebook.
|
<ASSISTANT_TASK:>
Python Code:
import graphlab
loans = graphlab.SFrame('lending-club-data.gl/')
loans.column_names()
loans['safe_loans'] = loans['bad_loans'].apply(lambda x : +1 if x==0 else -1)
loans = loans.remove_column('bad_loans')
target = 'safe_loans'
features = ['grade', # grade of the loan (categorical)
'sub_grade_num', # sub-grade of the loan as a number from 0 to 1
'short_emp', # one year or less of employment
'emp_length_num', # number of years of employment
'home_ownership', # home_ownership status: own, mortgage or rent
'dti', # debt to income ratio
'purpose', # the purpose of the loan
'payment_inc_ratio', # ratio of the monthly payment to income
'delinq_2yrs', # number of delinquincies
'delinq_2yrs_zero', # no delinquincies in last 2 years
'inq_last_6mths', # number of creditor inquiries in last 6 months
'last_delinq_none', # has borrower had a delinquincy
'last_major_derog_none', # has borrower had 90 day or worse rating
'open_acc', # number of open credit accounts
'pub_rec', # number of derogatory public records
'pub_rec_zero', # no derogatory public records
'revol_util', # percent of available credit being used
'total_rec_late_fee', # total late fees received to day
'int_rate', # interest rate of the loan
'total_rec_int', # interest received to date
'annual_inc', # annual income of borrower
'funded_amnt', # amount committed to the loan
'funded_amnt_inv', # amount committed by investors for the loan
'installment', # monthly payment owed by the borrower
]
loans, loans_with_na = loans[[target] + features].dropna_split()
# Count the number of rows with missing data
num_rows_with_na = loans_with_na.num_rows()
num_rows = loans.num_rows()
print 'Dropping %s observations; keeping %s ' % (num_rows_with_na, num_rows)
safe_loans_raw = loans[loans[target] == 1]
risky_loans_raw = loans[loans[target] == -1]
# Undersample the safe loans.
percentage = len(risky_loans_raw)/float(len(safe_loans_raw))
safe_loans = safe_loans_raw.sample(percentage, seed = 1)
risky_loans = risky_loans_raw
loans_data = risky_loans.append(safe_loans)
print "Percentage of safe loans :", len(safe_loans) / float(len(loans_data))
print "Percentage of risky loans :", len(risky_loans) / float(len(loans_data))
print "Total number of loans in our new dataset :", len(loans_data)
train_data, validation_data = loans_data.random_split(.8, seed=1)
model_5 = graphlab.boosted_trees_classifier.create(train_data, validation_set=None,
target = target, features = features, max_iterations = 5)
# Select all positive and negative examples.
validation_safe_loans = validation_data[validation_data[target] == 1]
validation_risky_loans = validation_data[validation_data[target] == -1]
# Select 2 examples from the validation set for positive & negative loans
sample_validation_data_risky = validation_risky_loans[0:2]
sample_validation_data_safe = validation_safe_loans[0:2]
# Append the 4 examples into a single dataset
sample_validation_data = sample_validation_data_safe.append(sample_validation_data_risky)
sample_validation_data
model_5.predict(dataset=sample_validation_data)
model_5.predict(dataset=sample_validation_data, output_type='probability')
e = model_5.evaluate(validation_data)
e
confusion_matrix = e['confusion_matrix']
confusion_matrix[(confusion_matrix['target_label']==-1) & (confusion_matrix['predicted_label']==1)]
confusion_matrix[(confusion_matrix['target_label']==1) & (confusion_matrix['predicted_label']==-1)]
cost = 10000 * 1463 + 20000 * 1618
cost
validation_data['predictions'] = model_5.predict(dataset=validation_data, output_type='probability')
validation_data.sort('predictions', ascending=False)
print "Your loans : %s\n" % validation_data['predictions'].head(4)
print "Expected answer : %s" % [0.4492515948736132, 0.6119100103640573,
0.3835981314851436, 0.3693306705994325]
s = validation_data.sort('predictions', ascending=True)
print "Your loans : %s\n" % s['grade'].head(5)
model_10 = graphlab.boosted_trees_classifier.create(train_data, validation_set=None,
target = target, features = features, max_iterations = 10, verbose=False)
model_50 = graphlab.boosted_trees_classifier.create(train_data, validation_set=None,
target = target, features = features, max_iterations = 50, verbose=False)
model_100 = graphlab.boosted_trees_classifier.create(train_data, validation_set=None,
target = target, features = features, max_iterations = 100, verbose=False)
model_200 = graphlab.boosted_trees_classifier.create(train_data, validation_set=None,
target = target, features = features, max_iterations = 200, verbose=False)
model_500 = graphlab.boosted_trees_classifier.create(train_data, validation_set=None,
target = target, features = features, max_iterations = 500, verbose=False)
e_50 = model_50.evaluate(validation_data)
e_100 = model_100.evaluate(validation_data)
e_200 = model_200.evaluate(validation_data)
e_500 = model_500.evaluate(validation_data)
print "Model 50 accuracy: %s\n" % e_50['accuracy']
print "Model 100 accuracy: %s\n" % e_100['accuracy']
print "Model 200 accuracy: %s\n" % e_200['accuracy']
print "Model 500 accuracy: %s\n" % e_500['accuracy']
import matplotlib.pyplot as plt
%matplotlib inline
def make_figure(dim, title, xlabel, ylabel, legend):
plt.rcParams['figure.figsize'] = dim
plt.title(title)
plt.xlabel(xlabel)
plt.ylabel(ylabel)
if legend is not None:
plt.legend(loc=legend, prop={'size':15})
plt.rcParams.update({'font.size': 16})
plt.tight_layout()
train_err_10 = 1 - model_10.evaluate(train_data)['accuracy']
train_err_50 = 1 - model_50.evaluate(train_data)['accuracy']
train_err_100 = 1 - model_100.evaluate(train_data)['accuracy']
train_err_200 = 1 - model_200.evaluate(train_data)['accuracy']
train_err_500 = 1 - model_500.evaluate(train_data)['accuracy']
training_errors = [train_err_10, train_err_50, train_err_100,
train_err_200, train_err_500]
validation_err_10 = 1 - model_10.evaluate(validation_data)['accuracy']
validation_err_50 = 1 - model_50.evaluate(validation_data)['accuracy']
validation_err_100 = 1 - model_100.evaluate(validation_data)['accuracy']
validation_err_200 = 1 - model_200.evaluate(validation_data)['accuracy']
validation_err_500 = 1 - model_500.evaluate(validation_data)['accuracy']
validation_errors = [validation_err_10, validation_err_50, validation_err_100,
validation_err_200, validation_err_500]
plt.plot([10, 50, 100, 200, 500], training_errors, linewidth=4.0, label='Training error')
plt.plot([10, 50, 100, 200, 500], validation_errors, linewidth=4.0, label='Validation error')
make_figure(dim=(10,5), title='Error vs number of trees',
xlabel='Number of trees',
ylabel='Classification error',
legend='best')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read in the data - for instance the MovieLens data can be found at
Step2: Set up all of the parameters necessary for the runner
Step3: Pass in the variables into the Hermes Runner and run each step
Step4: View the results
|
<ASSISTANT_TASK:>
Python Code:
#This block of code will set up a spark content and sql context if you are running locally
#If you are on cluster or have deployed spark a different way you don't need this
from pyspark import SparkContext, SparkConf
from pyspark.sql import SQLContext
try:
sc = SparkContext()
except:
sc = SparkContext._active_spark_context
sqlCtx = SQLContext(sc)
sc.addPyFile('hermes/hermes.zip')
from src import hermes_run_script
import pandas as pd
movies = sqlCtx.read.json(
'movielens_20m_movies.json.gz',
)
ratings = sqlCtx.read.json(
'movielens_20m_ratings.json.gz',
)
#We found the best tag set is in MovieLens 20M and it usable for all movielens
tags = sqlCtx.read.json('movielens_20m_tags.json.gz')
#name of the dataset: will be used for to get the correct vectorizer and when saving files
data_name = 'movielens_20m'
#the types of user vectors to assess
#each dataset has different user vectors that can be chosen
user_vector_types = ['ratings', 'pos_ratings', 'ratings_to_interact']
#the types of content vectors to assess
#each dataset has different content vectors that can be chosen
content_vector_types = ['genre','tags']
#the directory where intermediate files will be saved including user vectors, content vectors, and predictions
#this can be HDFS
directory = 'HDFS/movielens/data'
#the directory for the csv results files.
#this should not be HDFS
results_directory = 'movielens/results'
#the collaborative filtering algorithms to run
cf_predictions = ['cf_mllib', 'cf_item', 'cf_user', 'cf_bayes_map', 'cf_bayes_mse', 'cf_bayes_mae', 'cf_random']
#the content based algorithms to run
cb_predictions = ['cb_vect', 'cb_kmeans_100', 'cb_kmeans_1000']
#the number of predictions to give to a user
result_runs = [100, 1000]
#any additional items that are necessary to run the content vectors
#for MovieLens this includes the user tags if you want to run the tag content vector
support_files = {'num_tags':300, 'tag_rdd':tags}
runner = hermes_run_script.hermes_run(ratings, movies, user_vector_types, content_vector_types,\
sqlCtx, sc, data_name, directory, results_directory, cf_predictions, cb_predictions, \
result_runs, num_partitions=30, **support_files)
#run the vectorizers
runner.run_vectorizer()
#run the collaborative filtering algorithms
runner.run_cf_predictions()
#run the content based algorithms
runner.run_cb_predictions()
#get the results for the collaborative filtering predictions
runner.run_cf_results()
#get the results for the content based predictions
runner.run_cb_results()
#consolidate all of the results into a single csv file
runner.consolidate_results()
full_results_path = results_directory + data_name + '_full_results.csv'
results = pd.read_csv(full_results_path, delimiter=',', index_col=0)
#View part or all of the results
results[['user_vector','content_vector','N','alg_type','serendipity', 'cat_coverage', 'rmse']]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Summarize
Step2: Summarize
Step3: Setting values in lists
Step4: Predict what this code does.
Step5: Predict what this code does.
Step6: Summarize
Step7: Miscellaneous List Stuff
Step8: Copying lists
Step9: Predict what this code does.
|
<ASSISTANT_TASK:>
Python Code:
some_list = [10,20,30]
print(some_list[2])
some_list = [10,20,30]
print(some_list[0])
some_list = [10,20,30]
print(some_list[-1])
some_list = [10,20,30,40]
print(some_list[1:3])
some_list = [10,20,30]
print(some_list[:3])
some_list = [0,10,20,30,40,50,60,70]
print(some_list[2:4])
some_list = [0,10,20,30,40,50,60,70]
print(some_list[1:5])
some_list = [10,20,30]
some_list[0] = 50
print(some_list)
some_list = []
for i in range(5):
some_list.append(i)
print(some_list)
some_list = [1,2,3]
some_list.insert(2,5)
print(some_list)
some_list = [10,20,30]
some_list.pop(1)
print(some_list)
some_list = [10,20,30]
some_list.remove(30)
print(some_list)
some_list = []
for i in range(10):
some_list.append(i)
some_list[5] = 423
print(some_list)
# OR, more compact:
some_list = list(range(10))
some_list[5] = 423
print(some_list)
# You can put anything in a list
some_list = ["test",1,1.52323,print]
# You can even put a list in a list
some_list = [[1,2,3],[4,5,6],[7,8,9]] # a list of three lists!
# You can get the length of a list with len(some_list)
some_list = [10,20,30]
print(len(some_list))
some_list = [10,20,30]
another_list = some_list
some_list[0] = 50
print(some_list)
print(another_list)
import copy
some_list = [10,20,30]
another_list = copy.deepcopy(some_list)
some_list[0] = 50
print(some_list)
print(another_list)
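One wrinkle beyond the examples above (an added aside, not part of the original exercise): slicing makes a shallow copy, which still shares any nested lists, while copy.deepcopy makes nested structures fully independent:

```python
import copy

outer = [[1, 2], [3, 4]]
shallow = outer[:]            # new outer list, but same inner lists
deep = copy.deepcopy(outer)   # fully independent copy
outer[0][0] = 99
print(shallow[0][0])          # 99 -- the inner list is shared
print(deep[0][0])             # 1  -- the deep copy is unaffected
```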
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: (Optional) Explore Augmentation
Step2: Do the transformations you chose seem reasonable for the Car or Truck dataset?
Step3: The TensorFlow Flowers dataset consists of photographs of flowers of several species. Below is a sample.
Step4: Now you'll use data augmentation with a custom convnet similar to the one you built in Exercise 5. Since data augmentation effectively increases the size of the dataset, we can increase the capacity of the model in turn without as much risk of overfitting.
Step5: Now we'll train the model. Run the next cell to compile it with a loss and accuracy metric and fit it to the training set.
Step6: 4) Train Model
|
<ASSISTANT_TASK:>
Python Code:
# Setup feedback system
from learntools.core import binder
binder.bind(globals())
from learntools.computer_vision.ex6 import *
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.layers.experimental import preprocessing
# Imports
import os, warnings
import matplotlib.pyplot as plt
from matplotlib import gridspec
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing import image_dataset_from_directory
# Reproducability
def set_seed(seed=31415):
np.random.seed(seed)
tf.random.set_seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
os.environ['TF_DETERMINISTIC_OPS'] = '1'
set_seed()
# Set Matplotlib defaults
plt.rc('figure', autolayout=True)
plt.rc('axes', labelweight='bold', labelsize='large',
titleweight='bold', titlesize=18, titlepad=10)
plt.rc('image', cmap='magma')
warnings.filterwarnings("ignore") # to clean up output cells
# Load training and validation sets
ds_train_ = image_dataset_from_directory(
'../input/car-or-truck/train',
labels='inferred',
label_mode='binary',
image_size=[128, 128],
interpolation='nearest',
batch_size=64,
shuffle=True,
)
ds_valid_ = image_dataset_from_directory(
'../input/car-or-truck/valid',
labels='inferred',
label_mode='binary',
image_size=[128, 128],
interpolation='nearest',
batch_size=64,
shuffle=False,
)
# Data Pipeline
def convert_to_float(image, label):
image = tf.image.convert_image_dtype(image, dtype=tf.float32)
return image, label
AUTOTUNE = tf.data.experimental.AUTOTUNE
ds_train = (
ds_train_
.map(convert_to_float)
.cache()
.prefetch(buffer_size=AUTOTUNE)
)
ds_valid = (
ds_valid_
.map(convert_to_float)
.cache()
.prefetch(buffer_size=AUTOTUNE)
)
# all of the "factor" parameters indicate a percent-change
augment = keras.Sequential([
# preprocessing.RandomContrast(factor=0.5),
preprocessing.RandomFlip(mode='horizontal'), # meaning, left-to-right
# preprocessing.RandomFlip(mode='vertical'), # meaning, top-to-bottom
# preprocessing.RandomWidth(factor=0.15), # horizontal stretch
# preprocessing.RandomRotation(factor=0.20),
# preprocessing.RandomTranslation(height_factor=0.1, width_factor=0.1),
])
ex = next(iter(ds_train.unbatch().map(lambda x, y: x).batch(1)))
plt.figure(figsize=(10,10))
for i in range(16):
image = augment(ex, training=True)
plt.subplot(4, 4, i+1)
plt.imshow(tf.squeeze(image))
plt.axis('off')
plt.show()
# View the solution (Run this code cell to receive credit!)
q_1.check()
# Lines below will give you a hint
#_COMMENT_IF(PROD)_
q_1.solution()
# View the solution (Run this code cell to receive credit!)
q_2.check()
# Lines below will give you a hint
#_COMMENT_IF(PROD)_
q_2.solution()
from tensorflow import keras
from tensorflow.keras import layers
model = keras.Sequential([
layers.InputLayer(input_shape=[128, 128, 3]),
# Data Augmentation
# ____,
# Block One
layers.BatchNormalization(renorm=True),
layers.Conv2D(filters=64, kernel_size=3, activation='relu', padding='same'),
layers.MaxPool2D(),
# Block Two
layers.BatchNormalization(renorm=True),
layers.Conv2D(filters=128, kernel_size=3, activation='relu', padding='same'),
layers.MaxPool2D(),
# Block Three
layers.BatchNormalization(renorm=True),
layers.Conv2D(filters=256, kernel_size=3, activation='relu', padding='same'),
layers.Conv2D(filters=256, kernel_size=3, activation='relu', padding='same'),
layers.MaxPool2D(),
# Head
layers.BatchNormalization(renorm=True),
layers.Flatten(),
layers.Dense(8, activation='relu'),
layers.Dense(1, activation='sigmoid'),
])
# Check your answer
q_3.check()
#%%RM_IF(PROD)%%
# Check number of layers
from tensorflow import keras
from tensorflow.keras import layers
model = keras.Sequential([
layers.InputLayer(input_shape=[128, 128, 3]),
# Data Augmentation
preprocessing.RandomFlip(mode='horizontal'),
preprocessing.RandomRotation(factor=0.10),
# Block One
layers.BatchNormalization(renorm=True),
layers.Conv2D(filters=64, kernel_size=3, activation='relu', padding='same'),
layers.MaxPool2D(),
# Block Two
layers.BatchNormalization(renorm=True),
layers.Conv2D(filters=128, kernel_size=3, activation='relu', padding='same'),
layers.MaxPool2D(),
# Block Three
layers.BatchNormalization(renorm=True),
layers.Conv2D(filters=256, kernel_size=3, activation='relu', padding='same'),
layers.Conv2D(filters=256, kernel_size=3, activation='relu', padding='same'),
layers.MaxPool2D(),
# Head
layers.BatchNormalization(renorm=True),
layers.Flatten(),
layers.Dense(8, activation='relu'),
layers.Dense(1, activation='sigmoid'),
])
q_3.assert_check_failed()
#%%RM_IF(PROD)%%
# Check layer types
from tensorflow import keras
from tensorflow.keras import layers
model = keras.Sequential([
layers.InputLayer(input_shape=[128, 128, 3]),
# Data Augmentation
preprocessing.RandomRotation(factor=0.10),
preprocessing.RandomContrast(factor=0.10),
preprocessing.RandomFlip(mode='horizontal'),
# Block One
layers.BatchNormalization(renorm=True),
layers.Conv2D(filters=64, kernel_size=3, activation='relu', padding='same'),
layers.MaxPool2D(),
# Block Two
layers.BatchNormalization(renorm=True),
layers.Conv2D(filters=128, kernel_size=3, activation='relu', padding='same'),
layers.MaxPool2D(),
# Block Three
layers.BatchNormalization(renorm=True),
layers.Conv2D(filters=256, kernel_size=3, activation='relu', padding='same'),
layers.Conv2D(filters=256, kernel_size=3, activation='relu', padding='same'),
layers.MaxPool2D(),
# Head
layers.BatchNormalization(renorm=True),
layers.Flatten(),
layers.Dense(8, activation='relu'),
layers.Dense(1, activation='sigmoid'),
])
q_3.assert_check_failed()
#%%RM_IF(PROD)%%
from tensorflow import keras
from tensorflow.keras import layers
model = keras.Sequential([
layers.InputLayer(input_shape=[128, 128, 3]),
# Data Augmentation
preprocessing.RandomContrast(factor=0.15),
preprocessing.RandomFlip(mode='vertical'),
preprocessing.RandomRotation(factor=0.15),
# Block One
layers.BatchNormalization(renorm=True),
layers.Conv2D(filters=64, kernel_size=3, activation='relu', padding='same'),
layers.MaxPool2D(),
# Block Two
layers.BatchNormalization(renorm=True),
layers.Conv2D(filters=128, kernel_size=3, activation='relu', padding='same'),
layers.MaxPool2D(),
# Block Three
layers.BatchNormalization(renorm=True),
layers.Conv2D(filters=256, kernel_size=3, activation='relu', padding='same'),
layers.Conv2D(filters=256, kernel_size=3, activation='relu', padding='same'),
layers.MaxPool2D(),
# Head
layers.BatchNormalization(renorm=True),
layers.Flatten(),
layers.Dense(8, activation='relu'),
layers.Dense(1, activation='sigmoid'),
])
q_3.assert_check_failed()
#%%RM_IF(PROD)%%
from tensorflow import keras
from tensorflow.keras import layers
model = keras.Sequential([
layers.InputLayer(input_shape=[128, 128, 3]),
# Data Augmentation
preprocessing.RandomContrast(factor=0.10),
preprocessing.RandomFlip(mode='horizontal'),
preprocessing.RandomRotation(factor=0.10),
# Block One
layers.BatchNormalization(renorm=True),
layers.Conv2D(filters=64, kernel_size=3, activation='relu', padding='same'),
layers.MaxPool2D(),
# Block Two
layers.BatchNormalization(renorm=True),
layers.Conv2D(filters=128, kernel_size=3, activation='relu', padding='same'),
layers.MaxPool2D(),
# Block Three
layers.BatchNormalization(renorm=True),
layers.Conv2D(filters=256, kernel_size=3, activation='relu', padding='same'),
layers.Conv2D(filters=256, kernel_size=3, activation='relu', padding='same'),
layers.MaxPool2D(),
# Head
layers.BatchNormalization(renorm=True),
layers.Flatten(),
layers.Dense(8, activation='relu'),
layers.Dense(1, activation='sigmoid'),
])
q_3.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q_3.hint()
#_COMMENT_IF(PROD)_
q_3.solution()
optimizer = tf.keras.optimizers.Adam(epsilon=0.01)
model.compile(
optimizer=optimizer,
loss='binary_crossentropy',
metrics=['binary_accuracy'],
)
history = model.fit(
ds_train,
validation_data=ds_valid,
epochs=50,
)
# Plot learning curves
import pandas as pd
history_frame = pd.DataFrame(history.history)
history_frame.loc[:, ['loss', 'val_loss']].plot()
history_frame.loc[:, ['binary_accuracy', 'val_binary_accuracy']].plot();
# View the solution (Run this code cell to receive credit!)
q_4.solution()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: ======= Note =======
Step2: Use friends to store the friend list; update=True makes sure the list is up to date. Note that entry 0 of the friend list is yourself.
Step3: Entry 0 of the friends list is yourself, so let's take a look at it. Incidentally, the order of the friend list (apparently) follows the order in which friends were added.
Step4: Create a directory to hold all the friend avatars. Note the use of os.chdir(user) to switch the working directory, which makes saving the images easier later.
Step5: Batch-download the friend avatars, storing each one in friends[i]['img']. Then print(friends[0]) to see whether anything changed (normally you should see the added img field holding the avatar in binary form). Because my home network often fails to connect, this part is written with try...except.
Step6: Check how many friends there are (how many records friends contains), how many avatars were downloaded, and how many files os.listdir(os.getcwd()) sees in the directory.
Step7: Each tile is eachSize=64 pixels on a side, each row holds eachLine=int(sqrt(numImages))+1 avatars, and the final image is eachSize*eachLine pixels on a side.
Step8: Import Image from the Python image-processing library PIL.
Step9: Take a look at what the stitched image looks like. Note that an oversized file is a common occurrence here - remove the comment first!
Step10: All done at this point - don't forget to log out of the web version of WeChat.
|
<ASSISTANT_TASK:>
Python Code:
# My Python version is:
import sys
print(sys.version)
print(sys.version_info)
import itchat
itchat.auto_login()
friends = itchat.get_friends(update=True)[0:]
friends[0]
import os
user = friends[0]["PYQuanPin"][0:]
print(user)
os.mkdir(user)
os.chdir(user)
os.getcwd()
for i in friends:
    try:
        i['img'] = itchat.get_head_img(userName=i["UserName"])
        i['ImgName'] = i["UserName"][1:] + ".jpg"
    except ConnectionError:
        print('get ' + i["UserName"][1:] + ' fail')
    else:
        # only write the file if the download actually succeeded
        fileImage = open(i['ImgName'], 'wb')
        fileImage.write(i['img'])
        fileImage.close()
# printing friends[0] here is not recommended - it's too long now
friendsSum=len(friends)
imgList=os.listdir(os.getcwd())
numImages=len(imgList)
print('I have ',friendsSum,'friend(s), and I got ',numImages,'image(s)')
import math
eachSize=64
eachLine=int(math.sqrt(numImages))+1
print("each tile is", eachSize, "px on a side;", eachLine, "avatars per row; final image side is", eachSize*eachLine, "px")
import PIL.Image as Image
toImage = Image.new('RGBA', (eachSize*eachLine,eachSize*eachLine))  # create a new canvas
x = 0
y = 0
for i in imgList:
    try:
        img = Image.open(i)  # open the avatar file
    except IOError:
        print("Error: file not found or failed to read", i)
    else:
        img = img.resize((eachSize, eachSize), Image.ANTIALIAS)  # shrink the avatar
        toImage.paste(img, (x * eachSize, y * eachSize))  # paste it onto the canvas
        x += 1
        if x == eachLine:
            x = 0
            y += 1
print("image stitching complete")
toImage.show()
os.chdir(os.path.pardir)
os.getcwd()
toImage.save(friends[0]["PYQuanPin"][0:]+".jpg")
itchat.send_image(friends[0]["PYQuanPin"][0:]+".jpg", 'filehelper')
itchat.logout()
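The row/column bookkeeping from the paste loop above can be pulled out into a small helper to check that the layout rule really fits all the avatars on the canvas (grid_positions is an illustrative helper, not part of itchat or PIL):

```python
import math

def grid_positions(num_images, each_size):
    """Return (left, top) pixel offsets for tiles laid out row by row,
    using the same eachLine = int(sqrt(n)) + 1 rule as the notebook."""
    each_line = int(math.sqrt(num_images)) + 1
    positions = []
    for idx in range(num_images):
        x, y = idx % each_line, idx // each_line  # column, then row
        positions.append((x * each_size, y * each_size))
    return each_line, positions

each_line, pos = grid_positions(10, 64)
print(each_line)       # 4 tiles per row for 10 images
print(pos[0], pos[4])  # (0, 0) (0, 64)
```

For 10 avatars the rule gives 4 tiles per row, so the fifth avatar starts the second row at pixel offset (0, 64).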
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Single-fluid criterion
Step2: Two-fluid criterion
Step3: Finding the maximum
Step4: Now the kinematic approximation
Step5: Rafikov writes the following
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import HTML
from IPython.display import Image
from PIL import Image as ImagePIL
%pylab
%matplotlib inline
G = 4.32  # gravitational constant in the units we need
def Qs(epicycl=None, sigma=None, star_density=None):
    '''Compute the dimensionless Toomre parameter for the stellar disk.
    Depends on the stellar density, the velocity dispersion and the epicyclic frequency.
    Computed with pi rather than 3.36, since that is what Rafikov (2001) uses.'''
return epicycl * sigma / (math.pi * G * star_density)
def Qg(epicycl=None, sound_vel=None, gas_density=None):
    '''Compute the dimensionless Toomre parameter for the gas disk.
    Depends on the gas density, the epicyclic frequency and the sound speed in the gas.'''
return epicycl * sound_vel / (math.pi * G * gas_density)
from scipy.special import i0e, i1e
def inverse_hydro_Qeff_from_k(dimlK, Qg=None, Qs=None, s=None):
return 2.*dimlK / Qs / (1 + dimlK**2) + 2*s*dimlK / Qg / (1 + dimlK**2 * s**2)
def inverse_kinem_Qeff_from_k(dimlK, Qg=None, Qs=None, s=None):
return 2. / dimlK / Qs * (1 - i0e(dimlK ** 2)) + 2*s*dimlK / Qg / (1 + dimlK**2 * s**2)
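A handy sanity check on the hydrodynamic relation above: when s = sound_vel/sigma = 1 the two terms share the same k-dependence, so 1/Q_eff = (1/Qs + 1/Qg) * 2k/(1 + k^2), which peaks at k = 1 with value exactly 1/Qs + 1/Qg. The sketch below restates the formula locally (so it runs on its own; the pi*G prefactors are already folded into Qs and Qg and cancel out of the k-dependence) and confirms the peak by brute force:

```python
def inv_hydro_qeff(k, Qs, Qg, s):
    # same expression as inverse_hydro_Qeff_from_k above
    return 2.0 * k / Qs / (1 + k ** 2) + 2.0 * s * k / Qg / (1 + k ** 2 * s ** 2)

Qs, Qg = 2.0, 3.0
ks = [0.01 * i for i in range(1, 1000)]
vals = [inv_hydro_qeff(k, Qs, Qg, 1.0) for k in ks]
k_best = ks[vals.index(max(vals))]
print(round(k_best, 2), round(max(vals), 6))  # peak at k = 1.0 with value 1/Qs + 1/Qg = 0.833333
```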
import math
from sympy import Symbol, solve, re  # re() extracts the real part of a sympy expression
def findInvHydroQeffSympy(Qs, Qg, s):
    '''Solve the equation deriv()=0 to find the maximum of the function in the hydrodynamic approximation.'''
    k = Symbol('k')  # solve over the complex numbers, since roots may come back as 1.03957287978471 + 0.e-20*I
foo = 2./Qs*k/(1+k**2) + 2/Qg*s*k/(1+k**2 * s**2)
foo2 = 2./Qs * (1-k)*(1+k * s**2)**2 + 2/Qg*s*(1-k*s**2)*(1+k)**2
roots = solve(foo2.simplify(), k)
roots = [np.sqrt(float(abs(re(r)))) for r in roots]
_tmp = [foo.evalf(subs={k:r}) for r in roots]
max_val = max(_tmp)
return (roots[_tmp.index(max_val)], max_val)
def findInvHydroQeffBrute(Qs, Qg, s, krange):
    '''Find the maximum of the function in the hydrodynamic approximation by brute force over a grid.'''
_tmp = [inverse_hydro_Qeff_from_k(l, Qg=Qg, Qs=Qs, s=s) for l in krange]
max_val = max(_tmp)
root_for_max = krange[_tmp.index(max_val)]
if abs(root_for_max-krange[-1]) < 0.5:
print 'WARNING! For Qs={} Qg={} s={} root of max near the max of k-range'.format(Qs, Qg, s)
return (root_for_max, max_val)
from scipy.optimize import brentq
def findInvHydroQeffBrentq(Qs, Qg, s, krange):
    '''Solve the equation deriv(9) = 0 to find the maximum of the original function. brentq is run on the
    initial grid wherever the function changes sign between grid points (i.e. the interval brackets a root),
    then the best roots are picked and the one that gives the maximum is found. Only that root is returned.'''
grid = krange
args = [Qs, Qg, s]
signs = [derivTwoFluidHydroQeff(x, *args) for x in grid]
signs = map(lambda x: x / abs(x), signs)
roots = []
for i in range(0, signs.__len__() - 1):
if signs[i] * signs[i + 1] < 0:
roots.append(brentq(lambda x: derivTwoFluidHydroQeff(x, *args), grid[i], grid[i + 1]))
original = [inverse_hydro_Qeff_from_k(l, Qg=Qg, Qs=Qs, s=s) for l in roots]
root_for_max = roots[original.index(max(original))]
if abs(root_for_max-krange[-1]) < 0.5:
print 'WARNING! For Qs={} Qg={} s={} root of max near the max of k-range'.format(Qs, Qg, s)
return (root_for_max, max(original))
def derivTwoFluidHydroQeff(dimlK, Qs, Qg, s):
    '''Derivative with respect to \bar{k} of the left-hand side of (9), used to find the maximum.'''
part1 = (1 - dimlK ** 2) / (1 + dimlK ** 2) ** 2
part3 = (1 - (dimlK * s) ** 2) / (1 + (dimlK * s) ** 2) ** 2
return (2 * part1 / Qs) + (2 * s * part3 / Qg)
def findInvKinemQeffBrute(Qs, Qg, s, krange):
    '''Find the maximum of the function in the kinematic approximation by brute force over a grid.'''
_tmp = [inverse_kinem_Qeff_from_k(l, Qg=Qg, Qs=Qs, s=s) for l in krange]
max_val = max(_tmp)
root_for_max = krange[_tmp.index(max_val)]
if abs(root_for_max-krange[-1]) < 0.5:
print 'WARNING! For Qs={} Qg={} s={} root of max near the max of k-range'.format(Qs, Qg, s)
return (root_for_max, max_val)
def findInvKinemQeffBrentq(Qs, Qg, s, krange):
    '''Solve the equation deriv(13) = 0 to find the maximum of the original function. brentq is run on the
    initial grid wherever the function changes sign between grid points (i.e. the interval brackets a root),
    then the best roots are picked and the one that gives the maximum is found. Only that root is returned.'''
grid = krange
args = [Qs, Qg, s]
signs = [derivTwoFluidKinemQeff(x, *args) for x in grid]
signs = map(lambda x: x / abs(x), signs)
roots = []
for i in range(0, signs.__len__() - 1):
if signs[i] * signs[i + 1] < 0:
roots.append(brentq(lambda x: derivTwoFluidKinemQeff(x, *args), grid[i], grid[i + 1]))
original = [inverse_kinem_Qeff_from_k(l, Qg=Qg, Qs=Qs, s=s) for l in roots]
root_for_max = roots[original.index(max(original))]
if abs(root_for_max-krange[-1]) < 0.5:
print 'WARNING! For Qs={} Qg={} s={} root of max near the max of k-range'.format(Qs, Qg, s)
return (root_for_max, max(original))
def derivTwoFluidKinemQeff(dimlK, Qs, Qg, s):
    '''Derivative with respect to \bar{k} of the left-hand side of (13), used to find the maximum. The correction
    for the asymptotics is done with the built-in exponentially scaled Bessel functions.'''
part1 = (1 - i0e(dimlK ** 2)) / (-dimlK ** 2)
part2 = (2 * dimlK * i0e(dimlK ** 2) - 2 * dimlK * i1e(dimlK ** 2)) / dimlK
part3 = (1 - (dimlK * s) ** 2) / (1 + (dimlK * s) ** 2) ** 2
return 2 * (part1 + part2) / Qs + 2 * s * part3 / Qg
def calc_Qeffs_(Qss=None, Qgs=None, s_params=None, verbose=False):
    '''Compute all the Qeff values in the kinematic approximation at once.'''
invQeff = []
for Qs, Qg, s in zip(Qss, Qgs, s_params):
qeff = findInvKinemQeffBrentq(Qs, Qg, s, np.arange(0.01, 60000., 1.))
if verbose:
print 'Qs = {:2.2f}; Qg = {:2.2f}; s = {:2.2f}; Qeff = {:2.2f}'.format(Qs, Qg, s, 1./qeff[1])
invQeff.append(qeff[1])
return invQeff
def calc_Qeffs(r_g_dens=None, gas_dens=None, epicycl=None,
sound_vel=None, star_density=None, sigma=None, verbose=False):
    '''Compute the model Qeff in the kinematic approximation.'''
Qgs = []
Qss = []
s_params = []
for r, gd, sd in zip(r_g_dens, gas_dens, star_density):
Qgs.append(Qg(epicycl=epicycl(r), sound_vel=sound_vel, gas_density=gd))
Qss.append(Qs(epicycl=epicycl(r), sigma=sigma(r), star_density=sd))
s_params.append(sound_vel/sigma(r))
return calc_Qeffs_(Qss=Qss, Qgs=Qgs, s_params=s_params, verbose=verbose)
def plot_k_dependency(Qs=None, Qg=None, s=None, krange=None, ax=None, label=None, color=None):
    '''Plot the dependence between the wavenumbers and the two-fluid instability; the maximum is marked.'''
TFcriteria = []
_tmp = [inverse_kinem_Qeff_from_k(dimlK, Qg=Qg, Qs=Qs, s=s) for dimlK in krange]
root_for_max, max_val = findInvKinemQeffBrentq(Qs, Qg, s, krange)
ax.plot(krange, _tmp, '-', label=label, color=color)
ax.plot(root_for_max, max_val, 'o', color=color)
def plot_k_dependencies(r_g_dens=None, gas_dens=None, epicycl=None,
sound_vel=None, star_density=None, sigma=None, krange=None, show=False):
    '''Plot many such dependencies at once.'''
Qgs, Qss, s_params = [], [], []
maxk = 30.
fig = plt.figure(figsize=[16,8])
ax = plt.gca()
colors = cm.rainbow(np.linspace(0, 1, len(r_g_dens)))
for r, gd, sd, color in zip(r_g_dens, gas_dens, star_density, colors):
Qgs.append(Qg(epicycl=epicycl(r), sound_vel=sound_vel, gas_density=gd))
Qss.append(Qs(epicycl=epicycl(r), sigma=sigma(r), star_density=sd))
s_params.append(sound_vel/sigma(r))
if show:
print 'r={:7.3f} Qg={:7.3f} Qs={:7.3f} Qg^-1={:7.3f} Qs^-1={:7.3f} s={:7.3f}'.format(r, Qgs[-1], Qss[-1], 1./Qgs[-1], 1./Qss[-1], s_params[-1])
plot_k_dependency(Qs=Qss[-1], Qg=Qgs[-1], s=s_params[-1], krange=krange, ax=ax, label=str(r), color=color)
maxk = max(maxk, findInvKinemQeffBrentq(Qss[-1], Qgs[-1], s_params[-1], krange)[0]) #not optimal
plt.legend()
plt.xlim(0, maxk+100.)
def plot_WKB_dependency(Qs=None, Qg=None, s=None, krange=None, ax=None, label=None, color=None, r=None, scale=None, epicycl=None, sound_vel=None):
    '''Plot the dependence between (k x r) and the two-fluid instability with the maximum marked (see the
    equation above), in order to check the validity of the WKB approximation.'''
TFcriteria = []
sigma = sound_vel/s
_tmp = [inverse_kinem_Qeff_from_k(dimlK, Qg=Qg, Qs=Qs, s=s) for dimlK in krange]
root_for_max, max_val = findInvKinemQeffBrentq(Qs, Qg, s, krange)
factor = epicycl*r*scale/sigma
ax.plot(np.array(krange)*factor, _tmp, '-', label=label, color=color)
ax.plot(root_for_max*factor, max_val, 'o', color=color)
def plot_WKB_dependencies(r_g_dens=None, gas_dens=None, epicycl=None,
sound_vel=None, star_density=None, sigma=None, krange=None, scale=None):
    '''Plot many such dependencies at once to check the WKB approximation.'''
Qgs, Qss, s_params = [], [], []
maxk = 30.
fig = plt.figure(figsize=[16,8])
ax = plt.gca()
colors = cm.rainbow(np.linspace(0, 1, len(r_g_dens)))
for r, gd, sd, color in zip(r_g_dens, gas_dens, star_density, colors):
Qgs.append(Qg(epicycl=epicycl(r), sound_vel=sound_vel, gas_density=gd))
Qss.append(Qs(epicycl=epicycl(r), sigma=sigma(r), star_density=sd))
s_params.append(sound_vel/sigma(r))
plot_WKB_dependency(Qs=Qss[-1], Qg=Qgs[-1], s=s_params[-1], krange=krange, ax=ax, label=str(r),
color=color, r=r, scale=scale, epicycl=epicycl(r), sound_vel=sound_vel)
maxk = max(maxk, findInvKinemQeffBrentq(Qss[-1], Qgs[-1], s_params[-1], krange)[0]) #not optimal
plt.legend()
plt.xlim(0, maxk+100.)
def get_invQeff_from_data(gas_data=None, epicycl=None, gas_approx=None, sound_vel=None, scale=None, sigma=None, star_density=None, verbose=False):
    '''Compute many Qeff^-1 values at once from the observational data and return the set (Qg, Qs, Qeff).'''
Qgs = []
Qss = []
invQeff = []
for ind, (r, gd) in enumerate(gas_data):
        if type(sound_vel) == tuple or type(sound_vel) == list:  # handle the case of different sound speeds
s_vel = sound_vel[ind]
else:
s_vel = sound_vel
Qgs.append(Qg(epicycl=epicycl(gas_approx, r, scale), sound_vel=s_vel, gas_density=gd))
Qss.append(Qs(epicycl=epicycl(gas_approx, r, scale), sigma=sigma(r), star_density=star_density(r)))
qeff = findInvKinemQeffBrentq(Qss[-1], Qgs[-1], s_vel/sigma(r), np.arange(0.01, 60000., 1.))
if verbose:
print 'r = {:2.2f}; gas_d = {:2.2f}; epicycl = {:2.2f}; sig = {:2.2f}; star_d = {:2.2f}'.format(r, gd, epicycl(gas_approx, r, scale),
sigma(r), star_density(r))
print '\tQs = {:2.2f}; Qg = {:2.2f}; Qeff = {:2.2f}'.format(Qss[-1], Qgs[-1], 1./qeff[1])
invQeff.append(qeff[1])
return zip(map(lambda l: 1./l, Qgs), map(lambda l: 1./l, Qss), invQeff)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Authenticate Kaggle by uploading kaggle.json file
Step2: Download dataset
Step3: Unzip files
Step4: Notebook
Step5: Load dataset info
Step6: Create datagenerator
Step7: Show data
Step8: Create model
Step9: Train model
Step11: Bayesian Optimization
Step12: Training with Data Aug
Step13: Create submit
|
<ASSISTANT_TASK:>
Python Code:
!pip3 install kaggle
!pip3 install google
from google.colab import files
upload = files.upload()
!mkdir ~/.kaggle
!cp kaggle.json ~/.kaggle/
!chmod 600 ~/.kaggle/kaggle.json
!kaggle competitions download -c human-protein-atlas-image-classification
!mkdir ./human_protein_atlas/
!mkdir ./human_protein_atlas/train
!mkdir ./human_protein_atlas/test
!mv train.csv ./human_protein_atlas/train.csv
!unzip -q ./train.zip -d ./human_protein_atlas/train
!unzip -q ./test.zip -d ./human_protein_atlas/test
import os, sys, math
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from PIL import Image
import cv2
from imgaug import augmenters as iaa
from tqdm import tqdm
import warnings
warnings.filterwarnings("ignore")
INPUT_SHAPE = (299,299,3)
BATCH_SIZE = 10
path_to_train = './human_protein_atlas/train/'
data = pd.read_csv('human_protein_atlas/train.csv')
train_dataset_info = []
for name, labels in zip(data['Id'], data['Target'].str.split(' ')):
train_dataset_info.append({
'path':os.path.join(path_to_train, name),
'labels':np.array([int(label) for label in labels])})
train_dataset_info = np.array(train_dataset_info)
from sklearn.model_selection import train_test_split
train_ids, test_ids, train_targets, test_target = train_test_split(
data['Id'], data['Target'], test_size=0.2, random_state=42)
class data_generator:
def create_train(dataset_info, batch_size, shape, augument=True):
assert shape[2] == 3
while True:
random_indexes = np.random.choice(len(dataset_info), batch_size)
batch_images = np.empty((batch_size, shape[0], shape[1], shape[2]))
batch_labels = np.zeros((batch_size, 28))
for i, idx in enumerate(random_indexes):
image = data_generator.load_image(
dataset_info[idx]['path'], shape)
if augument:
image = data_generator.augment(image)
batch_images[i] = image
batch_labels[i][dataset_info[idx]['labels']] = 1
yield batch_images, batch_labels
def load_image(path, shape):
R = np.array(Image.open(path+'_red.png'))
G = np.array(Image.open(path+'_green.png'))
B = np.array(Image.open(path+'_blue.png'))
Y = np.array(Image.open(path+'_yellow.png'))
image = np.stack((
R/2 + Y/2,
G/2 + Y/2,
B),-1)
image = cv2.resize(image, (shape[0], shape[1]))
image = np.divide(image, 255)
return image
def augment(image):
augment_img = iaa.Sequential([
iaa.OneOf([
iaa.Affine(rotate=0),
iaa.Affine(rotate=90),
iaa.Affine(rotate=180),
iaa.Affine(rotate=270),
iaa.Fliplr(0.5),
iaa.Flipud(0.5),
])], random_order=True)
image_aug = augment_img.augment_image(image)
return image_aug
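load_image above folds the yellow stain channel into red and green before resizing. The mixing itself is simple per-pixel arithmetic, shown here for a single pixel (mix_pixel is an illustrative helper, not part of the generator):

```python
def mix_pixel(r, g, b, y):
    """Fold the yellow channel into red and green, as load_image does, then scale to [0, 1]."""
    return ((r / 2 + y / 2) / 255.0, (g / 2 + y / 2) / 255.0, b / 255.0)

px = mix_pixel(100, 50, 255, 200)
print(px)  # red and green now each carry half the yellow signal, scaled to [0, 1]
```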
# create train datagen
input_shape = (299, 299, 3)
train_datagen = data_generator.create_train(
train_dataset_info, 5, input_shape, augument=True)
images, labels = next(train_datagen)
fig, ax = plt.subplots(1,5,figsize=(25,5))
for i in range(5):
ax[i].imshow(images[i])
print('min: {0}, max: {1}'.format(images.min(), images.max()))
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential, load_model
from keras.layers import Activation
from keras.layers import Dropout
from keras.layers import Flatten
from keras.layers import Dense
from keras.layers import Input
from keras.layers import BatchNormalization
from keras.layers import Conv2D
from keras.models import Model
from keras.applications import InceptionResNetV2
from keras.callbacks import ModelCheckpoint
from keras.callbacks import LambdaCallback
from keras.callbacks import Callback
from keras import metrics
from keras.optimizers import Adam
from keras import backend as K
import tensorflow as tf
import keras
def create_model(input_shape, n_out):
pretrain_model = InceptionResNetV2(
include_top=False,
weights='imagenet',
input_shape=input_shape)
input_tensor = Input(shape=input_shape)
bn = BatchNormalization()(input_tensor)
x = pretrain_model(bn)
x = Conv2D(128, kernel_size=(1,1), activation='relu')(x)
x = Flatten()(x)
x = Dropout(0.5)(x)
x = Dense(512, activation='relu')(x)
x = Dropout(0.5)(x)
output = Dense(n_out, activation='sigmoid')(x)
model = Model(input_tensor, output)
return model
def f1(y_true, y_pred):
tp = K.sum(K.cast(y_true*y_pred, 'float'), axis=0)
fp = K.sum(K.cast((1-y_true)*y_pred, 'float'), axis=0)
fn = K.sum(K.cast(y_true*(1-y_pred), 'float'), axis=0)
p = tp / (tp + fp + K.epsilon())
r = tp / (tp + fn + K.epsilon())
f1 = 2*p*r / (p+r+K.epsilon())
f1 = tf.where(tf.is_nan(f1), tf.zeros_like(f1), f1)
return K.mean(f1)
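The f1 metric above is a "soft" per-class F1 averaged over the classes, computed directly from (possibly fractional) predictions. A pure-Python sketch of the same computation, handy for checking edge cases without a TensorFlow session (eps plays the same role as K.epsilon()):

```python
def soft_macro_f1(y_true, y_pred, eps=1e-7):
    """Soft macro-F1: per-class precision/recall from summed soft counts, then averaged over classes."""
    n_classes = len(y_true[0])
    f1s = []
    for c in range(n_classes):
        tp = sum(t[c] * p[c] for t, p in zip(y_true, y_pred))
        fp = sum((1 - t[c]) * p[c] for t, p in zip(y_true, y_pred))
        fn = sum(t[c] * (1 - p[c]) for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp + eps)
        rec = tp / (tp + fn + eps)
        f1s.append(2 * prec * rec / (prec + rec + eps))
    return sum(f1s) / n_classes

perfect = soft_macro_f1([[1, 0], [0, 1]], [[1, 0], [0, 1]])
wrong = soft_macro_f1([[1, 0], [0, 1]], [[0, 1], [1, 0]])
print(round(perfect, 4), round(wrong, 4))  # 1.0 0.0
```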
def show_history(history):
fig, ax = plt.subplots(1, 3, figsize=(15,5))
ax[0].set_title('loss')
ax[0].plot(history.epoch, history.history["loss"], label="Train loss")
ax[0].plot(history.epoch, history.history["val_loss"], label="Validation loss")
ax[1].set_title('f1')
ax[1].plot(history.epoch, history.history["f1"], label="Train f1")
ax[1].plot(history.epoch, history.history["val_f1"], label="Validation f1")
ax[2].set_title('acc')
ax[2].plot(history.epoch, history.history["acc"], label="Train acc")
ax[2].plot(history.epoch, history.history["val_acc"], label="Validation acc")
ax[0].legend()
ax[1].legend()
ax[2].legend()
keras.backend.clear_session()
model = create_model(
input_shape=(299,299,3),
n_out=28)
model.summary()
checkpointer = ModelCheckpoint(
    './InceptionResNetV2.model', monitor='val_f1', mode='max',  # mode='max': higher F1 is better
    verbose=2, save_best_only=True)
# no data augmentation training
train_generator = data_generator.create_train(
train_dataset_info[train_ids.index], BATCH_SIZE, INPUT_SHAPE, augument=False)
validation_generator = data_generator.create_train(
train_dataset_info[test_ids.index], 256, INPUT_SHAPE, augument=False)
model.layers[2].trainable = False
model.compile(
loss='categorical_crossentropy',
optimizer=Adam(1e-3),
metrics=['acc', f1])
history = model.fit_generator(
train_generator,
steps_per_epoch=100,
validation_data=next(validation_generator),
epochs=1,
verbose=1,
callbacks=[checkpointer])
# To prevent error during training, custom f1 scoring object must be defined
from keras.utils.generic_utils import get_custom_objects
get_custom_objects().update({'f1': f1})
# Take a checkpoint model and load into keras
from keras.models import load_model
checkpointer_savepath = './InceptionResNetV2.model'
model = load_model(checkpointer_savepath)
show_history(history)
!pip install scikit-optimize
from skopt.space import Real
from skopt.utils import use_named_args
from skopt import gp_minimize
def create_model_and_compile(input_shape, n_out, lr):
    '''Create and compile the transfer-learning model.
    Args:
        input_shape: shape of the input images
        n_out: number of output classes
        lr: learning rate for the Adam optimizer
    '''
pretrain_model = InceptionResNetV2(
include_top=False,
weights='imagenet',
input_shape=input_shape)
input_tensor = Input(shape=input_shape)
bn = BatchNormalization()(input_tensor)
x = pretrain_model(bn)
x = Conv2D(128, kernel_size=(1,1), activation='relu')(x)
x = Flatten()(x)
x = Dropout(0.5)(x)
x = Dense(512, activation='relu')(x)
x = Dropout(0.5)(x)
output = Dense(n_out, activation='sigmoid')(x)
model = Model(input_tensor, output)
model.layers[2].trainable = False
model.compile(
loss='categorical_crossentropy',
optimizer=Adam(lr = lr),
metrics=['acc', f1])
return model
path_best_model = '/content/'
best_f1 = history.history['val_f1'][-1]
dimensions = [Real(name='learning_rate', low=1e-6, high=1e-3, prior='log-uniform'),]
# Integer(name='num_nodes', low=10, high=256),
# Categorical(name='activation', categories=['relu', 'sigmoid'])]
@use_named_args(dimensions=dimensions)
def fitness(learning_rate):
model = create_model_and_compile(input_shape = (299, 299, 3), n_out = 28, lr = learning_rate)
history = model.fit_generator(
train_generator,
steps_per_epoch=100,
validation_data=next(validation_generator),
epochs=1,
verbose=1,
callbacks=[checkpointer])
    val_f1 = history.history['val_f1'][-1]
    global best_f1
    if val_f1 > best_f1:
        best_f1 = val_f1
    del model
    K.clear_session()
    # gp_minimize minimizes its objective, so return the negated F1 to maximize it
    return -val_f1
history.history
#running the automatic creation of models using gp_minimize and fitness function
from skopt import gp_minimize
search_result = gp_minimize(func=fitness,
dimensions=dimensions,
acq_func='EI',
n_calls=10,)
# x0=default_parameters)
search_result.x
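One convention worth keeping in mind with the search above: gp_minimize minimizes its objective, so an objective built around a score that should be maximized (like validation F1) needs to return the negated score. A tiny self-contained demo of that convention, using a plain grid search as a stand-in for the Gaussian-process minimizer (no skopt needed; the names here are illustrative):

```python
def minimize_on_grid(objective, grid):
    """Toy stand-in for a minimizer: return the grid point with the lowest objective value."""
    return min(grid, key=objective)

def score(x):  # pretend validation F1, maximal at x = 0.3
    return 1.0 - (x - 0.3) ** 2

grid = [i / 100 for i in range(101)]
best_x = minimize_on_grid(lambda x: -score(x), grid)  # negate: minimizing -score == maximizing score
print(best_x)  # 0.3
```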
# with data augmentation
train_generator = data_generator.create_train(
train_dataset_info[train_ids.index], BATCH_SIZE, INPUT_SHAPE, augument=True)
validation_generator = data_generator.create_train(
train_dataset_info[test_ids.index], 256, INPUT_SHAPE, augument=False)
model.layers[2].trainable = True
model.compile(
loss='categorical_crossentropy',
optimizer=Adam(1e-4),
metrics=['acc', f1])
history = model.fit_generator(
train_generator,
steps_per_epoch=100,
validation_data=next(validation_generator),
epochs=180,
verbose=1,
callbacks=[checkpointer])
show_history(history)
model = load_model(
'./InceptionResNetV2.model',
custom_objects={'f1': f1})
submit = pd.read_csv('./sample_submission.csv')
%%time
predicted = []
for name in tqdm(submit['Id']):
path = os.path.join('./human_protein_atlas/test/', name)
image = data_generator.load_image(path, INPUT_SHAPE)
score_predict = model.predict(image[np.newaxis])[0]
label_predict = np.arange(28)[score_predict>=0.2]
str_predict_label = ' '.join(str(l) for l in label_predict)
predicted.append(str_predict_label)
submit['Predicted'] = predicted
submit.to_csv('submission.csv', index=False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Tensorflow implementation
Step2: Modify the gradients
Step3: Gradient reversal
Step4: Pytoch case
Step5: Modify gradients
Step6: Gradient reversal
Step7: Pytorch backward hooks
|
<ASSISTANT_TASK:>
Python Code:
import torch
import tensorflow as tf
from torch.autograd import Variable
import numpy as np
def f(X):
return X*X
def g(X):
return X**3
X = np.random.randn(10)
X
sess = tf.InteractiveSession()
tf_X = tf.Variable(X)
init_op = tf.global_variables_initializer()
sess.run(init_op)
sess.run(tf_X)
forward_op = f(tf_X)
sess.run(forward_op)
gradient_op = tf.gradients(forward_op, tf_X)
sess.run(gradient_op)
X*2 # This should match the gradient above
gradient_modifier_op = g(tf_X)
sess.run(gradient_modifier_op)
modified_forward_op = (f(tf_X) + g(tf_X) - tf.stop_gradient(g(tf_X)))
modified_backward_op = tf.gradients(modified_forward_op, tf_X)
sess.run(modified_forward_op)
sess.run(modified_backward_op)
2*X + 3*(X**2) # This should match the gradients above
gradient_reversal_op = (tf.stop_gradient(2*f(tf_X)) - f(tf_X))
gradient_reversal_grad_op = tf.gradients(gradient_reversal_op, tf_X)
sess.run(gradient_reversal_op)
sess.run(gradient_reversal_grad_op)
sess.run((gradient_op[0] + gradient_reversal_grad_op[0])) # This should be zero. Signifying grad is reversed.
def zero_grad(X):
if X.grad is not None:
X.grad.data.zero_()
torch_X = Variable(torch.FloatTensor(X), requires_grad=True)
torch_X.data.numpy()
f(torch_X).data.numpy()
g(torch_X).data.numpy()
zero_grad(torch_X)
f_X = f(torch_X)
f_X.backward(torch.ones(f_X.size()))
torch_X.grad.data.numpy()
2*X
modified_gradients_forward = lambda x: f(x) + g(x) - g(x).detach()
zero_grad(torch_X)
modified_grad = modified_gradients_forward(torch_X)
modified_grad.backward(torch.ones(modified_grad.size()))
torch_X.grad.data.numpy()
2*X + 3*(X*X) # It should be same as above
gradient_reversal = lambda x: (2*f(x)).detach() - f(x)
zero_grad(torch_X)
grad_reverse = gradient_reversal(torch_X)
grad_reverse.backward(torch.ones(grad_reverse.size()))
torch_X.grad.data.numpy()
-2*X # It should be same as above
# Gradient reversal
zero_grad(torch_X)
f_X = f(torch_X)
f_X.register_hook(lambda grad: -grad)
f_X.backward(torch.ones(f_X.size()))
torch_X.grad.data.numpy()
-2*X
# Modified grad example
zero_grad(torch_X)
h = torch_X.register_hook(lambda grad: grad + 3*(torch_X*torch_X))
f_X = f(torch_X)
f_X.backward(torch.ones(f_X.size()))
h.remove()
torch_X.grad.data.numpy()
2*X + 3*(X*X) # It should be same as above
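Both tricks above (tf.stop_gradient and .detach()) rely on the same idea: an op whose forward value passes through unchanged but whose backward contribution is zero. The sketch below builds a deliberately tiny scalar reverse-mode autodiff (a hypothetical helper, not part of TF or PyTorch) just to make that mechanism explicit, and checks the modified-gradient identity f + g - stop_grad(g): the value is f(x), the gradient is f'(x) + g'(x).

```python
class Var:
    """Minimal scalar reverse-mode autodiff node."""
    def __init__(self, value, parents=()):
        self.value, self.parents, self.grad = value, parents, 0.0
    def __add__(self, other):   # d(a+b)/da = 1, d(a+b)/db = 1
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])
    def __sub__(self, other):
        return Var(self.value - other.value, [(self, 1.0), (other, -1.0)])
    def __mul__(self, other):   # product rule: local grads are the other operand's value
        return Var(self.value * other.value, [(self, other.value), (other, self.value)])
    def backward(self, seed=1.0):
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)

def stop_grad(v):
    # forward value passes through; no parents recorded, so no gradient flows back
    return Var(v.value)

x = Var(3.0)
f = x * x                  # f(x) = x^2
g = x * x * x              # g(x) = x^3
y = f + g - stop_grad(g)   # value of f, gradient of f + g
y.backward()
print(y.value, x.grad)     # 9.0 and 2*3 + 3*3^2 = 33.0
```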
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Ice Albedo
Step7: 1.4. Atmospheric Coupling Variables
Step8: 1.5. Oceanic Coupling Variables
Step9: 1.6. Prognostic Variables
Step10: 2. Key Properties --> Software Properties
Step11: 2.2. Code Version
Step12: 2.3. Code Languages
Step13: 3. Grid
Step14: 3.2. Adaptive Grid
Step15: 3.3. Base Resolution
Step16: 3.4. Resolution Limit
Step17: 3.5. Projection
Step18: 4. Glaciers
Step19: 4.2. Description
Step20: 4.3. Dynamic Areal Extent
Step21: 5. Ice
Step22: 5.2. Grounding Line Method
Step23: 5.3. Ice Sheet
Step24: 5.4. Ice Shelf
Step25: 6. Ice --> Mass Balance
Step26: 7. Ice --> Mass Balance --> Basal
Step27: 7.2. Ocean
Step28: 8. Ice --> Mass Balance --> Frontal
Step29: 8.2. Melting
Step30: 9. Ice --> Dynamics
Step31: 9.2. Approximation
Step32: 9.3. Adaptive Timestep
Step33: 9.4. Timestep
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cams', 'sandbox-2', 'landice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As always, let's do imports and initialize a logger and a new Bundle.
Step2: Built-in Constraints
Step3: esinw, ecosw
Step4: t0
Step5: freq
Step6: mass
Step7: component sma
Step8: component asini
Step9: requiv_max
Step10: rotation period
Step11: pitch/yaw (incl/long_an)
|
<ASSISTANT_TASK:>
Python Code:
#!pip install -I "phoebe>=2.4,<2.5"
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
b.filter(qualifier='asini', context='constraint')
b.get_parameter(qualifier='asini', component='binary', context='constraint')
b.get_parameter(qualifier='esinw', context='constraint')
b.get_parameter(qualifier='ecosw', context='constraint')
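The esinw and ecosw constraints encode the standard reparameterisation of the eccentricity `ecc` and argument of periastron `per0`. A plain-Python sketch of the relation itself (`esinw_ecosw` is an illustrative helper, not PHOEBE's internal implementation; PHOEBE exposes `per0` in degrees by default):

```python
import numpy as np

def esinw_ecosw(ecc, per0_deg):
    # esinw = ecc * sin(per0), ecosw = ecc * cos(per0)
    per0 = np.radians(per0_deg)
    return ecc * np.sin(per0), ecc * np.cos(per0)

print(esinw_ecosw(0.2, 90.0))  # (0.2, ~0.0)
```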
b.get_parameter(qualifier='t0_perpass', context='constraint')
b.filter(qualifier='freq', context='constraint')
b.get_parameter(qualifier='freq', component='binary', context='constraint')
b.get_parameter(qualifier='freq', component='primary', context='constraint')
b.filter(qualifier='mass', context='constraint')
b.get_parameter(qualifier='mass', component='primary', context='constraint')
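The mass constraint follows from Kepler's third law. A rough stand-alone sketch of the total-mass form (the per-component constraint in PHOEBE also brings in the mass ratio q; `total_mass` is an illustrative helper, not a PHOEBE call):

```python
import numpy as np

G = 6.674e-11  # gravitational constant, SI units

def total_mass(sma_m, period_s):
    # Kepler's third law: M1 + M2 = 4 pi^2 a^3 / (G P^2)
    return 4.0 * np.pi**2 * sma_m**3 / (G * period_s**2)

# Sun-Earth check: a = 1 au, P = 1 yr -> ~2e30 kg (about one solar mass)
print(total_mass(1.496e11, 3.156e7))
```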
b.filter(qualifier='sma', context='constraint')
b.get_parameter(qualifier='sma', component='primary', context='constraint')
b.filter(qualifier='asini', context='constraint')
b.get_parameter(qualifier='asini', component='primary', context='constraint')
b.filter(qualifier='requiv_max', context='constraint')
b.get_parameter(qualifier='requiv_max', component='primary', context='constraint')
b.filter(qualifier='period', context='constraint')
b.get_parameter(qualifier='period', component='primary', context='constraint')
b.filter(qualifier='incl', context='constraint')
b.get_parameter(qualifier='incl', component='primary', context='constraint')
b.filter(qualifier='long_an', context='constraint')
b.get_parameter(qualifier='long_an', component='primary', context='constraint')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In the following box we import all the VIP routines that will be used in this tutorial.
Step2: 6.1. Introduction
Step3: 6.2.1. Symmetric pole-on disk
Step4: Then create your disk model
Step5: The method compute_scattered_light returns the synthetic image of the disk.
Step6: You can print some info on the geometrical properties of the model, the dust distribution parameters, the numerical integration parameters and the phase function parameters (detailed later).
Step7: As a side note, if $\alpha_{in} \ne \alpha_{out}$, then the peak surface density of the disk is not located at the reference radius $a$.
Step8: Go to the top
Step9: The position angle of the disk is 0 (e.g. north). The phase function is asymmetric; the north and south ansae appear brighter because the disk is not flat
Step10: Warning! The code does not handle perfectly edge-on disks. There is a maximum inclination close to edge-on beyond which it cannot create an image. In practice this is not a limitation, as the convolution by the PSF always makes it impossible to distinguish between a close to edge-on disk and a perfectly edge-on disk.
Step11: Go to the top
Step12: You can plot what the phase function looks like
Step13: The forward side is brighter.
Step14: Go to the top
Step15: Go to the top
Step16: You can combine this Rayleigh-like degree of linear polarisation with any phase function (simple HG, double HG or custom type).
Step17: The brightness asymmetry here is entirely due to the fact that the brightness at one point in the disk is inversely proportional to the squared distance to the star.
Step18: Go to the top
Step19: cube_fake_disk3 is now a cube of 30 frames, where the disk has been injected at the correct position angle.
Step20: Let's visualize the first, middle and last image of the cube.
Step21: We can now process this cube with median-ADI for instance
Step22: The example above shows a typical bias that can be induced by ADI on extended disk signals (Milli et al. 2012).
Step23: Then we inject the disk in the cube and convolve each frame by the PSF
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from hciplot import plot_frames, plot_cubes
from matplotlib.pyplot import *
from matplotlib import pyplot as plt
import numpy as np
from packaging import version
import vip_hci as vip
vvip = vip.__version__
print("VIP version: ", vvip)
if version.parse(vvip) < version.parse("1.0.0"):
msg = "Please upgrade your version of VIP"
msg+= "It should be 1.0.0 or above to run this notebook."
raise ValueError(msg)
elif version.parse(vvip) <= version.parse("1.0.3"):
from vip_hci.conf import time_ini, timing
from vip_hci.medsub import median_sub
from vip_hci.metrics import cube_inject_fakedisk, ScatteredLightDisk
else:
from vip_hci.config import time_ini, timing
from vip_hci.fm import cube_inject_fakedisk, ScatteredLightDisk
from vip_hci.psfsub import median_sub
# common to all versions:
from vip_hci.var import create_synth_psf
pixel_scale=0.01225 # pixel scale in arcsec/px
dstar= 80 # distance to the star in pc
nx = 200 # number of pixels of your image in X
ny = 200 # number of pixels of your image in Y
itilt = 0. # inclination of your disk in degrees
a = 70. # semimajoraxis of the disk in au
ksi0 = 3. # reference scale height at the semi-major axis of the disk
gamma = 2. # exponant of the vertical exponential decay
alpha_in = 12
alpha_out = -12
beta = 1
fake_disk1 = ScatteredLightDisk(nx=nx, ny=ny, distance=dstar,
itilt=itilt, omega=0, pxInArcsec=pixel_scale, pa=0,
density_dico={'name':'2PowerLaws','ain':alpha_in,'aout':alpha_out,
'a':a,'e':0.0,'ksi0':ksi0,'gamma':gamma,'beta':beta},
spf_dico={'name':'HG', 'g':0., 'polar':False},
flux_max=1.)
fake_disk1_map = fake_disk1.compute_scattered_light()
plot_frames(fake_disk1_map, grid=False, size_factor=6)
fake_disk1.print_info()
fake_disk1 = ScatteredLightDisk(nx=nx, ny=ny, distance=dstar,
itilt=itilt, omega=0, pxInArcsec=pixel_scale, pa=0,
density_dico={'name':'2PowerLaws', 'ain':alpha_in, 'aout':-3,
'a':a, 'e':0.0, 'ksi0':ksi0, 'gamma':gamma, 'beta':beta},
spf_dico={'name':'HG', 'g':0., 'polar':False},
flux_max=1.)
fake_disk1_map = fake_disk1.compute_scattered_light()
plot_frames(fake_disk1_map, grid=False, size_factor=6)
fake_disk1.print_info()
itilt = 76 # inclination of your disk in degrees
fake_disk2 = ScatteredLightDisk(nx=nx, ny=ny, distance=dstar,
itilt=itilt, omega=0, pxInArcsec=pixel_scale, pa=0,
density_dico={'name':'2PowerLaws', 'ain':alpha_in, 'aout':alpha_out,
'a':a, 'e':0.0, 'ksi0':ksi0, 'gamma':gamma, 'beta':beta},
spf_dico={'name':'HG', 'g':0., 'polar':False},
flux_max=1.)
fake_disk2_map = fake_disk2.compute_scattered_light()
plot_frames(fake_disk2_map, grid=False, size_factor=6)
fake_disk2 = ScatteredLightDisk(nx=nx, ny=ny, distance=dstar,
itilt=itilt, omega=0, pxInArcsec=pixel_scale, pa=0,
density_dico={'name':'2PowerLaws', 'ain':alpha_in, 'aout':alpha_out,
'a':a, 'e':0.0, 'ksi0':ksi0, 'gamma':gamma, 'beta':beta,
'dens_at_r0':1e6},
spf_dico={'name':'HG', 'g':0, 'polar':False})
fake_disk2_map = fake_disk2.compute_scattered_light()
plot_frames(fake_disk2_map, grid=False, size_factor=6)
fake_disk2 = ScatteredLightDisk(nx=nx, ny=ny, distance=dstar,
itilt=90, omega=0, pxInArcsec=pixel_scale, pa=0,
density_dico={'name':'2PowerLaws', 'ain':alpha_in, 'aout':alpha_out,
'a':a, 'e':0.0, 'ksi0':2, 'gamma':gamma, 'beta':beta,
'dens_at_r0':1e6},
spf_dico={'name':'HG', 'g':0, 'polar':False})
fake_disk2_map = fake_disk2.compute_scattered_light()
plot_frames(fake_disk2_map, grid=False, size_factor=6)
g=0.4
fake_disk3 = ScatteredLightDisk(nx=nx, ny=ny, distance=dstar,
itilt=itilt, omega=0, pxInArcsec=pixel_scale, pa=0,
density_dico={'name':'2PowerLaws', 'ain':alpha_in, 'aout':alpha_out,
'a':a, 'e':0.0, 'ksi0':ksi0, 'gamma':gamma, 'beta':beta},
spf_dico={'name':'HG', 'g':g, 'polar':False},
flux_max=1.)
fake_disk3.phase_function.plot_phase_function()
fake_disk3_map = fake_disk3.compute_scattered_light()
plot_frames(fake_disk3_map, grid=False, size_factor=6)
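For reference, the Henyey-Greenstein phase function plotted above can be written down directly. This is a sketch of the textbook formula; VIP's internal normalisation may differ:

```python
import numpy as np

def henyey_greenstein(theta, g):
    # standard Henyey-Greenstein phase function; theta is the scattering
    # angle in radians, g the asymmetry parameter (-1 < g < 1)
    return (1.0 - g**2) / (4.0 * np.pi * (1.0 + g**2 - 2.0 * g * np.cos(theta))**1.5)

theta = np.linspace(0.0, np.pi, 5)
print(henyey_greenstein(theta, g=0.4))  # forward scattering (theta=0) is brightest for g>0
```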
g1=0.6
g2=-0.4
weight1=0.7
fake_disk4 = ScatteredLightDisk(nx=nx, ny=ny, distance=dstar,
itilt=itilt, omega=0, pxInArcsec=pixel_scale, pa=0,
density_dico={'name':'2PowerLaws', 'ain':alpha_in, 'aout':alpha_out,
'a':a, 'e':0.0, 'ksi0':ksi0, 'gamma':gamma, 'beta':beta},
spf_dico={'name':'DoubleHG', 'g':[g1,g2], 'weight':weight1,
'polar':False},
flux_max=1)
fake_disk4.phase_function.plot_phase_function()
fake_disk4_map = fake_disk4.compute_scattered_light()
plot_frames(fake_disk4_map, grid=False, size_factor=6)
kind='cubic' #kind must be either "linear", "nearest", "zero", "slinear", "quadratic" or "cubic"
spf_dico = dict({'phi':[0, 60, 90, 120, 180],
'spf':[1, 0.4, 0.3, 0.3, 0.5],
'name':'interpolated', 'polar':False, 'kind':kind})
fake_disk5 = ScatteredLightDisk(nx=nx, ny=ny, distance=dstar,
itilt=itilt, omega=0, pxInArcsec=pixel_scale, pa=0,
density_dico={'name':'2PowerLaws', 'ain':alpha_in, 'aout':alpha_out,
'a':a, 'e':0.0, 'ksi0':ksi0, 'gamma':gamma, 'beta':beta},
spf_dico=spf_dico, flux_max=1)
fake_disk5.phase_function.plot_phase_function()
fake_disk5_map = fake_disk5.compute_scattered_light()
plot_frames(fake_disk5_map, grid=False, size_factor=6)
fake_disk6 = ScatteredLightDisk(nx=nx, ny=ny, distance=dstar,
itilt=itilt, omega=0, pxInArcsec=pixel_scale, pa=0,
density_dico={'name':'2PowerLaws', 'ain':alpha_in, 'aout':alpha_out,
'a':a, 'e':0.0, 'ksi0':ksi0, 'gamma':gamma,
'beta':beta, 'dens_at_r0':1e6},
spf_dico={'name':'HG', 'g':0, 'polar':True})
fake_disk6.phase_function.plot_phase_function()
fake_disk6_map = fake_disk6.compute_scattered_light()
plot_frames(fake_disk6_map, grid=False, size_factor=6)
e=0.4 # eccentricity in degrees
omega=30 # argument of pericenter
fake_disk7 = ScatteredLightDisk(nx=nx, ny=ny, distance=dstar,
itilt=0, omega=omega, pxInArcsec=pixel_scale, pa=0,
density_dico={'name':'2PowerLaws', 'ain':alpha_in, 'aout':alpha_out,
'a':a, 'e':e, 'ksi0':ksi0, 'gamma':gamma, 'beta':beta},
spf_dico={'name':'HG', 'g':g, 'polar':False},
flux_max=1.)
fake_disk7_map = fake_disk7.compute_scattered_light()
plot_frames(fake_disk7_map, grid=False, size_factor=6)
fake_disk7 = ScatteredLightDisk(nx=nx, ny=ny, distance=dstar,
itilt=itilt, omega=omega, pxInArcsec=pixel_scale, pa=0,
density_dico={'name':'2PowerLaws', 'ain':alpha_in, 'aout':alpha_out,
'a':a, 'e':e, 'ksi0':ksi0, 'gamma':gamma, 'beta':beta},
spf_dico={'name':'HG', 'g':g, 'polar':False},
flux_max=1.)
fake_disk7_map = fake_disk7.compute_scattered_light()
plot_frames(fake_disk7_map, grid=False, size_factor=6)
plot_frames(fake_disk3_map, grid=False, size_factor=6)
nframes = 30
# we assume we have 60° of parallactic angle rotation centered around meridian
parang_amplitude = 60
derotation_angles = np.linspace(-parang_amplitude/2, parang_amplitude/2, nframes)
start = time_ini()
cube_fake_disk3 = cube_inject_fakedisk(fake_disk3_map, -derotation_angles, imlib='vip-fft')
timing(start)
cube_fake_disk3.shape
plot_frames((cube_fake_disk3[0], cube_fake_disk3[nframes//2], cube_fake_disk3[nframes-1]),
grid=False, size_factor=3)
cadi_fake_disk3 = median_sub(cube_fake_disk3, derotation_angles, imlib='vip-fft')
plot_frames((fake_disk3_map, cadi_fake_disk3), grid=False, size_factor=4)
psf = create_synth_psf(model='gauss', shape=(11, 11), fwhm=4.)
plot_frames(psf, grid=True, size_factor=2)
cube_fake_disk3_convolved = cube_inject_fakedisk(fake_disk3_map, -derotation_angles,
psf=psf, imlib='vip-fft')
cadi_fake_disk3_convolved = median_sub(cube_fake_disk3_convolved, derotation_angles, imlib='vip-fft')
plot_frames((fake_disk3_map, cadi_fake_disk3, cadi_fake_disk3_convolved), grid=False, size_factor=4)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We could integrate this system and the planet would go around the star in a fixed orbit with $a=1$ forever. Let's add a constant additional force acting on the planet and pointing in one direction, $F_x = m\cdot c$, where $m$ is the planet's mass and $c$ a constant. This is called the Stark problem. In Python we can describe this with the following function
Step2: Next, we need to tell REBOUND about this function.
Step3: Now we can just integrate as usual. Let's keep track of the eccentricity as we integrate as it will change due to the additional force.
Step4: And let's plot the result.
Step5: You can see that the eccentricity is oscillating between 0 and almost 1.
Step6: But we change the additional force to be
Step7: We need to let REBOUND know that our force is velocity dependent. Otherwise, REBOUND will not update the velocities of the particles.
Step8: Now, we integrate as before. But this time we keep track of the semi-major axis instead of the eccentricity.
|
<ASSISTANT_TASK:>
Python Code:
import rebound
sim = rebound.Simulation()
sim.integrator = "whfast"
sim.add(m=1.)
sim.add(m=1e-6,a=1.)
sim.move_to_com() # Moves to the center of momentum frame
ps = sim.particles
c = 0.01
def starkForce(reb_sim):
ps[1].ax += c
sim.additional_forces = starkForce
import numpy as np
Nout = 1000
es = np.zeros(Nout)
times = np.linspace(0.,100.*2.*np.pi,Nout)
for i, time in enumerate(times):
sim.integrate(time, exact_finish_time=0) # integrate to the nearest timestep so WHFast's timestep stays constant
es[i] = sim.particles[1].e
%matplotlib inline
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(15,5))
ax = plt.subplot(111)
plt.plot(times, es);
sim = rebound.Simulation()
sim.integrator = "ias15"
sim.add(m=1.)
sim.add(m=1e-6,a=1.)
sim.move_to_com() # Moves to the center of momentum frame
ps = sim.particles
tau = 1000.
def migrationForce(reb_sim):
ps[1].ax -= ps[1].vx/tau
ps[1].ay -= ps[1].vy/tau
ps[1].az -= ps[1].vz/tau
sim.additional_forces = migrationForce
sim.force_is_velocity_dependent = 1
Nout = 1000
a_s = np.zeros(Nout)
times = np.linspace(0.,100.*2.*np.pi,Nout)
for i, time in enumerate(times):
sim.integrate(time)
a_s[i] = sim.particles[1].a
fig = plt.figure(figsize=(15,5))
ax = plt.subplot(111)
ax.set_xlabel("time")
ax.set_ylabel("semi-major axis")
plt.plot(times, a_s);
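As a sanity check on the result above: assuming a near-circular orbit, an energy argument for the drag force $F=-v/\tau$ gives $da/dt \approx -2a/\tau$, so the semi-major axis should decay roughly as $a_0\,e^{-2t/\tau}$. A quick sketch of that expectation (`expected_sma` is our helper, not part of REBOUND):

```python
import numpy as np

def expected_sma(t, a0=1.0, tau=1000.0):
    # e-folding time of the semi-major axis is tau/2 for F = -v/tau
    # on a near-circular orbit (from dE/dt = F.v with E = -mu/(2a))
    return a0 * np.exp(-2.0 * t / tau)

t_end = 100.0 * 2.0 * np.pi  # same time span as the integration above
print(expected_sma(t_end))   # ~0.285, to be compared with a_s[-1]
```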
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: You can of course run this tutorial locally if you prefer. In this case, don't run the above cell since it will download and install Anaconda on your local machine. In either case, we can now import the deepchem package to play with.
Step2: Training a Model with DeepChem
Step3: I won't say too much about this code right now. We will see many similar examples in later tutorials. There are two details I do want to draw your attention to. First, notice the featurizer argument passed to the load_delaney() function. Molecules can be represented in many ways. We therefore tell it which representation we want to use, or in more technical language, how to "featurize" the data. Second, notice that we actually get three different data sets
Step4: Here again I will not say much about the code. Later tutorials will give lots more information about GraphConvModel, as well as other types of models provided by DeepChem.
Step5: If everything has gone well, we should now have a fully trained model! But do we? To find out, we must evaluate the model on the test set. We do that by selecting an evaluation metric and calling evaluate() on the model. For this example, let's use the Pearson correlation, also known as r<sup>2</sup>, as our metric. We can evaluate it on both the training set and test set.
Step6: Notice that it has a higher score on the training set than the test set. Models usually perform better on the particular data they were trained on than they do on similar but independent data. This is called "overfitting", and it is the reason it is essential to evaluate your model on an independent test set.
Step7: Congratulations! Time to join the Community!
|
<ASSISTANT_TASK:>
Python Code:
!pip install --pre deepchem[tensorflow]
import deepchem as dc
dc.__version__
tasks, datasets, transformers = dc.molnet.load_delaney(featurizer='GraphConv')
train_dataset, valid_dataset, test_dataset = datasets
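The train/validation/test idea itself is framework-independent. A minimal sketch of such a split on a toy array (illustrative 80/10/10 fractions, not necessarily the ones MoleculeNet uses):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.random(100)
# split the array into 80% train, 10% validation, 10% test
train, valid, test = np.split(data, [int(0.8 * len(data)), int(0.9 * len(data))])
print(len(train), len(valid), len(test))  # 80 10 10
```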
model = dc.models.GraphConvModel(n_tasks=1, mode='regression', dropout=0.2)
model.fit(train_dataset, nb_epoch=100)
metric = dc.metrics.Metric(dc.metrics.pearson_r2_score)
print("Training set score:", model.evaluate(train_dataset, [metric], transformers))
print("Test set score:", model.evaluate(test_dataset, [metric], transformers))
solubilities = model.predict_on_batch(test_dataset.X[:10])
for molecule, solubility, test_solubility in zip(test_dataset.ids, solubilities, test_dataset.y):
print(solubility, test_solubility, molecule)
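For reference, the Pearson r^2 metric used above is just the squared Pearson correlation between labels and predictions, which can be sketched in plain NumPy (this mirrors what `dc.metrics.pearson_r2_score` computes, up to implementation details):

```python
import numpy as np

def pearson_r2(y_true, y_pred):
    # squared Pearson correlation coefficient
    r = np.corrcoef(y_true, y_pred)[0, 1]
    return r ** 2

y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.2, 3.9])
print(pearson_r2(y_true, y_pred))  # close to 1 for a good fit
```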
@manual{Intro1,
title={The Basic Tools of the Deep Life Sciences},
organization={DeepChem},
author={Ramsundar, Bharath},
howpublished = {\url{https://github.com/deepchem/deepchem/blob/master/examples/tutorials/The_Basic_Tools_of_the_Deep_Life_Sciences.ipynb}},
year={2021},
}
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Description
Step2: Load data and take a peak at it.
Step3: Separate data into training, validation, and test sets. (This division is not used for the plot above, but will be critical in assessing the performance of our learning algorithms.)
Step4: Define each match as a 1 or a 0, depending on whether the higher ranked player won.
Step5: Perform 1-D logistic regression on training data.
Step6: Produce the plots
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import display, HTML
display(HTML('''<img src="image1.png" width="800" height="500">'''))
import numpy as np # numerical libraries
import pandas as pd # for data analysis
import matplotlib as mpl # a big library with plotting functionality
import matplotlib.pyplot as plt # a subset of matplotlib with most of the useful tools
import IPython as IP
%matplotlib inline
import pdb
from sklearn import linear_model as lm
odds= pd.read_pickle('../data/pickle_files/odds.pkl')
matches= pd.read_pickle('../data/pickle_files/matches.pkl')
data = pd.merge(matches,odds[['PSW','PSL','key_o']].dropna(axis=0,subset=["PSW"]),how='inner',on='key_o')
data = data[~data.winner_rank_points.isnull() & ~data.loser_rank_points.isnull()]
IP.display.display(data[0:3])
data['year'] = data['tourney_date'].map(lambda x: x.year)
training = data[data.year.isin([2010,2011,2012])]
validation = data[data.year.isin([2013,2014])]
test = data[data.year.isin([2015,2016])]
# consider rank difference to be positive if winner higher ranked, otherwise negative
rank_diff = (training['winner_rank_points'] - training['loser_rank_points']).values
# if higher ranked player won, raw rank was a successful predictor
y = (rank_diff > 0)*1
# predictions done *before* the match, so algorithm operates on absolute value of rank difference
X = np.abs(rank_diff)
# for numerical well-behavedness, we need to scale and center the data
X=(X/np.std(X,axis=0))
lr = lm.LogisticRegression(C=1., solver='lbfgs')
lr.fit(X.reshape(len(X),-1),y*1)
cofs = lr.coef_[0]
# define figure and axes
fig = plt.figure(figsize=(15,5))
ax0 = fig.add_subplot(131)
ax1 = fig.add_subplot(132)
ax2 = fig.add_subplot(133)
# figure A: predicted probabilities vs. empirical probs
hist, bin_edges = np.histogram(X,bins=100)
p = [np.sum(y[np.where((X>=bin_edges[i]) & (X<bin_edges[i+1]))[0]])/np.max([hist[i],1]) for i in np.arange(len(bin_edges)-1)]
bar_pos = np.arange(len(p))
bar_width = np.diff(bin_edges)
ax0.bar(bin_edges[0:-1], p, width=bar_width, align='edge', alpha=0.5)
r = np.arange(X.min(),X.max(),.1)
s = 1/(1+np.exp(-cofs[0]*r))
ax0.plot(r,s,'r')
ax0.set_xlabel('Scaled rank difference',fontsize=12)
ax0.set_ylabel('Probability that higher ranked wins',fontsize=12)
ax0.set_title('Logistic fit to empirical probabilities',fontsize=12)
ax0.legend(['Logistic probability curve','Empirical probability hist.'])
# figure B: probabilities predicted by odds market
ProbW = 1/training.PSW
ProbL = 1/training.PSL
idx = (training.winner_rank_points>training.loser_rank_points)
odds_prob=np.where(idx,ProbW,ProbL)
t = pd.DataFrame({'X':X,'odds_prob':odds_prob})
ts = t.sort_values('X')
ax1.plot(ts['X'],ts['odds_prob'],'.b')
ax1.plot(r,s,'r')
ax1.set_xlabel('Scaled rank difference',fontsize=12)
ax1.set_ylabel('Probability higher ranked wins',fontsize=12)
ax1.set_title('Probabilities implied by odds market.',fontsize=12)
ax1.legend(['Odds market probabilities','Logistic probability curve'])
# Fig C: variance in odds probabilities as a function of rank difference
x_odds = ts['X'].values.reshape(len(ts),-1)
y_odds = ts['odds_prob'].values
hist, bin_edges = np.histogram(x_odds,bins=10)
stds = [np.std(y_odds[np.where((X>=bin_edges[i]) & (X<bin_edges[i+1]))]) for i in np.arange(len(bin_edges)-1)]
reg = lm.LinearRegression()
reg.fit(bin_edges[0:-1].reshape(10, 1), stds)
yv=reg.predict(bin_edges[0:-1].reshape(10,1))
ax2.plot(bin_edges[0:-1],stds,'*b')
ax2.plot(bin_edges[0:-1],yv,'r')
ax2.set_xlabel('Scaled rank difference',fontsize=12)
ax2.set_ylabel('Variance of market prob.',fontsize=12)
ax2.set_title('Trends in stdev of implied probabilities',fontsize=12)
ax2.legend(['Stdev of binned market-probs.','Regression line'])
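The odds-market probabilities above come from inverting decimal (Pinnacle) odds. A tiny sketch of that conversion, ignoring the bookmaker's margin:

```python
def implied_probability(decimal_odds):
    # decimal odds of 2.0 correspond to an even-money bet
    return 1.0 / decimal_odds

print(implied_probability(2.0), implied_probability(1.25))  # 0.5 0.8
```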
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: If you are using Colab, run the cell below and follow the instructions
Step2: Create a Cloud Storage bucket
Step3: Run the following cell to create your Cloud Storage bucket if it does not already exist.
Step4: Import libraries for creating model
Step5: Downloading and preprocessing data
Step6: Read the data with Pandas
Step7: Build, train, and evaluate our model with Keras
Step8: Create an input data pipeline with tf.data
Step9: Train the model
Step10: Export the model as a TF 1 SavedModel
Step11: Use TensorFlow's saved_model_cli to inspect the model's SignatureDef. We'll use this information when we deploy our model to AI Explanations in the next section.
Step12: Deploy the model to AI Explanations
Step13: Since this is a regression model (predicting a numerical value), the baseline prediction will be the same for every example we send to the model. If this were instead a classification model, each class would have a different baseline prediction.
Step14: Create the model
Step15: Create the model version
Step16: Getting predictions and explanations on deployed model
Step17: Making the explain request
Step18: Understanding the explanations response
Step19: Next let's look at the feature attributions for this particular example. Positive attribution values mean a particular feature pushed our model prediction up by that amount, and vice versa for negative attribution values. Which features seem the most important? It looks like the location features are!
|
<ASSISTANT_TASK:>
Python Code:
import os
PROJECT_ID = "michaelabel-gcp-training"
os.environ["PROJECT_ID"] = PROJECT_ID
import sys
import warnings
warnings.filterwarnings('ignore')
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
# If you are running this notebook in Colab, follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
if 'google.colab' in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
!pip install witwidget --quiet
!pip install tensorflow==1.15.2 --quiet
!gcloud config set project $PROJECT_ID
elif "DL_PATH" in os.environ:
!sudo pip install tabulate --quiet
BUCKET_NAME = "michaelabel-gcp-training-ml"
REGION = "us-central1"
os.environ['BUCKET_NAME'] = BUCKET_NAME
os.environ['REGION'] = REGION
%%bash
exists=$(gsutil ls -d | grep -w gs://${BUCKET_NAME}/)
if [ -n "$exists" ]; then
echo -e "Bucket gs://${BUCKET_NAME} already exists."
else
echo "Creating a new GCS bucket."
gsutil mb -l ${REGION} gs://${BUCKET_NAME}
echo -e "\nHere are your current buckets:"
gsutil ls
fi
%tensorflow_version 1.x
import tensorflow as tf
import tensorflow.feature_column as fc
import pandas as pd
import numpy as np
import json
import time
# Should be 1.15.2
print(tf.__version__)
%%bash
# Copy the data to your notebook instance
mkdir taxi_preproc
gsutil cp -r gs://cloud-training/bootcamps/serverlessml/taxi_preproc/*_xai.csv ./taxi_preproc
ls -l taxi_preproc
CSV_COLUMNS = ['fare_amount', 'dayofweek', 'hourofday', 'pickuplon',
'pickuplat', 'dropofflon', 'dropofflat']
DAYS = ['Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat']
DTYPES = ['float32', 'str', 'int32', 'float32', 'float32', 'float32', 'float32']
def prepare_data(file_path):
df = pd.read_csv(file_path, usecols = range(7), names = CSV_COLUMNS,
dtype = dict(zip(CSV_COLUMNS, DTYPES)), skiprows=1)
labels = df['fare_amount']
df = df.drop(columns=['fare_amount'])
df['dayofweek'] = df['dayofweek'].map(dict(zip(DAYS, range(7)))).astype('float32')
return df, labels
train_data, train_labels = prepare_data('./taxi_preproc/train_xai.csv')
valid_data, valid_labels = prepare_data('./taxi_preproc/valid_xai.csv')
# Preview the first 5 rows of training data
train_data.head()
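As a small stand-alone illustration of the `dayofweek` encoding performed in `prepare_data` above ('Sun' -> 0, ..., 'Sat' -> 6):

```python
DAYS = ['Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat']
day_to_int = dict(zip(DAYS, range(7)))
print(day_to_int['Sun'], day_to_int['Wed'], day_to_int['Sat'])  # 0 3 6
```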
# Create functions to compute engineered features in later Lambda layers
def euclidean(params):
lat1, lon1, lat2, lon2 = params
londiff = lon2 - lon1
latdiff = lat2 - lat1
return tf.sqrt(londiff*londiff + latdiff*latdiff)
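The `euclidean` helper works in raw degrees of latitude/longitude. A plain-NumPy sketch of the same planar computation for a single pair of points (`euclidean_deg` is our illustrative name):

```python
import numpy as np

def euclidean_deg(lat1, lon1, lat2, lon2):
    # same planar distance-in-degrees that the Lambda layer computes
    return np.sqrt((lon2 - lon1)**2 + (lat2 - lat1)**2)

# a typical Manhattan pickup/dropoff pair from the data above
print(euclidean_deg(40.7410, -74.0026, 40.7790, -73.8772))  # ~0.13 degrees
```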
NUMERIC_COLS = ['pickuplon', 'pickuplat', 'dropofflon', 'dropofflat', 'hourofday', 'dayofweek']
def transform(inputs):
transformed = inputs.copy()
transformed['euclidean'] = tf.keras.layers.Lambda(euclidean, name='euclidean')([
inputs['pickuplat'],
inputs['pickuplon'],
inputs['dropofflat'],
inputs['dropofflon']])
feat_cols = {colname: fc.numeric_column(colname)
for colname in NUMERIC_COLS}
feat_cols['euclidean'] = fc.numeric_column('euclidean')
print("BEFORE TRANSFORMATION")
print("INPUTS:", inputs.keys())
print("AFTER TRANSFORMATION")
print("TRANSFORMED:", transformed.keys())
print("FEATURES", feat_cols.keys())
return transformed, feat_cols
def build_model():
raw_inputs = {
colname : tf.keras.layers.Input(name=colname, shape=(), dtype='float32')
for colname in NUMERIC_COLS
}
transformed, feat_cols = transform(raw_inputs)
dense_inputs = tf.keras.layers.DenseFeatures(feat_cols.values(),
name = 'dense_input')(transformed)
h1 = tf.keras.layers.Dense(64, activation='relu', name='h1')(dense_inputs)
h2 = tf.keras.layers.Dense(32, activation='relu', name='h2')(h1)
output = tf.keras.layers.Dense(1, activation='linear', name = 'output')(h2)
model = tf.keras.models.Model(raw_inputs, output)
return model
model = build_model()
model.summary()
# Compile the model and see a summary
optimizer = tf.keras.optimizers.Adam(0.001)
model.compile(loss='mean_squared_error', optimizer=optimizer,
metrics = [tf.keras.metrics.RootMeanSquaredError()])
tf.keras.utils.plot_model(model, to_file='model_plot.png', show_shapes=True,
show_layer_names=True, rankdir="TB")
def load_dataset(features, labels, mode):
dataset = tf.data.Dataset.from_tensor_slices(({"dayofweek" : features["dayofweek"],
"hourofday" : features["hourofday"],
"pickuplat" : features["pickuplat"],
"pickuplon" : features["pickuplon"],
"dropofflat" : features["dropofflat"],
"dropofflon" : features["dropofflon"]},
labels
))
if mode == tf.estimator.ModeKeys.TRAIN:
dataset = dataset.repeat().batch(256).shuffle(256*10)
else:
dataset = dataset.batch(256)
return dataset.prefetch(1)
train_dataset = load_dataset(train_data, train_labels, tf.estimator.ModeKeys.TRAIN)
valid_dataset = load_dataset(valid_data, valid_labels, tf.estimator.ModeKeys.EVAL)
tf.keras.backend.get_session().run(tf.tables_initializer(name='init_all_tables'))
steps_per_epoch = 426433 // 256
model.fit(train_dataset, steps_per_epoch=steps_per_epoch, validation_data=valid_dataset, epochs=10)
# Send test instances to model for prediction
predict = model.predict(valid_dataset, steps = 1)
predict[:5]
## Convert our Keras model to an estimator
keras_estimator = tf.keras.estimator.model_to_estimator(keras_model=model, model_dir='export')
print(model.input)
# We need this serving input function to export our model in the next cell
serving_fn = tf.estimator.export.build_raw_serving_input_receiver_fn(
model.input
)
export_path = keras_estimator.export_saved_model(
'gs://' + BUCKET_NAME + '/explanations',
serving_input_receiver_fn=serving_fn
).decode('utf-8')
!saved_model_cli show --dir $export_path --all
# Print the names of our tensors
print('Model input tensors: ', model.input)
print('Model output tensor: ', model.output.name)
baselines_med = train_data.median().values.tolist()
baselines_mode = train_data.mode().values.tolist()
print(baselines_med)
print(baselines_mode)
explanation_metadata = {
"inputs": {
"dayofweek": {
"input_tensor_name": "dayofweek:0",
"input_baselines": [baselines_mode[0][0]] # Thursday
},
"hourofday": {
"input_tensor_name": "hourofday:0",
"input_baselines": [baselines_mode[0][1]] # 8pm
},
"dropofflon": {
"input_tensor_name": "dropofflon:0",
"input_baselines": [baselines_med[4]]
},
"dropofflat": {
"input_tensor_name": "dropofflat:0",
"input_baselines": [baselines_med[5]]
},
"pickuplon": {
"input_tensor_name": "pickuplon:0",
"input_baselines": [baselines_med[2]]
},
"pickuplat": {
"input_tensor_name": "pickuplat:0",
"input_baselines": [baselines_med[3]]
},
},
"outputs": {
"dense": {
"output_tensor_name": "output/BiasAdd:0"
}
},
"framework": "tensorflow"
}
print(explanation_metadata)
# Write the json to a local file
with open('explanation_metadata.json', 'w') as output_file:
json.dump(explanation_metadata, output_file)
!gsutil cp explanation_metadata.json $export_path
MODEL = 'taxifare_explain'
os.environ["MODEL"] = MODEL
%%bash
exists=$(gcloud ai-platform models list | grep ${MODEL})
if [ -n "$exists" ]; then
echo -e "Model ${MODEL} already exists."
else
echo "Creating a new model."
gcloud ai-platform models create ${MODEL}
fi
# Each time you create a version the name should be unique
import datetime
now = datetime.datetime.now().strftime("%Y%m%d%H%M%S")
VERSION_IG = 'v_IG_{}'.format(now)
VERSION_SHAP = 'v_SHAP_{}'.format(now)
# Create the version with gcloud
!gcloud beta ai-platform versions create $VERSION_IG \
--model $MODEL \
--origin $export_path \
--runtime-version 1.15 \
--framework TENSORFLOW \
--python-version 3.7 \
--machine-type n1-standard-4 \
--explanation-method 'integrated-gradients' \
--num-integral-steps 25
!gcloud beta ai-platform versions create $VERSION_SHAP \
--model $MODEL \
--origin $export_path \
--runtime-version 1.15 \
--framework TENSORFLOW \
--python-version 3.7 \
--machine-type n1-standard-4 \
--explanation-method 'sampled-shapley' \
--num-paths 50
# Make sure the model deployed correctly. State should be `READY` in the following log
!gcloud ai-platform versions describe $VERSION_IG --model $MODEL
!echo "---"
!gcloud ai-platform versions describe $VERSION_SHAP --model $MODEL
# Format data for prediction to our model
!rm -f taxi-data.txt
!touch taxi-data.txt
prediction_json = {"dayofweek": "3", "hourofday": "17", "pickuplon": "-74.0026", "pickuplat": "40.7410", "dropofflat": "40.7790", "dropofflon": "-73.8772"}
with open('taxi-data.txt', 'a') as outfile:
json.dump(prediction_json, outfile)
# Preview the contents of the data file
!cat taxi-data.txt
resp_obj = !gcloud beta ai-platform explain --model $MODEL --version $VERSION_IG --json-instances='taxi-data.txt'
response_IG = json.loads(resp_obj.s)
resp_obj
resp_obj = !gcloud beta ai-platform explain --model $MODEL --version $VERSION_SHAP --json-instances='taxi-data.txt'
response_SHAP = json.loads(resp_obj.s)
resp_obj
explanations_IG = response_IG['explanations'][0]['attributions_by_label'][0]
explanations_SHAP = response_SHAP['explanations'][0]['attributions_by_label'][0]
predicted = round(explanations_SHAP['example_score'], 2)
baseline = round(explanations_SHAP['baseline_score'], 2 )
print('Baseline taxi fare: ' + str(baseline) + ' dollars')
print('Predicted taxi fare: ' + str(predicted) + ' dollars')
from tabulate import tabulate
feature_names = valid_data.columns.tolist()
attributions_IG = explanations_IG['attributions']
attributions_SHAP = explanations_SHAP['attributions']
rows = []
for feat in feature_names:
rows.append([feat, prediction_json[feat], attributions_IG[feat], attributions_SHAP[feat]])
print(tabulate(rows,headers=['Feature name', 'Feature value', 'Attribution value (IG)', 'Attribution value (SHAP)']))
<END_TASK>
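One property worth checking when consuming these explanation responses (sketched below with made-up attribution values, since the real response objects are not reproduced here): for both integrated gradients and Sampled Shapley, the per-feature attributions should approximately sum to the difference between the example score and the baseline score.

```python
# Sanity check: attributions should sum to (example_score - baseline_score).
# The numbers below are hypothetical placeholders, not real model output.
baseline_score = 8.74
example_score = 12.13
attributions = {
    "dayofweek": 0.05, "hourofday": -0.12,
    "pickuplon": 0.91, "pickuplat": 0.80,
    "dropofflon": 0.95, "dropofflat": 0.80,
}
residual = example_score - baseline_score - sum(attributions.values())
print(f"unattributed residual: {residual:.4f}")
assert abs(residual) < 0.5  # loose tolerance: both methods are approximations
```

A large residual usually means the baseline is poorly chosen or the approximation settings (integral steps / number of paths) are too coarse.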
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Hyperparameters
Step2: Creating the network
Step3: Testing on the XOR function
Step4: Optimizing the network for XOR function
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
from matplotlib import cm
import matplotlib.pyplot as plt
import seaborn as sns
from progress_bar import log_progress
import FeedforwardNN
nodes = 9 # Number of nodes in our hidden layer
alpha = 5 # Learning Rate
num_epochs = 1000 # Maximum number of epochs
# Create instance of a neural network
nn = FeedforwardNN.NeuralNetwork()
# Add Layers
# Input Layer is created automatically
nn.add_layer((2, nodes)) # Layer 2
nn.add_layer((nodes, 1)) # Layer 3
# Create the data
training_data = np.asarray([[0, 0], [0, 1], [1, 0], [1, 1]]).reshape(4, 2, 1)
training_labels = np.asarray([[0], [1], [1], [0]])
# Train
# The stop accuracy tells the neural network to stop training once the error rate is under the specified threshold
# The returned iteration is the number of epochs needed to reach that accuracy
error_rate, iteration = nn.train(training_data, training_labels, num_epochs=num_epochs, learning_rate=alpha, stop_accuracy=1e-5)
# Plot the error
error_rate = error_rate.reshape((iteration,4))
sns.set_style("darkgrid")
plt.plot(np.arange(iteration), error_rate[:, 0], label='[0,0]')
plt.plot(np.arange(iteration), error_rate[:, 1], label='[0,1]')
plt.plot(np.arange(iteration), error_rate[:, 2], label='[1,0]')
plt.plot(np.arange(iteration), error_rate[:, 3], label='[1,1]')
plt.plot(np.arange(iteration), np.mean(error_rate, axis=1), label='mean', color='black')
plt.title('Error')
plt.xlabel('Number of epochs')
plt.ylabel('Error rate')
plt.legend()
plt.show()
# List of hyperparameters
nodes_list = np.arange(4, 10, 1)
alpha_list = np.arange(0.1, 15, 0.1)
num_epochs = 100
# Train for all hyperparameter combinations
num_epoch_to_train = []
for nodes in log_progress(nodes_list, user_label='nodes'):
for alpha in log_progress(alpha_list, user_label='alphas', refresh=True):
nn = FeedforwardNN.NeuralNetwork()
nn.add_layer((2, nodes)) # Layer 2
nn.add_layer((nodes, 1)) # Layer 3
error_rate, iteration = nn.train(training_data, training_labels, num_epochs=num_epochs, learning_rate=alpha, stop_accuracy=1e-6)
num_epoch_to_train.append(iteration)
# Reshape for plotting
z = np.asarray(num_epoch_to_train).reshape(len(nodes_list), -1)
np.savez('mesh.npz', alpha_list, nodes_list, z)
# Plot
fig = plt.figure()
ax = fig.add_subplot(111)
sns.set_style("darkgrid")
for n in range(len(z)):
plt.plot(alpha_list, z[n], label='{n} nodes'.format(n=n+4))
ax.set_xlabel('alpha')
ax.set_ylabel('epochs')
ax.set_ylim([0,100])
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
ax.set_title('Epochs needed to learn')
plt.show()
<END_TASK>
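For intuition about why a small hidden layer suffices here: XOR is solvable with just two hidden units. The sketch below uses hand-picked, illustrative weights (not weights learned by `FeedforwardNN`); the large magnitudes saturate the sigmoids so the network computes OR and NAND in the hidden layer and AND at the output.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hand-picked weights (illustrative, not learned):
W1 = np.array([[20.0, 20.0], [-20.0, -20.0]])  # hidden units: OR and NAND
b1 = np.array([-10.0, 30.0])
W2 = np.array([20.0, 20.0])                    # output unit: AND
b2 = -30.0

def forward(x):
    h = sigmoid(W1 @ x + b1)
    return sigmoid(W2 @ h + b2)

for x, target in [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]:
    y = float(forward(np.array(x, dtype=float)))
    print(x, round(y, 3))
    assert abs(y - target) < 0.01
```

Gradient descent has to discover an equivalent decomposition on its own, which is why the number of hidden nodes and the learning rate interact so strongly in the sweep above.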
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create the usual channel library with a couple of AWGs.
Step2: Compile a simple sequence.
Step3: Open the offsets file (in the same directory as the .aps2 files, one per AWG slice.)
Step4: Let's replace every single pulse with a fixed amplitude Utheta
Step5: We see that the data in the file has been updated.
Step6: Profiling
Step7: Getting the offsets is fast, and only needs to be done once
|
<ASSISTANT_TASK:>
Python Code:
from QGL import *
import QGL
import os.path
import pickle
QGL.drivers.APS2Pattern.SAVE_WF_OFFSETS = True
cl = ChannelLibrary(":memory:")
q1 = cl.new_qubit("q1")
aps2_1 = cl.new_APS2("BBNAPS1", address="192.168.5.101")
aps2_2 = cl.new_APS2("BBNAPS2", address="192.168.5.102")
dig_1 = cl.new_X6("X6_1", address=0)
h1 = cl.new_source("Holz1", "HolzworthHS9000", "HS9004A-009-1", power=-30)
h2 = cl.new_source("Holz2", "HolzworthHS9000", "HS9004A-009-2", power=-30)
cl.set_control(q1, aps2_1, generator=h1)
cl.set_measure(q1, aps2_2, dig_1.ch(1), generator=h2)
cl.set_master(aps2_1, aps2_1.ch("m2"))
cl["q1"].measure_chan.frequency = 0e6
cl.commit()
mf = RabiAmp(cl["q1"], np.linspace(-1, 1, 11))
plot_pulse_files(mf, time=True)
offset_f = os.path.join(os.path.dirname(mf), "Rabi-BBNAPS1.offsets")
with open(offset_f, "rb") as FID:
offsets = pickle.load(FID)
offsets
pulses = {l: Utheta(q1, amp=0.1, phase=0) for l in offsets}
wfm_f = os.path.join(os.path.dirname(mf), "Rabi-BBNAPS1.aps2")
QGL.drivers.APS2Pattern.update_wf_library(wfm_f, pulses, offsets)
plot_pulse_files(mf, time=True)
%timeit mf = RabiAmp(cl["q1"], np.linspace(-1, 1, 100))
def get_offsets():
offset_f = os.path.join(os.path.dirname(mf), "Rabi-BBNAPS1.offsets")
with open(offset_f, "rb") as FID:
offsets = pickle.load(FID)
return offsets
%timeit offsets = get_offsets()
%timeit pulses = {l: Utheta(q1, amp=0.1, phase=0) for l in offsets}
wfm_f = os.path.join(os.path.dirname(mf), "Rabi-BBNAPS1.aps2")
%timeit QGL.drivers.APS2Pattern.update_wf_library(wfm_f, pulses, offsets)
# %timeit QGL.drivers.APS2Pattern.update_wf_library("/Users/growland/workspace/AWG/Rabi/Rabi-BBNAPS1.aps2", pulses, offsets)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: And then we propose our sentence to the teacher, who simply
Step2: We soon realize how much patience the monkeys must have: after 100000 attempts (we may be lazy, but we are also very tenacious) we have guessed only 8 letters out of 28
Step3: With this simple modification we are able to learn the correct sentence in just a few thousand attempts.
Step4: Wow, now I have managed to get down to around a thousand, but have I really gained anything in terms of the number of attempts needed? After all, I now make 100 attempts at a time, so why does it not take 100 times less? It is actually unfair to count the number of times I hand my attempts to the teacher; I should instead count how many times the teacher evaluates my attempts!
Step5: Darn! Yes, it is true that it now takes about a third of the iterations, but my poor teacher has to evaluate an absurd number of extra attempts, about 130000! Before, when I proposed one sentence at a time, about 3000 were enough. And since I must wait until he has finished before I can try new sentences, I am shooting myself in the foot!
Step6: Let's see what happens to our candidates when I consider 10 of them at a time
Step7: Yes, it is as I thought: my former champion failed to find an improvement, so one of its rivals took the lead!
Step8: Ok, but which ones do I crossbreed with each other? Argh…
|
<ASSISTANT_TASK:>
Python Code:
import random
import string
def random_char():
return random.choice(string.ascii_lowercase + ' ')
def genera_frase():
return [random_char() for n in range(0,len(amleto))]
amleto = list('parmi somigli ad una donnola')
print("target= '"+''.join(amleto)+"'")
frase = genera_frase()
print(str(frase)+" = '"+''.join(frase)+"'")
def valuta( candidato ):
azzeccate = 0
for (lettera1, lettera2) in zip(candidato, amleto):
if lettera1 == lettera2:
azzeccate = azzeccate + 1
return azzeccate
risposta = valuta(frase)
print(risposta)
def altera(vecchia_frase):
posizione_da_cambiare = random.choice(range(0,len(vecchia_frase)))
lettera_da_cambiare = vecchia_frase[posizione_da_cambiare]
alternative = (string.ascii_lowercase + ' ').replace(lettera_da_cambiare,'')
nuova_frase = list(vecchia_frase)
nuova_frase[posizione_da_cambiare] = random.choice(alternative)
return nuova_frase
i=0
miglior_frase = [random_char() for n in range(0,len(amleto))]
miglior_risultato = valuta(miglior_frase)
while(miglior_risultato < len(amleto)):
frase = altera(miglior_frase)
risposta = valuta(frase)
i = i+1
if risposta > miglior_risultato:
miglior_risultato = risposta
miglior_frase = frase
print(str(i)+':\t"'+''.join(miglior_frase)+'"\t'+str(miglior_risultato))
def migliore(candidati):
ordinati = sorted(candidati,key=lambda tup: tup[1], reverse=True)
return ordinati[0]
def genera_candidati(num_candidati):
candidati = []
for i in range(0,num_candidati):
tmp_frase = genera_frase()
tmp_risposta = valuta(tmp_frase)
candidati.append((tmp_frase,tmp_risposta))
return candidati
candidati = genera_candidati(100)
i=0
miglior_frase, miglior_risultato = migliore(candidati)
while(miglior_risultato < len(amleto)):
i = i+1
for n in range(0,len(candidati)):
frase,risposta = candidati[n]
nuova_frase = altera(frase)
nuova_risposta = valuta(nuova_frase)
if nuova_risposta > risposta:
candidati[n] = (nuova_frase,nuova_risposta)
if nuova_risposta > miglior_risultato:
miglior_risultato = nuova_risposta
miglior_frase = nuova_frase
print(str(i)+':\t"'+''.join(miglior_frase)+'"\t'+str(miglior_risultato))
def valuta( candidato ):
global valutazioni
valutazioni = valutazioni + 1
azzeccate = 0
for (lettera1, lettera2) in zip(candidato, amleto):
if lettera1 == lettera2:
azzeccate = azzeccate + 1
return azzeccate
def prova_piu_frasi_insieme(num_frasi):
global valutazioni
valutazioni = 0
i=0
candidati = genera_candidati(num_frasi)
miglior_frase, miglior_risultato = migliore(candidati)
while(miglior_risultato < len(amleto)):
i = i+1
for n in range(0,len(candidati)):
frase,risposta = candidati[n]
nuova_frase = altera(frase)
nuova_risposta = valuta(nuova_frase)
if nuova_risposta > risposta:
candidati[n] = (nuova_frase,nuova_risposta)
if nuova_risposta > miglior_risultato:
miglior_risultato = nuova_risposta
miglior_frase = nuova_frase
print(str(i)+':\t"'+''.join(miglior_frase)+'"\t'+str(miglior_risultato))
print('Valutazioni totali: '+str(valutazioni))
prova_piu_frasi_insieme(100)
import pprint
pp = pprint.PrettyPrinter()
def stampa_candidati(candidati):
# candidati -> array di char, li trasformo in stringhe con ''.join(...)
# [' ', 'x', 'p', 'l', 'f', โฆ ,'d', 'z', 'h', 'f'] -> ' xplfrvvjjvnmzkovohltroudzhf'
stringhe_e_valori = list(map(lambda x : (''.join(x[0]),x[1]), candidati))
# per comoditร ordino le stringhe in base al numero di lettere corrette, decrescente
stringhe_ordinate = sorted(stringhe_e_valori,key=lambda tup: tup[1], reverse=True)
pp.pprint(stringhe_ordinate)
stampa_candidati(genera_candidati(10))
def prova_piu_frasi_insieme(num_frasi):
global valutazioni
valutazioni = 0
i=0
candidati = genera_candidati(num_frasi)
miglior_frase, miglior_risultato = migliore(candidati)
while(miglior_risultato < len(amleto)):
i = i+1
for n in range(0,len(candidati)):
frase,risposta = candidati[n]
nuova_frase = altera(frase)
nuova_risposta = valuta(nuova_frase)
if nuova_risposta > risposta:
candidati[n] = (nuova_frase,nuova_risposta)
if nuova_risposta > miglior_risultato:
miglior_risultato = nuova_risposta
miglior_frase = nuova_frase
print(str(i)+':\t"'+''.join(miglior_frase)+'"\t'+str(miglior_risultato))
stampa_candidati(candidati)
print('Valutazioni totali: '+str(valutazioni))
prova_piu_frasi_insieme(10)
def mescola(frase1, frase2):
nuova_frase = []
for i in range(0,len(frase1[0])):
if random.random() > 0.5:
nuova_frase.append(frase1[0][i])
else:
nuova_frase.append(frase2[0][i])
return (nuova_frase,valuta(nuova_frase))
test_frase1 , test_frase2 = genera_frase(), genera_frase()
print('frase1: "'+''.join(test_frase1)+'"')
print('frase2: "'+''.join(test_frase2)+'"')
print('mix: "'+''.join(mescola((test_frase1,1),(test_frase2,1))[0])+'"')
def genera_ruota(candidati):
totale = 0
ruota = []
for frase,valore in candidati:
totale = totale + valore
ruota.append((totale,frase,valore))
return ruota
def gira_ruota(wheel):
totale = wheel[-1][0]
pick = totale * random.random()
for (parziale,candidato,valore) in wheel:
if parziale >= pick:
return (candidato,valore)
return wheel[-1][1:]
candidati = genera_candidati(10)
wheel = genera_ruota(candidati)
pretty_wheel = list(map(lambda x:(x[0],''.join(x[1]),x[2]),wheel))
pp.pprint(pretty_wheel)
print("migliore='"+''.join(migliore(candidati)[0])+"'")
def prova_piu_frasi_e_mescola(num_frasi):
global valutazioni
valutazioni = 0
i=0
candidati = genera_candidati(num_frasi)
miglior_frase, miglior_risultato = migliore(candidati)
while(miglior_risultato < len(amleto)):
i = i+1
ruota = genera_ruota(candidati)
nuovi_candidati = []
for n in range(0,len(candidati)):
minitorneo = [gira_ruota(ruota),gira_ruota(ruota)]
nuova_frase = altera(mescola(minitorneo[0],minitorneo[1])[0])
nuova_risposta = valuta(nuova_frase)
minitorneo.append((nuova_frase,nuova_risposta))
vincitore,valore_vincitore = migliore(minitorneo)
nuovi_candidati.append((vincitore,valore_vincitore))
if valore_vincitore > miglior_risultato:
miglior_risultato = valore_vincitore
miglior_frase = vincitore
print(str(i)+':\t"'+''.join(miglior_frase)+'"\t'+str(miglior_risultato))
stampa_candidati(candidati)
candidati = nuovi_candidati
print('valutazioni: '+str(valutazioni))
prova_piu_frasi_e_mescola(10)
<END_TASK>
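The selection scheme implemented by `genera_ruota`/`gira_ruota` above is standard fitness-proportionate ("roulette wheel") selection. A self-contained sketch, with an empirical check that candidates are drawn roughly in proportion to their fitness:

```python
import random
from collections import Counter

random.seed(0)
population = [("aaaa", 1), ("bbbb", 3), ("cccc", 6)]  # (candidate, fitness)
total = sum(f for _, f in population)

def roulette_pick(pop):
    # Walk the cumulative fitness until it passes a uniform random threshold.
    r = random.random() * total
    acc = 0
    for cand, fit in pop:
        acc += fit
        if acc >= r:
            return cand
    return pop[-1][0]

counts = Counter(roulette_pick(population) for _ in range(10000))
print(counts)
# "cccc" holds 6/10 of the total fitness, so it should win about 60% of draws
assert counts["cccc"] > counts["bbbb"] > counts["aaaa"]
```

The same proportionality is what lets weaker candidates still reproduce occasionally, preserving diversity in the population.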
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Step 1) Load the data from bicorr1
Step2: To remind ourselves what this file contains, the columns are
Step3: I used a numpy array. That's kind of a shame. If I had used a pandas array, I could easily add new colums with energies, but oh well. Moving on.
Step 3) Preallocate bhm_e matrix
Step4: Interaction type bins
Step5: Detector pair bins
Step6: Preallocate matrix
Step7: How large when stored to disk?
Step8: This is pretty small. Good. I could even avoid converting it to and from sparse matrix at this size.
Step9: Step 4) Fill the histogram
Step10: Set up dictionaries for returning pair and type indices
Step11: Calculate energy for one event
Step12: These are pretty close together. Now convert those to energy using the time stamps. Only proceed when both time stamps are greater than 0.
Step13: Set up info for filling the histogram
Step14: Only proceed if both particles are neutrons AND both times are greater than 0. How do I implement this logic?
Step15: The tricky thing here is that np.logical_and looks at elements 0 of both input arrays as one pair, then elements 1, etc. I had originally implemented it with the assumption that it looked at each input array as a pair. Thus, the split implementation.
Step16: Functionalize it
Step17: Skipping step 5. I am not going to convert to a sparse matrix because the file size will be small anyway.
Step18: Step 6) Save the histogram and related vectors to disk
Step19: Step 7) Reload from disk
Step20: Functionalize for many folders
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import matplotlib.colors
import numpy as np
import os
import scipy.io as sio
import sys
import time
import inspect
import pandas as pd
from tqdm import *
# Plot entire array
np.set_printoptions(threshold=np.nan)
import seaborn as sns
sns.set_palette('spectral')
%load_ext autoreload
%autoreload 2
sys.path.append('../scripts/')
import bicorr as bicorr
import bicorr_plot as bicorr_plot
import bicorr_e as bicorr_e
os.listdir('../datar/1')
with open('../datar/1/bicorr1_part') as f:
print(f.read())
bicorr_data = bicorr.load_bicorr(1, root_path = '../datar')
type(bicorr_data)
help(bicorr_e.build_energy_bin_edges)
e_bin_edges, num_e_bins = bicorr_e.build_energy_bin_edges()
print(e_bin_edges)
print(num_e_bins)
# Number of bins in interaction type
num_intn_types = 1 #(0=nn, 1=np, 2=pn, 3=pp), only going to use nn
# What are the unique detector numbers? Use same technique as in bicorr.py
chList, fcList, detList, num_dets, num_det_pairs = bicorr.build_ch_lists(print_flag=True)
bhm_e = np.zeros((num_det_pairs,num_intn_types,num_e_bins,num_e_bins),dtype=np.uint32)
bhm_e.shape
bhm_e.nbytes/1e9
help(bicorr_e.alloc_bhm_e)
bhm_e = bicorr_e.alloc_bhm_e(num_det_pairs, num_intn_types, num_e_bins)
bhm_e.shape
dict_det_dist = bicorr_e.build_dict_det_dist()
dict_det_dist
dict_det_dist[45]
# Set up dictionary for returning detector pair index
det_df = bicorr.load_det_df()
dict_pair_to_index, dict_index_to_pair, dict_pair_to_angle = bicorr.build_dict_det_pair(det_df)
print(det_df)
print(dict_pair_to_index)
print(dict_index_to_pair)
print(dict_pair_to_angle)
# Type index
dict_type_to_index = {11:0, 12:1, 21:2, 22:3}
i = 3
bicorr_data[i]
det1dist = dict_det_dist[bicorr_data[i]['det1ch']]
det2dist = dict_det_dist[bicorr_data[i]['det2ch']]
print(det1dist,det2dist)
det1t = bicorr_data[i]['det1t']
det2t = bicorr_data[i]['det2t']
print(det1t, det2t)
det1e = bicorr.convert_time_to_energy(det1t, det1dist)
det2e = bicorr.convert_time_to_energy(det2t, det2dist)
print(det1e,det2e)
e_min = np.min(e_bin_edges); e_max = np.max(e_bin_edges)
e_step = e_bin_edges[1]-e_bin_edges[0]
i = 16
event = bicorr_data[i]
det1t = event['det1t']; det2t = event['det2t'];
print(event, det1t, det2t, event['det1par'], event['det2par'])
np.logical_and([det1t > 0, event['det1par'] == 1], [det2t>0, event['det2par'] == 1])
for i in tqdm(np.arange(bicorr_data.shape[0]),ascii=True,disable=False):
event = bicorr_data[i]
det1t = event['det1t']; det2t = event['det2t'];
logic = np.logical_and([det1t > 0, event['det1par'] == 1], [det2t>0, event['det2par'] == 1])
if np.logical_and(logic[0],logic[1]): # nn with both t > 0
det1dist = dict_det_dist[event['det1ch']]
det2dist = dict_det_dist[event['det2ch']]
det1e = bicorr.convert_time_to_energy(det1t, det1dist)
det2e = bicorr.convert_time_to_energy(det2t, det2dist)
# Check that they are in range of the histogram
if np.logical_and(e_min < det1e < e_max, e_min < det2e < e_max):
# Determine index of detector pairs
pair_i = dict_pair_to_index[event['det1ch']*100+event['det2ch']]
# Determine indices of energy values
e1_i = int(np.floor((det1e-e_min)/e_step))
e2_i = int(np.floor((det2e-e_min)/e_step))
# Increment bhm_e
bhm_e[pair_i,0,e1_i,e2_i] += 1
import inspect
print(inspect.getsource(bicorr_e.fill_bhm_e))
bhm_e = bicorr_e.alloc_bhm_e(num_det_pairs, num_intn_types, num_e_bins)
bhm_e = bicorr_e.fill_bhm_e(bhm_e, bicorr_data, det_df, dict_det_dist, e_bin_edges, disable_tqdm = False)
bhm_e.shape
save_filename = r'../datar/1/bhm_e'
note = 'Here is my note'
np.savez(save_filename, bhm_e = bhm_e, e_bin_edges=e_bin_edges, note = note)
bicorr_e.save_bhm_e(bhm_e, e_bin_edges, r'../datar/1/')
load_filename = r'../datar/1/bhm_e.npz'
bhm_e = np.load(load_filename)['bhm_e']
e_bin_edges = np.load(load_filename)['e_bin_edges']
note = np.load(load_filename)['note']
print(bhm_e.shape)
print(e_bin_edges.shape)
print(note)
bhm_e, e_bin_edges, note = bicorr_e.load_bhm_e(r'../datar/1/')
help(bicorr_e.save_bhm_e)
help(bicorr_e.build_bhm_e)
bhm_e, e_bin_edges = bicorr_e.build_bhm_e(1,3,root_path = '../datar/')
help(bicorr_e.load_bhm_e)
bhm_e, e_bin_edges, note = bicorr_e.load_bhm_e()
note
<END_TASK>
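The notebook leans on `bicorr.convert_time_to_energy` without showing it. Below is a sketch of what it presumably computes, a classical time-of-flight to kinetic-energy conversion; the actual library routine may differ (for instance, it could use the relativistic form). The constants are standard values.

```python
# Classical TOF -> neutron kinetic energy (a sketch, assumptions noted above).
def tof_to_energy(t_ns, dist_cm):
    m_n = 939.565  # neutron rest mass-energy in MeV
    c = 29.9792    # speed of light in cm/ns
    beta = (dist_cm / t_ns) / c
    return 0.5 * m_n * beta**2  # non-relativistic kinetic energy in MeV

e_near = tof_to_energy(30.0, 100.0)  # 1 m flight path, 30 ns
e_far = tof_to_energy(60.0, 100.0)   # same path, slower neutron
print(e_near, e_far)
assert e_near > e_far > 0                # shorter flight time => higher energy
assert abs(e_near / e_far - 4.0) < 1e-9  # E ~ 1/t^2 in this approximation
```

This is also why the histogram code checks `det1t > 0` first: a non-positive time stamp would produce a nonsensical (or infinite) energy.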
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Problem statement
Step2: Reading from a file
Step3: Mastering indexes
Step4: Row access (only with
Step5: Access by position with loc.
Step6: Access by integer position with iloc.
Step7: Mastering row indexes
Step8: This peculiarity must be kept in mind when merging tables.
Step9: They can be renamed.
Step10: The operator
Step11: Link to numpy
Step12: Mastering nan
Step13: Mastering types
Step14: A type can be changed, i.e. all values of a column converted to another type.
Step15: Creating columns
Step16: Modifying values
Step17: A frequent error or warning
Step18: A value is trying to be set on a copy of a slice from a DataFrame.
Step19: Mastering functions
Step20: Dates are treated as character strings. To perform operations on them, it is simpler to convert the column to dates.
Step21: We drop the columns for region and age, then aggregate by day.
Step22: With a logarithmic scale.
Step23: Q1
Step24: Q2
Step25: Q3
Step26: A short aside
Step27: It appears that these correlations are very different depending on whether they are computed on the latest data or on the first weeks. This would suggest that the medical data are quite different. We could look at several different days, but the simplest would probably be to generate artificial data with a SIR model and check whether this reasoning holds up on clean data.
Step28: The following code explains how the value ISO-8859-1 was found.
Step29: Q5
|
<ASSISTANT_TASK:>
Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
%matplotlib inline
from pandas import DataFrame
rows = [{'col1': 0.5, 'col2': 'schtroumph'},
{'col1': 0.6, 'col2': 'schtroumphette'}]
DataFrame(rows)
%%writefile data.csv
col1,col2
0.5,alpha
0.6,beta
import os
os.getcwd()
from pandas import read_csv
df = read_csv('data.csv')
df
df
df['col1']
df[['col1', 'col2']]
df[:1]
df.loc[0, 'col1']
df.iloc[0, 0]
df
dfi = df.set_index('col2')
dfi
dfi.loc['alpha', 'col1']
df.columns
df.columns = ["valeur", "nom"]
df
df.loc[:, 'valeur':'nom']
df.values
df[['valeur']].values
rows = [{'col1': 0.5, 'col2': 'schtroumph'},
{'col2': 'schtroumphette'}]
DataFrame(rows)
df.dtypes
import numpy
df['valeur'].astype(numpy.float32)
import numpy
df['valeur'].astype(numpy.int32)
df['sup055'] = df['valeur'] >= 0.55
df
df['sup055'] = (df['valeur'] >= 0.55).astype(numpy.int64)
df
df['sup055+'] = df['valeur'] + df['sup055']
df
df.loc[df['nom'] == 'alpha', 'sup055+'] += 1000
df
rows = [{'col1': 0.5, 'col2': 'schtroumph'},
{'col1': 1.5, 'col2': 'schtroumphette'}]
df = DataFrame(rows)
df
df1 = df[df['col1'] > 1.]
df1
df1["col3"] = df1["col1"] + 1.
df1
df2 = df1.copy()
df2["col3"] = df2["col1"] + 1.
# https://www.data.gouv.fr/en/datasets/r/63352e38-d353-4b54-bfd1-f1b3ee1cabd7
from pandas import read_csv
url = "https://www.data.gouv.fr/en/datasets/r/08c18e08-6780-452d-9b8c-ae244ad529b3"
covid = read_csv(url, sep=";")
covid.tail()
covid.dtypes
from pandas import to_datetime
covid['jour'] = to_datetime(covid['jour'])
covid.tail()
covid.dtypes
agg_par_jour = covid.drop(['reg', 'cl_age90'], axis=1).groupby('jour').sum()
agg_par_jour.tail()
agg_par_jour.plot(title="Evolution des hospitalisations par jour",
figsize=(14, 4));
agg_par_jour.plot(title="Evolution des hospitalisations par jour",
figsize=(14, 4), logy=True);
set(covid['cl_age90'])
covid49 = covid[covid.cl_age90 == 49]
agg_par_jour49 = covid49.drop(['reg', 'cl_age90'], axis=1).groupby('jour').sum()
agg_par_jour49.tail()
agg_par_jour49.plot(title="Evolution des hospitalisations par jour\nage=49",
figsize=(14, 4), logy=True);
covid.tail()
diff = covid.drop(['reg', 'cl_age90'], axis=1).groupby(
['jour']).sum().diff()
diff.tail(n=2)
diff.plot(title="Séries différenciées", figsize=(14, 4));
diff.rolling(7)
roll = diff.rolling(7).mean()
roll.tail(n=2)
roll.plot(title="Séries différenciées lissées", figsize=(14, 4));
data = agg_par_jour49.diff().rolling(7).mean()
data.tail(n=2)
data_last = data.tail(n=90)
cor = []
for i in range(0, 35):
ts = DataFrame(dict(rea=data_last.rea, dc=data_last.dc,
dclag=data_last["dc"].shift(i),
realag=data_last["rea"].shift(i)))
ts_cor = ts.corr()
cor.append(dict(delay=i, corr_dc=ts_cor.iloc[1, 3],
corr_rea=ts_cor.iloc[0, 3]))
DataFrame(cor).set_index('delay').plot(title="Corrélation entre décès et réanimation");
hosp = read_csv("https://www.data.gouv.fr/en/datasets/r/63352e38-d353-4b54-bfd1-f1b3ee1cabd7",
sep=";")
hosp.tail()
indic = read_csv("https://www.data.gouv.fr/fr/datasets/r/4acad602-d8b1-4516-bc71-7d5574d5f33e",
encoding="ISO-8859-1")
indic.tail()
# import chardet
# with open("indicateurs-covid19-dep.csv", "rb") as f:
# content = f.read()
# chardet.detect(content) # {'encoding': 'ISO-8859-1', 'confidence': 0.73, 'language': ''}
dep_pos = read_csv("https://raw.githubusercontent.com/sdpython/ensae_teaching_cs/"
"master/src/ensae_teaching_cs/data/data_shp/departement_french_2018.csv")
dep_pos.tail()
last_extract_date = max(set(indic.extract_date))
last_extract_date
indic_last = indic[indic.extract_date == last_extract_date]
merge = indic_last.merge(dep_pos, left_on='departement', right_on='code_insee')
final = merge[['code_insee', 'nom', 'DEPLONG', 'DEPLAT', 'taux_occupation_sae', 'R']]
metro = final[final.DEPLAT > 40]
metro
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 2, figsize=(14, 4))
bigR1 = metro.R >= 1
bigR2 = metro.R >= 1.4
ax[0].scatter(metro.loc[bigR2, 'DEPLONG'], metro.loc[bigR2, 'DEPLAT'], c='red', label='R>=1.4');
ax[0].scatter(metro.loc[bigR1 & ~bigR2, 'DEPLONG'], metro.loc[bigR1 & ~bigR2, 'DEPLAT'], c='orange', label='1.3>=R>=1');
ax[0].scatter(metro.loc[~bigR1, 'DEPLONG'], metro.loc[~bigR1, 'DEPLAT'], c='blue', label='R<1');
ax[0].legend()
bigR1 = metro.taux_occupation_sae >= 25
bigR2 = metro.taux_occupation_sae >= 45
ax[1].scatter(metro.loc[bigR2, 'DEPLONG'], metro.loc[bigR2, 'DEPLAT'], c='red', label='SAE>=45');
ax[1].scatter(metro.loc[bigR1 & ~bigR2, 'DEPLONG'], metro.loc[bigR1 & ~bigR2, 'DEPLAT'], c='orange', label='45>SAE>=25');
ax[1].scatter(metro.loc[~bigR1, 'DEPLONG'], metro.loc[~bigR1, 'DEPLAT'], c='blue', label='SAE<25');
ax[1].legend();
metro[metro.nom == "Ardennes"]
<END_TASK>
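The lag search in the notebook (shifting one series against another and reading off where the correlation peaks) is easy to validate on synthetic data where the true delay is known. The sketch below is a pure-numpy stand-in for the pandas `shift`/`corr` combination used above:

```python
import numpy as np

# Series b is series a delayed by 5 steps (plus tiny noise), so the
# correlation between a shifted by k and b should peak at k = 5.
rng = np.random.RandomState(0)
a = rng.randn(300)
b = np.concatenate([np.zeros(5), a[:-5]]) + rng.randn(300) * 0.01

def lag_corr(k):
    # Correlate a[i-k] with b[i] over a fixed window of valid indices.
    return np.corrcoef(a[10 - k:290 - k], b[10:290])[0, 1]

corrs = {k: lag_corr(k) for k in range(10)}
best = max(corrs, key=corrs.get)
print(best, round(corrs[best], 4))
assert best == 5
```

On real epidemiological series the peak is much flatter, because the series are strongly autocorrelated; that is one reason the notebook's estimated delay differs between the early weeks and the latest data.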
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Connect to Spark
|
<ASSISTANT_TASK:>
Python Code:
#Add all dependencies to PYTHON_PATH
import sys
sys.path.append("/usr/lib/spark/python")
sys.path.append("/usr/lib/spark/python/lib/py4j-0.10.4-src.zip")
sys.path.append("/usr/lib/python3/dist-packages")
#Define environment variables
import os
os.environ["HADOOP_CONF_DIR"] = "/etc/hadoop/conf"
os.environ["PYSPARK_PYTHON"] = "python3"
os.environ["PYSPARK_DRIVER_PYTHON"] = "ipython"
#Load PySpark to connect to a Spark cluster
from pyspark import SparkConf, SparkContext
#from osgeo import gdal
#To read GeoTiffs as a ByteArray
from io import BytesIO
from rasterio.io import MemoryFile
appName = "co_clustering"
masterURL="spark://pheno0.phenovari-utwente.surf-hosted.nl:7077"
#A context needs to be created if it does not already exist
try:
sc.stop()
except NameError:
print("A new Spark Context will be created.")
sc = SparkContext(conf = SparkConf().setAppName(appName).setMaster(masterURL))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Python versions
Step2: Basic data types
Step3: Note that unlike certain other languages like <tt>C</tt>, <tt>C++</tt>, <em>Java</em>, or <tt>C#</tt>, Python does not have unary increment (x++) or decrement (x--) operators.
Step4: Now let's look at the operations
Step5: Strings
Step6: String objects have a bunch of useful methods; for example
Step7: You can find a list of all string methods in the documentation.
Step8: As usual, you can find all the gory details about lists in the documentation.
Step9: Loops
Step10: If you want access to the index of each element within the body of a loop, use the built-in enumerate function
Step11: List comprehensions
Step12: You can make this code simpler using a list comprehension
Step13: List comprehensions can also contain conditions
Step14: Dictionaries
Step15: You can find all you need to know about dictionaries in the documentation.
Step16: If you want access to keys and their corresponding values, use the <tt>items</tt> method
Step17: Dictionary comprehensions
Step18: Sets
Step19: Loops
Step20: Set comprehensions
Step21: Tuples
Step22: Functions
Step23: We will often define functions to take optional keyword arguments, like this
Step24: Classes
Step25: Numpy
Step26: Arrays
Step27: Numpy also provides many functions to create arrays
Step28: Array indexing
Step29: A slice of an array is a view into the same data, so modifying it will modify the original array.
Step30: You can also mix integer indexing with slice indexing. However, doing so will yield an array of lower rank than the original array.
Step31: Two ways of accessing the data in the middle row of the array.
Step32: Integer array indexing
Step33: The following expression will return an array containing the elements a[0,1] and a[2,3].
Step34: One useful trick with integer array indexing is selecting or mutating one element from each row of a matrix
Step35: same as a[0,0], a[1,2], a[2,0], a[3,1],
Step36: Boolean array indexing
Step37: For brevity we have left out a lot of details about numpy array indexing; if you want to know more you should read the documentation.
Step38: You can read all about numpy datatypes in the documentation.
Step39: Note that unlike MATLAB, * is elementwise multiplication, not matrix multiplication. We instead use the dot function to compute inner products of vectors, to multiply a vector by a matrix, and to multiply matrices. dot is available both as a function in the numpy module and as an instance method of array objects
Step40: Numpy provides many useful functions for performing computations on arrays; one of the most useful is sum
Step41: You can find the full list of mathematical functions provided by numpy in the documentation.
Step42: Broadcasting
Step43: Create an empty matrix with the same shape as x. The elements of this matrix are initialized arbitrarily.
Step44: This works; however when the matrix x is very large, computing an explicit loop in Python could be slow. Note that adding the vector v to each row of the matrix x is equivalent to forming a matrix vv by stacking multiple copies of v vertically, then performing elementwise summation of x and vv. We could implement this approach like this
Step45: Numpy broadcasting allows us to perform this computation without actually creating multiple copies of v. Consider this version, using broadcasting
Step46: The line y = x + v works even though x has shape (4, 3) and v has shape (3,) due to broadcasting; this line works as if v actually had shape (4, 3), where each row was a copy of v, and the sum was performed elementwise.
Step47: Add a vector to each column of a matrix
Step48: Another solution is to reshape w to be a row vector of shape (2, 1);
Step49: Multiply a matrix by a constant
Step50: Broadcasting typically makes your code more concise and faster, so you should strive to use it where possible.
Step51: By running this special iPython command, we will be displaying plots inline
Step52: Plotting
Step53: With just a little bit of extra work we can easily plot multiple lines at once, and add a title, legend, and axis labels
Step54: Subplots
|
<ASSISTANT_TASK:>
Python Code:
def quicksort(arr):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quicksort(left) + middle + quicksort(right)
quicksort([3,6,8,10,1,2,1])
!python --version
x = 3
x, type(x)
print(x + 1) # Addition;
print(x - 1) # Subtraction;
print(x * 2) # Multiplication;
print(x ** 2) # Exponentiation;
x += 1
print(x) # Prints "4"
x *= 2
print(x) # Prints "8"
y = 2.5
print(type(y))
print(y, y + 1, y * 2, y ** 2)
t, f = True, False
type(t)
print(type(t))
print(t and f) # Logical AND
print(t or f) # Logical OR
print(not t) # Logical NOT
print(t != f) # Logical XOR
hello = 'hello' # String literals can use single quotes
world = "world" # or double quotes; it does not matter.
print(hello, len(hello))
hw = hello + ' ' + world # String concatenation
hw
hw12 = '%s, %s! %d' % (hello, world, 12) # sprintf style string formatting
hw12
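As an optional aside (not in the original tutorial), Python 3.6+ also supports f-strings, which are often more readable than %-style formatting:

```python
hello = 'hello'
world = 'world'
hw12_f = f'{hello}, {world}! {12}'  # same result as the %-formatted version
print(hw12_f)  # hello, world! 12
```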
s = "hello"
print(s.capitalize()) # Capitalize a string
print(s.upper()) # Convert a string to uppercase
print(s.rjust(7)) # Right-justify a string, padding with spaces
print(s.center(7)) # Center a string, padding with spaces
print(s.replace('l', '\N{greek small letter lamda}')) # Replace all instances of one substring with another
print(' world '.strip()) # Strip leading and trailing whitespace
xs = [3, 1, 2] # Create a list
print(xs, xs[2]) # Indexing starts at 0
print(xs[-1]) # Negative indices count from the end of the list; prints "2"
xs[2] = 'foo' # Lists can contain elements of different types
xs
xs.append('bar') # Add a new element to the end of the list
xs
x = xs.pop() # Remove and return the last element of the list
x, xs
nums = list(range(5))
print(nums)
print(nums[2:4]) # Get a slice from index 2 to 4 (exclusive)
print(nums[2:]) # Get a slice from index 2 to the end
print(nums[:2]) # Get a slice from the start to index 2 (exclusive)
print(nums[:]) # Get a slice of the whole list, creates a shallow copy
print(nums[:-1]) # Slice indices can be negative
nums[2:4] = [8, 9, 10] # Assign a new sublist to a slice
nums
animals = ['cat', 'dog', 'monkey']
for animal in animals:
print(animal)
animals = ['cat', 'dog', 'monkey']
for idx, animal in enumerate(animals):
print('#%d: %s' % (idx + 1, animal))
nums = [0, 1, 2, 3, 4]
squares = []
for x in nums:
squares.append(x ** 2)
squares
nums = [0, 1, 2, 3, 4]
squares = [x ** 2 for x in nums]
squares
nums = [0, 1, 2, 3, 4]
even_squares = [x ** 2 for x in nums if x % 2 == 0]
even_squares
d = {'cat': 'cute', 'dog': 'furry'} # Create a new dictionary with some data
print(d['cat']) # Get an entry from a dictionary
print('cat' in d) # Check if a dictionary has a given key
d['fish'] = 'wet' # Set an entry in a dictionary
d['fish']
d['monkey'] # KeyError: 'monkey' not a key of d
print(d.get('monkey', 'N/A')) # Get an element with a default
print(d.get('fish', 'N/A')) # Get an element with a default
del d['fish'] # Remove an element from a dictionary
d.get('fish', 'N/A') # "fish" is no longer a key
d = {'person': 2, 'cat': 4, 'spider': 8}
for animal in d:
legs = d[animal]
print('A %s has %d legs.' % (animal.ljust(6), legs))
d = {'person': 2, 'cat': 4, 'spider': 8}
for animal, legs in d.items():
print('A %s has %d legs.' % (animal.ljust(6), legs))
nums = [0, 1, 2, 3, 4, 5, 6]
even_num_to_square = {x: x ** 2 for x in nums if x % 2 == 0}
even_num_to_square
animals = {'cat', 'dog'}
print('cat' in animals) # Check if an element is in a set
print('fish' in animals)
animals.add('fish') # Add an element to a set
print('fish' in animals)
print(len(animals)) # Number of elements in a set
animals.add('cat') # Adding an element that is already in the set does nothing
print(len(animals))
animals.remove('cat') # Remove an element from a set
print(len(animals))
animals
animals = {'cat', 'dog', 'fish'}
for idx, animal in enumerate(animals):
print('#%d: %s' % (idx + 1, animal))
from math import sqrt
{ int(sqrt(x)) for x in range(30) }
d = { (x, x + 1): x for x in range(10) } # Create a dictionary with tuple keys
t = (5, 6) # Create a tuple
print(type(t))
print(d[t])
print(d[(1, 2)])
d
t[0] = 1
def sign(x):
if x > 0:
return 'positive'
elif x < 0:
return 'negative'
else:
return 'zero'
for x in [-1, 0, 1]:
print(sign(x))
def hello(name, loud=False):
if loud:
print('HELLO, %s' % name.upper())
else:
print('Hello, %s!' % name)
hello('Bob')
hello('Fred', loud=True)
class Greeter:
# Constructor
def __init__(self, name):
self.name = name # Create an instance variable
# Instance method
def greet(self, loud=False):
if loud:
print('HELLO, %s!' % self.name.upper())
else:
print('Hello, %s' % self.name)
g = Greeter('Fred') # Construct an instance of the Greeter class
g.greet() # Call an instance method
g.greet(loud=True)
import numpy as np
a = np.array([1, 2, 3]) # Create a rank 1 array
print(type(a), a.shape, a[0], a[1], a[2])
a[0] = 5 # Change an element of the array
a
b = np.array([[1,2,3],[4,5,6]]) # Create a rank 2 array
b
print(b.shape)
print(b[0, 0], b[0, 1], b[1, 0])
np.zeros((2,2)) # Create an array of all zeros
np.ones((1,2)) # Create an array of all ones
np.full((2,2), 7) # Create a constant array
np.eye(2) # Create a 2x2 identity matrix
np.random.random((2,2)) # Create an array filled with random values
import numpy as np
# Create the following rank 2 array with shape (3, 4)
# [[ 1 2 3 4]
# [ 5 6 7 8]
# [ 9 10 11 12]]
a = np.array([[1,2,3,4], [5,6,7,8], [9,10,11,12]])
print(a)
# Use slicing to pull out the subarray consisting of the first 2 rows
# and columns 1 and 2; b is the following array of shape (2, 2):
# [[2 3]
# [6 7]]
b = a[:2, 1:3]
print(b)
print(a[0, 1])
b[0, 0] = 77 # b[0, 0] is the same piece of data as a[0, 1]
print(a[0, 1])
# Create the following rank 2 array with shape (3, 4)
a = np.array([[1,2,3,4], [5,6,7,8], [9,10,11,12]])
a
row_r1 = a[1, :] # Rank 1 view of the second row of a
row_r2 = a[1:2, :] # Rank 2 view of the second row of a
row_r3 = a[[1], :] # Rank 2 view of the second row of a
print(row_r1, row_r1.shape)
print(row_r2, row_r2.shape)
print(row_r3, row_r3.shape)
# We can make the same distinction when accessing columns of an array:
col_r1 = a[:, 1]
col_r2 = a[:, 1:2]
print(col_r1, col_r1.shape)
print()
print(col_r2, col_r2.shape)
a
np.array([a[0, 0], a[1, 1], a[2, 0]])
# When using integer array indexing, you can reuse the same
# element from the source array:
a[[0, 2], [1, 3]]
a[0,1], a[2,3]
# Equivalent to the previous integer array indexing example
np.array([a[0, 1], a[2, 3]])
# Create a new array from which we will select elements
a = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])
a
# Create an array of indices
b = np.array([0, 2, 0, 1])
b
# Select one element from each row of a using the indices in b
a[[0, 1, 2, 3], b]
a[0,0], a[1,2], a[2,0], a[3,1]
# Mutate one element from each row of a using the indices in b
a[[0, 1, 2, 3], b] += 100
a
import numpy as np
a = np.array([[1,2], [3, 4], [5, 6]])
print('a = \n', a, sep='')
bool_idx = (a > 2) # Find the elements of a that are bigger than 2;
# this returns a numpy array of Booleans of the same
# shape as a, where each slot of bool_idx tells
# whether that element of a is > 2.
bool_idx
# We use boolean array indexing to construct a rank 1 array
# consisting of the elements of a corresponding to the True values
# of bool_idx
a[bool_idx]
# We can do all of the above in a single concise statement:
a[a > 2]
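Boolean masks can also be used for in-place assignment; a small sketch (using a copy so the original array is untouched):

```python
import numpy as np

a = np.array([[1, 2], [3, 4], [5, 6]])
b = a.copy()
b[b > 2] = 0  # zero out every element greater than 2
print(b)
```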
x = np.array([1, 2]) # Let numpy choose the datatype
y = np.array([1.0, 2.0]) # Let numpy choose the datatype
z = np.array([1, 2], dtype=np.int64) # Force a particular datatype
x.dtype, y.dtype, z.dtype
x = np.array([[1,2],[3,4]], dtype=np.float64)
y = np.array([[5,6],[7,8]], dtype=np.float64)
# Elementwise sum
x + y
np.add(x, y)
# Elementwise difference
x - y
np.subtract(x, y)
# Elementwise product
x * y
np.multiply(x, y)
# Elementwise division
x / y
np.divide(x, y)
# Elementwise square root
np.sqrt(x)
x = np.array([[1,2],[3,4]])
x
y = np.array([[5,6],[7,8]])
y
v = np.array([9,10])
v
w = np.array([11, 12])
w
# Inner product of vectors
print(v.dot(w))
print(np.dot(v, w))
# Matrix / vector product
print(x.dot(v))
print(np.dot(x, v))
# Matrix / matrix product; both produce the rank 2 array
print(x.dot(y))
print(np.dot(x, y))
x
np.sum(x) # Compute sum of all elements
np.sum(x, axis=0) # Compute sum of each column
np.sum(x, axis=1) # Compute sum of each row
x
x.T
v = np.array([[1,2,3]])
print(v)
print(v.T)
# We will add the vector v to each row of the matrix x,
# storing the result in the matrix y
x = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])
x
v = np.array([1, 0, 1])
v
y = np.empty_like(x)
y
# Add the vector v to each row of the matrix x with an explicit loop
for i in range(4):
y[i, :] = x[i, :] + v
y
vv = np.tile(v, (4, 1)) # Stack 4 copies of v on top of each other
vv
y = x + vv # Add x and vv elementwise
y
# We will add the vector v to each row of the matrix x,
# storing the result in the matrix y
y = x + v # Add v to each row of x using broadcasting
y
# Compute outer product of vectors
v = np.array([1,2,3]) # v has shape (3,)
w = np.array([4,5]) # w has shape (2,)
# To compute an outer product, we first reshape v to be a column
# vector of shape (3, 1); we can then broadcast it against w to yield
# an output of shape (3, 2), which is the outer product of v and w:
np.reshape(v, (3, 1)) * w
# Add a vector to each row of a matrix
x = np.array([[1,2,3], [4,5,6]])
# x has shape (2, 3) and v has shape (3,) so they broadcast to (2, 3),
# giving the following matrix:
x + v
(x.T + w).T
x + np.reshape(w, (2, 1))
x * 2
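The general rule behind all of the examples above: shapes are compared from the trailing dimensions, and two dimensions are compatible when they are equal or one of them is 1. A quick sketch:

```python
import numpy as np

x = np.ones((4, 3))
v = np.ones(3)        # (3,) is treated as (1, 3), then stretched to (4, 3)
w = np.ones((4, 1))   # (4, 1) is stretched along the last axis to (4, 3)

print((x + v).shape)  # (4, 3)
print((x + w).shape)  # (4, 3)
```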
import matplotlib.pyplot as plt
%matplotlib inline
# Compute the x and y coordinates for points on a sine curve
x = np.arange(0, 3 * np.pi, 0.1)
y = np.sin(x)
# Plot the points using matplotlib
plt.plot(x, y)
y_sin = np.sin(x)
y_cos = np.cos(x)
# Plot the points using matplotlib
plt.plot(x, y_sin)
plt.plot(x, y_cos)
plt.xlabel('x axis label')
plt.ylabel('y axis label')
plt.title('Sine and Cosine')
plt.legend(['Sine', 'Cosine'])
# Compute the x and y coordinates for points on sine and cosine curves
x = np.arange(0, 3 * np.pi, 0.1)
y_sin = np.sin(x)
y_cos = np.cos(x)
# Set up a subplot grid that has height 2 and width 1,
# and set the first such subplot as active.
plt.subplot(2, 1, 1)
# Make the first plot
plt.plot(x, y_sin)
plt.title('Sine')
# Set the second subplot as active, and make the second plot.
plt.subplot(2, 1, 2)
plt.plot(x, y_cos)
plt.title('Cosine')
# Show the figure.
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Graphical excellence and integrity
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import Image
# Add your filename and uncomment the following line:
Image(filename='graph1.png')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now let's take a look at the sprites of an Evolutionary Chain.
Step2: In this step, we will build and test our pre-processing pipeline. Its goal is to identify the main object in the image (a simple task in our sprite dataset), find its bounding box, and resize the image to an adequate size (we will use 64 x 64 pixel images in this article)
Step3: Next, let's call our centering pipeline on all sprites of a generation. To ensure the process is going smoothly, one in every thirty sprites will be plotted for visual inspection.
Step4: At last, let's process all the images and save them to disk for further use.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from utility.plot import plot_all
#Plotting Bulbassaur ID = 1
plot_all(1)
#Plotting Charmander ID = 4
plot_all(4)
#Plotting Squirtle ID = 7
plot_all(7)
%matplotlib inline
from utility.plot import plot_chain
plot_chain("gen05_black-white",[1,2,3])
%matplotlib inline
from utility.preprocessing import center_and_resize
import matplotlib.image as mpimg
import os
main_folder = "./sprites/pokemon/main-sprites/"
game_folder = "gen05_black-white"
pkm_list = [1, 4, 7, 3]
for pkm in pkm_list:
img_file = "{id}.png".format(id=pkm)
img_path = os.path.join(main_folder,game_folder,img_file)
img = mpimg.imread(img_path)
center_and_resize(img,plot=True,id=img_path)
%matplotlib inline
from utility.preprocessing import center_and_resize
import matplotlib.image as mpimg
from math import ceil
import matplotlib.pyplot as plt
main_folder = "./sprites/pokemon/main-sprites/"
game_folder = "gen05_black-white"
pkm_list = range(1,650)
image_list = []
for pkm in pkm_list:
try:
image_file = "{id}.png".format(id=pkm)
image_path = os.path.join(main_folder,game_folder,image_file)
image = mpimg.imread(image_path)
image_resize = center_and_resize(image,plot=False,id=image_path)
plot = (pkm % 30 == 0)
if plot:
image_list.append((image,image_resize))
except ValueError as e:
print("Out of Bounds Error:", e)
n_cols = 6
n_rows = ceil(2 * len(image_list) / n_cols)
plt.figure(figsize=(16, 256))
for idx, image_pair in enumerate(image_list):
    image, image_resize = image_pair
    plt.subplot(n_rows, n_cols, 2 * idx + 1)
    plt.imshow(image)
    plt.subplot(n_rows, n_cols, 2 * idx + 2)
    plt.imshow(image_resize)
import warnings
import os
import matplotlib.image as img
from skimage import io
from utility.preprocessing import center_and_resize
main_folder = "./sprites/pokemon/main-sprites/"
dest_folder = "./sprites/pokemon/centered-sprites/"
if not os.path.exists(dest_folder):
os.makedirs(dest_folder)
gen_folders = {
"gen01_red-blue" : 151,
"gen01_red-green" : 151,
"gen01_yellow" : 151,
"gen02_crystal" : 251,
"gen02_gold" : 251,
"gen02_silver" : 251,
"gen03_emerald" : 386,
"gen03_firered-leafgreen" : 151,
"gen03_ruby-sapphire" : 386,
"gen04_diamond-pearl" : 493,
"gen04_heartgold-soulsilver" : 386,
"gen04_platinum" : 386,
"gen05_black-white" : 649
}
for gen, max_pkm in gen_folders.items():
print("Starting",gen)
main_gen_folder = os.path.join(main_folder,gen)
dest_gen_folder = os.path.join(dest_folder,gen)
if not os.path.exists(dest_gen_folder):
os.makedirs(dest_gen_folder)
for pkm_id in range(1,max_pkm+1):
image_file = "{id}.png".format(id=pkm_id)
image_path = os.path.join(main_gen_folder,image_file)
try:
image = mpimg.imread(image_path)
new_image = center_and_resize(image,plot=False,id=image_path)
new_image_path = os.path.join(dest_gen_folder,image_file)
with warnings.catch_warnings():
warnings.simplefilter("ignore")
io.imsave(new_image_path,new_image)
except FileNotFoundError:
print(" - {file} not found".format(file=image_path))
print("Finished")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the CIFAR-10 dataset
Step2: Necessary Hyperparameters
Step18: Augmentation Utilities
Step20: Data Loading
Step21: View examples of dataset.
Step29: Pseudocode of loss and model
Step31: Barlow Twins' Model Architecture
Step33: Projector network
Step36: Training Loop Model
Step37: Model Training
Step38: Evaluation
|
<ASSISTANT_TASK:>
Python Code:
!pip install tensorflow-addons
import os
# Setting TF_GPU_THREAD_MODE to "gpu_private" gives a small speedup:
# roughly a 30 second decrease on the first epoch and a 1-2 second
# decrease per epoch afterwards, saving approx. 5 min of total training time.
# It allocates dedicated private threads for the GPU so more operations
# can be dispatched concurrently.
os.environ["TF_GPU_THREAD_MODE"] = "gpu_private"
import tensorflow as tf # framework
from tensorflow import keras # for tf.keras
import tensorflow_addons as tfa # LAMB optimizer and gaussian_blur_2d function
import numpy as np # np.random.random
import matplotlib.pyplot as plt # graphs
import datetime # tensorboard logs naming
# XLA optimization for faster performance(up to 10-15 minutes total time saved)
tf.config.optimizer.set_jit(True)
[
(train_features, train_labels),
(test_features, test_labels),
] = keras.datasets.cifar10.load_data()
train_features = train_features / 255.0
test_features = test_features / 255.0
# Batch size of dataset
BATCH_SIZE = 512
# Width and height of image
IMAGE_SIZE = 32
class Augmentation(keras.layers.Layer):
    """Base augmentation class.

    Base augmentation class. Contains the random_execute method.

    Methods:
        random_execute: method that returns true or false based
        on a probability. Used to determine whether an augmentation
        will be run.
    """
def __init__(self):
super(Augmentation, self).__init__()
@tf.function
def random_execute(self, prob: float) -> bool:
        """random_execute function.

        Arguments:
            prob: a float value from 0-1 that determines the
            probability.

        Returns:
            returns true or false based on the probability.
        """
return tf.random.uniform([], minval=0, maxval=1) < prob
class RandomToGrayscale(Augmentation):
    """RandomToGrayscale class.

    RandomToGrayscale class. Randomly makes an image
    grayscaled based on the random_execute method. There
    is a 20% chance that an image will be grayscaled.

    Methods:
        call: method that grayscales an image 20% of
        the time.
    """
@tf.function
def call(self, x: tf.Tensor) -> tf.Tensor:
        """call function.

        Arguments:
            x: a tf.Tensor representing the image.

        Returns:
            returns a grayscaled version of the image 20% of the time
            and the original image 80% of the time.
        """
if self.random_execute(0.2):
x = tf.image.rgb_to_grayscale(x)
x = tf.tile(x, [1, 1, 3])
return x
class RandomColorJitter(Augmentation):
    """RandomColorJitter class.

    RandomColorJitter class. Randomly adds color jitter to an image.
    Color jitter means to add random brightness, contrast,
    saturation, and hue to an image. There is an 80% chance that an
    image will be randomly color-jittered.

    Methods:
        call: method that color-jitters an image 80% of
        the time.
    """
@tf.function
def call(self, x: tf.Tensor) -> tf.Tensor:
        """call function.

        Adds color jitter to the image, including:
        Brightness change by a max-delta of 0.8
        Contrast change by a max-delta of 0.8
        Saturation change by a max-delta of 0.8
        Hue change by a max-delta of 0.2

        Originally, the same deltas of the original paper
        were used, but a performance boost of almost 2% was found
        when doubling them.

        Arguments:
            x: a tf.Tensor representing the image.

        Returns:
            returns a color-jittered version of the image 80% of the time
            and the original image 20% of the time.
        """
if self.random_execute(0.8):
x = tf.image.random_brightness(x, 0.8)
x = tf.image.random_contrast(x, 0.4, 1.6)
x = tf.image.random_saturation(x, 0.4, 1.6)
x = tf.image.random_hue(x, 0.2)
return x
class RandomFlip(Augmentation):
    """RandomFlip class.

    RandomFlip class. Randomly flips the image horizontally. There is a 50%
    chance that an image will be randomly flipped.

    Methods:
        call: method that flips an image 50% of
        the time.
    """
@tf.function
def call(self, x: tf.Tensor) -> tf.Tensor:
        """call function.

        Randomly flips the image.

        Arguments:
            x: a tf.Tensor representing the image.

        Returns:
            returns a flipped version of the image 50% of the time
            and the original image 50% of the time.
        """
if self.random_execute(0.5):
x = tf.image.random_flip_left_right(x)
return x
class RandomResizedCrop(Augmentation):
    """RandomResizedCrop class.

    RandomResizedCrop class. Randomly crop an image to a random size,
    then resize the image back to the original size.

    Attributes:
        image_size: The dimension of the image.

    Methods:
        __call__: method that does random resize crop to the image.
    """
def __init__(self, image_size):
super(Augmentation, self).__init__()
self.image_size = image_size
def call(self, x: tf.Tensor) -> tf.Tensor:
        """call function.

        Does random resize crop by randomly cropping an image to a random
        size 75% - 100% the size of the image. Then resizes it.

        Arguments:
            x: a tf.Tensor representing the image.

        Returns:
            returns a randomly cropped image.
        """
rand_size = tf.random.uniform(
shape=[],
minval=int(0.75 * self.image_size),
maxval=1 * self.image_size,
dtype=tf.int32,
)
crop = tf.image.random_crop(x, (rand_size, rand_size, 3))
crop_resize = tf.image.resize(crop, (self.image_size, self.image_size))
return crop_resize
class RandomSolarize(Augmentation):
    """RandomSolarize class.

    RandomSolarize class. Randomly solarizes an image.
    Solarization is when pixels accidentally flip to an inverted state.

    Methods:
        call: method that does random solarization 20% of the time.
    """
@tf.function
def call(self, x: tf.Tensor) -> tf.Tensor:
call function.
Randomly solarizes the image.
Arguments:
x: a tf.Tensor representing the image.
Returns:
returns a solarized version of the image 20% of the time
and the original image 80% of the time.
if self.random_execute(0.2):
# flips abnormally low pixels to abnormally high pixels
x = tf.where(x < 10, x, 255 - x)
return x
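To make the pixel-flip concrete, here is a minimal NumPy sketch of the same `tf.where` solarization rule; note the 0-255 integer pixel values used here for illustration are an assumption (elsewhere in this pipeline the images are 0-1 floats):

```python
import numpy as np

pixels = np.array([0, 5, 9, 10, 200, 255])
# keep abnormally low pixels, invert everything else
solarized = np.where(pixels < 10, pixels, 255 - pixels)
print(solarized)  # [  0   5   9 245  55   0]
```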
class RandomBlur(Augmentation):
    """RandomBlur class.

    RandomBlur class. Randomly blurs an image.

    Methods:
        call: method that does random blur 20% of the time.
    """
@tf.function
def call(self, x: tf.Tensor) -> tf.Tensor:
        """call function.

        Randomly blurs the image.

        Arguments:
            x: a tf.Tensor representing the image.

        Returns:
            returns a blurred version of the image 20% of the time
            and the original image 80% of the time.
        """
if self.random_execute(0.2):
s = np.random.random()
return tfa.image.gaussian_filter2d(image=x, sigma=s)
return x
class RandomAugmentor(keras.Model):
RandomAugmentor class.
    """RandomAugmentor class.

    RandomAugmentor class. Chains all the augmentations into
    one pipeline.

    Attributes:
        image_size: An integer representing the width and height
        of the image. Designed to be used for square images.
        random_resized_crop: Instance variable representing the
        RandomResizedCrop layer.
        random_flip: Instance variable representing the
        RandomFlip layer.
        random_color_jitter: Instance variable representing the
        RandomColorJitter layer.
        random_blur: Instance variable representing the
        RandomBlur layer.
        random_to_grayscale: Instance variable representing the
        RandomToGrayscale layer.
        random_solarize: Instance variable representing the
        RandomSolarize layer.

    Methods:
        call: chains layers in pipeline together
    """
def __init__(self, image_size: int):
super(RandomAugmentor, self).__init__()
self.image_size = image_size
self.random_resized_crop = RandomResizedCrop(image_size)
self.random_flip = RandomFlip()
self.random_color_jitter = RandomColorJitter()
self.random_blur = RandomBlur()
self.random_to_grayscale = RandomToGrayscale()
self.random_solarize = RandomSolarize()
def call(self, x: tf.Tensor) -> tf.Tensor:
x = self.random_resized_crop(x)
x = self.random_flip(x)
x = self.random_color_jitter(x)
x = self.random_blur(x)
x = self.random_to_grayscale(x)
x = self.random_solarize(x)
x = tf.clip_by_value(x, 0, 1)
return x
bt_augmentor = RandomAugmentor(IMAGE_SIZE)
class BTDatasetCreator:
    """Barlow twins dataset creator class.

    BTDatasetCreator class. Responsible for creating the
    barlow twins' dataset.

    Attributes:
        options: tf.data.Options needed to configure a setting
        that may improve performance.
        seed: random seed for shuffling. Used to synchronize two
        augmented versions.
        augmentor: augmentor used for augmentation.

    Methods:
        __call__: creates barlow dataset.
        augmented_version: creates 1 half of the dataset.
    """
def __init__(self, augmentor: RandomAugmentor, seed: int = 1024):
self.options = tf.data.Options()
self.options.threading.max_intra_op_parallelism = 1
self.seed = seed
self.augmentor = augmentor
def augmented_version(self, ds: list) -> tf.data.Dataset:
return (
tf.data.Dataset.from_tensor_slices(ds)
.shuffle(1000, seed=self.seed)
.map(self.augmentor, num_parallel_calls=tf.data.AUTOTUNE)
.batch(BATCH_SIZE, drop_remainder=True)
.prefetch(tf.data.AUTOTUNE)
.with_options(self.options)
)
def __call__(self, ds: list) -> tf.data.Dataset:
a1 = self.augmented_version(ds)
a2 = self.augmented_version(ds)
return tf.data.Dataset.zip((a1, a2)).with_options(self.options)
augment_versions = BTDatasetCreator(bt_augmentor)(train_features)
sample_augment_versions = iter(augment_versions)
def plot_values(batch: tuple):
fig, axs = plt.subplots(3, 3)
fig1, axs1 = plt.subplots(3, 3)
fig.suptitle("Augmentation 1")
fig1.suptitle("Augmentation 2")
a1, a2 = batch
# plots images on both tables
for i in range(3):
for j in range(3):
# CHANGE(add / 255)
axs[i][j].imshow(a1[3 * i + j])
axs[i][j].axis("off")
axs1[i][j].imshow(a2[3 * i + j])
axs1[i][j].axis("off")
plt.show()
plot_values(next(sample_augment_versions))
class BarlowLoss(keras.losses.Loss):
    """BarlowLoss class.

    BarlowLoss class. Creates a loss function based on the cross-correlation
    matrix.

    Attributes:
        batch_size: the batch size of the dataset
        lambda_amt: the value for lambda (used in cross_corr_matrix_loss)

    Methods:
        __init__: gets instance variables
        call: gets the loss based on the cross-correlation matrix
        get_off_diag: Used in calculating the off-diagonal section
        of the loss function; makes diagonals zeros.
        cross_corr_matrix_loss: creates loss based on the cross correlation
        matrix.
    """
def __init__(self, batch_size: int):
        """__init__ method.

        Gets the instance variables.

        Arguments:
            batch_size: An integer value representing the batch size of the
            dataset. Used for cross correlation matrix calculation.
        """
super(BarlowLoss, self).__init__()
self.lambda_amt = 5e-3
self.batch_size = batch_size
def get_off_diag(self, c: tf.Tensor) -> tf.Tensor:
        """get_off_diag method.

        Makes the diagonals of the cross correlation matrix zeros.
        This is used in the off-diagonal portion of the loss function,
        where we take the squares of the off-diagonal values and sum them.

        Arguments:
            c: A tf.Tensor that represents the cross correlation
            matrix.

        Returns:
            Returns a tf.Tensor which represents the cross correlation
            matrix with its diagonals as zeros.
        """
zero_diag = tf.zeros(c.shape[-1])
return tf.linalg.set_diag(c, zero_diag)
def cross_corr_matrix_loss(self, c: tf.Tensor) -> tf.Tensor:
        """cross_corr_matrix_loss method.

        Gets the loss based on the cross correlation matrix.
        We want the diagonals to be 1's and everything else to be
        zeros to show that the two augmented images are similar.

        Loss function procedure:
        take the diagonal of the cross-correlation matrix, subtract by 1,
        and square that value so no negatives.

        Take the off-diagonal of the cc-matrix (see get_off_diag()),
        square those values to get rid of negatives and increase the value,
        and multiply it by a lambda to weight it such that it is of equal
        value to the optimizer as the diagonal (there are more values off-diag
        than on-diag).

        Take the sum of the first and second parts and then sum them together.

        Arguments:
            c: A tf.Tensor that represents the cross correlation
            matrix.

        Returns:
            Returns a (rank-0) tf.Tensor representing the loss.
        """
# subtracts diagonals by one and squares them(first part)
c_diff = tf.pow(tf.linalg.diag_part(c) - 1, 2)
# takes off diagonal, squares it, multiplies with lambda(second part)
off_diag = tf.pow(self.get_off_diag(c), 2) * self.lambda_amt
# sum first and second parts together
loss = tf.reduce_sum(c_diff) + tf.reduce_sum(off_diag)
return loss
def normalize(self, output: tf.Tensor) -> tf.Tensor:
        """normalize method.

        Normalizes the model prediction.

        Arguments:
            output: the model prediction.

        Returns:
            Returns a normalized version of the model prediction.
        """
return (output - tf.reduce_mean(output, axis=0)) / tf.math.reduce_std(
output, axis=0
)
def cross_corr_matrix(self, z_a_norm: tf.Tensor, z_b_norm: tf.Tensor) -> tf.Tensor:
        """cross_corr_matrix method.

        Creates a cross correlation matrix from the predictions.
        It transposes the first prediction and multiplies this with
        the second, creating a matrix with shape (n_dense_units, n_dense_units).
        See build_twin() for more info. Then it divides this by the
        batch size.

        Arguments:
            z_a_norm: A normalized version of the first prediction.
            z_b_norm: A normalized version of the second prediction.

        Returns:
            Returns a cross correlation matrix.
        """
return (tf.transpose(z_a_norm) @ z_b_norm) / self.batch_size
def call(self, z_a: tf.Tensor, z_b: tf.Tensor) -> tf.Tensor:
        """call method.

        Makes the cross-correlation loss. Builds the cross correlation
        matrix (see cross_corr_matrix()), then finds the loss and
        returns it (see cross_corr_matrix_loss()).

        Arguments:
            z_a: The prediction of the first set of augmented data.
            z_b: the prediction of the second set of augmented data.

        Returns:
            Returns a (rank-0) tf.Tensor that represents the loss.
        """
z_a_norm, z_b_norm = self.normalize(z_a), self.normalize(z_b)
c = self.cross_corr_matrix(z_a_norm, z_b_norm)
loss = self.cross_corr_matrix_loss(c)
return loss
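As a sanity check on the math above, here is a NumPy-only sketch of the same loss (a simplified re-implementation for illustration, not part of the training code). With two identical embeddings, the normalized cross-correlation matrix has an exact unit diagonal, so only the small off-diagonal term contributes and the loss is near zero:

```python
import numpy as np

def barlow_loss_np(z_a, z_b, lambda_amt=5e-3):
    # normalize each feature dimension across the batch
    z_a = (z_a - z_a.mean(0)) / z_a.std(0)
    z_b = (z_b - z_b.mean(0)) / z_b.std(0)
    c = z_a.T @ z_b / z_a.shape[0]            # cross-correlation matrix
    on_diag = ((np.diag(c) - 1) ** 2).sum()   # push diagonal toward 1
    off_diag = (c ** 2).sum() - (np.diag(c) ** 2).sum()  # push off-diagonal toward 0
    return on_diag + lambda_amt * off_diag

rng = np.random.default_rng(0)
z = rng.normal(size=(512, 8))
print(barlow_loss_np(z, z))  # close to 0 for identical embeddings
```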
class ResNet34:
    """ResNet34 class.

    Responsible for the ResNet-34 architecture.
    Modified from
    https://www.analyticsvidhya.com/blog/2021/08/how-to-code-your-resnet-from-scratch-in-tensorflow/#h2_2.
    View their website for more information.
    """
def identity_block(self, x, filter):
# copy tensor to variable called x_skip
x_skip = x
# Layer 1
x = tf.keras.layers.Conv2D(filter, (3, 3), padding="same")(x)
x = tf.keras.layers.BatchNormalization(axis=3)(x)
x = tf.keras.layers.Activation("relu")(x)
# Layer 2
x = tf.keras.layers.Conv2D(filter, (3, 3), padding="same")(x)
x = tf.keras.layers.BatchNormalization(axis=3)(x)
# Add Residue
x = tf.keras.layers.Add()([x, x_skip])
x = tf.keras.layers.Activation("relu")(x)
return x
def convolutional_block(self, x, filter):
# copy tensor to variable called x_skip
x_skip = x
# Layer 1
x = tf.keras.layers.Conv2D(filter, (3, 3), padding="same", strides=(2, 2))(x)
x = tf.keras.layers.BatchNormalization(axis=3)(x)
x = tf.keras.layers.Activation("relu")(x)
# Layer 2
x = tf.keras.layers.Conv2D(filter, (3, 3), padding="same")(x)
x = tf.keras.layers.BatchNormalization(axis=3)(x)
# Processing Residue with conv(1,1)
x_skip = tf.keras.layers.Conv2D(filter, (1, 1), strides=(2, 2))(x_skip)
# Add Residue
x = tf.keras.layers.Add()([x, x_skip])
x = tf.keras.layers.Activation("relu")(x)
return x
def __call__(self, shape=(32, 32, 3)):
# Step 1 (Setup Input Layer)
x_input = tf.keras.layers.Input(shape)
x = tf.keras.layers.ZeroPadding2D((3, 3))(x_input)
# Step 2 (Initial Conv layer along with maxPool)
x = tf.keras.layers.Conv2D(64, kernel_size=7, strides=2, padding="same")(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.Activation("relu")(x)
x = tf.keras.layers.MaxPool2D(pool_size=3, strides=2, padding="same")(x)
# Define size of sub-blocks and initial filter size
block_layers = [3, 4, 6, 3]
filter_size = 64
# Step 3 Add the Resnet Blocks
for i in range(4):
if i == 0:
# For sub-block 1 Residual/Convolutional block not needed
for j in range(block_layers[i]):
x = self.identity_block(x, filter_size)
else:
# One Residual/Convolutional Block followed by Identity blocks
# The filter size will go on increasing by a factor of 2
filter_size = filter_size * 2
x = self.convolutional_block(x, filter_size)
for j in range(block_layers[i] - 1):
x = self.identity_block(x, filter_size)
# Step 4 End Dense Network
x = tf.keras.layers.AveragePooling2D((2, 2), padding="same")(x)
x = tf.keras.layers.Flatten()(x)
model = tf.keras.models.Model(inputs=x_input, outputs=x, name="ResNet34")
return model
def build_twin() -> keras.Model:
build_twin method.
Builds a barlow twins model consisting of an encoder(resnet-34)
and a projector, which generates embeddings for the images
Returns:
returns a barlow twins model
# number of dense neurons in the projector
n_dense_neurons = 5000
# encoder network
resnet = ResNet34()()
last_layer = resnet.layers[-1].output
# intermediate layers of the projector network
n_layers = 2
for i in range(n_layers):
dense = tf.keras.layers.Dense(n_dense_neurons, name=f"projector_dense_{i}")
if i == 0:
x = dense(last_layer)
else:
x = dense(x)
x = tf.keras.layers.BatchNormalization(name=f"projector_bn_{i}")(x)
x = tf.keras.layers.ReLU(name=f"projector_relu_{i}")(x)
x = tf.keras.layers.Dense(n_dense_neurons, name=f"projector_dense_{n_layers}")(x)
model = keras.Model(resnet.input, x)
return model
class BarlowModel(keras.Model):
BarlowModel class.
BarlowModel class. Responsible for making predictions and handling
gradient descent with the optimizer.
Attributes:
model: the barlow model architecture.
loss_tracker: the loss metric.
Methods:
train_step: one train step; do model predictions, loss, and
optimizer step.
metrics: Returns metrics.
def __init__(self):
super(BarlowModel, self).__init__()
self.model = build_twin()
self.loss_tracker = keras.metrics.Mean(name="loss")
@property
def metrics(self):
return [self.loss_tracker]
def train_step(self, batch: tf.Tensor) -> tf.Tensor:
train_step method.
Do one train step. Make model predictions, find loss, pass loss to
optimizer, and make optimizer apply gradients.
Arguments:
batch: one batch of data to be given to the loss function.
Returns:
Returns a dictionary with the loss metric.
# get the two augmentations from the batch
y_a, y_b = batch
with tf.GradientTape() as tape:
# get two versions of predictions
z_a, z_b = self.model(y_a, training=True), self.model(y_b, training=True)
loss = self.loss(z_a, z_b)
grads_model = tape.gradient(loss, self.model.trainable_variables)
self.optimizer.apply_gradients(zip(grads_model, self.model.trainable_variables))
self.loss_tracker.update_state(loss)
return {"loss": self.loss_tracker.result()}
# sets up model, optimizer, loss
bm = BarlowModel()
# chose the LAMB optimizer due to high batch sizes. Converged MUCH faster
# than ADAM or SGD
optimizer = tfa.optimizers.LAMB()
loss = BarlowLoss(BATCH_SIZE)
bm.compile(optimizer=optimizer, loss=loss)
# Expected training time: 1 hour 30 min
history = bm.fit(augment_versions, epochs=160)
plt.plot(history.history["loss"])
plt.show()
# Approx: 64% accuracy with this barlow twins model.
xy_ds = (
tf.data.Dataset.from_tensor_slices((train_features, train_labels))
.shuffle(1000)
.batch(BATCH_SIZE, drop_remainder=True)
.prefetch(tf.data.AUTOTUNE)
)
test_ds = (
tf.data.Dataset.from_tensor_slices((test_features, test_labels))
.shuffle(1000)
.batch(BATCH_SIZE, drop_remainder=True)
.prefetch(tf.data.AUTOTUNE)
)
model = keras.models.Sequential(
[
bm.model,
keras.layers.Dense(
10, activation="softmax", kernel_regularizer=keras.regularizers.l2(0.02)
),
]
)
model.layers[0].trainable = False
linear_optimizer = tfa.optimizers.LAMB()
model.compile(
optimizer=linear_optimizer,
loss="sparse_categorical_crossentropy",
metrics=["accuracy"],
)
model.fit(xy_ds, epochs=35, validation_data=test_ds)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
a = np.random.rand(3, 3, 3)
b = np.arange(3*3*3).reshape((3, 3, 3))
sort_indices = np.argsort(a, axis=0)[::-1, :, :]
static_indices = np.indices(a.shape)
c = b[sort_indices, static_indices[1], static_indices[2]]
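For reference, the same reordering can be written with `np.take_along_axis` (available since NumPy 1.15), which applies the per-position sort indices along one axis and replaces the manual `np.indices` bookkeeping; a sketch assuming the same toy shapes:

```python
import numpy as np

a = np.random.rand(3, 3, 3)
b = np.arange(3 * 3 * 3).reshape((3, 3, 3))

# indices that sort `a` in descending order along axis 0
sort_indices = np.argsort(a, axis=0)[::-1, :, :]

# reorder `b` with those indices along axis 0; no static index grids needed
c = np.take_along_axis(b, sort_indices, axis=0)
```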
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load population data
Step2: Extracts concepts of interest
Step3: Plots Revenu disponible before reform
Step4: Define your reform here !
Step5: Plots revenu disponible after reform
Step6: Distribution of the changes in revenu in euros
Step7: Distribution of the change in revenu in percentage
Step8: Change as a function of the number of children
Step9: Change as a function of the age of declarant 1
Step10: Cases with the largest differences between current and reformed revenue
Step11: Best compromise simplicity / matching current legislation
|
<ASSISTANT_TASK:>
Python Code:
from compare_simulators import CalculatorComparator
from population_simulator import CerfaPopulationSimulator
from utils import show_histogram
from utils import percent_diff
from utils import scatter_plot
import matplotlib.pyplot as plt
import numpy as np
import random
%matplotlib inline
comp = CalculatorComparator()
comp.load_results_from_json('1aj-1bj-f-2000')
revdisp = comp.get_variable_from_openfisca('revdisp')
population = []
original_index = []
def cas_improbable(case):
    # It is highly unlikely to have dependent children ('enfants a charge') when older than 64, so we remove those cases
    if (int(case['0DA']) <= 1950 or ('0DB' in case and int(case['0DB']) <= 1950)) and 'F' in case and int(case['F']) > 0:
return True
return False
for i in range(0, len(comp.testcases)):
case = comp.testcases[i]
if case.get('1AJ', 0) < 1 and case.get('1BJ', 0) < 1 and not cas_improbable(case):
original_index.append(i)
new_family = {}
new_family['taxable_income'] = case.get('1AJ', 0)
new_family['revdisp'] = revdisp[i]
if 'F' in case:
if case['F'] == 1:
new_family['enfant_unique'] = 1
if case['F'] >= 2:
new_family['enfants_deux_ou_plus'] = 1
if case['F'] > 2:
new_family['nb_enfants_au_dessus_de_2'] = case['F'] - 2
new_family['nb_enfants'] = case['F']
if 'O' in case or 'M' in case:
new_family['two_people'] = 1
one_declarant_above_24 = False
both_declarant_parent_below_24 = 'F' in case
if '0DA' in case:
age_1 = 2014 - case['0DA']
new_family['age-dec1'] = age_1
# if age <= 24 and 'F' in case:
# new_family['declarant parent <= 24 ans'] = 1
if age_1 >= 24:
one_declarant_above_24 = True
both_declarant_parent_below_24 = False
# new_family['declarant > 24 ans'] = 1
if age_1 > 64:
new_family['declarant > 64 ans'] = 1
new_family['declarants > 64 ans'] = 1
age_2 = 0
if '0DB' in case:
age_2 = 2014 - case['0DB']
new_family['age-dec2'] = age_2
if age_2 >= 24:
one_declarant_above_24 = True
both_declarant_parent_below_24 = False
# new_family['codeclarant > 24 ans'] = 1
if age_2 > 64:
new_family['declarants > 64 ans'] = new_family.get('declarants > 64 ans', 0) + 1
new_family['codeclarant > 64 ans'] = 1
if age_1 >= 24 and age_2 >= 24:
new_family['both > 24 ans'] = 1
if both_declarant_parent_below_24:
new_family['both_declarant_parent_below_24'] = True
if one_declarant_above_24:
new_family['one_declarant_above_24'] = True
if 'F' in case and ('C' in case or 'D' in case or 'V' in case):
            new_family['parent_isolé'] = True
if 'F' in case and ('M' in case or 'O' in case):
new_family['parents_en_couple'] = True
population.append(new_family)
print 'Number of families: ' + repr(len(population))
total_people = 0
for family in comp.testcases:
total_people += 1
if '0DB' in family and family['0DB'] == 1:
total_people += 1
if 'F' in family:
total_people += family['F']
# We assume that there are 2000000 people with RSA
echantillon = float(total_people) / 2000000
print 'Echantillon of ' + repr(total_people) + ' people, in percent of french population for similar revenu: ' + repr(echantillon)
revdisp_when_no_salary = list(family['revdisp'] for family in population)
show_histogram(revdisp_when_no_salary, 'Distribution of revenu disponible')
from reformators import Excalibur
sword = Excalibur(population, 'revdisp', 'taxable_income', echantillon=echantillon)
simulated_reform = sword.suggest_reform(
boolean_parameters = ['one_declarant_above_24',
'codeclarant > 64 ans',
'declarant > 64 ans',
'both_declarant_parent_below_24',
'two_people'],
linear_parameters = ['nb_enfants'],
barem_parameters = [],
save=0)
xmin = 4900
xmax = 19000
nb_buckets = 25
bins = np.linspace(xmin, xmax, nb_buckets)
plt.hist(revdisp_when_no_salary, bins, alpha=0.5, label='current')
plt.hist(simulated_reform, bins, alpha=0.5, label='reform')
plt.legend(loc='upper right')
plt.show()
difference = list(simulated_reform[i] - revdisp_when_no_salary[i] for i in range(len(simulated_reform)))
show_histogram(difference, 'Changes in revenu')
percentage_difference = list(100 * percent_diff(simulated_reform[i], revdisp_when_no_salary[i]) for i in range(len(simulated_reform)))
show_histogram(percentage_difference, 'Changes in revenu (%)')
nb_children = list((population[i].get('nb_enfants', 0) for i in range(len(population))))
scatter_plot(nb_children, difference, 'Number of children', 'Difference reform - current', alpha=0.01)
age_dec1 = list((population[i].get('age-dec1', 0) for i in range(len(population))))
scatter_plot(age_dec1, difference, 'Age declarant 1', 'Difference reform - current', alpha=0.1)
original_population = []
original_revdisp = []
for i in range(0, len(population)):
original_population.append(comp.testcases[original_index[i]])
original_revdisp.append(revdisp[original_index[i]])
order = sorted(range(len(simulated_reform)), key=lambda k: -abs(simulated_reform[k] - revdisp_when_no_salary[k]))
for i in order:
print 'Case ' + repr(original_population[i]) + ' Current =' + repr(int(revdisp_when_no_salary[i])) + ' Reform = ' + repr(int(simulated_reform[i]))
sword = Excalibur(population,'revdisp', 'taxable_income', echantillon=echantillon)
res = sword.suggest_reform(boolean_parameters=[
'enfant_unique',
'enfants_deux_ou_plus',
'nb_enfants_au_dessus_de_2',
'one_declarant_above_24',
'declarant > 64 ans',
'codeclarant > 64 ans',
'two_people',
],
save=0)
original_population = []
original_revdisp = []
for i in range(0, len(population)):
original_population.append(comp.testcases[original_index[i]])
original_revdisp.append(revdisp[original_index[i]])
order = sorted(range(len(res)), key=lambda k: -abs(res[k] - revdisp_when_no_salary[k]))
for i in order:
    print 'Case ' + repr(original_population[i]) + ' Current =' + repr(int(revdisp_when_no_salary[i])) + ' Reform = ' + repr(int(res[i]))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2 Read data
Step2: 3 Extracting features from the MATCHUP column
Step3: 3 Dealing categorical features
Step4: 3.1 Factorizing
Step5: As we can see, all our nominal labels were now replaced by numeric values.
Step6: As we can see, we get similar results
Step7: Please note that we need to explicitly use drop_first=True to get n-1 (leave one out) columns in the encoded dataframe.
Step8: Oops! Could not convert string to float! Looks like we will have to label encode our variable first, to make it numeric.
Step9: 3.5 Binarizing labels
Step10: Converted to a dataframe, the output is similar to that of the one hot encoder above.
Step11: Please note that, unlike in Pandas, there is no automatic way to apply a leave-one-out approach (n-1 features), which may lead to collinearity problems.
Step12: 4.1 Rescaling
Step13: Alternatively, the MaxAbsScaler() scaler (used in similar fashion) would return data in the range [-1, 1].
Step14: 4.3 Normalizing
Step15: 4.4 Binarizing
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
% matplotlib inline
from matplotlib import pyplot as plt
from sklearn import preprocessing as pp
def read_data(path, with_preview=False):
data = pd.read_csv(path)
data.columns = data.columns.str.upper()
return data
data = read_data("../data/shot_logs.csv")
def extract_features(string):
(date, string) = string.split('-')
if '@' in string:
(away, home) = string.split('@')
if 'vs.' in string:
(away, home) = string.split('vs.')
date = date.strip()
away = away.strip()
home = home.strip()
return (date, away, home)
def extract_date(string):
features = extract_features(string)
date = features[0]
date = pd.to_datetime(date)
date = date
return date
def extract_away_team(string):
features = extract_features(string)
away = features[1]
return away
def extract_home_team(string):
features = extract_features(string)
home = features[2]
return home
def expand_matchup_column(data, with_preview=False):
data['MATCHUP_DATE'] = data['MATCHUP'].apply(extract_date)
data['MATCHUP_DATE'] = data['MATCHUP_DATE'].astype(np.int64) / int(1e6)
data['MATCHUP_AWAY_TEAM'] = data['MATCHUP'].apply(extract_away_team)
data['MATCHUP_HOME_TEAM'] = data['MATCHUP'].apply(extract_home_team)
data = data.drop('MATCHUP', axis=1)
return data
data = expand_matchup_column(data)
display(data.head(n=3))
data.columns.values
def factorize_column(data, colname):
categorical_feature = data[colname]
categorical_feature_encoded = pd.factorize(categorical_feature)[0]
return categorical_feature_encoded
def compare_with_original_column(data, colname, encoded_column):
comparison = pd.DataFrame()
comparison['NOMINAL'] = data[colname]
comparison['ORDINAL'] = encoded_column
comparison_sample = comparison.sample(n=5)
return comparison_sample
encoded_column = factorize_column(data, 'MATCHUP_AWAY_TEAM')
compare_with_original_column(data, 'MATCHUP_AWAY_TEAM', encoded_column)
def label_encode_column(data, colname):
categorical_feature = data[colname]
encoder = pp.LabelEncoder()
categorical_feature_encoded = encoder.fit_transform(categorical_feature)
return categorical_feature_encoded
encoded_column = label_encode_column(data, 'MATCHUP_AWAY_TEAM')
compare_with_original_column(data, 'MATCHUP_AWAY_TEAM', encoded_column)
def dummy_encode_column(data, colname):
categorical_feature = data[colname]
number_of_levels = len(categorical_feature.unique())
dummy_encode_example = pd.get_dummies(categorical_feature, drop_first=True)
dummy_encode_number_of_cols = dummy_encode_example.shape[1]
dummy_encode_sample = dummy_encode_example
print("There are %s total levels for the categorical variable %s." % (number_of_levels, colname))
print("There are %s (n-1) columns in the dummy encoded dataframe." % dummy_encode_number_of_cols)
return dummy_encode_sample
dummy_encode_column(data, 'MATCHUP_AWAY_TEAM').sample(n=5)
def one_hot_encode_column(data, colname):
categorical_feature = data[colname]
number_of_levels = len(np.unique(categorical_feature))
encoder = pp.OneHotEncoder()
one_hot_encode_feature = encoder.fit_transform(categorical_feature)
return one_hot_encode_feature
# one_hot_encode_column(data, 'MATCHUP_AWAY_TEAM')
def one_hot_encode_column(data, colname):
data_to_encode = pd.DataFrame()
data_to_encode[colname] = label_encode_column(data, colname)
number_of_levels = len(data_to_encode[colname].unique())
encoder = pp.OneHotEncoder(categorical_features=[0])
one_hot_encoded_feature = encoder.fit_transform(data_to_encode).toarray()
one_hot_encoded_feature = pd.DataFrame(one_hot_encoded_feature)
return one_hot_encoded_feature
one_hot_encode_column(data, 'MATCHUP_AWAY_TEAM').sample(n=5)
def binarize_labels_column(data, colname):
categorical_feature = data[colname]
encoder = pp.LabelBinarizer()
return encoder.fit_transform(categorical_feature)
binarize_labels_column(data, 'MATCHUP_AWAY_TEAM')
pd.DataFrame(binarize_labels_column(data, 'MATCHUP_AWAY_TEAM')).sample(n=5)
data = data[data['TOUCH_TIME'] > 0].copy()
def scatter_plot_two_features(data, colnames, total_samples):
scatter_plot_data = data[[colnames[0], colnames[1]]].sample(n=total_samples)
x = scatter_plot_data[colnames[0]]
y = scatter_plot_data[colnames[1]]
plot = plt.scatter(x, y)
return plot
cols = ['TOUCH_TIME', 'SHOT_DIST']
scatter_plot_two_features(data, cols, 1000)
def rescale_columns(data, colnames, total_samples_preview):
scaler = pp.MinMaxScaler()
rescaled_data = data[colnames]
rescaled_data = scaler.fit_transform(rescaled_data)
rescaled_data = pd.DataFrame(rescaled_data, columns=colnames)
rescaled_data_sample = rescaled_data.sample(n=5)
scatter_plot_two_features(rescaled_data, colnames, total_samples_preview)
return rescaled_data_sample
rescale_columns(data, cols, 500)
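`MinMaxScaler` implements the column-wise formula x' = (x - min) / (max - min); a minimal NumPy-only sketch on a small made-up matrix (not the shot-log data) shows the mapping onto [0, 1]:

```python
import numpy as np

X = np.array([[1.0, 10.0],
              [3.0, 20.0],
              [5.0, 50.0]])

col_min = X.min(axis=0)
col_max = X.max(axis=0)

# each column is mapped independently onto the [0, 1] range
X_rescaled = (X - col_min) / (col_max - col_min)
```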
def standardize_columns(data, colnames, total_samples_preview):
scaler = pp.StandardScaler()
rescaled_data = data[colnames]
rescaled_data = scaler.fit_transform(rescaled_data)
rescaled_data = pd.DataFrame(rescaled_data, columns=colnames)
rescaled_data_sample = rescaled_data.sample(n=5)
scatter_plot_two_features(rescaled_data, colnames, total_samples_preview)
return rescaled_data_sample
standardize_columns(data, cols, 500)
def normalize_columns(data, colnames, total_samples_preview):
scaler = pp.Normalizer()
rescaled_data = data[colnames]
rescaled_data = scaler.fit_transform(rescaled_data)
rescaled_data = pd.DataFrame(rescaled_data, columns=colnames)
rescaled_data_sample = rescaled_data.sample(n=5)
scatter_plot_two_features(rescaled_data, colnames, total_samples_preview)
return rescaled_data_sample
normalize_columns(data, cols, 500)
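Unlike the column-wise scalers above, `Normalizer` works per row: each sample is divided by its L2 norm. A NumPy sketch of the same idea on made-up data:

```python
import numpy as np

X = np.array([[3.0, 4.0],
              [1.0, 1.0]])

# divide every row by its Euclidean length so each sample lands on the unit circle
norms = np.linalg.norm(X, axis=1, keepdims=True)
X_normalized = X / norms
```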
def binarize_columns(data, colnames, threshold):
scaler = pp.Binarizer(threshold=threshold)
binarized_data = data
binarized_data['3_POINTER'] = scaler.fit_transform(binarized_data[colnames].values.reshape(-1, 1))
binarized_data = binarized_data[[colnames, '3_POINTER']]
binarized_data_sample = binarized_data.sample(n=5)
return binarized_data_sample
three_point_line = 23.9
binarize_columns(data, 'SHOT_DIST', three_point_line)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Implementing a Neural Network
Step2: We will use the class TwoLayerNet in the file cs231n/classifiers/neural_net.py to represent instances of our network. The network parameters are stored in the instance variable self.params where keys are string parameter names and values are numpy arrays. Below, we initialize toy data and a toy model that we will use to develop your implementation.
Step3: Forward pass
Step4: Forward pass
Step5: Backward pass
Step6: Train the network
Step8: Load the data
Step9: Train a network
Step10: Debug the training
Step11: Tune your hyperparameters
Step12: Run on the test set
|
<ASSISTANT_TASK:>
Python Code:
# A bit of setup
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.neural_net import TwoLayerNet
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
returns relative error
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Create a small net and some toy data to check your implementations.
# Note that we set the random seed for repeatable experiments.
input_size = 4
hidden_size = 10
num_classes = 3
num_inputs = 5
def init_toy_model():
np.random.seed(0)
return TwoLayerNet(input_size, hidden_size, num_classes, std=1e-1)
def init_toy_data():
np.random.seed(1)
X = 10 * np.random.randn(num_inputs, input_size)
y = np.array([0, 1, 2, 2, 1])
return X, y
net = init_toy_model()
X, y = init_toy_data()
scores = net.loss(X)
print 'Your scores:'
print scores
print
print 'correct scores:'
correct_scores = np.asarray([
[-0.81233741, -1.27654624, -0.70335995],
[-0.17129677, -1.18803311, -0.47310444],
[-0.51590475, -1.01354314, -0.8504215 ],
[-0.15419291, -0.48629638, -0.52901952],
[-0.00618733, -0.12435261, -0.15226949]])
print correct_scores
print
# The difference should be very small. We get < 1e-7
print 'Difference between your scores and correct scores:'
print np.sum(np.abs(scores - correct_scores))
loss, _ = net.loss(X, y, reg=0.1)
correct_loss = 1.30378789133
# should be very small, we get < 1e-12
print 'Difference between your loss and correct loss:'
print np.sum(np.abs(loss - correct_loss))
from cs231n.gradient_check import eval_numerical_gradient
# Use numeric gradient checking to check your implementation of the backward pass.
# If your implementation is correct, the difference between the numeric and
# analytic gradients should be less than 1e-8 for each of W1, W2, b1, and b2.
loss, grads = net.loss(X, y, reg=0.1)
# these should all be less than 1e-8 or so
for param_name in grads:
f = lambda W: net.loss(X, y, reg=0.1)[0]
param_grad_num = eval_numerical_gradient(f, net.params[param_name], verbose=False)
print '%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name]))
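The numeric gradients checked above are built from the central-difference formula f'(x) ≈ (f(x + h) - f(x - h)) / 2h; a standalone sketch (independent of the course utilities) against a function whose gradient is known analytically:

```python
import numpy as np

def numeric_gradient(f, x, h=1e-5):
    # central difference, evaluated independently for each coordinate of x
    grad = np.zeros_like(x)
    for i in range(x.size):
        step = np.zeros_like(x)
        step.flat[i] = h
        grad.flat[i] = (f(x + step) - f(x - step)) / (2 * h)
    return grad

# f(x) = sum(x**2) has the analytic gradient 2x
x = np.array([3.0, -1.0, 0.5])
num = numeric_gradient(lambda v: np.sum(v ** 2), x)
```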
net = init_toy_model()
stats = net.train(X, y, X, y,
learning_rate=1e-1, reg=1e-5,
num_iters=100, verbose=False, batch_size=1)
print 'Final training loss: ', stats['loss_history'][-1]
# plot the loss history
plt.plot(stats['loss_history'])
plt.xlabel('iteration')
plt.ylabel('training loss')
plt.title('Training Loss history')
plt.show()
from cs231n.data_utils import load_CIFAR10
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
Load the CIFAR-10 dataset from disk and perform preprocessing to prepare
it for the two-layer neural net classifier. These are the same steps as
we used for the SVM, but condensed to a single function.
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
# Normalize the data: subtract the mean image
mean_image = np.mean(X_train, axis=0)
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
# Reshape data to rows
X_train = X_train.reshape(num_training, -1)
X_val = X_val.reshape(num_validation, -1)
X_test = X_test.reshape(num_test, -1)
return X_train, y_train, X_val, y_val, X_test, y_test
# Invoke the above function to get our data.
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
print 'Train data shape: ', X_train.shape
print 'Train labels shape: ', y_train.shape
print 'Validation data shape: ', X_val.shape
print 'Validation labels shape: ', y_val.shape
print 'Test data shape: ', X_test.shape
print 'Test labels shape: ', y_test.shape
input_size = 32 * 32 * 3
hidden_size = 50
num_classes = 10
net = TwoLayerNet(input_size, hidden_size, num_classes)
# Train the network
stats = net.train(X_train, y_train, X_val, y_val,
num_iters=5000, batch_size=200,
learning_rate=6e-4, learning_rate_decay=0.95,
reg=0.5, verbose=True)
# Predict on the validation set
val_acc = (net.predict(X_val) == y_val).mean()
print 'Validation accuracy: ', val_acc
# Plot the loss function and train / validation accuracies
plt.subplot(2, 1, 1)
plt.plot(stats['loss_history'])
plt.title('Loss history')
plt.xlabel('Iteration')
plt.ylabel('Loss')
plt.subplot(2, 1, 2)
plt.plot(stats['train_acc_history'], label='train')
plt.plot(stats['val_acc_history'], label='val')
plt.title('Classification accuracy history')
plt.xlabel('Epoch')
plt.ylabel('Classification accuracy')
plt.show()
from cs231n.vis_utils import visualize_grid
# Visualize the weights of the network
def show_net_weights(net):
W1 = net.params['W1']
W1 = W1.reshape(32, 32, 3, -1).transpose(3, 0, 1, 2)
plt.imshow(visualize_grid(W1, padding=3).astype('uint8'))
plt.gca().axis('off')
plt.show()
show_net_weights(net)
best_net = None # store the best model into this
#################################################################################
# TODO: Tune hyperparameters using the validation set. Store your best trained #
# model in best_net. #
# #
# To help debug your network, it may help to use visualizations similar to the #
# ones we used above; these visualizations will have significant qualitative #
# differences from the ones we saw above for the poorly tuned network. #
# #
# Tweaking hyperparameters by hand can be fun, but you might find it useful to #
# write code to sweep through possible combinations of hyperparameters #
# automatically like we did on the previous exercises. #
#################################################################################
best_val = -1
hidden_list = [10, 30, 50]
lr_list = [5e-3, 1e-3, 5e-4]
reg_list = [0.6, 0.5, 0.4]
input_size = 32 * 32 * 3
for hidden_size in hidden_list:
for lr in lr_list:
for reg in reg_list:
num_classes = 10
net = TwoLayerNet(input_size, hidden_size, num_classes)
# Train the network
stats = net.train(X_train, y_train, X_val, y_val,
num_iters=1000, batch_size=200,
learning_rate=lr, learning_rate_decay=0.95,
reg=reg, verbose=False)
val_acc = (net.predict(X_val) == y_val).mean()
print 'val given %d hiddens, %f lr, %f reg is: %f ' % (hidden_size, lr, reg, val_acc)
if(val_acc > best_val):
best_val = val_acc
best_net = net
#################################################################################
# END OF YOUR CODE #
#################################################################################
# visualize the weights of the best network
show_net_weights(best_net)
test_acc = (best_net.predict(X_test) == y_test).mean()
print 'Test accuracy: ', test_acc
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Combining data is essential functionality in a data analysis workflow.
Step2: Adding columns
Step3: Adding multiple columns at once is also possible. For example, the following method gives us a DataFrame of two columns
Step4: We can add both at once to the dataframe
Step5: Concatenating data
Step6: We now want to combine the rows of both datasets
Step7: If we don't want the index to be preserved
Step8: When the two dataframes don't have the same set of columns, by default missing values get introduced
Step9: We can also pass a dictionary of objects instead of a list of objects. Now the keys of the dictionary are preserved as an additional index level
Step10: <div class="alert alert-info">
Step11: Assume we have another dataframe with more information about the 'Embarked' locations
Step12: We now want to add those columns to the titanic dataframe, for which we can use pd.merge, specifying the column on which we want to merge the two datasets
Step13: In this case we use how='left (a "left join") because we wanted to keep the original rows of df and only add matching values from locations to it. Other options are 'inner', 'outer' and 'right' (see the docs for more on this, or this visualization
Step14: SQLite (https
Step15: Pandas provides functionality to query data from a database. Let's fetch the main dataset contained in this file
Step16: More information about the identifyer variables (the first three columns) can be found in the other tables. For example, the "CD_LGL_PSN_VAT" column contains information about the legal form of the enterprise. What the values in this column mean, can be found in a different table
Step17: This type of data organization is called a "star schema" (https
Step18: <div class="alert alert-success">
Step19: <div class="alert alert-success">
Step20: Joining with spatial data to make a map
Step21: The resulting dataframe (a GeoDataFrame) has a "geometry" column (in this case with polygons representing the borders of the municipalities), and a couple of new methods with geospatial functionality (for example, the plot() method by default makes a map). It is still a DataFrame, and everything we have learned about pandas can be used here as well.
Step22: And add a new column with the relative change in the number of registered enterprises
Step23: We can now merge the dataframe with the geospatial information of the municipalities with the dataframe with the enterprise numbers
Step24: With this joined dataframe, we can make a new map, now visualizing the change in number of registered enterprises ("NUM_VAT_CHANGE")
Step25: Combining columns - pd.concat with axis=1
Step26: pd.concat matches the different objects based on the index
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
# redefining the example objects
# series
population = pd.Series({'Germany': 81.3, 'Belgium': 11.3, 'France': 64.3,
'United Kingdom': 64.9, 'Netherlands': 16.9})
# dataframe
data = {'country': ['Belgium', 'France', 'Germany', 'Netherlands', 'United Kingdom'],
'population': [11.3, 64.3, 81.3, 16.9, 64.9],
'area': [30510, 671308, 357050, 41526, 244820],
'capital': ['Brussels', 'Paris', 'Berlin', 'Amsterdam', 'London']}
countries = pd.DataFrame(data)
countries
pop_density = countries['population']*1e6 / countries['area']
pop_density
countries['pop_density'] = pop_density
countries
countries["country"].str.split(" ", expand=True)
countries[['first', 'last']] = countries["country"].str.split(" ", expand=True)
countries
data = {'country': ['Belgium', 'France', 'Germany', 'Netherlands', 'United Kingdom'],
'population': [11.3, 64.3, 81.3, 16.9, 64.9],
'area': [30510, 671308, 357050, 41526, 244820],
'capital': ['Brussels', 'Paris', 'Berlin', 'Amsterdam', 'London']}
countries = pd.DataFrame(data)
countries
data = {'country': ['Nigeria', 'Rwanda', 'Egypt', 'Morocco', ],
'population': [182.2, 11.3, 94.3, 34.4],
'area': [923768, 26338 , 1010408, 710850],
'capital': ['Abuja', 'Kigali', 'Cairo', 'Rabat']}
countries_africa = pd.DataFrame(data)
countries_africa
pd.concat([countries, countries_africa])
pd.concat([countries, countries_africa], ignore_index=True)
pd.concat([countries, countries_africa[['country', 'capital']]], ignore_index=True)
pd.concat({'europe': countries, 'africa': countries_africa})
df = pd.read_csv("data/titanic.csv")
df = df.loc[:9, ['Survived', 'Pclass', 'Sex', 'Age', 'Fare', 'Embarked']]
df
locations = pd.DataFrame({'Embarked': ['S', 'C', 'N'],
'City': ['Southampton', 'Cherbourg', 'New York City'],
'Country': ['United Kindom', 'France', 'United States']})
locations
pd.merge(df, locations, on='Embarked', how='left')
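How the `how` argument changes the result is easiest to see on tiny made-up frames (unrelated to the titanic data): a left join keeps every left-hand row, while an inner join keeps only matching keys.

```python
import pandas as pd

left = pd.DataFrame({"key": ["a", "b", "c"], "val": [1, 2, 3]})
right = pd.DataFrame({"key": ["a", "b"], "extra": [10, 20]})

# 'left' keeps all rows of `left`, filling missing matches with NaN
left_join = pd.merge(left, right, on="key", how="left")

# 'inner' keeps only keys present in both frames
inner_join = pd.merge(left, right, on="key", how="inner")
```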
import zipfile
with zipfile.ZipFile("data/TF_VAT_NACE_SQ_2019.zip", "r") as zip_ref:
zip_ref.extractall()
import sqlite3
# connect with the database file
con = sqlite3.connect("TF_VAT_NACE_2019.sqlite")
# list the tables that are present in the database
con.execute("SELECT name FROM sqlite_master WHERE type='table';").fetchall()
df = pd.read_sql("SELECT * FROM TF_VAT_NACE_2019", con)
df
df_legal_forms = pd.read_sql("SELECT * FROM TD_LGL_PSN_VAT", con)
df_legal_forms
joined = pd.merge(df, df_legal_forms, on="CD_LGL_PSN_VAT", how="left")
joined
joined.groupby("TX_LGL_PSN_VAT_EN_LVL1")["MS_NUM_VAT"].sum().sort_values(ascending=False)
df_muni = pd.read_sql("SELECT * FROM TD_MUNTY_REFNIS", con)
df_muni
joined = pd.merge(df, df_muni[["CD_REFNIS", "TX_PROV_DESCR_EN"]], on="CD_REFNIS", how="left")
joined
joined.groupby("TX_PROV_DESCR_EN")["MS_NUM_VAT"].sum()
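The same fact-table/dimension-table ("star schema") join can also be expressed directly in SQL; a minimal in-memory sketch with made-up tables and values:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE fact (muni_id INTEGER, num_vat INTEGER)")
con.execute("CREATE TABLE dim_muni (muni_id INTEGER, name TEXT)")
con.executemany("INSERT INTO fact VALUES (?, ?)", [(1, 100), (1, 50), (2, 70)])
con.executemany("INSERT INTO dim_muni VALUES (?, ?)", [(1, "Ghent"), (2, "Leuven")])

# join the fact rows to their dimension labels, then aggregate per label
rows = con.execute(
    "SELECT d.name, SUM(f.num_vat) FROM fact f "
    "JOIN dim_muni d ON f.muni_id = d.muni_id "
    "GROUP BY d.name ORDER BY d.name"
).fetchall()
# rows == [('Ghent', 150), ('Leuven', 70)]
```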
import geopandas
import fiona
stat = geopandas.read_file("data/statbel_statistical_sectors_2019.shp.zip")
stat.head()
stat.plot()
df_by_muni = df.groupby("CD_REFNIS").sum()
df_by_muni["NUM_VAT_CHANGE"] = (df_by_muni["MS_NUM_VAT_START"] - df_by_muni["MS_NUM_VAT_STOP"]) / df_by_muni["MS_NUM_VAT"] * 100
df_by_muni
joined = pd.merge(stat, df_by_muni, left_on="CNIS5_2019", right_on="CD_REFNIS")
joined
joined["NUM_VAT_CHANGE_CAT"] = pd.cut(joined["NUM_VAT_CHANGE"], [-15, -6, -4, -2, 2, 4, 6, 15])
joined.plot(column="NUM_VAT_CHANGE_CAT", figsize=(10, 10), cmap="coolwarm", legend=True)
data = {'country': ['Belgium', 'France', 'Germany', 'Netherlands', 'United Kingdom'],
'population': [11.3, 64.3, 81.3, 16.9, 64.9],
'area': [30510, 671308, 357050, 41526, 244820],
'capital': ['Brussels', 'Paris', 'Berlin', 'Amsterdam', 'London']}
countries = pd.DataFrame(data)
countries
data = {'country': ['Belgium', 'France', 'Netherlands'],
'GDP': [496477, 2650823, 820726],
'area': [8.0, 9.9, 5.7]}
country_economics = pd.DataFrame(data).set_index('country')
country_economics
pd.concat([countries, country_economics], axis=1)
countries2 = countries.set_index('country')
countries2
pd.concat([countries2, country_economics], axis="columns")
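Unlike merge, pd.concat with axis='columns' aligns on the index rather than on a key column; join='inner' keeps only labels present in both objects. A small sketch with made-up frames:

```python
import pandas as pd

a = pd.DataFrame({'population': [11.3, 64.3]}, index=['Belgium', 'France'])
b = pd.DataFrame({'GDP': [496477, 820726]}, index=['Belgium', 'Netherlands'])
outer = pd.concat([a, b], axis='columns')                 # union of index labels
inner = pd.concat([a, b], axis='columns', join='inner')   # intersection only
```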
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create dataframe
Step2: Make plot
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
raw_data = {'officer_name': ['Jason', 'Molly', 'Tina', 'Jake', 'Amy'],
'jan_arrests': [4, 24, 31, 2, 3],
'feb_arrests': [25, 94, 57, 62, 70],
'march_arrests': [5, 43, 23, 23, 51]}
df = pd.DataFrame(raw_data, columns = ['officer_name', 'jan_arrests', 'feb_arrests', 'march_arrests'])
df
# Create a column with the total arrests for each officer
df['total_arrests'] = df['jan_arrests'] + df['feb_arrests'] + df['march_arrests']
df
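An equivalent alternative for the total (shown as a hedged aside, not a change to the example above): select the three month columns and sum across axis=1 in a single step, which avoids repeating each column name.

```python
import pandas as pd

df = pd.DataFrame({'officer_name': ['Jason', 'Molly', 'Tina', 'Jake', 'Amy'],
                   'jan_arrests': [4, 24, 31, 2, 3],
                   'feb_arrests': [25, 94, 57, 62, 70],
                   'march_arrests': [5, 43, 23, 23, 51]})
month_cols = ['jan_arrests', 'feb_arrests', 'march_arrests']
# sum across the row (axis=1) for just the month columns
df['total_arrests'] = df[month_cols].sum(axis=1)
```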
# Create a list of colors (from iWantHue)
colors = ["#E13F29", "#D69A80", "#D63B59", "#AE5552", "#CB5C3B", "#EB8076", "#96624E"]
# Create a pie chart
plt.pie(
    # using the data in the total_arrests column
df['total_arrests'],
# with the labels being officer names
labels=df['officer_name'],
# with no shadows
shadow=False,
# with colors
colors=colors,
# with one slide exploded out
explode=(0, 0, 0, 0, 0.15),
# with the start angle at 90%
startangle=90,
# with the percent listed as a fraction
autopct='%1.1f%%',
)
# Make sure the pie is drawn as a circle
plt.axis('equal')
# View the plot
plt.tight_layout()
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As always, let's do imports and create a new Bundle. See Building a System for more details.
Step2: Overriding Computation Times
Step3: compute_times (when not empty) overrides the value of times when computing the model. However, passing times as a keyword argument to run_compute will take precedence over either - and override the computed times across all enabled datasets.
Step4: Phase-Time Conversion
Step5: Essentially, this constraint does the same thing as b.to_phase or b.to_time, using the appropriate t0 according to compute_phases_t0 from the top-level orbit in the hierarchy.
Step6: In order to provide compute_phases instead of compute_times, we must call b.flip_constraint.
Step7: Note that under the hood, PHOEBE always works in time-space, meaning it is the constrained value of compute_times that is being passed under-the-hood.
Step8: In the case of times, this will automatically interpolate in phase-space if the provided time is outside the range of the referenced times array. If you have a logger enabled with at least the 'warning' level, this will raise a warning and state the phases at which the interpolation will be completed.
Step9: Determining & Plotting Residuals
Step10: If we plot the dataset and model, we see that the model was only computed for one cycle, whereas the dataset extends further in time.
Step11: But we can also plot the residuals. Here, calculate_residuals is called internally, interpolating in phase-space, and then plotted in time-space. See the options for y in the plot API docs for more details.
|
<ASSISTANT_TASK:>
Python Code:
!pip install -I "phoebe>=2.2,<2.3"
import phoebe
from phoebe import u # units
b = phoebe.default_binary()
b.add_dataset('lc', times=phoebe.linspace(0,10,101), dataset='lc01')
print(b.filter(qualifier=['times', 'compute_times'], context='dataset'))
b.set_value('compute_times', phoebe.linspace(0,3,11))
b.run_compute()
print("dataset times: {}\ndataset compute_times: {}\nmodel times: {}".format(
b.get_value('times', context='dataset'),
b.get_value('compute_times', context='dataset'),
b.get_value('times', context='model')
))
b.run_compute(times=[0,0.2])
print("dataset times: {}\ndataset compute_times: {}\nmodel times: {}".format(
b.get_value('times', context='dataset'),
b.get_value('compute_times', context='dataset'),
b.get_value('times', context='model')
))
b.run_compute()
print("dataset times: {}\ndataset compute_times: {}\nmodel times: {}".format(
b.get_value('times', context='dataset'),
b.get_value('compute_times', context='dataset'),
b.get_value('times', context='model')
))
print(b.filter(qualifier=['times', 'compute_times', 'compute_phases', 'compute_phases_t0'], context='dataset'))
print(b.get_constraint('compute_phases'))
print(b.get_parameter('compute_phases_t0').choices)
b.flip_constraint('compute_phases', solve_for='compute_times')
b.set_value('compute_phases', phoebe.linspace(0,1,11))
print(b.filter(qualifier=['times', 'compute_times', 'compute_phases', 'compute_phases_t0'], context='dataset'))
b.get_parameter('fluxes', context='model').get_value()
b.get_parameter('fluxes', context='model').interp_value(times=1.0)
b.get_parameter('fluxes', context='model').interp_value(times=phoebe.linspace(0,3,101))
b.get_parameter('fluxes', context='model').interp_value(times=5)
b.add_dataset('lc',
times=phoebe.linspace(0,10,1000),
dataset='lc01',
overwrite=True)
b.run_compute(irrad_method='none')
fluxes = b.get_value('fluxes', context='model')
b.set_value('fluxes', context='dataset', value=fluxes)
b.flip_constraint('compute_phases', solve_for='compute_times')
b.set_value('compute_phases', phoebe.linspace(0,1,101))
b.set_value('teff', component='primary', value=5950)
b.run_compute(irrad_method='none')
print(len(b.get_value('fluxes', context='dataset')), len(b.get_value('fluxes', context='model')))
b.calculate_residuals()
afig, mplfig = b.plot(show=True)
afig, mplfig = b.plot(y='residuals', show=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Lambda Expressions
|
<ASSISTANT_TASK:>
Python Code:
# Python language version
from platform import python_version
print('Python version used in this Jupyter notebook:', python_version())
# Defining a function - 3 lines of code
def potencia(num):
result = num**2
return result
potencia(5)
# Defining a function - 2 lines of code
def potencia(num):
return num**2
potencia(5)
# Defining a function - 1 line of code
def potencia(num): return num**2
potencia(5)
# Defining a lambda expression
potencia = lambda num: num**2
potencia(5)
# Remember: comparison operators return a boolean, True or False
Par = lambda x: x%2==0
Par(3)
Par(4)
first = lambda s: s[0]
first('Python')
reverso = lambda s: s[::-1]
reverso('Python')
addNum = lambda x,y : x+y
addNum(2,3)
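Lambdas shine as short, throwaway arguments to higher-order functions such as sorted, map, and filter. A quick sketch:

```python
words = ['Python', 'is', 'fun']
by_length = sorted(words, key=lambda w: len(w))        # sort by word length
squares = list(map(lambda n: n**2, [1, 2, 3, 4]))      # square each number
evens = list(filter(lambda n: n % 2 == 0, range(10)))  # keep even numbers
```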
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: True Changepoints
Step2: Estimated Changepoints with GFGL smoother
Step3: Visualising Graphical Models
Step4: Note that we only select three graphs to compare with the ground truth, i.e. those that lay inbetween the major changepoints at 30 and 60.
Step5: Generating Dynamic Graphical Models
|
<ASSISTANT_TASK:>
Python Code:
y = np.load('../data/y.npy')
sigma = np.load('../data/sigma.npy')
sigma_inv = np.load('../data/sigma_inv.npy')
T = 90 # Steps
K = 2 # Changepoints
P = 10 # Variables
M = 5 # Active Edges
eps = 0.000001 # Edge threshold epsilon
edges = get_edges(sigma_inv[0], eps)
change_points = get_change_points(sigma_inv, eps)
fig = plot_data_with_cps(y, change_points, ymin=-5, ymax=5)
verbose = False
tol = 1e-4
max_iter = 500
gammas = [1, 1, 1] # gamma_V1, gamma_V2, gamma_W
lambda1G = 0.15
lambda2G = 25
lambda1I = 0.25
lambda2I = 2
gfgl = GroupFusedGraphLasso(lambda1G, lambda2G, gammas[0], gammas[1], gammas[2], tol, max_iter, verbose)
gfgl.fit(y)
cps = get_change_points(gfgl.sparse_Theta, 0.01)
fig = plot_data_with_cps(y, cps, ymin=-5, ymax=5)
from graphtime.simulate import DynamicGraphicalModel
DGM = DynamicGraphicalModel.from_Thetas(sigma_inv)
DGM_est = DynamicGraphicalModel.from_Thetas(gfgl.sparse_Theta[[0, 45, 75]], eps=0.1)
DGM.draw();
DGM_est.draw();
DGM = DynamicGraphicalModel(10)
DGM.generate_graphs(n_edges_list=[3, 6])
DGM.draw();
X = DGM.sample(60, changepoints=[30])
plot_data_with_cps(X, [30]);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: The way you define groups affects your statistical tests
Step3: A perfect experiment
Step4: A $t$-test between samples of these populations is not consistent with the null hypothesis
Step5: Now the frequency of each experimental group is also plotted, as a histogram. The blue group - our negative examples, match the idealised distribution well. The green group - our positives - match the profile less well, but even so the difference between means is visibly apparent.
Step6: Again, the $P$-value reported by the $t$-test allows us to reject the null hypothesis. But there's a difference to the earlier graphs, as the two $P$-values in the title differ. That is because they represent two different tests.
Step7: Now the $t$-test reports several orders of magnitude difference in $P$-values. Both are still very small, and the difference in population means is clear, but the trend is obvious
Step8: The direction of change is again the same
Step9: With $FPR=0.2$, the $t$-test now reports a $P$-value that can be 20-30 orders of magnitude different from what would be seen with no misclassification. This is a considerable move towards being less able to reject the null hypothesis in what we might imagine to be a clear-cut case of having two distinct populations.
Step10: The effects of class size
Step13: Some realisations of the last example (n_neg=200, n_pos=10, fpr=0.1, fnr=0.1) result in a $P$-value that (at the 0.05 level) rejects the null hypothesis when there is no misclassification, but cannot reject the null hypothesis when misclassification is taken into account.
Step14: The perfect experiment
Step15: The red lines in the plot indicate a nominal $P=0.05$ threshold.
Step16: The effect of increasing $FNR$ is to move the reported $P$-values away from the $y=x$ diagonal, and towards the $P=0.05$ threshold. Even with $FNR=0.1$, almost every run of the experiment misreports the 'real' $P$-value such that we are less likely to reject the null hypothesis.
Step17: We see the same progression of 'observed' $P$-values away from what would be the 'real' $P$-value without misclassification, but this time much more rapidly than with $FNR$ as $FPR$ increases. Even for this very distinct pair of populations, whose 'true' $P$-value should be approximately $10^{-40}$, an $FPR$ of 0.2 runs the risk of occasionally failing to reject the null hypothesis that the population means are the same.
Step18: A more realistic population?
Step19: In a single realisation of a perfect experiment with no misclassification, the reported $P$-value is likely to very strongly reject the null-hypothesis
Step20: What is the impact of misclassification?
Step21: And now, at $FPR=FNR=0.05$ we start to see the population of experiments creeping into the upper left quadrant. In this quadrant we have experiments where data that (if classified correctly) would reject the null hypothesis are observed to accept it instead. This problem gets worse as the rate of misclassification increases.
Step22: At $FPR=FNR=0.2$
Step23: What does this all mean?
Step24: Note that, apart from the overall sample size (170 mouse proteins instead of 2000) the parameters for this run are not very different from those we have been using.
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
from scipy import stats
from ipywidgets import interact, fixed
def sample_distributions(mu_neg, mu_pos, sd_neg, sd_pos,
n_neg, n_pos, fnr, fpr,
clip_low, clip_high):
    """Returns subsamples and observations from two normal
    distributions.

    - mu_neg     mean of 'negative' samples
    - mu_pos     mean of 'positive' samples
    - sd_neg     standard deviation of 'negative' samples
    - sd_pos     standard deviation of 'positive' samples
    - n_neg      number of subsampled data points (negatives)
    - n_pos      number of subsampled data points (positives)
    - fnr        false negative rate (positives assigned to negative class)
    - fpr        false positive rate (negatives assigned to positive class)
    - clip_low   low value for clipping samples
    - clip_high  high value for clipping samples
    """
# subsamples
samples = (clip(stats.norm.rvs(mu_neg, sd_neg, size=n_neg), clip_low, clip_high),
clip(stats.norm.rvs(mu_pos, sd_pos, size=n_pos), clip_low, clip_high))
# observed samples, including FPR and FNR
[shuffle(s) for s in samples]
obs_neg = concatenate((samples[0][:int((1-fpr)*n_neg)],
samples[1][int((1-fnr)*n_pos):]))
obs_pos = concatenate((samples[1][:int((1-fnr)*n_pos)],
samples[0][int((1-fpr)*n_neg):]))
# return subsamples and observations
return ((samples[0], samples[1]), (obs_neg, obs_pos))
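Before layering misclassification on top, it is worth sanity-checking the bare Welch t-test on two cleanly separated samples. A hedged standalone example with a fixed seed; the means and sizes here are illustrative, not the ones used below:

```python
import numpy as np
from scipy import stats

rng = np.random.RandomState(0)            # fixed seed for reproducibility
neg = rng.normal(80, 5, size=200)
pos = rng.normal(90, 5, size=200)
t, p = stats.ttest_ind(neg, pos, equal_var=False)   # Welch's t-test
```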
def draw_sample_comparison(mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5,
n_neg=100, n_pos=100,
fnr=0, fpr=0,
clip_low=0, clip_high=100,
num_bins=50,
xmin=50, xmax=100, points=100,
subsample=True,
negcolor='blue', poscolor='green'):
    """Renders a matplotlib plot of normal distributions and subsamples,
    and returns t-test P values that the means of the two subsamples are
    equal, with and without FNR/FPR.

    - mu_neg     mean of 'negative' samples
    - mu_pos     mean of 'positive' samples
    - sd_neg     standard deviation of 'negative' samples
    - sd_pos     standard deviation of 'positive' samples
    - n_neg      number of subsampled data points (negatives)
    - n_pos      number of subsampled data points (positives)
    - fnr        false negative rate (positives assigned to negative class)
    - fpr        false positive rate (negatives assigned to positive class)
    - clip_low   low value for clipping samples
    - clip_high  high value for clipping samples
    - num_bins   number of bins for histogram
    - xmin       x-axis lower limit
    - xmax       x-axis upper limit
    - points     number of points for plotting PDF
    - subsample  Boolean: True plots subsamples
    """
    x = linspace(xmin, xmax, points)
# Normal PDFs
    norms = (stats.norm.pdf(x, mu_neg, sd_neg), stats.norm.pdf(x, mu_pos, sd_pos))
# Get subsamples and observations
samples, obs = sample_distributions(mu_neg, mu_pos, sd_neg, sd_pos,
n_neg, n_pos, fnr, fpr,
clip_low, clip_high)
# Plot distribution and samples
plot(x, norms[0], color=negcolor)
plot(x, norms[1], color=poscolor)
if subsample:
        h_neg = hist(samples[0], num_bins, density=True, facecolor=negcolor, alpha=0.5)
        h_pos = hist(samples[1], num_bins, density=True, facecolor=poscolor, alpha=0.5)
ax = gca()
ax.set_xlabel("value")
ax.set_ylabel("frequency")
# Calculate t-tests
t_sam = stats.ttest_ind(samples[0], samples[1], equal_var=False)
t_obs = stats.ttest_ind(obs[0], obs[1], equal_var=False)
ax.set_title("$P_{real}$: %.02e $P_{obs}$: %.02e" % (t_sam[1], t_obs[1]))
draw_sample_comparison(mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, subsample=False)
draw_sample_comparison(mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=2000, n_pos=100)
draw_sample_comparison(mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=2000, n_pos=100, fnr=0.01)
draw_sample_comparison(mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=2000, n_pos=100, fnr=0.1)
draw_sample_comparison(mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=2000, n_pos=100, fnr=0.2)
draw_sample_comparison(mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=2000, n_pos=100, fpr=0.01)
draw_sample_comparison(mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=2000, n_pos=100, fpr=0.1)
draw_sample_comparison(mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=2000, n_pos=100, fpr=0.2)
draw_sample_comparison(mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=2000, n_pos=100, fpr=0.01, fnr=0.01)
draw_sample_comparison(mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=2000, n_pos=100, fpr=0.1, fnr=0.1)
draw_sample_comparison(mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=2000, n_pos=100, fpr=0.2, fnr=0.2)
draw_sample_comparison(mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=2000, n_pos=100, fpr=0.1, fnr=0.1)
draw_sample_comparison(mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=1000, n_pos=50, fpr=0.1, fnr=0.1)
draw_sample_comparison(mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=200, n_pos=10, fpr=0.1, fnr=0.1)
def multiple_samples(n_samp=1000,
mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5,
n_neg=100, n_pos=100,
fnr=0, fpr=0,
clip_low=0, clip_high=100):
    """Returns the distribution of P-values obtained from subsampled
    and observed (with FNR/FPR) normal distributions, over n_samp
    repetitions.

    - n_samp     number of times to (re)sample from the distribution
    - mu_neg     mean of 'negative' samples
    - mu_pos     mean of 'positive' samples
    - sd_neg     standard deviation of 'negative' samples
    - sd_pos     standard deviation of 'positive' samples
    - n_neg      number of subsampled data points (negatives)
    - n_pos      number of subsampled data points (positives)
    - fnr        false negative rate (positives assigned to negative class)
    - fpr        false positive rate (negatives assigned to positive class)
    - clip_low   low value for clipping samples
    - clip_high  high value for clipping samples
    """
p_sam, p_obs = [], []
for n in range(n_samp):
samples, obs = sample_distributions(mu_neg, mu_pos, sd_neg, sd_pos,
n_neg, n_pos, fnr, fpr,
clip_low, clip_high)
t_sam = stats.ttest_ind(samples[0], samples[1], equal_var=False)
t_obs = stats.ttest_ind(obs[0], obs[1], equal_var=False)
p_sam.append(t_sam[1])
p_obs.append(t_obs[1])
# return the P-values
return (p_sam, p_obs)
def draw_multiple_samples(n_samp=1000,
mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5,
n_neg=100, n_pos=100,
fnr=0, fpr=0,
clip_low=0, clip_high=100,
logy=True):
    """Plots the distribution of P-values obtained from subsampled
    and observed (with FNR/FPR) normal distributions, over n_samp
    repetitions.

    - n_samp     number of times to (re)sample from the distribution
    - mu_neg     mean of 'negative' samples
    - mu_pos     mean of 'positive' samples
    - sd_neg     standard deviation of 'negative' samples
    - sd_pos     standard deviation of 'positive' samples
    - n_neg      number of subsampled data points (negatives)
    - n_pos      number of subsampled data points (positives)
    - fnr        false negative rate (positives assigned to negative class)
    - fpr        false positive rate (negatives assigned to positive class)
    - clip_low   low value for clipping samples
    - clip_high  high value for clipping samples
    - logy       if True, plot both axes on a log scale
    """
p_sam, p_obs = multiple_samples(n_samp, mu_neg, mu_pos,
sd_neg, sd_pos, n_neg, n_pos,
fnr, fpr, clip_low, clip_high)
# plot P-values against each other
if logy:
p = loglog(p_sam, p_obs, 'o', alpha=0.3)
else:
p = semilogx(p_sam, p_obs, 'o', alpha=0.3)
ax = gca()
ax.set_xlabel("'Real' subsample P-value")
ax.set_ylabel("Observed subsample P-value")
ax.set_title("reps=%d $n_{neg}$=%d $n_{pos}$=%d FNR=%.02f FPR=%.02f" %
(n_samp, n_neg, n_pos, fnr, fpr))
# Add y=x lines, P=0.05
lims = [min([ax.get_xlim(), ax.get_ylim()]),
max([(0.05, 0.05), max([ax.get_xlim(), ax.get_ylim()])])]
if logy:
loglog(lims, lims, 'k', alpha=0.75)
ax.set_aspect('equal')
else:
semilogx(lims, lims, 'k', alpha=0.75)
vlines(0.05, min(ax.get_ylim()), max(max(ax.get_ylim()), 0.05), color='red') # add P=0.05 lines
hlines(0.05, min(ax.get_xlim()), max(max(ax.get_xlim()), 0.05), color='red')
draw_multiple_samples(n_samp=1000, mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=2000, n_pos=100)
draw_multiple_samples(n_samp=1000, mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=2000, n_pos=100, fnr=0.01)
draw_multiple_samples(n_samp=1000, mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=2000, n_pos=100, fnr=0.1)
draw_multiple_samples(n_samp=1000, mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=2000, n_pos=100, fnr=0.2)
draw_multiple_samples(n_samp=1000, mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=2000, n_pos=100, fpr=0.01)
draw_multiple_samples(n_samp=1000, mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=2000, n_pos=100, fpr=0.1)
draw_multiple_samples(n_samp=1000, mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=2000, n_pos=100, fpr=0.2)
draw_multiple_samples(n_samp=1000, mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=2000, n_pos=100, fpr=0.2, fnr=0.2)
draw_multiple_samples(n_samp=1000, mu_neg=85, mu_pos=90, sd_neg=6, sd_pos=6, n_neg=1000, n_pos=50)
draw_sample_comparison(mu_neg=80, mu_pos=90, sd_neg=6, sd_pos=6, n_neg=1000, n_pos=50)
draw_multiple_samples(n_samp=1000, mu_neg=85, mu_pos=90, sd_neg=6, sd_pos=6, n_neg=1000, n_pos=50, fnr=0.01, fpr=0.01)
draw_multiple_samples(n_samp=1000, mu_neg=85, mu_pos=90, sd_neg=6, sd_pos=6, n_neg=1000, n_pos=50, fnr=0.05, fpr=0.05)
draw_multiple_samples(n_samp=1000, mu_neg=85, mu_pos=90, sd_neg=6, sd_pos=6, n_neg=1000, n_pos=50, fnr=0.1, fpr=0.1)
draw_multiple_samples(n_samp=1000, mu_neg=85, mu_pos=90, sd_neg=6, sd_pos=6, n_neg=1000, n_pos=50,
fnr=0.1, fpr=0.1, logy=False)
draw_multiple_samples(n_samp=1000, mu_neg=85, mu_pos=90, sd_neg=6, sd_pos=6, n_neg=1000, n_pos=50, fnr=0.2, fpr=0.2)
draw_multiple_samples(n_samp=1000, mu_neg=85, mu_pos=90, sd_neg=6, sd_pos=6, n_neg=1000, n_pos=50,
fnr=0.2, fpr=0.2, logy=False)
draw_multiple_samples(n_samp=1000, mu_neg=85, mu_pos=89, sd_neg=7, sd_pos=7, n_neg=150, n_pos=20,
fnr=0.1, fpr=0.15, logy=False)
interact(draw_sample_comparison,
mu_neg=(60, 99, 1), mu_pos=(60, 99, 1),
sd_neg=(0, 15, 1), sd_pos=(0, 15, 1),
n_neg=(0, 150, 1), n_pos=(0, 150, 1),
fnr=(0, 1, 0.01), fpr=(0, 1, 0.01),
clip_low=fixed(0), clip_high=fixed(100),
num_bins=fixed(50), xmin=fixed(50),
xmax=fixed(100), points=fixed(100),
subsample=True, negcolor=fixed('blue'),
poscolor=fixed('green'))
interact(draw_multiple_samples,
mu_neg=(60, 99, 1), mu_pos=(60, 99, 1),
sd_neg=(0, 15, 1), sd_pos=(0, 15, 1),
n_neg=(0, 150, 1), n_pos=(0, 150, 1),
fnr=(0, 1, 0.01), fpr=(0, 1, 0.01),
clip_low=fixed(0), clip_high=fixed(100))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: degree centrality
Step2: closeness centrality
Step3: betweenness centrality
Step4: degree assortativity coefficient
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import networkx as nx
from copy import deepcopy
import matplotlib.pyplot as plt
%matplotlib inline
from matplotlib.backends.backend_pdf import PdfPages
from glob import glob
fileName = 'article0'
def getFiles(fileName):
matches = glob('*'+fileName+'*')
bigFile = matches[0]
    # pd.DataFrame.from_csv has been removed; read_csv with index_col=0 is the modern equivalent
    data = pd.read_csv(bigFile, index_col=0)
return clearSource(data)
def clearSource(data):
columns = ['source','target']
pre = len(data)
for column in columns:
data = data[pd.notnull(data[column])]
post = len(data)
    print("Filtered %s rows to %s rows by removing rows with blank values in columns %s" % (pre, post, columns))
return data
#data = getFiles(fileName)
def getStuff(data,labels):
forEdges = labels == ['edge']
columns = list(data.columns.values)
items = dict()
nameFunc = {True: lambda x,y: '%s - %s - %s' % (x['source'],x['edge'],x['target']),
False: lambda x,y: x[y]}[forEdges]
extra = ['source','target'] * forEdges
for label in labels:
relevant = [col for col in columns if label+'-' in col] + extra
#relevant = extra
        print("Extracting %s data from %s" % (label, relevant))
for i in data.index:
            row = data.loc[i]
for col in relevant:
if str(row[col]).lower() != 'nan':
name = nameFunc(row,label)
if name not in items:
items[name] = dict()
items[name][col.replace(label+'-','')] = row[col]
return items
def getNodes(data):
return getStuff(data,['source','target'])
def getEdges(data):
return getStuff(data,['edge'])
#allNodes = getNodes(data); allEdges = getEdges(data)
def addNodes(graph,nodes):
    for key, value in nodes.items():
graph.add_node(key,attr_dict=value)
return graph
def addEdges(graph,edges):
    for key, value in edges.items():
value['label'] = key
value['edge'] = key.split(' - ')[1]
graph.add_edge(value['source'],value['target'],attr_dict = value)
return graph
#########
def createNetwork(edges,nodes):
graph = nx.MultiGraph()
graph = addNodes(graph,nodes)
graph = addEdges(graph,edges)
return graph
#fullGraph = createNetwork(allEdges,allNodes)
def drawIt(graph,what='graph', save_plot=None):
style=nx.spring_layout(graph)
size = graph.number_of_nodes()
    print("Drawing %s of size %s:" % (what, size))
if size > 20:
plt.figure(figsize=(10,10))
if size > 40:
nx.draw(graph,style,node_size=60,font_size=8)
if save_plot is not None:
print('saving: {}'.format(save_plot))
plt.savefig(save_plot)
else:
nx.draw(graph,style)
if save_plot is not None:
print('saving: {}'.format(save_plot))
plt.savefig(save_plot)
else:
nx.draw(graph,style)
if save_plot is not None:
print('saving: {}'.format(save_plot))
plt.savefig(save_plot)
plt.show()
def describeGraph(graph, save_plot=None):
components = nx.connected_components(graph)
components = list(components)
isolated = [entry[0] for entry in components if len(entry)==1]
params = (graph.number_of_edges(),graph.number_of_nodes(),len(components),len(isolated))
    print("Graph has %s nodes, %s edges, %s connected components, and %s isolated nodes\n" % params)
drawIt(graph, save_plot=save_plot)
for idx, sub in enumerate(components):
drawIt(graph.subgraph(sub),what='component', save_plot='{}-{}.png'.format('component', idx))
    print("Isolated nodes:", isolated)
def getGraph(fileRef, save_plot=None):
    data = getFiles(fileRef)
nodes = getNodes(data)
edges = getEdges(data)
graph = createNetwork(edges,nodes)
fileOut = fileRef.split('.')[0]+'.gml'
    print("Writing GML file to %s" % fileOut)
nx.write_gml(graph, fileOut)
fileOutNet = fileRef.split('.')[0]+'.net'
    print("Writing net file to %s" % fileOutNet)
nx.write_pajek(graph, fileOutNet)
describeGraph(graph, save_plot)
return graph, nodes, edges
fileName = 'data/csv/article1'
graph, nodes, edges = getGraph(fileName, save_plot='graph.png')
plt.figure(figsize=(12, 12))
nx.draw_spring(graph, node_color='g', with_labels=True, arrows=True)
plt.show()
# return a dictionary of centrality values for each node
nx.degree_centrality(graph)
# the type of degree centrality is a dictionary
type(nx.degree_centrality(graph))
# get all the values of the dictionary, this returns a list of centrality scores
# turn the list into a numpy array
# take the mean of the numpy array
np.array(list(nx.degree_centrality(graph).values())).mean()
nx.closeness_centrality(graph)
nx.betweenness_centrality(graph)
np.array(list(nx.betweenness_centrality(graph).values())).mean()
nx.degree_assortativity_coefficient(graph)
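These measures are easier to interpret on a graph whose answer is known in advance. A hedged sanity check on a three-node path graph (0 - 1 - 2): the middle node touches both others, so its degree centrality is 1.0, and it sits on the only shortest path between the two ends.

```python
import networkx as nx

P3 = nx.path_graph(3)                  # 0 - 1 - 2
deg = nx.degree_centrality(P3)         # degree / (n - 1)
btw = nx.betweenness_centrality(P3)    # normalized betweenness
```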
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We will continue to refer to this client object for accessing the remote server.
Step2: NOTE
|
<ASSISTANT_TASK:>
Python Code:
from ga4gh.client import client
c = client.HttpClient("http://1kgenomes.ga4gh.org")
dataset = next(c.search_datasets())
print(dataset)
data_set_id = dataset.id
dataset_via_get = c.get_dataset(dataset_id=data_set_id)
print(dataset_via_get)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In the previous chapter we modeled objects moving in one dimension, with and without drag. Now let's move on to two dimensions, and baseball!
Step2: You can access the components of a Vector by name using the dot operator, like this
Step3: You can also access them by index using brackets, like this
Step4: Vector objects support most mathematical operations, including addition and subtraction
Step5: For the definition and graphical interpretation of these operations, see http
Step6: The magnitude is 5 because the length of A is the hypotenuse of a 3-4-5 triangle.
Step7: And a function to convert degrees to radians
Step8: Following convention, I'll use angle for a value in degrees and theta for a value in radians.
Step9: Another way to represent the direction of A is a unit vector, which has magnitude 1 and points in the same direction as A
Step10: We can do the same thing using the vector_hat function, so named because unit vectors are conventionally decorated with a hat, like this
Step11: Now let's get back to the game.
Step12: I got the mass and diameter of the baseball from Wikipedia, and the coefficient of drag from The Physics of Baseball
Step13: make_system uses deg2rad to convert angle to radians and
Step14: And here's the initial State
Step15: Next we need a function to compute drag force
Step16: This function takes V as a Vector and returns f_drag as a Vector
Step17: The result is a Vector that represents the drag force on the baseball, in Newtons, under the initial conditions.
Step18: As usual, the parameters of the slope function are a time, a State object, and a System object.
Step19: Using vectors to represent forces and accelerations makes the code concise
Step20: The event function takes the same parameters as the slope function, and returns the $y$ coordinate of position. When the $y$ coordinate passes through 0, the simulation stops.
Step21: Now we're ready to run the simulation
Step22: details contains information about the simulation, including a message that indicates that a "termination event" occurred; that is, the simulated ball reached the ground.
Step23: We can get the flight time like this
Step24: And the final state like this
Step25: The final value of y is close to 0, as it should be. The final value of x tells us how far the ball flew, in meters.
Step26: We can also get the final velocity, like this
Step27: The speed of the ball on impact is about 26 m/s, which is substantially slower than the initial velocity, 40 m/s.
Step28: Trajectories
Step29: As expected, the $x$ component increases as the ball moves away from home plate. The $y$ position climbs initially and then descends, falling to 0 m near 5.0 s.
Step30: This way of visualizing the results is called a trajectory plot (see http
Step31: Exercises
Step32: Exercise
Step33: Exercise
Step35: Modify the model to include the dependence of C_d on velocity, and see how much it affects the results.
|
<ASSISTANT_TASK:>
Python Code:
# install Pint if necessary
try:
import pint
except ImportError:
!pip install pint
# download modsim.py if necessary
from os.path import exists
filename = 'modsim.py'
if not exists(filename):
from urllib.request import urlretrieve
url = 'https://raw.githubusercontent.com/AllenDowney/ModSim/main/'
local, _ = urlretrieve(url+filename, filename)
print('Downloaded ' + local)
# import functions from modsim
from modsim import *
A = Vector(3, 4)
A
A.x, A.y
A[0], A[1]
B = Vector(1, 2)
B
A + B
A - B
mag = vector_mag(A)
theta = vector_angle(A)
mag, theta
from numpy import rad2deg
angle = rad2deg(theta)
angle
from numpy import deg2rad
theta = deg2rad(angle)
theta
x, y = pol2cart(theta, mag)
Vector(x, y)
A / vector_mag(A)
vector_hat(A)
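modsim's vector helpers are thin wrappers; the same magnitude and angle can be computed with plain numpy. A sketch, assuming the components of A are just the numbers 3 and 4:

```python
import numpy as np

x, y = 3, 4
mag = np.hypot(x, y)          # magnitude, sqrt(x**2 + y**2)
theta = np.arctan2(y, x)      # angle in radians from the x-axis
angle = np.degrees(theta)
```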
params = Params(
x = 0, # m
y = 1, # m
angle = 45, # degree
velocity = 40, # m / s
mass = 145e-3, # kg
diameter = 73e-3, # m
C_d = 0.33, # dimensionless
rho = 1.2, # kg/m**3
g = 9.8, # m/s**2
t_end = 10, # s
)
from numpy import pi, deg2rad
def make_system(params):
    # convert angle to radians
theta = deg2rad(params.angle)
# compute x and y components of velocity
vx, vy = pol2cart(theta, params.velocity)
# make the initial state
init = State(x=params.x, y=params.y, vx=vx, vy=vy)
# compute the frontal area
area = pi * (params.diameter/2)**2
return System(params,
init = init,
area = area,
)
system = make_system(params)
system.init
def drag_force(V, system):
rho, C_d, area = system.rho, system.C_d, system.area
mag = rho * vector_mag(V)**2 * C_d * area / 2
direction = -vector_hat(V)
f_drag = mag * direction
return f_drag
vx, vy = system.init.vx, system.init.vy
V_test = Vector(vx, vy)
drag_force(V_test, system)
def slope_func(t, state, system):
x, y, vx, vy = state
mass, g = system.mass, system.g
V = Vector(vx, vy)
a_drag = drag_force(V, system) / mass
a_grav = g * Vector(0, -1)
A = a_grav + a_drag
return V.x, V.y, A.x, A.y
slope_func(0, system.init, system)
def event_func(t, state, system):
x, y, vx, vy = state
return y
event_func(0, system.init, system)
results, details = run_solve_ivp(system, slope_func,
events=event_func)
details.message
results.tail()
flight_time = results.index[-1]
flight_time
final_state = results.iloc[-1]
final_state
x_dist = final_state.x
x_dist
final_V = Vector(final_state.vx, final_state.vy)
final_V
vector_mag(final_V)
results.x.plot(color='C4')
results.y.plot(color='C2', style='--')
decorate(xlabel='Time (s)',
ylabel='Position (m)')
def plot_trajectory(results):
x = results.x
y = results.y
make_series(x, y).plot(label='trajectory')
decorate(xlabel='x position (m)',
ylabel='y position (m)')
plot_trajectory(results)
from matplotlib.pyplot import plot
xlim = results.x.min(), results.x.max()
ylim = results.y.min(), results.y.max()
def draw_func(t, state):
plot(state.x, state.y, 'bo')
decorate(xlabel='x position (m)',
ylabel='y position (m)',
xlim=xlim,
ylim=ylim)
# animate(results, draw_func)
# Hint
system2 = make_system(params.set(C_d=0))
# Solution
results2, details2 = run_solve_ivp(system2, slope_func,
events=event_func)
details2.message
# Solution
plot_trajectory(results)
plot_trajectory(results2)
# Solution
x_dist2 = results2.iloc[-1].x
x_dist2
# Solution
x_dist2 - x_dist
# Hint
system3 = make_system(params.set(rho=1.0))
# Solution
results3, details3 = run_solve_ivp(system3, slope_func,
events=event_func)
x_dist3 = results3.iloc[-1].x
x_dist3
# Solution
x_dist3 - x_dist
import os
filename = 'baseball_drag.csv'
if not os.path.exists(filename):
!wget https://raw.githubusercontent.com/AllenDowney/ModSimPy/master/data/baseball_drag.csv
from pandas import read_csv
baseball_drag = read_csv(filename)
mph = Quantity(baseball_drag['Velocity in mph'], units.mph)
mps = mph.to(units.meter / units.second)
baseball_drag.index = mps.magnitude
baseball_drag.index.name = 'Velocity in meters per second'
baseball_drag.head()
drag_interp = interpolate(baseball_drag['Drag coefficient'])
vs = linspace(0, 60)
cds = drag_interp(vs)
make_series(vs, cds).plot()
decorate(xlabel='Velocity (m/s)', ylabel='C_d')
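modsim's `interpolate` returns a function that looks up values between the tabulated (velocity, C_d) points, conceptually piecewise-linear interpolation. A stdlib sketch of that idea (the sample table here is illustrative, not the measured baseball data):

```python
from bisect import bisect_right

def linear_interp(xs, ys):
    """Return a function that linearly interpolates the tabulated (xs, ys)."""
    def f(x):
        i = bisect_right(xs, x)          # index of the bracketing segment
        i = min(max(i, 1), len(xs) - 1)  # clamp to the table's range
        x0, x1 = xs[i - 1], xs[i]
        y0, y1 = ys[i - 1], ys[i]
        return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    return f

cd_of_v = linear_interp([0, 20, 40, 60], [0.5, 0.5, 0.3, 0.3])
```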
# Solution
def drag_force2(V, system):
    """Computes drag force in the opposite direction of `V`.

    V: velocity
    system: System object with rho, C_d, area
    returns: Vector drag force
    """
rho, area = system.rho, system.area
C_d = drag_interp(vector_mag(V))
mag = -rho * vector_mag(V)**2 * C_d * area / 2
direction = vector_hat(V)
f_drag = direction * mag
return f_drag
# Solution
def slope_func2(t, state, system):
x, y, vx, vy = state
mass, g = system.mass, system.g
V = Vector(vx, vy)
a_drag = drag_force2(V, system) / mass
a_grav = g * Vector(0, -1)
A = a_grav + a_drag
return V.x, V.y, A.x, A.y
# Solution
system4 = make_system(params)
# Solution
V = Vector(30, 30)
f_drag = drag_force(V, system4)
f_drag
# Solution
slope_func(0, system4.init, system4)
# Solution
results4, details4 = run_solve_ivp(system4, slope_func2,
events=event_func)
details4.message
# Solution
results4.tail()
# Solution
x_dist4 = results4.iloc[-1].x
x_dist4
# Solution
x_dist4 - x_dist
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: doc2vec
Step2: Prediction
Step3: The document vectors are going to be identified by the id we used in the preprocessing section; for example, document 1 is going to have the vector
Step4: We can ask for similar words or documents for document 1
|
<ASSISTANT_TASK:>
Python Code:
import os
import nltk
directories = ['train/pos', 'train/neg', 'test/pos', 'test/neg', 'train/unsup']
input_file = open('../data/alldata.txt', 'w')
id_ = 0
for directory in directories:
rootdir = os.path.join('../data/aclImdb', directory)
for subdir, dirs, files in os.walk(rootdir):
for file_ in files:
with open(os.path.join(subdir, file_), "r") as f:
doc_id = "_*%i" % id_
id_ = id_ + 1
text = f.read()
text = text
tokens = nltk.word_tokenize(text)
doc = " ".join(tokens).lower()
doc = doc.encode("ascii", "ignore")
input_file.write("%s %s\n" % (doc_id, doc))
input_file.close()
%load_ext autoreload
%autoreload 2
import word2vec
word2vec.doc2vec('../data/alldata.txt', '../data/doc2vec-vectors.bin', cbow=0, size=100, window=10, negative=5,
hs=0, sample='1e-4', threads=12, iter_=20, min_count=1, binary=True, verbose=True)
%load_ext autoreload
%autoreload 2
import word2vec
model = word2vec.load('../data/doc2vec-vectors.bin')
model.vectors.shape
model['_*1']
indexes, metrics = model.similar('_*1')
model.generate_response(indexes, metrics).tolist()
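`similar` typically ranks neighbors by cosine similarity between embedding vectors. A stdlib sketch of that metric (the short vectors below are made up for illustration, not real doc2vec output):

```python
from math import sqrt, isclose

def cosine_similarity(u, v):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm

sim = cosine_similarity([1.0, 2.0, 0.0], [2.0, 4.0, 0.0])  # parallel vectors
```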
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Convert from maggies to magnitudes, use the "clean" fluxes when available, but only if >0 to avoid problems with logs.
Step2: Cut some questionable data
Step3: Known Quasars II
Step4: Use Coleman and Tina's densityplot instead. Don't know why it won't do the color bar or why it won't show the full plot at once.
Step5: Test data
Step6: Self-Evaluation: Regression Tests
Step7: So 500 objects are taking 2.5s. We need to run it on as many as 200,000 objects eventually, so that is >2500s, which is something to keep in mind!
Step8: We will use dask to handle the fact that we don't have enough memory to process the full array size at once and the amount of free disk for swap is also insufficient.
Step9: Normally, I compute the fraction that are within $\Delta z$ of 0.1 or 0.3 as my diagnostic. Ideally shooting for 90% within 0.3.
Step10: Can also compute sklearn metrics.
Step11: explained variance is 0.78 for Xtest and was undefined for XStest due to a NaN in the output data array.
Step12: Alternative Training Sets
Step13: Not as good as Nadaraya-Watson. Let's look at a plot to see why.
Step14: Not obvious, looks about the same.
Step15: SVM
Step16: Stochastic Gradient Descent
Step17: Interesting. This is terrible, but there are essentially no outliers. I wonder if it is worth trying to use as a prior?
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from astropy.table import Table
import numpy as np
import matplotlib.pyplot as plt
data3 = Table.read('/Users/gtr/Work/sdss/mastercat/GTR-ADM-QSO-master-sweeps-Feb5-2016.zspeconly.fits')
print len(data3)
#data3.keys()
# Get rid of objects with negative fluxes
mask = ( (data3['PSFFLUX'][:,0]>0) & (data3['PSFFLUX'][:,1]>0) & (data3['PSFFLUX'][:,2]>0) & (data3['PSFFLUX'][:,3]>0) & (data3['PSFFLUX'][:,4]>0) )
data3 = data3[mask]
data3['u'] = 22.5-2.5*np.log10(np.where(((data3['PSF_CLEAN_NUSE'][:,0]>0) & (data3['PSFFLUX_CLEAN'][:,0]>0)), data3['PSFFLUX_CLEAN'][:,0], data3['PSFFLUX'][:,0]))
data3['g'] = 22.5-2.5*np.log10(np.where(((data3['PSF_CLEAN_NUSE'][:,1]>0) & (data3['PSFFLUX_CLEAN'][:,1]>0)), data3['PSFFLUX_CLEAN'][:,1], data3['PSFFLUX'][:,1]))
data3['r'] = 22.5-2.5*np.log10(np.where(((data3['PSF_CLEAN_NUSE'][:,2]>0) & (data3['PSFFLUX_CLEAN'][:,2]>0)), data3['PSFFLUX_CLEAN'][:,2], data3['PSFFLUX'][:,2]))
data3['i'] = 22.5-2.5*np.log10(np.where(((data3['PSF_CLEAN_NUSE'][:,3]>0) & (data3['PSFFLUX_CLEAN'][:,3]>0)), data3['PSFFLUX_CLEAN'][:,3], data3['PSFFLUX'][:,3]))
data3['z'] = 22.5-2.5*np.log10(np.where(((data3['PSF_CLEAN_NUSE'][:,4]>0) & (data3['PSFFLUX_CLEAN'][:,4]>0)), data3['PSFFLUX_CLEAN'][:,4], data3['PSFFLUX'][:,4]))
mask2 = ((data3['u']>35) | (data3['g']>30) | (data3['r']>30) | (data3['i']>30) | (data3['z']>30))
data3 = data3[mask2]
X = np.vstack([ data3['u'], data3['g'], data3['r'], data3['i'], data3['z']]).T
y = np.array(data3['ZSPEC'])
ug = np.array(data3['u']-data3['g'])
gr = data3['g']-data3['r']
fig = plt.figure(figsize=(6,6))
#plt.scatter(y,(X[:,0]-X[:,1]))
plt.scatter(y,ug)
plt.xlabel('Redshift')
plt.ylabel('u-g')
plt.show()
from sklearn.model_selection import train_test_split, cross_val_predict
from sklearn.metrics import classification_report
# Split the training data into training and test sets for cross-validation
Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, test_size=0.25, random_state=42)
# Read in data file
%matplotlib inline
from astropy.table import Table
import numpy as np
import matplotlib.pyplot as plt
data = Table.read('GTR-ADM-QSO-ir-testhighz_findbw_lup_2016_starclean.fits')
# Remove stars
qmask = (data['zspec']>0)
qdata = data[qmask]
print len(qdata)
# X is in the format need for all of the sklearn tools, it just has the colors
X = np.vstack([ qdata['ug'], qdata['gr'], qdata['ri'], qdata['iz'], qdata['zs1'], qdata['s1s2']]).T
#y = np.array(data['labels'])
y = np.array(qdata['zspec'])
from sklearn.model_selection import train_test_split, cross_val_predict
from sklearn.metrics import classification_report
# Split the training data into training and test sets for cross-validation
Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, test_size=0.20, random_state=42)
# For algorithms that need scaled data:
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(Xtrain) # Don't cheat - fit only on training data
XStrain = scaler.transform(Xtrain)
XStest = scaler.transform(Xtest) # apply same transformation to test data
#Include more attributes to see if that helps at all with classification
#XX = np.vstack([ data['ug'], data['gr'], data['ri'], data['iz'], data['zs1'], data['s1s2'], data['imag'], data['extinctu'], data['morph']]).T
fig = plt.figure(figsize=(6,6))
plt.scatter(y,qdata['ug'])
plt.xlabel('Redshift')
plt.ylabel('u-g')
plt.show()
# Same plot, but with KDE
Xplot = np.vstack([ qdata['zspec'], qdata['ug'] ]).T
from sklearn.neighbors import KernelDensity
kde = KernelDensity(kernel='gaussian', bandwidth=0.1)
kde.fit(Xplot) #fit the model to the data
u = np.linspace(-0.1,7,100)
v = np.linspace(-2.5,7.5,100)
Xgrid = np.vstack(map(np.ravel, np.meshgrid(u, v))).T
dens = np.exp(kde.score_samples(Xgrid)) #evaluate the model on the grid
fig = plt.figure(figsize=(6,6))
plt.scatter(Xgrid[:,0],Xgrid[:,1], c=dens, cmap="Purples", edgecolor="None")
plt.xlabel('Redshift')
plt.ylabel('u-g')
plt.xlim([0,5])
plt.ylim([-0.5,3])
plt.colorbar()
plt.show()
from densityplot import *
from pylab import *
fig = plt.figure(figsize=(6,6))
hex_scatter(y,qdata['ug'], min_cnt=10, levels=2, std=True, smoothing=1,
hkwargs={'gridsize': 100, 'cmap': plt.cm.Blues},
skwargs={'color': 'k'})
plt.colorbar()
#testdata = Table.read('GTR-ADM-QSO-ir_good_test_2016.fits')
testdata = Table.read('GTR-ADM-QSO-ir_good_test_2016_out_Stripe82all.fits')
print testdata.keys()
qsocandmask = ((testdata['ypredRFC']==0) | (testdata['ypredSVM']==0) | (testdata['ypredBAG']==0))
testdatacand = testdata[qsocandmask]
print len(testdata),len(testdatacand)
# How many of these are known quasars?
print len(testdata[testdata['knownqso']==1])
print len(testdatacand[testdatacand['knownqso']==1])
import numpy as np
from astroML.linear_model import NadarayaWatson
model = NadarayaWatson('gaussian', 0.05)
#model = NadarayaWatson('rbf')
model.fit(Xtrain,ytrain)
%timeit ypred = model.predict(Xtest[:500])
print len(Xtest)
fig = plt.figure(figsize=(6,6))
#plt.scatter(y,(X[:,0]-X[:,1]))
plt.scatter(ytest[:500],ypred)
plt.xlabel('zspec')
plt.ylabel('zphot')
plt.xlim([-0.1,5.5])
plt.ylim([-0.1,5.5])
plt.show()
from dask import compute, delayed
def process(Xin):
return model.predict(Xin)
# Create dask objects
# Reshape is necessary because the format of x as drawm from Xtest
# is not what sklearn wants.
dobjs = [delayed(process)(x.reshape(1,-1)) for x in Xtest]
#print dobjs
#%%timeit
import dask.threaded
ypredselfNW = compute(*dobjs, get=dask.threaded.get)
# The dask output is sort of odd, so this is just to put the result back into the expected format.
ypredselfNW = np.array(ypredselfNW).reshape(1,-1)[0]
print len(ypredselfNW),ypredselfNW.max()
n = len(ypredselfNW)
mask1 = (np.abs(ypredselfNW-ytest)<0.1)
mask3 = (np.abs(ypredselfNW-ytest)<0.3)
m1 = len(ypredselfNW[mask1])
m3 = len(ypredselfNW[mask3])
frac1 = 1.0*m1/n
frac3 = 1.0*m3/n
print frac1,frac3
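The same delta-z completeness metric is recomputed for every model below; it can be factored into a small pure-Python helper (a sketch; `delta_z_fractions` is our own name, not part of the notebook):

```python
def delta_z_fractions(z_pred, z_true, thresholds=(0.1, 0.3)):
    """Fraction of objects with |z_pred - z_true| below each threshold."""
    n = len(z_pred)
    return tuple(
        sum(abs(p - t) < cut for p, t in zip(z_pred, z_true)) / float(n)
        for cut in thresholds
    )

fracs = delta_z_fractions([1.0, 2.0, 3.0], [1.05, 2.5, 3.0])
```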
from sklearn.metrics import explained_variance_score
explained_variance_score(ytest, ypredselfNW)
from densityplot import *
from pylab import *
fig = plt.figure(figsize=(6,6))
hex_scatter(ytest,ypredselfNW, min_cnt=10, levels=2, std=True, smoothing=1,
hkwargs={'gridsize': 100, 'cmap': plt.cm.Blues},
skwargs={'color': 'k'})
plt.xlabel('zspec')
plt.ylabel('zphot')
#plt.xlim([-0.1,5.5])
#plt.ylim([-0.1,5.5])
plt.show()
# Another way to implement dask
%%timeit
import dask.multiprocessing
ypred2 = compute(*dobjs, get=dask.multiprocessing.get)
# Fit a Random Forest to the full spectroscopic sample
from sklearn.ensemble import RandomForestRegressor
modelRF = RandomForestRegressor()
modelRF.fit(Xtrain,ytrain)
ypredselfRFR = modelRF.predict(Xtest)
n = len(ypredselfRFR)
mask1 = (np.abs(ypredselfRFR-ytest)<0.1)
mask3 = (np.abs(ypredselfRFR-ytest)<0.3)
m1 = len(ypredselfRFR[mask1])
m3 = len(ypredselfRFR[mask3])
frac1 = 1.0*m1/n
frac3 = 1.0*m3/n
print frac1,frac3
from sklearn.metrics import explained_variance_score
explained_variance_score(ytest, ypredselfRFR)
from densityplot import *
from pylab import *
fig = plt.figure(figsize=(6,6))
hex_scatter(ytest,ypredselfRFR, min_cnt=10, levels=2, std=True, smoothing=1,
hkwargs={'gridsize': 100, 'cmap': plt.cm.Blues},
skwargs={'color': 'k'})
plt.xlabel('zspec')
plt.ylabel('zphot')
#plt.xlim([-0.1,5.5])
#plt.ylim([-0.1,5.5])
plt.show()
from sklearn.gaussian_process import GaussianProcessRegressor
gp = GaussianProcessRegressor()
gp.fit(Xtrain,ytrain)
ypredselfGP = gp.predict(Xtest)
n = len(ypredselfGP)
mask1 = (np.abs(ypredselfGP-ytest)<0.1)
mask3 = (np.abs(ypredselfGP-ytest)<0.3)
m1 = len(ypredselfGP[mask1])
m3 = len(ypredselfGP[mask3])
frac1 = 1.0*m1/n
frac3 = 1.0*m3/n
print frac1,frac3
from sklearn.metrics import explained_variance_score
explained_variance_score(ytest, ypredselfGP)
from densityplot import *
from pylab import *
fig = plt.figure(figsize=(6,6))
hex_scatter(ytest,ypredselfGP, min_cnt=10, levels=2, std=True, smoothing=1,
hkwargs={'gridsize': 100, 'cmap': plt.cm.Blues},
skwargs={'color': 'k'})
plt.xlabel('zspec')
plt.ylabel('zphot')
#plt.xlim([-0.1,5.5])
#plt.ylim([-0.1,5.5])
plt.show()
from sklearn import svm
model = svm.SVR(max_iter=10)
model.fit(Xtrain, ytrain)
# Use dask instead
# ypredselfSVM = model.predict(Xtest)
from dask import compute, delayed
def process(Xin):
return model.predict(Xin)
dobjs = [delayed(process)(x.reshape(1,-1)) for x in Xtest]
import dask.threaded
ypredselfSVM = compute(*dobjs, get=dask.threaded.get)
ypredselfSVM = np.array(ypredselfSVM).reshape(1,-1)[0]
n = len(ypredselfSVM)
mask1 = (np.abs(ypredselfSVM-ytest)<0.1)
mask3 = (np.abs(ypredselfSVM-ytest)<0.3)
m1 = len(ypredselfSVM[mask1])
m3 = len(ypredselfSVM[mask3])
frac1 = 1.0*m1/n
frac3 = 1.0*m3/n
print frac1,frac3
from sklearn.metrics import explained_variance_score
explained_variance_score(ytest, ypredselfSVM)
from densityplot import *
from pylab import *
fig = plt.figure(figsize=(6,6))
hex_scatter(ytest,ypredselfSVM, min_cnt=10, levels=2, std=True, smoothing=1,
hkwargs={'gridsize': 100, 'cmap': plt.cm.Blues},
skwargs={'color': 'k'})
plt.xlabel('zspec')
plt.ylabel('zphot')
#plt.xlim([-0.1,5.5])
#plt.ylim([-0.1,5.5])
plt.show()
from sklearn.linear_model import SGDRegressor
model = SGDRegressor()
model.fit(Xtrain, ytrain)
ypredselfSGD = model.predict(XStest)
n = len(ypredselfSGD)
mask1 = (np.abs(ypredselfSGD-ytest)<0.1)
mask3 = (np.abs(ypredselfSGD-ytest)<0.3)
m1 = len(ypredselfSGD[mask1])
m3 = len(ypredselfSGD[mask3])
frac1 = 1.0*m1/n
frac3 = 1.0*m3/n
print frac1,frac3
from sklearn.metrics import explained_variance_score
explained_variance_score(ytest, ypredselfSGD)
from densityplot import *
from pylab import *
fig = plt.figure(figsize=(6,6))
hex_scatter(ytest,ypredselfSGD, min_cnt=10, levels=2, std=True, smoothing=1,
hkwargs={'gridsize': 100, 'cmap': plt.cm.Blues},
skwargs={'color': 'k'})
plt.xlabel('zspec')
plt.ylabel('zphot')
#plt.xlim([-0.1,5.5])
#plt.ylim([-0.1,5.5])
plt.show()
junk = Table.read('GTR-ADM-QSO-ir_good_test_2016_out_Stripe82all.fits')
print junk.keys()
print len(junk)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Complete graph Laplacian
Step3: The Laplacian Matrix is a matrix that is extremely important in graph theory and numerical analysis. It is defined as $L=D-A$. Where $D$ is the degree matrix and $A$ is the adjecency matrix. For the purpose of this problem you don't need to understand the details of these matrices, although their definitions are relatively simple.
Step5: The adjacency matrix for $K_n$ is an $n \times n$ matrix with zeros along the diagonal and ones everywhere else. Write a function to compute the adjacency matrix for $K_n$ using NumPy.
Step6: Use NumPy to explore the eigenvalues or spectrum of the Laplacian L of $K_n$. What patterns do you notice as $n$ changes? Create a conjecture about the general Laplace spectrum of $K_n$.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import networkx as nx
K_5=nx.complete_graph(5)
nx.draw(K_5)
def complete_deg(n):
    """Return the integer valued degree matrix D for the complete graph K_n."""
a = np.zeros((n,n))
b = a.astype(dtype=np.int)
for x in range(n):
b[x,x] = n-1
return b
D = complete_deg(5)
assert D.shape==(5,5)
assert D.dtype==np.dtype(int)
assert np.all(D.diagonal()==4*np.ones(5))
assert np.all(D-np.diag(D.diagonal())==np.zeros((5,5),dtype=int))
def complete_adj(n):
    """Return the integer valued adjacency matrix A for the complete graph K_n."""
a = np.ones((n,n))
b = a.astype(dtype=np.int)
for x in range(n):
b[x,x] = 0
return b
A = complete_adj(5)
assert A.shape==(5,5)
assert A.dtype==np.dtype(int)
assert np.all(A+np.eye(5,dtype=int)==np.ones((5,5),dtype=int))
def laplacian(n):
return complete_deg(n)-complete_adj(n)
one = laplacian(1)
two = laplacian(2)
three = laplacian(3)
four = laplacian(4)
ten = laplacian(10)
five = laplacian(5)
print(one)
print(np.linalg.eigvals(one))
print(two)
print(np.linalg.eigvals(two))
print(three)
print(np.linalg.eigvals(three))
print(four)
print(np.linalg.eigvals(four))
print(five)
print(np.linalg.eigvals(five))
print(ten)
print(np.linalg.eigvals(ten))
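The conjectured spectrum (eigenvalue 0 once, eigenvalue n with multiplicity n-1) can be checked without an eigensolver: the Laplacian of K_n sends the all-ones vector to zero, and any difference of two basis vectors to n times itself. A pure-Python sketch of that check:

```python
def laplacian_dense(n):
    """L = D - A for the complete graph K_n, as a list of lists."""
    return [[n - 1 if i == j else -1 for j in range(n)] for i in range(n)]

def matvec(M, v):
    """Multiply a list-of-lists matrix by a vector."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

n = 5
L = laplacian_dense(n)
ones = [1] * n                    # eigenvector for eigenvalue 0
diff = [1, -1] + [0] * (n - 2)    # e1 - e2, eigenvector for eigenvalue n
```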
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: References
Step2: 1.3 Morse
Step3: 1.4 Card Deck
Step4: 1.5 Counting the words in a sentence
Step5: 1.6 Print the words in reverse order
Step6: 1.7 Print with both the word order and each word's letters reversed
Step7: 2. Indexing and Slicing
Step8: Dictionaries
Step9: 2.2 Slice
Step10: Let's go back to the Hashtags exercise above and pull out only the '#' hashtags.
Step11: 3. Function
Step12: Input
Step13: Even if you don't know what the code in the figure above means, you can see that something keeps being duplicated. By using functions like this, you can avoid duplication when code repeats and maximize code reuse.
Step14: - Declaring a function without a return value
Step15: - Declaring a function with a return value
Step16: 2) How to call a function
Step17: 3.4 Arguments
Step18: 3.5 Keyword Arguments
Step19: 3.6 Doc String
Step21: How to view a docstring
Step23: 3.7 Annotations
Step24: How to view annotations
Step25: Docstrings and annotations like the ones above have become especially useful as open source has grown. They are called informational metadata, or simply metadata. Filling them in carefully is a big help when you document your code later. Sphinx is the best-known open-source tool for documenting Python.
Step26: 4.2 Write a function that finds the capital of a given US state.
Step27: 4.3 Algorithm - Bubble Sort
Step28: Bubble Sort
Step29: Sorting Algorithms
Step30: We have actually learned a lot so far, and what we have learned is already enough to solve many problems. When solving problems, the ability to reason about how to solve them is becoming more important than raw knowledge. In programming this is called an algorithm. This time we will build our programming skills by looking at one kind of algorithm: sorting.
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import Image
# Post title
title = "On top of the world! Life is so fantastic if you just let it. \
I have never been happier. #nyc #newyork #vacation #traveling"
# Write your code below.
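One possible sketch for the hashtag exercise, keeping only the words that start with '#' (the title is repeated so the snippet stands alone):

```python
title = ("On top of the world! Life is so fantastic if you just let it. "
         "I have never been happier. #nyc #newyork #vacation #traveling")

# keep only the words that start with '#'
hashtags = [word for word in title.split() if word.startswith('#')]
```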
# Morse code table
morse = {
'.-':'A','-...':'B','-.-.':'C','-..':'D','.':'E','..-.':'F',
'--.':'G','....':'H','..':'I','.---':'J','-.-':'K','.-..':'L',
'--':'M','-.':'N','---':'O','.--.':'P','--.-':'Q','.-.':'R',
'...':'S','-':'T','..-':'U','...-':'V','.--':'W','-..-':'X',
'-.--':'Y','--..':'Z'
}
# Cipher to decode
code = '.... . ... .-.. . . .--. ... . .- .-. .-.. -.--'
# Write your code below.
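One possible sketch of the decoding step: split the cipher on spaces and look each token up in the Morse table (only the entries needed for this cipher are repeated here so the snippet stands alone):

```python
morse = {
    '....': 'H', '.': 'E', '...': 'S', '.-..': 'L',
    '.--.': 'P', '.-': 'A', '.-.': 'R', '-.--': 'Y',
}  # subset of the full table above, enough for this cipher

code = '.... . ... .-.. . . .--. ... . .- .-. .-.. -.--'
plain = ''.join(morse[token] for token in code.split())  # 'HE SLEEPS EARLY' without spaces
```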
front = ['s', 'c', 'd', 'h', ] # Spade, Club, Diamond, Heart
back = [
'2',
'3',
'4',
'5',
'6',
'7',
'8',
'9',
'T', # Ten
'J', # Jack
'Q', # Queen
'K', # King
'A', # Ace
]
# Write your code below.
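One possible sketch for the card-deck exercise: a list comprehension over suits and ranks builds all 52 cards:

```python
front = ['s', 'c', 'd', 'h']              # Spade, Club, Diamond, Heart
back = ['2', '3', '4', '5', '6', '7', '8',
        '9', 'T', 'J', 'Q', 'K', 'A']

# every (suit, rank) pair, e.g. 's2' ... 'hA'
deck = [suit + rank for suit in front for rank in back]
```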
# Lowercase the given sentence and remove ',' and '.'.
# Then count how many times each word is used.
s = 'We propose to start by making it possible to teach programming in Python, \
an existing scripting language, and to focus on creating \
a new development environment and teaching materials for it.'
# Write your code below.
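One possible sketch: lowercase the sentence, strip the punctuation, split into words, and count with `collections.Counter`:

```python
from collections import Counter

s = ('We propose to start by making it possible to teach programming in Python, '
     'an existing scripting language, and to focus on creating '
     'a new development environment and teaching materials for it.')

# lowercase, drop ',' and '.', then split on whitespace
words = s.lower().replace(',', '').replace('.', '').split()
counts = Counter(words)
```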
# Print the words in reverse order.
s = "Sometimes I feel like a data scientist"
# Write your code below.
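One possible sketch: split into words, reverse the list, and join it back together:

```python
s = "Sometimes I feel like a data scientist"
reversed_words = ' '.join(s.split()[::-1])
```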
# Reverse the order of the words and
# also reverse the letters within each word.
# tsitneics atad a ekil leef I semitemoS
s = "Sometimes I feel like a data scientist"
# Write your code below.
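Reversing both the word order and each word's letters is equivalent to reversing the whole string, so one possible sketch is a single slice:

```python
s = "Sometimes I feel like a data scientist"
fully_reversed = s[::-1]  # reverses words and letters at once
```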
s = 'bicycle'
s[0]
morse
morse['....']
morse.get('....')
Image(filename='images/slicing.png')
s = 'bicycle'
s[1:7]
s[1:7:2] # Skipping
s[1::2]
s[:7:2]
l = list(range(10))
l
l[2:5] = [20, 30]
l
l[2:5] = 100
l
l[2:5] = [100]
l
del l[5:7]
l
s[::3]
s[::-1]
s[::-2]
Image(filename='images/function2.png')
Image(filename='images/function.png')
def addition0():
    print("This is an addition function.")
# A function may or may not have a return value, regardless of its number of arguments.
addition0()
def addition1(x, y):
print(x + y)
addition1(1, 2)
def addition2(x, y):
return x + y
print("The sum of %d and %d is %s." % (10, 30, addition2(10, 30)))
addition0()
addition1(10, 10)
a = addition1(100, 200)
print(a)
print(addition2(10, 20))
a = addition2(10, 20)
print(a)
def printa(x, y, *args):
    print(type(args))  # *args is a tuple
    for item in args:  # the remaining positional arguments after x and y
print(item)
printa("a", "b", "c", "d", "c", "1")
def foo(x=10, greeting="hello", **kwargs):
print(kwargs)
print(type(kwargs))
print(kwargs.get('you'))
print(kwargs['you'])
print(greeting, x, kwargs.get('you'))
foo(you="mbc")
def foo():
'''This is a foo.'''
return "foo"
foo()
foo.__doc__
help(foo)
def add(x:int, y:int) -> int:
    """Addition function."""
return x + y
help(add)
add.__annotations__
# Let's write a function that computes the average of all its arguments,
# regardless of how many the user passes in.
# Write your code below.
def avg(*args):
return sum(args) / len(args)
print(avg(1, 2))
# Write a function that finds the capital of a US state.
STATES_CAPITALS = {
'Alabama': 'Montgomery',
'Alaska': 'Juneau',
'Arizona': 'Phoenix',
'Wyoming': 'Cheyenne',
}
# Write your code below.
def find_capital(name=''):
return STATES_CAPITALS.get(name, 'a')
print(find_capital(name='Al'))
from IPython.display import YouTubeVideo
YouTubeVideo('Cq7SMsQBEUw')
YouTubeVideo('ZZuD6iUe3Pc')
target_list = [54, 26, 93, 17, 77, 31, 44, 55, 20]
len(target_list)
for item in range(len(target_list)):
print(item)
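One possible sketch of the bubble sort the loop above is building toward: repeatedly sweep the list, swapping adjacent out-of-order pairs, and stop early once a sweep makes no swaps:

```python
def bubble_sort(items):
    """Return a new list sorted ascending via bubble sort."""
    result = list(items)
    n = len(result)
    for sweep in range(n - 1):
        swapped = False
        for i in range(n - 1 - sweep):
            if result[i] > result[i + 1]:
                result[i], result[i + 1] = result[i + 1], result[i]
                swapped = True
        if not swapped:
            break  # already sorted, no need for more sweeps
    return result

target_list = [54, 26, 93, 17, 77, 31, 44, 55, 20]
sorted_list = bubble_sort(target_list)
```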
<END_TASK>
|