<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code:
import sys
# Kadane's algorithm restricted to the indices i, i+k, i+2k, ... (elements k apart).
def maxSubArraySum(a, n, k, i):
    max_so_far = -sys.maxsize
    max_ending_here = 0
    while i < n:
        max_ending_here = max_ending_here + a[i]
        if max_so_far < max_ending_here:
            max_so_far = max_ending_here
        if max_ending_here < 0:
            max_ending_here = 0
        i += k
    return max_so_far
# Try every possible starting offset and keep the best sum.
def find(arr, n, k):
    maxSum = 0
    for i in range(0, min(n, k) + 1):
        maxSum = max(maxSum, maxSubArraySum(arr, n, k, i))
    return maxSum
if __name__ == '__main__':
    arr = [2, -3, -1, -1, 2]
    n = len(arr)
    k = 2
    print(find(arr, n, k))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1 Graph
Step2: Step 2
Step3: Step 3
Step4: Step 4
Step5: Step 5
Step6: Step 6
Step7: 2 Fourier Basis
Step8: Visualize the eigenvectors $u_\ell$ corresponding to the first eight non-zero eigenvalues $\lambda_\ell$.
Step9: 3 Graph Signals
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import scipy.sparse
import scipy.spatial
import matplotlib.pyplot as plt
%matplotlib inline
d = 2 # Dimensionality.
n = 100 # Number of samples.
c = 1 # Number of communities.
# Data matrix, structured in communities.
X = np.random.uniform(0, 1, (n, d))
X += np.linspace(0, 2, c).repeat(n//c)[:, np.newaxis]
fig, ax = plt.subplots(1, 1, squeeze=True)
ax.scatter(X[:n//c, 0], X[:n//c, 1], c='b', s=40, linewidths=0, label='class 0');
ax.scatter(X[n//c:, 0], X[n//c:, 1], c='r', s=40, linewidths=0, label='class 1');
lim1 = X.min() - 0.5
lim2 = X.max() + 0.5
ax.set_xlim(lim1, lim2)
ax.set_ylim(lim1, lim2)
ax.set_aspect('equal')
ax.legend(loc='upper left');
# Pairwise distances.
dist = scipy.spatial.distance.pdist(X, metric='euclidean')
dist = scipy.spatial.distance.squareform(dist)
plt.figure(figsize=(15, 5))
plt.hist(dist.flatten(), bins=40);
k = 10 # Minimum number of edges per node.
idx = np.argsort(dist)[:, 1:k+1]
dist.sort()
dist = dist[:, 1:k+1]
assert dist.shape == (n, k)
# Scaling factor.
sigma2 = np.mean(dist[:, -1])**2
# Weights with Gaussian kernel.
dist = np.exp(- dist**2 / sigma2)
plt.figure(figsize=(15, 5))
plt.hist(dist.flatten(), bins=40);
# Weight matrix.
I = np.arange(0, n).repeat(k)
J = idx.reshape(n*k)
V = dist.reshape(n*k)
W = scipy.sparse.coo_matrix((V, (I, J)), shape=(n, n))
# No self-connections.
W.setdiag(0)
# Non-directed graph.
bigger = W.T > W
W = W - W.multiply(bigger) + W.T.multiply(bigger)
assert type(W) == scipy.sparse.csr_matrix
print('n = |V| = {}, k|V| < |E| = {}'.format(n, W.nnz))
plt.spy(W, markersize=2, color='black');
import scipy.io
import os.path
scipy.io.mmwrite(os.path.join('datasets', 'graph_inpainting', 'embedding.mtx'), X)
scipy.io.mmwrite(os.path.join('datasets', 'graph_inpainting', 'graph.mtx'), W)
# Degree matrix.
D = W.sum(axis=0)
D = scipy.sparse.diags(D.A.squeeze(), 0)
# Laplacian matrix.
L = D - W
fig, axes = plt.subplots(1, 2, squeeze=True, figsize=(15, 5))
axes[0].spy(L, markersize=2, color='black');
axes[1].plot(D.diagonal(), '.');
lamb, U = np.linalg.eigh(L.toarray())
#print(lamb)
plt.figure(figsize=(15, 5))
plt.plot(lamb, '.-');
def scatter(ax, x):
ax.scatter(X[:, 0], X[:, 1], c=x, s=40, linewidths=0)
ax.set_xlim(lim1, lim2)
ax.set_ylim(lim1, lim2)
ax.set_aspect('equal')
fig, axes = plt.subplots(2, 4, figsize=(15, 6))
for i, ax in enumerate(axes.flatten()):
u = U[:, i+1]
scatter(ax, u)
ax.set_title('u_{}'.format(i+1))
def f1(u, a=2):
y = np.zeros(n)
y[:a] = 1
return y
def f2(u):
return f1(u, a=3)
def f3(u):
return f1(u, a=n//4)
def f4(u):
return f1(u, a=n)
def f5(u, m=4):
return np.maximum(1 - m * u / u[-1], 0)
def f6(u):
return f5(u, 2)
def f7(u):
return f5(u, 1)
def f8(u):
return f5(u, 1/2)
def f9(u, a=1/2):
return np.exp(-u / a)
def f10(u):
return f9(u, a=1)
def f11(u):
return f9(u, a=2)
def f12(u):
return f9(u, a=4)
def plot(F):
plt.figure(figsize=(15, 5))
for f in F:
plt.plot(lamb, eval(f)(lamb), '.-', label=f)
plt.xlim(0, lamb[-1])
plt.legend()
F = ['f{}'.format(i+1) for i in range(12)]
plot(F[0:4])
plot(F[4:8])
plot(F[8:12])
fig, axes = plt.subplots(3, 4, figsize=(15, 9))
for f, ax in zip(F, axes.flatten()):
xhat = eval(f)(lamb)
x = U.dot(xhat) # U @ xhat
#x = U.dot(xhat * U.T[:,2])
scatter(ax, x)
ax.set_title(f)
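# Optional sanity check (an addition, not part of the original notebook): the graph
# Fourier transform is the projection onto the Laplacian eigenvectors, xhat = U^T x.
# Because U is orthonormal, transforming a synthesized signal back should recover
# exactly the spectral profile it was built from.
x = U.dot(f5(lamb))          # synthesize a signal from the spectral profile f5
xhat_recovered = U.T.dot(x)  # graph Fourier transform of that signal
print(np.allclose(xhat_recovered, f5(lamb)))  # expected: True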
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Chemistry Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 1.8. Coupling With Chemical Reactivity
Step12: 2. Key Properties --> Software Properties
Step13: 2.2. Code Version
Step14: 2.3. Code Languages
Step15: 3. Key Properties --> Timestep Framework
Step16: 3.2. Split Operator Advection Timestep
Step17: 3.3. Split Operator Physical Timestep
Step18: 3.4. Split Operator Chemistry Timestep
Step19: 3.5. Split Operator Alternate Order
Step20: 3.6. Integrated Timestep
Step21: 3.7. Integrated Scheme Type
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
Step23: 4.2. Convection
Step24: 4.3. Precipitation
Step25: 4.4. Emissions
Step26: 4.5. Deposition
Step27: 4.6. Gas Phase Chemistry
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Step30: 4.9. Photo Chemistry
Step31: 4.10. Aerosols
Step32: 5. Key Properties --> Tuning Applied
Step33: 5.2. Global Mean Metrics Used
Step34: 5.3. Regional Metrics Used
Step35: 5.4. Trend Metrics Used
Step36: 6. Grid
Step37: 6.2. Matches Atmosphere Grid
Step38: 7. Grid --> Resolution
Step39: 7.2. Canonical Horizontal Resolution
Step40: 7.3. Number Of Horizontal Gridpoints
Step41: 7.4. Number Of Vertical Levels
Step42: 7.5. Is Adaptive Grid
Step43: 8. Transport
Step44: 8.2. Use Atmospheric Transport
Step45: 8.3. Transport Details
Step46: 9. Emissions Concentrations
Step47: 10. Emissions Concentrations --> Surface Emissions
Step48: 10.2. Method
Step49: 10.3. Prescribed Climatology Emitted Species
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Step51: 10.5. Interactive Emitted Species
Step52: 10.6. Other Emitted Species
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
Step54: 11.2. Method
Step55: 11.3. Prescribed Climatology Emitted Species
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Step57: 11.5. Interactive Emitted Species
Step58: 11.6. Other Emitted Species
Step59: 12. Emissions Concentrations --> Concentrations
Step60: 12.2. Prescribed Upper Boundary
Step61: 13. Gas Phase Chemistry
Step62: 13.2. Species
Step63: 13.3. Number Of Bimolecular Reactions
Step64: 13.4. Number Of Termolecular Reactions
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Step67: 13.7. Number Of Advected Species
Step68: 13.8. Number Of Steady State Species
Step69: 13.9. Interactive Dry Deposition
Step70: 13.10. Wet Deposition
Step71: 13.11. Wet Oxidation
Step72: 14. Stratospheric Heterogeneous Chemistry
Step73: 14.2. Gas Phase Species
Step74: 14.3. Aerosol Species
Step75: 14.4. Number Of Steady State Species
Step76: 14.5. Sedimentation
Step77: 14.6. Coagulation
Step78: 15. Tropospheric Heterogeneous Chemistry
Step79: 15.2. Gas Phase Species
Step80: 15.3. Aerosol Species
Step81: 15.4. Number Of Steady State Species
Step82: 15.5. Interactive Dry Deposition
Step83: 15.6. Coagulation
Step84: 16. Photo Chemistry
Step85: 16.2. Number Of Reactions
Step86: 17. Photo Chemistry --> Photolysis
Step87: 17.2. Environmental Conditions
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'noaa-gfdl', 'sandbox-2', 'atmoschem')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Step 1
Step2: Step 2
Step3: Step 3
Step4: Step 4
Step5: Step 5
Step6: Step 6
Step7: Step 7
Step8: Step 8
Step9: Step 9
Step10: Step 10
|
<ASSISTANT_TASK:>
Python Code:
# For using the same code in either Python 2 or 3
from __future__ import print_function
## Note: Python 2 users, use raw_input() to get player input. Python 3 users, use input()
from IPython.display import clear_output
def display_board(board):
pass
def player_input():
pass
def place_marker(board, marker, position):
pass
def win_check(board,mark):
pass
import random
def choose_first():
pass
def space_check(board, position):
pass
def full_board_check(board):
pass
def player_choice(board):
pass
def replay():
pass
print('Welcome to Tic Tac Toe!')
#while True:
# Set the game up here
#pass
#while game_on:
#Player 1 Turn
# Player2's turn.
#pass
#if not replay():
#break
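# Example only (not the required solution): one way to fill in two of the helper
# stubs above, assuming `board` is a list of 10 strings where index 0 is ignored
# and positions 1-9 map to the keyboard number pad.
def display_board_example(board):
    clear_output()  # clears the cell output; only works inside Jupyter/IPython
    print(board[7] + '|' + board[8] + '|' + board[9])
    print('-----')
    print(board[4] + '|' + board[5] + '|' + board[6])
    print('-----')
    print(board[1] + '|' + board[2] + '|' + board[3])
def space_check_example(board, position):
    # A space is free if nobody has placed an 'X' or 'O' there yet.
    return board[position] == ' '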
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Generate a noisy measurement to fit
Step2: Write down likelihood, prior, and posterior probabilities
Step3: Sample the posterior using emcee
Step4: Check walker positions for burn-in
Step5: Model parameter results
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
import seaborn; seaborn.set()
from clusterlensing import ClusterEnsemble
import emcee
import corner
%matplotlib inline
import matplotlib
matplotlib.rcParams["axes.labelsize"] = 20
matplotlib.rcParams["legend.fontsize"] = 12
logm_true = 14
off_true = 0.3
nbins = 10
redshifts = [0.2]
mass = [10**logm_true]
offsets = [off_true]
rbins = np.logspace(np.log10(0.1), np.log10(5), num = nbins)
cdata = ClusterEnsemble(redshifts)
cdata.m200 = mass
cdata.calc_nfw(rbins=rbins, offsets=offsets)
dsigma_true = cdata.deltasigma_nfw.mean(axis=0).value
# add scatter with a stddev of 20% of data
noise = np.random.normal(scale=dsigma_true*0.2, size=nbins)
y = dsigma_true + noise
yerr = np.abs(dsigma_true/3) # 33% error bars
plt.plot(rbins, dsigma_true, 'bo-', label='True $\Delta\Sigma(R)$')
plt.plot(rbins, y, 'g^-', label='Noisy $\Delta\Sigma(R)$')
plt.errorbar(rbins, y, yerr=yerr, color='g', linestyle='None')
plt.xscale('log')
plt.legend(loc='best')
plt.show()
# probability of the data given the model
def lnlike(theta, z, rbins, data, stddev):
logm, offsets = theta
# calculate the model
c = ClusterEnsemble(z)
c.m200 = [10 ** logm]
c.calc_nfw(rbins=rbins, offsets=[offsets])
model = c.deltasigma_nfw.mean(axis=0).value
diff = data - model
lnlikelihood = -0.5 * np.sum(diff**2 / stddev**2)
return lnlikelihood
# uninformative prior
def lnprior(theta):
logm, offset = theta
if 10 < logm < 16 and 0.0 <= offset < 5.0:
return 0.0
else:
return -np.inf
# posterior probability
def lnprob(theta, z, rbins, data, stddev):
lp = lnprior(theta)
if not np.isfinite(lp):
return -np.inf
else:
return lp + lnlike(theta, z, rbins, data, stddev)
ndim = 2
nwalkers = 20
p0 = np.random.rand(ndim * nwalkers).reshape((nwalkers, ndim))
p0[:,0] = p0[:,0] + 13.5 # start somewhere close to true logm ~ 14
sampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob,
args=(redshifts, rbins, y, yerr), threads=8)
# the MCMC chains take some time: about 49 minutes for the 500 samples below
i_can_wait = False # or can you? Set to True to run the MCMC chains
if i_can_wait:
pos, prob, state = sampler.run_mcmc(p0, 500)
if i_can_wait:
fig, axes = plt.subplots(2, 1, sharex=True, figsize=(8, 6))
axes[0].plot(sampler.chain[:, :, 0].T, color="k", alpha=0.4)
axes[0].axhline(logm_true, color="g", lw=2)
axes[0].set_ylabel("log-mass")
axes[1].plot(sampler.chain[:, :, 1].T, color="k", alpha=0.4)
axes[1].axhline(off_true, color="g", lw=2)
axes[1].set_ylabel("offset")
axes[1].set_xlabel("step number")
if i_can_wait:
burn_in_step = 50 # based on a rough look at the walker positions above
samples = sampler.chain[:, burn_in_step:, :].reshape((-1, ndim))
else:
# read in a previously generated chain
samples = np.loadtxt('samples.txt')
fig = corner.corner(samples,
labels=["$\mathrm{log}M_{200}$", "$\sigma_\mathrm{off}$"],
truths=[logm_true, off_true])
fig.savefig('cornerplot.png')
# save the chain for later
np.savetxt('samples.txt', samples)
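# A common way to summarize the posterior (an addition, not from the original
# notebook): report the median and a ~68% credible interval for each parameter.
logm_mcmc, off_mcmc = map(lambda v: (v[1], v[2] - v[1], v[1] - v[0]),
                          zip(*np.percentile(samples, [16, 50, 84], axis=0)))
print('log-mass = {0:.2f} +{1:.2f} / -{2:.2f}'.format(*logm_mcmc))
print('offset   = {0:.2f} +{1:.2f} / -{2:.2f}'.format(*off_mcmc))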
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Query Data
Step2: Let's query every talk description
Step3: Okay, make a dataframe and add some helpful columns
Step4: visualize some stuff
|
<ASSISTANT_TASK:>
Python Code:
import requests as rq
import pandas as pd
import matplotlib.pyplot as plt
import bs4
import os
from tqdm import tqdm_notebook
from datetime import time
%matplotlib inline
base_url = "https://pydata.org"
r = rq.get(base_url + "/berlin2018/schedule/")
bs = bs4.BeautifulSoup(r.text, "html.parser")
data = {}
for ahref in tqdm_notebook(bs.find_all("a")):
if 'schedule/presentation' in ahref.get("href"):
url = ahref.get("href")
else:
continue
data[url] = {}
resp = bs4.BeautifulSoup(rq.get(base_url + url).text, "html.parser")
title = resp.find("h2").text
resp = resp.find_all(attrs={'class':"container"})[1]
when, who = resp.find_all("h4")
date_info = when.string.split("\n")[1:]
day_info = date_info[0].strip()
time_inf = date_info[1].strip()
room_inf = date_info[3].strip()[3:]
speaker = who.find("a").text
level = resp.find("dd").text
abstract = resp.find(attrs={'class':'abstract'}).text
description = resp.find(attrs={'class':'description'}).text
data[url] = {
'day_info': day_info,
'title': title,
'time_inf': time_inf,
'room_inf': room_inf,
'speaker': speaker,
'level': level,
'abstract': abstract,
'description': description
}
df = pd.DataFrame.from_dict(data, orient='index')
df.reset_index(drop=True, inplace=True)
# Tutorials on Friday
df.loc[df.day_info=='Friday', 'tutorial'] = True
df['tutorial'].fillna(False, inplace=True)
# time handling
df['time_from'], df['time_to'] = zip(*df.time_inf.str.split(u'\u2013'))
df.time_from = pd.to_datetime(df.time_from).dt.time
df.time_to = pd.to_datetime(df.time_to).dt.time
del df['time_inf']
df.to_json('./data.json')
df.head(3)
# Example: Let's query all non-novice talks on sunday, starting at 4 pm
tmp = df.query("(level!='Novice') & (day_info=='Sunday')")
tmp[tmp.time_from >= time(16)]
plt.style.use('seaborn-darkgrid')
plt.rcParams['savefig.dpi'] = 200
plt.rcParams['figure.dpi'] = 120
plt.rcParams['figure.autolayout'] = False
plt.rcParams['figure.figsize'] = 10, 5
plt.rcParams['axes.labelsize'] = 17
plt.rcParams['axes.titlesize'] = 20
plt.rcParams['font.size'] = 16
plt.rcParams['lines.linewidth'] = 2.0
plt.rcParams['lines.markersize'] = 8
plt.rcParams['legend.fontsize'] = 11
plt.rcParams['font.family'] = "serif"
plt.rcParams['font.serif'] = "cm"
plt.rcParams['text.latex.preamble'] = "\\usepackage{subdepth}, \\usepackage{type1cm}"
plt.rcParams['text.usetex'] = True
ax = df.level.value_counts().plot.bar(rot=0)
ax.set_ylabel("number of talks")
ax.set_title("levels of the talks where:")
plt.show()
ax = df.rename(columns={'day_info': 'dayinfo'}).groupby("dayinfo")['level'].value_counts(normalize=True).round(2).unstack(level=0).plot.bar(rot=0)
ax.set_xlabel('')
ax.set_title('So the last day is more kind of "fade-out"?')
plt.show()
ax = df.groupby("tutorial")['level'].value_counts(normalize=True).round(2).unstack(level=0).T.plot.bar(rot=0)
ax.set_title('the percentage of experienced slots is higher for tutorials!\n\\small{So come on fridays for experienced level ;-)}')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import statements
Step2: Utility functions
Step3: Query OpenStreetMap using OverpassAPI via overpy python package
Step4: define bounding box from a 1km-buffered envelope around the study area boundary
Step7: define query
Step9: execute query
Step10: Write OpenStreetMap data to a shapefile
|
<ASSISTANT_TASK:>
Python Code:
bounding_box_file = ""
result_shapefile_filepath = ""
p1 = pyproj.Proj("+init=epsg:31254")
p2 = pyproj.Proj("+init=epsg:4326")
p3 = pyproj.Proj("+init=epsg:3857")
p4 = pyproj.Proj("+init=epsg:25832")
import overpy
import fiona
import numpy
import geopandas
from shapely.ops import polygonize
from shapely.geometry import LineString
from database.models import Site
import pyproj
from matplotlib import pyplot
%matplotlib inline
def print_results(results):
    # Iterate over the ways of the passed query result (not a global variable).
    for way in results.ways:
        print("Name: %s" % way.tags.get("name", "n/a"))
        print("  Highway: %s" % way.tags.get("highway", "n/a"))
        print("  Nodes:")
        for node in way.nodes:
            print("    Lat: %f, Lon: %f" % (node.lat, node.lon))
api = overpy.Overpass()
with fiona.open(bounding_box_file, mode='r') as bounding_box:
bounds = bounding_box.bounds
bounding_box.close()
print(bounds)
query = """way({bottom},{left},{top},{right}) ["highway"];
(._;>;);
out body;""".format(bottom=bounds[1],
                    left=bounds[0],
                    top=bounds[3],
                    right=bounds[2])
query = """
[out:json];
relation
    ["boundary"="administrative"]
    ["admin_level"="2"]
    ["name:en"="Austria"];
(._;>;);
out;
""".replace("\n", "").replace(" ", "")
query
result = api.query(query)
ways = numpy.empty(len(result.ways), dtype=numpy.object)
for i, way in enumerate(result.ways):
ways[i] = LineString([ (node.lon, node.lat) for node in way.nodes ])
boundaries = list(polygonize(ways))
boundaries = geopandas.GeoDataFrame(geometry=boundaries, crs="+init=epsg:4326")
boundaries
boundaries.plot(facecolor='white', edgecolor='red')
bbox = boundaries.bounds.iloc[0]
bbox
query = """
relation({s}, {w}, {n}, {e})
    ["boundary"="administrative"]
    ["admin_level"="2"];
(._;>;);
out;
""".format(s=bbox['miny'], w=bbox['minx'], n=bbox['maxy'], e=bbox['maxx']).replace("\n", "").replace(" ", "")
query
result = api.query(query)
ways = numpy.empty(len(result.ways), dtype=numpy.object)
for i, way in enumerate(result.ways):
ways[i] = LineString([ (node.lon, node.lat) for node in way.nodes ]).simplify(0.01, preserve_topology=False)
boundaries = list(polygonize(ways))
boundaries = geopandas.GeoDataFrame(geometry=boundaries, crs="+init=epsg:4326")
boundaries = boundaries.to_crs(crs="+init=epsg:25832")
center = Site.objects.get(name='Hofgarten')
x, y = center.geometry.coords
x, y = pyproj.transform(p1, p4, x, y)
x
y
geopandas.datasets.available
world = geopandas.read_file(geopandas.datasets.get_path('naturalearth_lowres'))
area = world[world.name.isin(['Austria', 'Germany', 'Switzerland', 'Italy'])]
plt = area.plot(facecolor='white', edgecolor='black')
plt.set_frame_on(False)
from fiona.crs import from_epsg
schema = {'geometry': 'LineString', 'properties': {'Name':'str:80', 'Type':'str:80'}}
with fiona.open(result_shapefile_filepath, 'w', crs=from_epsg(4326), driver='ESRI Shapefile', schema=schema) as output:
for way in result.ways:
# the shapefile geometry use (lon,lat)
line = {'type': 'LineString', 'coordinates':[(node.lon, node.lat) for node in way.nodes]}
prop = {'Name': way.tags.get("name", "n/a"), 'Type': way.tags.get("highway", "n/a")}
output.write({'geometry': line, 'properties':prop})
output.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data Loading
Step2: K-Means Evaluation
Step3: Loading the training data
Step4: Training
Step5: Test Set
Step6: Now to cluster the test data
Step8: Observations
Step9: Evaluation of Accuracy on Each Attack
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib notebook
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import minmax_scale
def load_data(file_path, cols=None):
COL_NAMES = ["duration", "protocol_type", "service", "flag", "src_bytes",
"dst_bytes", "land", "wrong_fragment", "urgent", "hot", "num_failed_logins",
"logged_in", "num_compromised", "root_shell", "su_attempted", "num_root",
"num_file_creations", "num_shells", "num_access_files", "num_outbound_cmds",
"is_host_login", "is_guest_login", "count", "srv_count", "serror_rate",
"srv_serror_rate", "rerror_rate", "srv_rerror_rate", "same_srv_rate",
"diff_srv_rate", "srv_diff_host_rate", "dst_host_count", "dst_host_srv_count",
"dst_host_same_srv_rate", "dst_host_diff_srv_rate", "dst_host_same_src_port_rate",
"dst_host_srv_diff_host_rate", "dst_host_serror_rate", "dst_host_srv_serror_rate",
"dst_host_rerror_rate", "dst_host_srv_rerror_rate", "labels"]
data = pd.read_csv(file_path, names=COL_NAMES, index_col=False)
# Shuffle data
data = data.sample(frac=1).reset_index(drop=True)
NOM_IND = [1, 2, 3]
BIN_IND = [6, 11, 13, 14, 20, 21]
# Need to find the numerical columns for normalization
NUM_IND = list(set(range(40)).difference(NOM_IND).difference(BIN_IND))
# Scale all numerical data to [0-1]
data.iloc[:, NUM_IND] = minmax_scale(data.iloc[:, NUM_IND])
labels = data['labels']
# Binary labeling
del data['labels']
data = pd.get_dummies(data)
if cols is None:
cols = data.columns
else:
map_data = pd.DataFrame(columns=cols)
map_data = map_data.append(data)
data = map_data.fillna(0)
data = data[cols]
return [data, labels, cols]
def get_results(data, labels, clf):
preds = clf.predict(data)
ans = pd.DataFrame({'label':labels.values, 'kmean':preds})
return ans
def evaluate_kmeans(data, labels, clf=None):
if clf is None:
clf = KMeans(n_clusters=4,init='random').fit(data)
ans = get_results(data, labels, clf)
ans = ans.groupby(['kmean', 'label']).size()
print(ans)
# Get the larger number from each cluster
correct = sum([anom if anom > norm else norm for anom, norm in zip(ans[::2],ans[1::2])])
print("Total accuracy: {0:.1%}".format(correct/sum(ans)))
return clf
train_data, train_labels, cols = load_data('data/KDDTrain+.csv')
train_data.head()
bin_train_labels = train_labels.apply(lambda x: x if x =='normal' else 'anomaly')
clf = evaluate_kmeans(train_data, bin_train_labels)
test_data, test_labels, cols = load_data('data/KDDTest+.csv', cols)
test_data.head()
bin_test_labels = test_labels.apply(lambda x: x if x =='normal' else 'anomaly')
evaluate_kmeans(test_data, bin_test_labels, clf)
import matplotlib.pyplot as plt
import numpy as np
ind = np.arange(4)
width = .35
ans = get_results(test_data, bin_test_labels, clf)
normal = []
anom = []
bin_ans = ans.groupby(['kmean', 'label']).size()
roof = round(bin_ans.max(), -2) + 3000
for i in range(0,4):
normal.append(bin_ans[i]['normal'])
anom.append(bin_ans[i]['anomaly'])
fig, ax = plt.subplots()
rects1 = ax.bar(ind, normal, width, color='grey')
rects2 = ax.bar(ind + width, anom, width, color='crimson')
ax.set_ylabel('Number of Rows')
ax.set_title('Distribution of Clusters')
ax.set_yticks(np.arange(roof, step=roof/6))
ax.set_xlabel('Clusters')
ax.set_xticks(ind + width / 2)
ax.set_xticklabels(('1', '2', '3', '4'))
ax.legend((rects1[0], rects2[0]), ('Normal', 'Anomaly'))
def autolabel(rects, ax):
    """Attach a text label above each bar displaying its height."""
for rect in rects:
height = rect.get_height()
ax.text(rect.get_x() + rect.get_width()/2., 1.05*height,
'%d' % int(height),
ha='center', va='bottom')
autolabel(rects1, ax)
autolabel(rects2, ax)
ATTACKS = {
'normal': 'normal',
'back': 'DoS',
'land': 'DoS',
'neptune': 'DoS',
'pod': 'DoS',
'smurf': 'DoS',
'teardrop': 'DoS',
'mailbomb': 'DoS',
'apache2': 'DoS',
'processtable': 'DoS',
'udpstorm': 'DoS',
'ipsweep': 'Probe',
'nmap': 'Probe',
'portsweep': 'Probe',
'satan': 'Probe',
'mscan': 'Probe',
'saint': 'Probe',
'ftp_write': 'R2L',
'guess_passwd': 'R2L',
'imap': 'R2L',
'multihop': 'R2L',
'phf': 'R2L',
'spy': 'R2L',
'warezclient': 'R2L',
'warezmaster': 'R2L',
'sendmail': 'R2L',
'named': 'R2L',
'snmpgetattack': 'R2L',
'snmpguess': 'R2L',
'xlock': 'R2L',
'xsnoop': 'R2L',
'worm': 'R2L',
'buffer_overflow': 'U2R',
'loadmodule': 'U2R',
'perl': 'U2R',
'rootkit': 'U2R',
'httptunnel': 'U2R',
'ps': 'U2R',
'sqlattack': 'U2R',
'xterm': 'U2R'
}
clusters = ['normal' if norm > anom else 'anom' for anom, norm in zip(bin_ans[::2], bin_ans[1::2])]
categ_ans = ans
test_categ_labels = test_labels.apply(lambda x: ATTACKS[x])
categ_ans['label'] = test_categ_labels
categ_ans['kmean'] = categ_ans['kmean'].apply(lambda x: clusters[x])
categ_ans = categ_ans[categ_ans['label'] != 'normal']
print(categ_ans.groupby(['kmean', 'label']).size())
for label in categ_ans.label.unique():
print('\n' + label)
total = sum(categ_ans['label']==label)
print('Total rows: {}'.format(total))
correct = sum(categ_ans[categ_ans['label']==label]['kmean'] == 'anom')
print('Percent correctly classified: {:.1%}\n'.format(correct/total))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Label encoding
Step2: After encoding the class labels (diagnosis) in an array y, the malignant tumors are now represented as class 1 (i.e. presence of cancer cells) and the benign tumors as class 0 (i.e. no cancer cells detected), as illustrated by calling the transform method of LabelEncoder on two dummy variables.
Step3: Feature Standardization
Step4: Feature decomposition using Principal Component Analysis (PCA)
Step5: What we get after applying the linear PCA transformation is a lower-dimensional subspace (from the original 30 features down to 2 in this case), where the samples are "most spread" along the new feature axes.
Step6: Deciding How Many Principal Components to Retain
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
#Load libraries for data processing
import pandas as pd #data processing, CSV file I/O (e.g. pd.read_csv)
import numpy as np
from scipy.stats import norm
# visualization
import seaborn as sns
plt.style.use('fivethirtyeight')
sns.set_style("white")
plt.rcParams['figure.figsize'] = (8,4)
#plt.rcParams['axes.titlesize'] = 'large'
data = pd.read_csv('data/clean-data.csv', index_col=False)
data.drop('Unnamed: 0',axis=1, inplace=True)
#data.head()
#Assign predictors to a variable of ndarray (matrix) type
array = data.values
X = array[:,1:31]
y = array[:,0]
#transform the class labels from their original string representation (M and B) into integers
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
y = le.fit_transform(y)
#Call the transform method of LabelEncorder on two dummy variables
#le.transform (['M', 'B'])
from sklearn.model_selection import train_test_split
##Split data set in train 70% and test 30%
X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.25, random_state=7)
X_train.shape, y_train.shape, X_test.shape, y_test.shape
from sklearn.preprocessing import StandardScaler
# Normalize the data (center around 0 and scale to remove the variance).
scaler =StandardScaler()
Xs = scaler.fit_transform(X)
from sklearn.decomposition import PCA
# feature extraction
pca = PCA(n_components=10)
fit = pca.fit(Xs)
# summarize components
#print("Explained Variance: %s") % fit.explained_variance_ratio_
#print(fit.components_)
X_pca = pca.transform(Xs)
PCA_df = pd.DataFrame()
PCA_df['PCA_1'] = X_pca[:,0]
PCA_df['PCA_2'] = X_pca[:,1]
plt.plot(PCA_df['PCA_1'][data.diagnosis == 'M'],PCA_df['PCA_2'][data.diagnosis == 'M'],'o', alpha = 0.7, color = 'r')
plt.plot(PCA_df['PCA_1'][data.diagnosis == 'B'],PCA_df['PCA_2'][data.diagnosis == 'B'],'o', alpha = 0.7, color = 'b')
plt.xlabel('PCA_1')
plt.ylabel('PCA_2')
plt.legend(['Malignant','Benign'])
plt.show()
#The amount of variance that each PC explains
var= pca.explained_variance_ratio_
#Cumulative Variance explains
#var1=np.cumsum(np.round(pca.explained_variance_ratio_, decimals=4)*100)
#print(var1)
#The amount of variance that each PC explains
var= pca.explained_variance_ratio_
#Cumulative Variance explains
#var1=np.cumsum(np.round(pca.explained_variance_ratio_, decimals=4)*100)
#print(var1)
plt.plot(var)
plt.title('Scree Plot')
plt.xlabel('Principal Component')
plt.ylabel('Eigenvalue')
leg = plt.legend(['Eigenvalues from PCA'], loc='best', borderpad=0.3,shadow=False,markerscale=0.4)
leg.get_frame().set_alpha(0.4)
leg.draggable(state=True)
plt.show()
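# One simple rule of thumb (an addition, not part of the original notebook): keep
# the smallest number of components whose cumulative explained variance exceeds a
# chosen threshold, e.g. 95% (assuming the threshold is reached within the 10
# components fitted above).
cum_var = np.cumsum(pca.explained_variance_ratio_)
n_components_95 = np.argmax(cum_var >= 0.95) + 1
print('Components needed for 95% of the variance: {}'.format(n_components_95))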
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Prepare data for localization
Step2: Examine our coordinate alignment for source localization and compute a
Step3: Perform dipole fitting
Step4: Perform minimum-norm localization
|
<ASSISTANT_TASK:>
Python Code:
import os.path as op
import numpy as np
import mne
data_path = mne.datasets.opm.data_path()
subject = 'OPM_sample'
subjects_dir = op.join(data_path, 'subjects')
raw_fname = op.join(data_path, 'MEG', 'OPM', 'OPM_SEF_raw.fif')
bem_fname = op.join(subjects_dir, subject, 'bem',
subject + '-5120-5120-5120-bem-sol.fif')
fwd_fname = op.join(data_path, 'MEG', 'OPM', 'OPM_sample-fwd.fif')
coil_def_fname = op.join(data_path, 'MEG', 'OPM', 'coil_def.dat')
raw = mne.io.read_raw_fif(raw_fname, preload=True)
raw.filter(None, 90, h_trans_bandwidth=10.)
raw.notch_filter(50., notch_widths=1)
# Set epoch rejection threshold a bit larger than for SQUIDs
reject = dict(mag=2e-10)
tmin, tmax = -0.5, 1
# Find Median nerve stimulator trigger
event_id = dict(Median=257)
events = mne.find_events(raw, stim_channel='STI101', mask=257, mask_type='and')
picks = mne.pick_types(raw.info, meg=True, eeg=False)
# we use verbose='error' to suppress warning about decimation causing aliasing
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, verbose='error',
reject=reject, picks=picks, proj=False, decim=4)
evoked = epochs.average()
evoked.plot()
cov = mne.compute_covariance(epochs, tmax=0.)
bem = mne.read_bem_solution(bem_fname)
trans = None
# To compute the forward solution, we must
# provide our temporary/custom coil definitions, which can be done as::
#
# with mne.use_coil_def(coil_def_fname):
# fwd = mne.make_forward_solution(
# raw.info, trans, src, bem, eeg=False, mindist=5.0,
# n_jobs=1, verbose=True)
fwd = mne.read_forward_solution(fwd_fname)
with mne.use_coil_def(coil_def_fname):
fig = mne.viz.plot_alignment(
raw.info, trans, subject, subjects_dir, ('head', 'pial'), bem=bem)
mne.viz.set_3d_view(figure=fig, azimuth=45, elevation=60, distance=0.4,
focalpoint=(0.02, 0, 0.04))
# Fit dipoles on a subset of time points
with mne.use_coil_def(coil_def_fname):
dip_opm, _ = mne.fit_dipole(evoked.copy().crop(0.015, 0.080),
cov, bem, trans, verbose=True)
idx = np.argmax(dip_opm.gof)
print('Best dipole at t=%0.1f ms with %0.1f%% GOF'
% (1000 * dip_opm.times[idx], dip_opm.gof[idx]))
# Plot N20m dipole as an example
dip_opm.plot_locations(trans, subject, subjects_dir,
mode='orthoview', idx=idx)
inverse_operator = mne.minimum_norm.make_inverse_operator(
evoked.info, fwd, cov)
method = "MNE"
snr = 3.
lambda2 = 1. / snr ** 2
stc = mne.minimum_norm.apply_inverse(
evoked, inverse_operator, lambda2, method=method,
pick_ori=None, verbose=True)
# Plot source estimate at time of best dipole fit
brain = stc.plot(hemi='rh', views='lat', subjects_dir=subjects_dir,
initial_time=dip_opm.times[idx],
clim=dict(kind='percent', lims=[99, 99.9, 99.99]))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Helper functions
Step2: <a id="ref0"></a>
Step3: <a id="ref1"></a>
Step4: Create a logistic regression object or model
Step5: Replace the randomly initialized variable values. These initial values did not converge for the RMS loss, but will converge for the cross-entropy loss.
Step6: Create a <code> plot_error_surfaces</code> object to visualize the data space and the parameter space during training
Step7: Define the cost or criterion function
Step8: Create a dataloader object
Step9: <a id="ref2"></a>
Step10: Get the actual class of each sample and calculate the accuracy on the test data
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits import mplot3d
import torch
from torch.utils.data import Dataset, DataLoader
import torch.nn as nn
class plot_error_surfaces(object):
def __init__(self,w_range, b_range,X,Y,n_samples=50,go=True):
W = np.linspace(-w_range, w_range, n_samples)
B = np.linspace(-b_range, b_range, n_samples)
w, b = np.meshgrid(W, B)
Z=np.zeros((30,30))
count1=0
self.y=Y.numpy()
self.x=X.numpy()
for w1,b1 in zip(w,b):
count2=0
for w2,b2 in zip(w1,b1):
yhat= 1 / (1 + np.exp(-1*(w2*self.x+b2)))
Z[count1,count2]=-1*np.mean(self.y*np.log(yhat+1e-16) +(1-self.y)*np.log(1-yhat+1e-16))
count2 +=1
count1 +=1
self.Z=Z
self.w=w
self.b=b
self.W=[]
self.B=[]
self.LOSS=[]
self.n=0
if go==True:
plt.figure()
plt.figure(figsize=(7.5,5))
plt.axes(projection='3d').plot_surface(self.w, self.b, self.Z, rstride=1, cstride=1,cmap='viridis', edgecolor='none')
plt.title('Loss Surface')
plt.xlabel('w')
plt.ylabel('b')
plt.show()
plt.figure()
plt.title('Loss Surface Contour')
plt.xlabel('w')
plt.ylabel('b')
plt.contour(self.w, self.b, self.Z)
plt.show()
def get_stuff(self,model,loss):
self.n=self.n+1
self.W.append(list(model.parameters())[0].item())
self.B.append(list(model.parameters())[1].item())
self.LOSS.append(loss)
def final_plot(self):
ax = plt.axes(projection='3d')
ax.plot_wireframe(self.w, self.b, self.Z)
ax.scatter(self.W,self.B, self.LOSS, c='r', marker='x',s=200,alpha=1)
plt.figure()
plt.contour(self.w,self.b, self.Z)
plt.scatter(self.W,self.B,c='r', marker='x')
plt.xlabel('w')
plt.ylabel('b')
plt.show()
def plot_ps(self):
plt.subplot(121)
plt.ylim
plt.plot(self.x,self.y,'ro',label="training points")
plt.plot(self.x,self.W[-1]*self.x+self.B[-1],label="estimated line")
plt.plot(self.x,1 / (1 + np.exp(-1*(self.W[-1]*self.x+self.B[-1]))),label='sigmoid')
plt.xlabel('x')
plt.ylabel('y')
plt.ylim((-0.1, 2))
plt.title('Data Space Iteration: '+str(self.n))
plt.legend()
plt.show()
plt.subplot(122)
plt.contour(self.w,self.b, self.Z)
plt.scatter(self.W,self.B,c='r', marker='x')
plt.title('Loss Surface Contour Iteration'+str(self.n) )
plt.xlabel('w')
plt.ylabel('b')
plt.legend()
torch.manual_seed(0)
def PlotStuff(X,Y,model,epoch,leg=True):
plt.plot(X.numpy(),model(X).detach().numpy(),label='epoch '+str(epoch))
plt.plot(X.numpy(),Y.numpy(),'r')
if leg==True:
plt.legend()
else:
pass
from torch.utils.data import Dataset, DataLoader
class Data(Dataset):
def __init__(self):
self.x=torch.arange(-1,1,0.1).view(-1,1)
self.y=-torch.zeros(self.x.shape[0],1)
self.y[self.x[:,0]>0.2]=1
self.len=self.x.shape[0]
def __getitem__(self,index):
return self.x[index],self.y[index]
def __len__(self):
return self.len
data_set=Data()
trainloader=DataLoader(dataset=data_set,batch_size=3)
class logistic_regression(nn.Module):
def __init__(self,n_inputs):
super(logistic_regression,self).__init__()
self.linear=nn.Linear(n_inputs,1)
def forward(self,x):
yhat=torch.sigmoid(self.linear(x))
return yhat
model=logistic_regression(1)
model.state_dict() ['linear.weight'].data[0]=torch.tensor([[-5]])
model.state_dict() ['linear.bias'].data[0]=torch.tensor([[-10]])
get_surface=plot_error_surfaces(15,13,data_set[:][0],data_set[:][1],30)
#build in criterion
#criterion=nn.BCELoss()
def criterion(yhat,y):
out=-1*torch.mean(y*torch.log(yhat) +(1-y)*torch.log(1-yhat))
return out
learning_rate=2
optimizer=torch.optim.SGD(model.parameters(), lr=learning_rate)
for epoch in range(100):
for x,y in trainloader:
#make a prediction
yhat= model(x)
#calculate the loss
loss = criterion(yhat, y)
#clear gradient
optimizer.zero_grad()
#Backward pass: compute gradient of the loss with respect to all the learnable parameters
loss.backward()
#the step function on an Optimizer makes an update to its parameters
optimizer.step()
#for plotting
get_surface.get_stuff(model,loss.tolist())
#plot every 20 iterataions
if epoch%20==0:
get_surface.plot_ps()
yhat=model(data_set.x)
label = yhat > 0.5
print(torch.mean((label == data_set.y.type(torch.ByteTensor)).type(torch.float)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: a) Implement a crawler
Step2: Importing the analyzed pages
Step3: Constants
Step4: Functions
Step5: Helper functions
Step6: The function termination_condition() returns a boolean value indicating whether the termination condition has been met. If there are not yet two values to compare, i.e. rank+1 is None, it returns False; otherwise the return value is computed with the formula $\sum_{p} \lvert \mathrm{rank}_{t+1}(p) - \mathrm{rank}_{t}(p) \rvert \leq \delta$.
Step7: If rank+1 is not None, the function set_ranks() replaces the old PageRank value rank with the current PageRank value rank+1 for every page.
Step8: Computing the PageRanks for all documents
Step9: Saving the PageRanks
Step10: Checking the sum of all PageRanks
Step11: d) Build a tf-Index for the words contained in the documents
Step12: Functions
Step13: Initializing the stop words
Step14: Computing all occurring words
Step15: Computing the term-frequency index
Step16: Saving the term-frequency index
Step17: e) Implement a function search to search for documents containing given words
Step18: Functions
Step19: Weighted TF-IDF value
Step20: Search
Step21: Computing the document frequency for each word
Step22: Searching with TF-IDF and saving the results
Step23: In the following code block, each search term is processed and the corresponding score is computed for every document. The score consists of the TF-IDF value. The result is then saved to the file tfidf_search.txt.
Step24: f) Extend your search function and include PageRank to score the documents
Step25: In the following code block, each search term is processed and the corresponding score is computed for every page. The score is composed of the TF-IDF value and the PageRank. The result is then saved to the file pageranke_search.txt.
|
<ASSISTANT_TASK:>
Python Code:
import json
import os
import nltk
import string
import re
import pandas as pd
from IPython.display import display
import numpy as np
import math
directory = 'sitespider/sites'
files = [x[2] for x in os.walk(directory)][0]
pages = []
for file in files:
with open("%s/%s" % (directory, file)) as json_data:
pages += [json.load(json_data)]
n = len(pages)
t = 0.05
d = 1 - t
δ = 0.04
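# PageRank update as implemented below: t is the teleport probability and d = 1 - t the
# damping factor, so rank(i) = d * sum_j rank(j) / |back_links(j)| + t / n, where the sum
# runs over every page j whose 'back_links' list contains page i; a page j without any
# entries in 'back_links' contributes rank(j) / n instead.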
def calculate_page_rank(page_i):
sum_result = 0
for page_j in pages:
if page_i['id'] in page_j['back_links']:
sum_result += page_j['rank'] / len(page_j['back_links'])
if len(page_j['back_links']) == 0:
sum_result += page_j['rank'] / n
return d * sum_result + t / n
def initialize():
for page in pages:
page['rank'] = 1/n
page['rank+1'] = None
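# Termination criterion: the iteration stops once the total change
# delta = sum over all pages of |rank+1 - rank| has dropped to at most δ.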
def termination_condition():
delta = 0
for page in pages:
if page['rank+1'] is None:
return False
else:
delta += abs(page['rank+1'] - page['rank'])
return delta <= δ
def set_ranks():
for page in pages:
if page['rank+1'] is not None:
page['rank'] = page['rank+1']
initialize()
while not termination_condition():
set_ranks()
for page in pages:
page['rank+1'] = calculate_page_rank(page)
with open('rank.txt', 'w') as f:
for page in pages:
f.write("%s: %s\n" % (page['id'], page['rank+1']))
rank_sum = 0
for page in pages:
rank_sum += page['rank+1']
print(rank_sum)
tf_dict = {}
term_set = set()
stopwords = []
exclude = set(string.punctuation)
porter = nltk.PorterStemmer()
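# Log-weighted term frequency: wf(t, d) = 1 + log10(tf(t, d)) if tf > 0, otherwise 0.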
def get_weighted_tf(doc_id, term):
if term in tf_dict[doc_id]:
if tf_dict[doc_id][term] == 0:
return 0
else:
return 1 + math.log10(tf_dict[doc_id][term])
else:
return 0
with open('stop_words.txt') as f:
stopwords += re.sub('[^a-zA-Z0-9,]', '', f.read()).split(',')
for page in pages:
for term in nltk.word_tokenize(page['text']):
if term not in exclude and term not in stopwords:
term_set.add(porter.stem(term).lower())
for page in pages:
tf_dict[page['id']] = {}
for term in term_set:
tf_dict[page['id']][term] = 0
for term in nltk.word_tokenize(page['text']):
if term not in exclude and term not in stopwords:
tf_dict[page['id']][porter.stem(term).lower()] += 1
tf_df = pd.DataFrame(tf_dict)
tf_df.to_csv('index.txt', header=True, index=True, sep=';')
df_dict = {}
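# Inverse document frequency: idf(t) = log10(n / df(t)); the tf-idf weight below is the
# product wf(t, d) * idf(t).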
def get_weighted_idf(term):
if term in df_dict:
if df_dict[term]['count'] == 0:
return 0
else:
return math.log10( n / df_dict[term]['count'])
else:
return 0
def get_weighted_tf_idf(doc_id, term):
return get_weighted_tf(doc_id, term) * get_weighted_idf(term)
def search(terms, page_rank=False):
result = {}
for page in pages:
result[page['id']] = 0
for term in terms:
result[page['id']] += get_weighted_tf_idf(page['id'], porter.stem(term).lower())
if page_rank:
result[page['id']] *= page['rank+1']
return result
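# A page's score is the sum of the tf-idf weights of all query terms; with
# page_rank=True it is additionally multiplied by the page's PageRank value.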
for page in pages:
for term in nltk.word_tokenize(page['text']):
if term not in exclude and term not in stopwords:
if porter.stem(term).lower() in df_dict and page['id'] not in df_dict[porter.stem(term).lower()]['documents']:
df_dict[porter.stem(term).lower()]['count'] += 1
df_dict[porter.stem(term).lower()]['documents'] += [page['id']]
elif porter.stem(term).lower() not in df_dict:
df_dict[porter.stem(term).lower()] = {'count': 1,
'documents': [page['id']]
}
search_terms = [['token'],['index'],['classification'],['classification', 'token']]
with open('tfidf_search.txt', 'w') as f:
for search_term in search_terms:
f.write('Suchwort: %s\n\n' % ', '.join(search_term))
result = search(search_term)
for key in result:
f.write("%s: %s\n" % (key, result[key]))
f.write('\n\n')
search_terms = [['token'],['index'],['classification'],['classification', 'token']]
with open('pageranke_search.txt', 'w') as f:
for search_term in search_terms:
f.write('Suchwort: %s\n\n' % ', '.join(search_term))
result = search(search_term, page_rank=True)
for key in result:
f.write("%s: %s\n" % (key, result[key]))
f.write('\n\n')
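# Illustrative sketch (not part of the original notebook): rank the documents for a sample
# query by the combined TF-IDF * PageRank score and print the five best hits.
example_scores = search(['classification', 'token'], page_rank=True)
top_hits = sorted(example_scores.items(), key=lambda kv: kv[1], reverse=True)[:5]
for doc_id, score in top_hits:
    print("%s: %.6f" % (doc_id, score))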
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: non-contigous slice
Step2: N - slices
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import os
print os.getenv("HOME")
wd = os.path.join( os.getenv("HOME"),"mpi_tmpdir")
if not os.path.isdir(wd):
os.mkdir(wd)
os.chdir(wd)
print "WD is now:",os.getcwd()
%%writefile mpi002.py
from mpi4py import MPI
import numpy as np
comm = MPI.COMM_WORLD
rank = comm.Get_rank()
def numpy_diff2d(u,dx2,dy2,c):
A = (1.0-2.0*(c/dx2+c/dy2))
u[1:-1,1:-1] =A*u[1:-1,1:-1] + c/dy2*(u[2:,1:-1] + u[:-2,1:-1]) + \
c/dx2*(u[1:-1,2:] + u[1:-1,:-2])
N=52
Niter=211
dx = 0.1
dy = 0.1
dx2 = dx*dx
dy2 = dy*dy
dt = 0.01
D = 0.1
c = D*dt
u = np.zeros([N, N])
if rank == 0:
u[-2,u.shape[1]/2] = 1.0/np.sqrt(dx2*dy2)
print "CLF = ",c/dx2,c/dy2
for i in range(Niter):
if rank == 0:
comm.Send([u[-2,:], MPI.FLOAT], dest=1)
comm.Recv([u[-1,:], MPI.FLOAT], source=1)
elif rank == 1:
comm.Recv([u[0,:], MPI.FLOAT], source=0)
comm.Send([u[1,:], MPI.FLOAT], dest=0)
numpy_diff2d(u,dx2,dy2,c)
#np.savez("udata%04d"%rank, u=u)
U = comm.gather(u[1:-1,1:-1])
if rank==0:
np.savez("Udata", U=U)
!mpirun -n 2 python mpi002.py
data = np.load("Udata.npz")
plt.imshow(np.vstack(data['U']))
print data['U'].shape
!pwd
%%writefile mpi003.py
from mpi4py import MPI
import numpy as np
comm = MPI.COMM_WORLD
rank = comm.Get_rank()
def numpy_diff2d(u,dx2,dy2,c):
A = (1.0-2.0*(c/dx2+c/dy2))
u[1:-1,1:-1] =A*u[1:-1,1:-1] + c/dy2*(u[2:,1:-1] + u[:-2,1:-1]) + \
c/dx2*(u[1:-1,2:] + u[1:-1,:-2])
N=52
Niter=211
dx = 0.1
dy = 0.1
dx2 = dx*dx
dy2 = dy*dy
dt = 0.01
D = 0.1
c = D*dt
u = np.zeros([N, N])
if rank == 0:
u[u.shape[1]/2,-2] = 1.0/np.sqrt(dx2*dy2)
print "CLF = ",c/dx2,c/dy2
for i in range(Niter):
if rank == 0:
OUT = u[:,-2].copy()
IN = np.empty_like(OUT)
comm.Send([OUT, MPI.FLOAT], dest=1)
comm.Recv([IN, MPI.FLOAT], source=1)
u[:,-1] = IN
elif rank == 1:
OUT = u[:,1].copy()
IN = np.empty_like(OUT)
comm.Recv([IN, MPI.FLOAT], source=0)
comm.Send([OUT, MPI.FLOAT], dest=0)
u[:,0] = IN
numpy_diff2d(u,dx2,dy2,c)
np.savez("udata%04d"%rank, u=u)
!mpirun -n 2 python mpi003.py
u1 = np.load('udata0000.npz')['u']
u2 = np.load('udata0001.npz')['u']
plt.imshow(np.hstack([u1[:,:-1],u2[:,1:]]))
%%writefile mpi004.py
from mpi4py import MPI
import numpy as np
comm = MPI.COMM_WORLD
rank = comm.Get_rank()
Nproc = comm.size
def numpy_diff2d(u,dx2,dy2,c):
A = (1.0-2.0*(c/dx2+c/dy2))
u[1:-1,1:-1] = A*u[1:-1,1:-1] + c/dy2*(u[2:,1:-1] + u[:-2,1:-1]) + \
c/dx2*(u[1:-1,2:] + u[1:-1,:-2])
N = 16*128
Nx = N
Ny = N/Nproc
Niter=200
dx = 0.1
dy = 0.1
dx2 = dx*dx
dy2 = dy*dy
dt = 0.01
D = 0.2
c = D*dt
u = np.zeros([Ny, Nx])
if rank == 0:
u[-2,u.shape[1]/2] = 1.0/np.sqrt(dx2*dy2)
print "CLF = ",c/dx2,c/dy2
t0 = MPI.Wtime()
for i in range(Niter):
if Nproc>1:
if rank == 0:
comm.Send([u[-2,:], MPI.FLOAT], dest=1)
if rank >0 and rank < Nproc-1:
comm.Recv([u[0,:], MPI.FLOAT], source=rank-1)
comm.Send([u[-2,:], MPI.FLOAT], dest=rank+1)
if rank == Nproc - 1:
comm.Recv([u[0,:], MPI.FLOAT], source=Nproc-2)
comm.Send([u[1,:], MPI.FLOAT], dest=Nproc-2)
if rank >0 and rank < Nproc-1:
comm.Recv([u[-1,:], MPI.FLOAT], source=rank+1)
comm.Send([u[1,:], MPI.FLOAT], dest=rank-1)
if rank == 0:
comm.Recv([u[-1,:], MPI.FLOAT], source=1)
#print rank
comm.Barrier()
numpy_diff2d(u,dx2,dy2,c)
t1 = MPI.Wtime()
print rank,t1-t0
#np.savez("udata%04d"%rank, u=u)
if Nproc>1:
U = comm.gather(u[1:-1,1:-1])
if rank==0:
np.savez("Udata", U=U)
!mpirun -H gpu2,gpu3 python mpi004.py
!mpirun -n 4 python mpi004.py
data = np.load("Udata.npz")
plt.imshow(np.vstack(data['U']))
print data['U'].shape
a = np.arange(0,16).reshape(4,4)
b = a[:,2]
c = a[2,:]
np.may_share_memory(a,b),np.may_share_memory(a,c)
a.flags
b.flags
c.flags
a=np.array(range(6))
b = a[2:4]
b=666
print a
np.may_share_memory?
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Creating a Binding
Step2: At this point, you can use the "util" model_to_dict() to visualize the binding and the attached models
Step3: Populating models
Step4: Let's work through the interface list
Step5: Populating the model from a dict
Step6: Populating the model from a device
Step7: Populating from a file
Step8: Translating models
Step9: But this is just the begining, the fun part is yet to come
Step10: Note the "delete" tags. Let's actually load the configuration in the device and see which changes are reported.
Step11: You can see that the device is reporting the changes we expected. Let's try now a replace instead.
Step12: Note that instead of "delete", now we have a replace in one of the top containers, indicating to the device we want to replace everything underneath. Let's merge and see what happens
Step13: Interestingly, there is an extra change. That is due to the fact that the dhcp parameter is outside our model's control.
Step14: As in the previous example, we got exactly the same changes we were expecting.
Step15: With the replace instead, we got some extra changes as some things are outside our model's control.
Step16: Note that parse_state accepts the same parameters as parse_config which means you can override profiles or even parse from files.
Step17: Diff'ing models with state is also supported.
Step18: Now we can load the validation file. Here is the content for reference
Step19: We can see it's complaining that the value of Et2's MTU is 1500. Let's fix it and try again
|
<ASSISTANT_TASK:>
Python Code:
from napalm import get_network_driver
import napalm_yang
import json
def use_mock_devices():
junos_configuration = {
'hostname': '127.0.0.1',
'username': 'vagrant',
'password': '',
'optional_args': {'path': "./junos_mock/", 'profile': ['junos'],
'increase_count_on_error': False}
}
eos_configuration = {
'hostname': '127.0.0.1',
'username': 'vagrant',
'password': 'vagrant',
'optional_args': {'path': "./eos_mock", 'profile': ['eos'],
'increase_count_on_error': False}
}
junos = get_network_driver("mock")
junos_device = junos(**junos_configuration)
eos = get_network_driver("mock")
eos_device = eos(**eos_configuration)
return junos_device, eos_device
def use_real_devices():
junos_configuration = {
'hostname': '127.0.0.1',
'username': 'vagrant',
'password': '',
'optional_args': {'port': 12203, 'config_lock': False}
}
eos_configuration = {
'hostname': '127.0.0.1',
'username': 'vagrant',
'password': 'vagrant',
'optional_args': {'port': 12443}
}
junos = get_network_driver("junos")
junos_device = junos(**junos_configuration)
junos_device.open()
eos = get_network_driver("eos")
eos_device = eos(**eos_configuration)
eos_device.open()
return junos_device, eos_device
def pretty_print(dictionary):
print(json.dumps(dictionary, sort_keys=True, indent=4))
# Use real devices on your lab, tweak config
# junos_device, eos_device = use_real_devices()
# Use mocked devices intended for this test
junos_device, eos_device = use_mock_devices()
config = napalm_yang.base.Root()
# Adding models to the object
config.add_model(napalm_yang.models.openconfig_interfaces())
config.add_model(napalm_yang.models.openconfig_vlan())
# Printing the model in a human readable format
pretty_print(napalm_yang.utils.model_to_dict(config))
# We create an interface and set the description and the mtu
et1 = config.interfaces.interface.add("et1")
et1.config.description = "My description"
et1.config.mtu = 1500
print(et1.config.description)
print(et1.config.mtu)
# Let's create a second interface, this time accessing it from the root
config.interfaces.interface.add("et2")
config.interfaces.interface["et2"].config.description = "Another description"
config.interfaces.interface["et2"].config.mtu = 9000
print(config.interfaces.interface["et2"].config.description)
print(config.interfaces.interface["et2"].config.mtu)
# You can also get the contents as a dict with the ``get`` method.
# ``filter`` let's you decide whether you want to show empty fields or not.
pretty_print(config.get(filter=True))
# If the value is not valid things will break
try:
et1.config.mtu = -1
except ValueError as e:
print(e)
# Iterating
for iface, data in config.interfaces.interface.items():
print(iface, data.config.description)
# We can also delete interfaces
print(config.interfaces.interface.keys())
config.interfaces.interface.delete("et1")
print(config.interfaces.interface.keys())
vlans_dict = {
"vlans": { "vlan": { 100: {
"config": {
"vlan_id": 100, "name": "production"}},
200: {
"config": {
"vlan_id": 200, "name": "dev"}}}}}
config.load_dict(vlans_dict)
print(config.vlans.vlan.keys())
print(100, config.vlans.vlan[100].config.name)
print(200, config.vlans.vlan[200].config.name)
with eos_device as d:
running_config = napalm_yang.base.Root()
running_config.add_model(napalm_yang.models.openconfig_interfaces)
running_config.parse_config(device=d)
pretty_print(running_config.get(filter=True))
with open("junos.config", "r") as f:
config = f.read()
running_config = napalm_yang.base.Root()
running_config.add_model(napalm_yang.models.openconfig_interfaces)
running_config.parse_config(native=[config], profile=["junos"])
pretty_print(running_config.get(filter=True))
# Let's create a candidate configuration
candidate = napalm_yang.base.Root()
candidate.add_model(napalm_yang.models.openconfig_interfaces())
def create_iface(candidate, name, description, mtu, prefix, prefix_length):
interface = candidate.interfaces.interface.add(name)
interface.config.description = description
interface.config.mtu = mtu
ip = interface.routed_vlan.ipv4.addresses.address.add(prefix)
ip.config.ip = prefix
ip.config.prefix_length = prefix_length
create_iface(candidate, "et1", "Uplink1", 9000, "192.168.1.1", 24)
create_iface(candidate, "et2", "Uplink2", 9000, "192.168.2.1", 24)
pretty_print(candidate.get(filter=True))
# Now let's translate the object to JunOS
print(candidate.translate_config(profile=junos_device.profile))
# And now to EOS
print(candidate.translate_config(eos_device.profile))
with junos_device as device:
# first let's create a candidate config by retrieving the current state of the device
candidate = napalm_yang.base.Root()
candidate.add_model(napalm_yang.models.openconfig_interfaces)
candidate.parse_config(device=junos_device)
# now let's do a few changes, let's remove lo0.0 and create lo0.1
candidate.interfaces.interface["lo0"].subinterfaces.subinterface.delete("0")
lo1 = candidate.interfaces.interface["lo0"].subinterfaces.subinterface.add("1")
lo1.config.description = "new loopback"
# Let's also default the mtu of ge-0/0/0 which is set to 1400
candidate.interfaces.interface["ge-0/0/0"].config._unset_mtu()
# We will also need a running configuration to compare against
running = napalm_yang.base.Root()
running.add_model(napalm_yang.models.openconfig_interfaces)
running.parse_config(device=junos_device)
# Now let's see how the merge configuration would be
config = candidate.translate_config(profile=junos_device.profile, merge=running)
print(config)
with junos_device as d:
d.load_merge_candidate(config=config)
print(d.compare_config())
d.discard_config()
config = candidate.translate_config(profile=junos_device.profile, replace=running)
print(config)
with junos_device as d:
d.load_merge_candidate(config=config)
print(d.compare_config())
d.discard_config()
with eos_device as device:
# first let's create a candidate config by retrieving the current state of the device
candidate = napalm_yang.base.Root()
candidate.add_model(napalm_yang.models.openconfig_interfaces)
candidate.parse_config(device=device)
# now let's do a few changes, let's remove lo1 and create lo0
candidate.interfaces.interface.delete("Loopback1")
lo0 = candidate.interfaces.interface.add("Loopback0")
lo0.config.description = "new loopback"
# Let's also default the mtu of ge-0/0/0 which is set to 1400
candidate.interfaces.interface["Port-Channel1"].config._unset_mtu()
# We will also need a running configuration to compare against
running = napalm_yang.base.Root()
running.add_model(napalm_yang.models.openconfig_interfaces)
running.parse_config(device=device)
# Now let's see how the merge configuration would be
config = candidate.translate_config(profile=eos_device.profile, merge=running)
print(config)
with eos_device as d:
d.load_merge_candidate(config=config)
print(d.compare_config())
d.discard_config()
config = candidate.translate_config(profile=eos_device.profile, replace=running)
print(config)
with eos_device as d:
d.load_merge_candidate(config=config)
print(d.compare_config())
d.discard_config()
state = napalm_yang.base.Root()
state.add_model(napalm_yang.models.openconfig_interfaces)
with junos_device as d:
state.parse_state(device=d)
pretty_print(state.get(filter=True))
diff = napalm_yang.utils.diff(candidate, running)
pretty_print(diff)
data = {
"interfaces": {
"interface":{
"Et1": {
"config": {
"mtu": 9000
},
},
"Et2": {
"config": {
"mtu": 1500
}
}
}
}
}
# We load a dict for convenience, any source will do
config = napalm_yang.base.Root()
config.add_model(napalm_yang.models.openconfig_interfaces())
config.load_dict(data)
report = config.compliance_report("validate.yaml")
pretty_print(report)
config.interfaces.interface["Et2"].config.mtu = 9000
report = config.compliance_report("validate.yaml")
pretty_print(report)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Then, we read the stc from file.
Step2: This is a
Step3: The SourceEstimate object is in fact a surface source estimate. MNE also
Step4: You can also morph it to fsaverage and visualize it using a flatmap.
Step5: Note that here we used initial_time=0.1, but we can also browse through
Step6: Volume Source Estimates
Step7: Then, we can load the precomputed inverse operator from a file.
Step8: The source estimate is computed using the inverse operator and the
Step9: This time, we have a different container
Step10: This too comes with a convenient plot method.
Step11: For this visualization, nilearn must be installed.
Step12: You can also extract label time courses using volumetric atlases. Here we'll
Step13: We can plot several labels with the most activation in their time course
Step14: And we can project these label time courses back to their original
Step15: Vector Source Estimates
Step16: Dipole fits
Step17: Dipoles are fit independently for each time point, so let us crop our time
Step18: Finally, we can visualize the dipole.
|
<ASSISTANT_TASK:>
Python Code:
import os.path as op
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.datasets import sample, fetch_hcp_mmp_parcellation
from mne.minimum_norm import apply_inverse, read_inverse_operator
from mne import read_evokeds
data_path = sample.data_path()
sample_dir = op.join(data_path, 'MEG', 'sample')
subjects_dir = op.join(data_path, 'subjects')
fname_evoked = op.join(data_path, 'MEG', 'sample', 'sample_audvis-ave.fif')
fname_stc = op.join(sample_dir, 'sample_audvis-meg')
fetch_hcp_mmp_parcellation(subjects_dir)
stc = mne.read_source_estimate(fname_stc, subject='sample')
print(stc)
initial_time = 0.1
brain = stc.plot(subjects_dir=subjects_dir, initial_time=initial_time,
clim=dict(kind='value', lims=[3, 6, 9]),
smoothing_steps=7)
stc_fs = mne.compute_source_morph(stc, 'sample', 'fsaverage', subjects_dir,
smooth=5, verbose='error').apply(stc)
brain = stc_fs.plot(subjects_dir=subjects_dir, initial_time=initial_time,
clim=dict(kind='value', lims=[3, 6, 9]),
surface='flat', hemi='both', size=(1000, 500),
smoothing_steps=5, time_viewer=False,
add_data_kwargs=dict(
colorbar_kwargs=dict(label_font_size=10)))
# to help orient us, let's add a parcellation (red=auditory, green=motor,
# blue=visual)
brain.add_annotation('HCPMMP1_combined', borders=2, subjects_dir=subjects_dir)
# You can save a movie like the one on our documentation website with:
# brain.save_movie(time_dilation=20, tmin=0.05, tmax=0.16,
# interpolation='linear', framerate=10)
mpl_fig = stc.plot(subjects_dir=subjects_dir, initial_time=initial_time,
backend='matplotlib', verbose='error', smoothing_steps=7)
evoked = read_evokeds(fname_evoked, condition=0, baseline=(None, 0))
evoked.pick_types(meg=True, eeg=False).crop(0.05, 0.15)
# this risks aliasing, but these data are very smooth
evoked.decimate(10, verbose='error')
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-vol-7-meg-inv.fif'
inv = read_inverse_operator(fname_inv)
src = inv['src']
mri_head_t = inv['mri_head_t']
snr = 3.0
lambda2 = 1.0 / snr ** 2
method = "dSPM" # use dSPM method (could also be MNE or sLORETA)
stc = apply_inverse(evoked, inv, lambda2, method)
del inv
print(stc)
stc.plot(src, subject='sample', subjects_dir=subjects_dir)
stc.plot(src, subject='sample', subjects_dir=subjects_dir, mode='glass_brain')
fname_aseg = op.join(subjects_dir, 'sample', 'mri', 'aparc+aseg.mgz')
label_names = mne.get_volume_labels_from_aseg(fname_aseg)
label_tc = stc.extract_label_time_course(fname_aseg, src=src)
lidx, tidx = np.unravel_index(np.argmax(label_tc), label_tc.shape)
fig, ax = plt.subplots(1)
ax.plot(stc.times, label_tc.T, 'k', lw=1., alpha=0.5)
xy = np.array([stc.times[tidx], label_tc[lidx, tidx]])
xytext = xy + [0.01, 1]
ax.annotate(
label_names[lidx], xy, xytext, arrowprops=dict(arrowstyle='->'), color='r')
ax.set(xlim=stc.times[[0, -1]], xlabel='Time (s)', ylabel='Activation')
for key in ('right', 'top'):
ax.spines[key].set_visible(False)
fig.tight_layout()
labels = [label_names[idx] for idx in np.argsort(label_tc.max(axis=1))[:7]
if 'unknown' not in label_names[idx].lower()] # remove catch-all
brain = mne.viz.Brain('sample', hemi='both', surf='pial', alpha=0.5,
cortex='low_contrast', subjects_dir=subjects_dir)
brain.add_volume_labels(aseg='aparc+aseg', labels=labels)
brain.show_view(azimuth=250, elevation=40, distance=400)
brain.enable_depth_peeling()
stc_back = mne.labels_to_stc(fname_aseg, label_tc, src=src)
stc_back.plot(src, subjects_dir=subjects_dir, mode='glass_brain')
fname_inv = op.join(data_path, 'MEG', 'sample',
'sample_audvis-meg-oct-6-meg-inv.fif')
inv = read_inverse_operator(fname_inv)
stc = apply_inverse(evoked, inv, lambda2, 'dSPM', pick_ori='vector')
brain = stc.plot(subject='sample', subjects_dir=subjects_dir,
initial_time=initial_time, brain_kwargs=dict(
silhouette=True), smoothing_steps=7)
fname_cov = op.join(sample_dir, 'sample_audvis-cov.fif')
fname_bem = op.join(subjects_dir, 'sample', 'bem', 'sample-5120-bem-sol.fif')
fname_trans = op.join(sample_dir, 'sample_audvis_raw-trans.fif')
evoked.crop(0.1, 0.1)
dip = mne.fit_dipole(evoked, fname_cov, fname_bem, fname_trans)[0]
dip.plot_locations(fname_trans, 'sample', subjects_dir)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Install the latest GA version of google-cloud-storage library as well.
Step2: Restart the kernel
Step3: Before you begin
Step4: Region
Step5: Timestamp
Step6: Authenticate your Google Cloud account
Step7: Create a Cloud Storage bucket
Step8: Only if your bucket doesn't already exist
Step9: Finally, validate access to your Cloud Storage bucket by examining its contents
Step10: Set up variables
Step11: Vertex constants
Step12: Install Docker (Colab or Local)
Step13: Start Docker service
Step14: Hardware Accelerators
Step15: Container (Docker) image
Step16: Container (Docker) image for prediction
Step17: Machine Type
Step18: Tutorial
Step19: Train a model
Step20: Prepare your disk specification
Step21: Define the worker pool specification
Step22: Assemble a job specification
Step23: Examine the training package
Step24: Task.py contents
Step25: Store training script on your Cloud Storage bucket
Step26: Train the model
Step27: Now get the unique identifier for the custom job you created.
Step28: Get information on a custom job
Step29: Deployment
Step30: Load the saved model
Step31: Evaluate the model
Step32: Perform the model evaluation
Step33: Upload the model for serving
Step34: Upload the model
Step35: Get Model resource information
Step36: Deploy the Model resource
Step37: Now get the unique identifier for the Endpoint resource you created.
Step38: Compute instance scaling
Step39: Deploy Model resource to the Endpoint resource
Step40: Make a online prediction request
Step41: Send the prediction request
Step42: Undeploy the Model resource
Step43: Cleaning up
|
<ASSISTANT_TASK:>
Python Code:
import os
import sys
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install -U google-cloud-aiplatform $USER_FLAG
! pip3 install -U google-cloud-storage $USER_FLAG
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
REGION = "us-central1" # @param {type: "string"}
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
! gsutil mb -l $REGION $BUCKET_NAME
! gsutil ls -al $BUCKET_NAME
import time
from google.cloud.aiplatform import gapic as aip
from google.protobuf import json_format
from google.protobuf.json_format import MessageToJson, ParseDict
from google.protobuf.struct_pb2 import Struct, Value
# API service endpoint
API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION)
# Vertex location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
if "google.colab" in sys.modules:
! sudo apt update
! sudo apt install apt-transport-https ca-certificates curl software-properties-common
! curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
! sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"
! sudo apt update
! sudo apt install docker-ce
if "google.colab" in sys.modules:
! sudo service docker start
if os.getenv("IS_TESTING_TRAIN_GPU"):
TRAIN_GPU, TRAIN_NGPU = (
aip.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_TRAIN_GPU")),
)
else:
TRAIN_GPU, TRAIN_NGPU = (None, None)
if os.getenv("IS_TESTING_DEPOLY_GPU"):
DEPLOY_GPU, DEPLOY_NGPU = (
aip.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_DEPOLY_GPU")),
)
else:
DEPLOY_GPU, DEPLOY_NGPU = (None, None)
if os.getenv("IS_TESTING_TF"):
TF = os.getenv("IS_TESTING_TF")
else:
TF = "2-1"
if TF[0] == "2":
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
else:
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
TRAIN_IMAGE = "gcr.io/cloud-aiplatform/training/{}:latest".format(TRAIN_VERSION)
print("Training:", TRAIN_IMAGE, TRAIN_GPU, TRAIN_NGPU)
if DEPLOY_GPU:
! docker pull tensorflow/serving:latest-gpu
DEPLOY_IMAGE = "gcr.io/" + PROJECT_ID + "/tf_serving:gpu"
else:
! docker pull tensorflow/serving:latest
DEPLOY_IMAGE = "gcr.io/" + PROJECT_ID + "/tf_serving"
! docker tag tensorflow/serving $DEPLOY_IMAGE
! docker push $DEPLOY_IMAGE
print("Deployment:", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU)
if os.getenv("IS_TESTING_TRAIN_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_TRAIN_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Train machine type", TRAIN_COMPUTE)
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
# client options same for all services
client_options = {"api_endpoint": API_ENDPOINT}
def create_job_client():
client = aip.JobServiceClient(client_options=client_options)
return client
def create_model_client():
client = aip.ModelServiceClient(client_options=client_options)
return client
def create_endpoint_client():
client = aip.EndpointServiceClient(client_options=client_options)
return client
def create_prediction_client():
client = aip.PredictionServiceClient(client_options=client_options)
return client
clients = {}
clients["job"] = create_job_client()
clients["model"] = create_model_client()
clients["endpoint"] = create_endpoint_client()
clients["prediction"] = create_prediction_client()
for client in clients.items():
print(client)
if TRAIN_GPU:
machine_spec = {
"machine_type": TRAIN_COMPUTE,
"accelerator_type": TRAIN_GPU,
"accelerator_count": TRAIN_NGPU,
}
else:
machine_spec = {"machine_type": TRAIN_COMPUTE, "accelerator_count": 0}
DISK_TYPE = "pd-ssd" # [ pd-ssd, pd-standard]
DISK_SIZE = 200 # GB
disk_spec = {"boot_disk_type": DISK_TYPE, "boot_disk_size_gb": DISK_SIZE}
JOB_NAME = "custom_job_" + TIMESTAMP
MODEL_DIR = "{}/{}/1".format(BUCKET_NAME, JOB_NAME)
if not TRAIN_NGPU or TRAIN_NGPU < 2:
TRAIN_STRATEGY = "single"
else:
TRAIN_STRATEGY = "mirror"
EPOCHS = 20
STEPS = 100
DIRECT = True
if DIRECT:
CMDARGS = [
"--model-dir=" + MODEL_DIR,
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
"--distribute=" + TRAIN_STRATEGY,
]
else:
CMDARGS = [
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
"--distribute=" + TRAIN_STRATEGY,
]
worker_pool_spec = [
{
"replica_count": 1,
"machine_spec": machine_spec,
"disk_spec": disk_spec,
"python_package_spec": {
"executor_image_uri": TRAIN_IMAGE,
"package_uris": [BUCKET_NAME + "/trainer_imdb.tar.gz"],
"python_module": "trainer.task",
"args": CMDARGS,
},
}
]
if DIRECT:
job_spec = {"worker_pool_specs": worker_pool_spec}
else:
job_spec = {
"worker_pool_specs": worker_pool_spec,
"base_output_directory": {"output_uri_prefix": MODEL_DIR},
}
custom_job = {"display_name": JOB_NAME, "job_spec": job_spec}
# Make folder for Python training script
! rm -rf custom
! mkdir custom
# Add package information
! touch custom/README.md
setup_cfg = "[egg_info]\n\ntag_build =\n\ntag_date = 0"
! echo "$setup_cfg" > custom/setup.cfg
setup_py = "import setuptools\n\nsetuptools.setup(\n\n install_requires=[\n\n 'tensorflow_datasets==1.3.0',\n\n ],\n\n packages=setuptools.find_packages())"
! echo "$setup_py" > custom/setup.py
pkg_info = "Metadata-Version: 1.0\n\nName: IMDB Movie Reviews text binary classification\n\nVersion: 0.0.0\n\nSummary: Demostration training script\n\nHome-page: www.google.com\n\nAuthor: Google\n\nAuthor-email: aferlitsch@google.com\n\nLicense: Public\n\nDescription: Demo\n\nPlatform: Vertex"
! echo "$pkg_info" > custom/PKG-INFO
# Make the training subfolder
! mkdir custom/trainer
! touch custom/trainer/__init__.py
%%writefile custom/trainer/task.py
# Single, Mirror and Multi-Machine Distributed Training for IMDB
import tensorflow_datasets as tfds
import tensorflow as tf
from tensorflow.python.client import device_lib
import argparse
import os
import sys
tfds.disable_progress_bar()
parser = argparse.ArgumentParser()
parser.add_argument('--model-dir', dest='model_dir',
default=os.getenv('AIP_MODEL_DIR'), type=str, help='Model dir.')
parser.add_argument('--lr', dest='lr',
default=1e-4, type=float,
help='Learning rate.')
parser.add_argument('--epochs', dest='epochs',
default=20, type=int,
help='Number of epochs.')
parser.add_argument('--steps', dest='steps',
default=100, type=int,
help='Number of steps per epoch.')
parser.add_argument('--distribute', dest='distribute', type=str, default='single',
help='distributed training strategy')
args = parser.parse_args()
print('Python Version = {}'.format(sys.version))
print('TensorFlow Version = {}'.format(tf.__version__))
print('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))
print(device_lib.list_local_devices())
# Single Machine, single compute device
if args.distribute == 'single':
if tf.test.is_gpu_available():
strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
else:
strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0")
# Single Machine, multiple compute device
elif args.distribute == 'mirror':
strategy = tf.distribute.MirroredStrategy()
# Multiple Machine, multiple compute device
elif args.distribute == 'multi':
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
# Multi-worker configuration
print('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync))
# Preparing dataset
BUFFER_SIZE = 10000
BATCH_SIZE = 64
def make_datasets():
dataset, info = tfds.load('imdb_reviews/subwords8k', with_info=True,
as_supervised=True)
train_dataset, test_dataset = dataset['train'], dataset['test']
encoder = info.features['text'].encoder
padded_shapes = ([None],())
return train_dataset.shuffle(BUFFER_SIZE).padded_batch(BATCH_SIZE, padded_shapes), encoder
train_dataset, encoder = make_datasets()
# Build the Keras model
def build_and_compile_rnn_model(encoder):
model = tf.keras.Sequential([
tf.keras.layers.Embedding(encoder.vocab_size, 64),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.Adam(args.lr),
metrics=['accuracy'])
return model
with strategy.scope():
# Creation of dataset, and model building/compiling need to be within
# `strategy.scope()`.
model = build_and_compile_rnn_model(encoder)
# Train the model
model.fit(train_dataset, epochs=args.epochs, steps_per_epoch=args.steps)
model.save(args.model_dir)
! rm -f custom.tar custom.tar.gz
! tar cvf custom.tar custom
! gzip custom.tar
! gsutil cp custom.tar.gz $BUCKET_NAME/trainer_imdb.tar.gz
def create_custom_job(custom_job):
response = clients["job"].create_custom_job(parent=PARENT, custom_job=custom_job)
print("name:", response.name)
print("display_name:", response.display_name)
print("state:", response.state)
print("create_time:", response.create_time)
print("update_time:", response.update_time)
return response
response = create_custom_job(custom_job)
# The full unique ID for the custom job
job_id = response.name
# The short numeric ID for the custom job
job_short_id = job_id.split("/")[-1]
print(job_id)
def get_custom_job(name, silent=False):
response = clients["job"].get_custom_job(name=name)
if silent:
return response
print("name:", response.name)
print("display_name:", response.display_name)
print("state:", response.state)
print("create_time:", response.create_time)
print("update_time:", response.update_time)
return response
response = get_custom_job(job_id)
while True:
response = get_custom_job(job_id, True)
if response.state != aip.JobState.JOB_STATE_SUCCEEDED:
print("Training job has not completed:", response.state)
model_path_to_deploy = None
if response.state == aip.JobState.JOB_STATE_FAILED:
break
else:
if not DIRECT:
MODEL_DIR = MODEL_DIR + "/model"
model_path_to_deploy = MODEL_DIR
print("Training Time:", response.update_time - response.create_time)
break
time.sleep(60)
print("model_to_deploy:", model_path_to_deploy)
import tensorflow as tf
model = tf.keras.models.load_model(MODEL_DIR)
import tensorflow_datasets as tfds
dataset, info = tfds.load("imdb_reviews/subwords8k", with_info=True, as_supervised=True)
test_dataset = dataset["test"]
encoder = info.features["text"].encoder
BATCH_SIZE = 64
padded_shapes = ([None], ())
test_dataset = test_dataset.padded_batch(BATCH_SIZE, padded_shapes)
model.evaluate(test_dataset)
loaded = tf.saved_model.load(model_path_to_deploy)
serving_input = list(
loaded.signatures["serving_default"].structured_input_signature[1].keys()
)[0]
print("Serving function input:", serving_input)
IMAGE_URI = DEPLOY_IMAGE
MODEL_NAME = "imdb-" + TIMESTAMP
def upload_model(display_name, image_uri, model_uri):
model = {
"display_name": display_name,
"metadata_schema_uri": "",
"artifact_uri": model_uri,
"container_spec": {
"image_uri": image_uri,
"command": ["/usr/bin/tensorflow_model_server"],
"args": [
"--model_name=" + MODEL_NAME,
"--model_base_path=" + "$(AIP_STORAGE_URI)",
"--rest_api_port=8080",
"--port=8500",
"--file_system_poll_wait_seconds=31540000",
],
"health_route": "/v1/models/" + MODEL_NAME,
"predict_route": "/v1/models/" + MODEL_NAME + ":predict",
},
}
response = clients["model"].upload_model(parent=PARENT, model=model)
print("Long running operation:", response.operation.name)
upload_model_response = response.result(timeout=180)
print("upload_model_response")
print(" model:", upload_model_response.model)
return upload_model_response.model
model_to_deploy_id = upload_model(
"imdb-" + TIMESTAMP, IMAGE_URI, model_path_to_deploy[:-2]
)
def get_model(name):
response = clients["model"].get_model(name=name)
print(response)
get_model(model_to_deploy_id)
ENDPOINT_NAME = "imdb_endpoint-" + TIMESTAMP
def create_endpoint(display_name):
endpoint = {"display_name": display_name}
response = clients["endpoint"].create_endpoint(parent=PARENT, endpoint=endpoint)
print("Long running operation:", response.operation.name)
result = response.result(timeout=300)
print("result")
print(" name:", result.name)
print(" display_name:", result.display_name)
print(" description:", result.description)
print(" labels:", result.labels)
print(" create_time:", result.create_time)
print(" update_time:", result.update_time)
return result
result = create_endpoint(ENDPOINT_NAME)
# The full unique ID for the endpoint
endpoint_id = result.name
# The short numeric ID for the endpoint
endpoint_short_id = endpoint_id.split("/")[-1]
print(endpoint_id)
MIN_NODES = 1
MAX_NODES = 1
DEPLOYED_NAME = "imdb_deployed-" + TIMESTAMP
def deploy_model(
model, deployed_model_display_name, endpoint, traffic_split={"0": 100}
):
if DEPLOY_GPU:
machine_spec = {
"machine_type": DEPLOY_COMPUTE,
"accelerator_type": DEPLOY_GPU,
"accelerator_count": DEPLOY_NGPU,
}
else:
machine_spec = {
"machine_type": DEPLOY_COMPUTE,
"accelerator_count": 0,
}
deployed_model = {
"model": model,
"display_name": deployed_model_display_name,
"dedicated_resources": {
"min_replica_count": MIN_NODES,
"max_replica_count": MAX_NODES,
"machine_spec": machine_spec,
},
"disable_container_logging": False,
}
response = clients["endpoint"].deploy_model(
endpoint=endpoint, deployed_model=deployed_model, traffic_split=traffic_split
)
print("Long running operation:", response.operation.name)
result = response.result()
print("result")
deployed_model = result.deployed_model
print(" deployed_model")
print(" id:", deployed_model.id)
print(" model:", deployed_model.model)
print(" display_name:", deployed_model.display_name)
print(" create_time:", deployed_model.create_time)
return deployed_model.id
deployed_model_id = deploy_model(model_to_deploy_id, DEPLOYED_NAME, endpoint_id)
import tensorflow_datasets as tfds
dataset, info = tfds.load("imdb_reviews/subwords8k", with_info=True, as_supervised=True)
test_dataset = dataset["test"]
test_dataset.take(1)
for data in test_dataset:
print(data)
break
test_item = data[0].numpy()
def predict_data(data, endpoint, parameters_dict):
parameters = json_format.ParseDict(parameters_dict, Value())
# The format of each instance should conform to the deployed model's prediction input schema.
instances_list = [{serving_input: data.tolist()}]
instances = [json_format.ParseDict(s, Value()) for s in instances_list]
response = clients["prediction"].predict(
endpoint=endpoint, instances=instances, parameters=parameters
)
print("response")
print(" deployed_model_id:", response.deployed_model_id)
predictions = response.predictions
print("predictions")
for prediction in predictions:
print(" prediction:", prediction)
predict_data(test_item, endpoint_id, None)
def undeploy_model(deployed_model_id, endpoint):
response = clients["endpoint"].undeploy_model(
endpoint=endpoint, deployed_model_id=deployed_model_id, traffic_split={}
)
print(response)
undeploy_model(deployed_model_id, endpoint_id)
delete_dataset = True
delete_pipeline = True
delete_model = True
delete_endpoint = True
delete_batchjob = True
delete_customjob = True
delete_hptjob = True
delete_bucket = True
# Delete the dataset using the Vertex fully qualified identifier for the dataset
try:
if delete_dataset and "dataset_id" in globals():
clients["dataset"].delete_dataset(name=dataset_id)
except Exception as e:
print(e)
# Delete the training pipeline using the Vertex fully qualified identifier for the pipeline
try:
if delete_pipeline and "pipeline_id" in globals():
clients["pipeline"].delete_training_pipeline(name=pipeline_id)
except Exception as e:
print(e)
# Delete the model using the Vertex fully qualified identifier for the model
try:
if delete_model and "model_to_deploy_id" in globals():
clients["model"].delete_model(name=model_to_deploy_id)
except Exception as e:
print(e)
# Delete the endpoint using the Vertex fully qualified identifier for the endpoint
try:
if delete_endpoint and "endpoint_id" in globals():
clients["endpoint"].delete_endpoint(name=endpoint_id)
except Exception as e:
print(e)
# Delete the batch job using the Vertex fully qualified identifier for the batch job
try:
if delete_batchjob and "batch_job_id" in globals():
clients["job"].delete_batch_prediction_job(name=batch_job_id)
except Exception as e:
print(e)
# Delete the custom job using the Vertex fully qualified identifier for the custom job
try:
if delete_customjob and "job_id" in globals():
clients["job"].delete_custom_job(name=job_id)
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex fully qualified identifier for the hyperparameter tuning job
try:
if delete_hptjob and "hpt_job_id" in globals():
clients["job"].delete_hyperparameter_tuning_job(name=hpt_job_id)
except Exception as e:
print(e)
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Variables
Step7: 3. Key Properties --> Seawater Properties
Step8: 3.2. Ocean Freezing Point Value
Step9: 4. Key Properties --> Resolution
Step10: 4.2. Canonical Horizontal Resolution
Step11: 4.3. Number Of Horizontal Gridpoints
Step12: 5. Key Properties --> Tuning Applied
Step13: 5.2. Target
Step14: 5.3. Simulations
Step15: 5.4. Metrics Used
Step16: 5.5. Variables
Step17: 6. Key Properties --> Key Parameter Values
Step18: 6.2. Additional Parameters
Step19: 7. Key Properties --> Assumptions
Step20: 7.2. On Diagnostic Variables
Step21: 7.3. Missing Processes
Step22: 8. Key Properties --> Conservation
Step23: 8.2. Properties
Step24: 8.3. Budget
Step25: 8.4. Was Flux Correction Used
Step26: 8.5. Corrected Conserved Prognostic Variables
Step27: 9. Grid --> Discretisation --> Horizontal
Step28: 9.2. Grid Type
Step29: 9.3. Scheme
Step30: 9.4. Thermodynamics Time Step
Step31: 9.5. Dynamics Time Step
Step32: 9.6. Additional Details
Step33: 10. Grid --> Discretisation --> Vertical
Step34: 10.2. Number Of Layers
Step35: 10.3. Additional Details
Step36: 11. Grid --> Seaice Categories
Step37: 11.2. Number Of Categories
Step38: 11.3. Category Limits
Step39: 11.4. Ice Thickness Distribution Scheme
Step40: 11.5. Other
Step41: 12. Grid --> Snow On Seaice
Step42: 12.2. Number Of Snow Levels
Step43: 12.3. Snow Fraction
Step44: 12.4. Additional Details
Step45: 13. Dynamics
Step46: 13.2. Transport In Thickness Space
Step47: 13.3. Ice Strength Formulation
Step48: 13.4. Redistribution
Step49: 13.5. Rheology
Step50: 14. Thermodynamics --> Energy
Step51: 14.2. Thermal Conductivity
Step52: 14.3. Heat Diffusion
Step53: 14.4. Basal Heat Flux
Step54: 14.5. Fixed Salinity Value
Step55: 14.6. Heat Content Of Precipitation
Step56: 14.7. Precipitation Effects On Salinity
Step57: 15. Thermodynamics --> Mass
Step58: 15.2. Ice Vertical Growth And Melt
Step59: 15.3. Ice Lateral Melting
Step60: 15.4. Ice Surface Sublimation
Step61: 15.5. Frazil Ice
Step62: 16. Thermodynamics --> Salt
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Step65: 17.2. Constant Salinity Value
Step66: 17.3. Additional Details
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Step68: 18.2. Constant Salinity Value
Step69: 18.3. Additional Details
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Step72: 20.2. Additional Details
Step73: 21. Thermodynamics --> Melt Ponds
Step74: 21.2. Formulation
Step75: 21.3. Impacts
Step76: 22. Thermodynamics --> Snow Processes
Step77: 22.2. Snow Aging Scheme
Step78: 22.3. Has Snow Ice Formation
Step79: 22.4. Snow Ice Formation Scheme
Step80: 22.5. Redistribution
Step81: 22.6. Heat Diffusion
Step82: 23. Radiative Processes
Step83: 23.2. Ice Radiation Transmission
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'thu', 'ciesm', 'seaice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: [Optional] Setup token to run the experiment on a real device
Step2: MaxCut problem
Step3: Brute force approach
Step4: Mapping to the Ising problem
Step5: Checking that the full Hamiltonian gives the right cost
Step6: Running it on quantum computer
Step7: Traveling Salesman Problem
Step8: Brute force approach
Step9: Mapping to the Ising problem
Step10: Checking that the full Hamiltonian gives the right cost
Step11: Running it on quantum computer
|
<ASSISTANT_TASK:>
Python Code:
# useful additional packages
import matplotlib.pyplot as plt
import matplotlib.axes as axes
%matplotlib inline
import numpy as np
import networkx as nx
from qiskit.tools.visualization import plot_histogram
from qiskit_aqua import Operator, run_algorithm, get_algorithm_instance
from qiskit_aqua.input import get_input_instance
from qiskit_aqua.translators.ising import maxcut, tsp
# setup aqua logging
import logging
from qiskit_aqua._logging import set_logging_config, build_logging_config
# set_logging_config(build_logging_config(logging.DEBUG)) # choose INFO, DEBUG to see the log
from qiskit import IBMQ
IBMQ.load_accounts()
# Generating a graph of 4 nodes
n=4 # Number of nodes in graph
G=nx.Graph()
G.add_nodes_from(np.arange(0,n,1))
elist=[(0,1,1.0),(0,2,1.0),(0,3,1.0),(1,2,1.0),(2,3,1.0)]
# tuple is (i,j,weight) where (i,j) is the edge
G.add_weighted_edges_from(elist)
colors = ['r' for node in G.nodes()]
pos = nx.spring_layout(G)
default_axes = plt.axes(frameon=True)
nx.draw_networkx(G, node_color=colors, node_size=600, alpha=.8, ax=default_axes, pos=pos)
# Computing the weight matrix from the random graph
w = np.zeros([n,n])
for i in range(n):
for j in range(n):
temp = G.get_edge_data(i,j,default=0)
if temp != 0:
w[i,j] = temp['weight']
print(w)
best_cost_brute = 0
for b in range(2**n):
x = [int(t) for t in reversed(list(bin(b)[2:].zfill(n)))]
cost = 0
for i in range(n):
for j in range(n):
cost = cost + w[i,j]*x[i]*(1-x[j])
if best_cost_brute < cost:
best_cost_brute = cost
xbest_brute = x
print('case = ' + str(x)+ ' cost = ' + str(cost))
colors = ['r' if xbest_brute[i] == 0 else 'b' for i in range(n)]
nx.draw_networkx(G, node_color=colors, node_size=600, alpha=.8, pos=pos)
print('\nBest solution = ' + str(xbest_brute) + ' cost = ' + str(best_cost_brute))
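# Illustrative cross-check (not part of the original notebook): with +/-1 spins z_i = 1 - 2*x_i,
# the cut size can equivalently be written as sum_{i<j} w_ij * (1 - z_i*z_j) / 2, which is the
# form that gets mapped onto an Ising Hamiltonian in the following step. Uses w, n and xbest_brute from above.
z = 1 - 2*np.array(xbest_brute)
spin_cost = 0.25*sum(w[i, j]*(1 - z[i]*z[j]) for i in range(n) for j in range(n))
print('cost from spin formulation = ' + str(spin_cost))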
qubitOp, offset = maxcut.get_maxcut_qubitops(w)
algo_input = get_input_instance('EnergyInput')
algo_input.qubit_op = qubitOp
#Making the Hamiltonian in its full form and getting the lowest eigenvalue and eigenvector
algorithm_cfg = {
'name': 'ExactEigensolver',
}
params = {
'problem': {'name': 'ising'},
'algorithm': algorithm_cfg
}
result = run_algorithm(params,algo_input)
x = maxcut.sample_most_likely(result['eigvecs'][0])
print('energy:', result['energy'])
print('maxcut objective:', result['energy'] + offset)
print('solution:', maxcut.get_graph_solution(x))
print('solution objective:', maxcut.maxcut_value(x, w))
colors = ['r' if maxcut.get_graph_solution(x)[i] == 0 else 'b' for i in range(n)]
nx.draw_networkx(G, node_color=colors, node_size=600, alpha = .8, pos=pos)
algorithm_cfg = {
'name': 'VQE',
'operator_mode': 'matrix'
}
optimizer_cfg = {
'name': 'SPSA',
'max_trials': 300
}
var_form_cfg = {
'name': 'RY',
'depth': 5,
'entanglement': 'linear'
}
params = {
'problem': {'name': 'ising', 'random_seed': 10598},
'algorithm': algorithm_cfg,
'optimizer': optimizer_cfg,
'variational_form': var_form_cfg,
'backend': {'name': 'statevector_simulator'}
}
result = run_algorithm(params, algo_input)
x = maxcut.sample_most_likely(result['eigvecs'][0])
print('energy:', result['energy'])
print('time:', result['eval_time'])
print('maxcut objective:', result['energy'] + offset)
print('solution:', maxcut.get_graph_solution(x))
print('solution objective:', maxcut.maxcut_value(x, w))
colors = ['r' if maxcut.get_graph_solution(x)[i] == 0 else 'b' for i in range(n)]
nx.draw_networkx(G, node_color=colors, node_size=600, alpha = .8, pos=pos)
# run quantum algorithm with shots
params['algorithm']['operator_mode'] = 'grouped_paulis'
params['backend']['name'] = 'qasm_simulator'
params['backend']['shots'] = 1024
result = run_algorithm(params, algo_input)
x = maxcut.sample_most_likely(result['eigvecs'][0])
print('energy:', result['energy'])
print('time:', result['eval_time'])
print('maxcut objective:', result['energy'] + offset)
print('solution:', maxcut.get_graph_solution(x))
print('solution objective:', maxcut.maxcut_value(x, w))
plot_histogram(result['eigvecs'][0])
colors = ['r' if maxcut.get_graph_solution(x)[i] == 0 else 'b' for i in range(n)]
nx.draw_networkx(G, node_color=colors, node_size=600, alpha = .8, pos=pos)
# Generating a graph of 3 nodes
n = 3
num_qubits = n ** 2
ins = tsp.random_tsp(n)
G = nx.Graph()
G.add_nodes_from(np.arange(0, n, 1))
colors = ['r' for node in G.nodes()]
pos = {k: v for k, v in enumerate(ins.coord)}
default_axes = plt.axes(frameon=True)
nx.draw_networkx(G, node_color=colors, node_size=600, alpha=.8, ax=default_axes, pos=pos)
print('distance\n', ins.w)
from itertools import permutations
def brute_force_tsp(w, N):
a=list(permutations(range(1,N)))
last_best_distance = 1e10
for i in a:
distance = 0
pre_j = 0
for j in i:
distance = distance + w[j,pre_j]
pre_j = j
distance = distance + w[pre_j,0]
order = (0,) + i
if distance < last_best_distance:
best_order = order
last_best_distance = distance
print('order = ' + str(order) + ' Distance = ' + str(distance))
return last_best_distance, best_order
best_distance, best_order = brute_force_tsp(ins.w, ins.dim)
print('Best order from brute force = ' + str(best_order) + ' with total distance = ' + str(best_distance))
def draw_tsp_solution(G, order, colors, pos):
G2 = G.copy()
n = len(order)
for i in range(n):
j = (i + 1) % n
G2.add_edge(order[i], order[j])
default_axes = plt.axes(frameon=True)
nx.draw_networkx(G2, node_color=colors, node_size=600, alpha=.8, ax=default_axes, pos=pos)
draw_tsp_solution(G, best_order, colors, pos)
qubitOp, offset = tsp.get_tsp_qubitops(ins)
algo_input = get_input_instance('EnergyInput')
algo_input.qubit_op = qubitOp
#Making the Hamiltonian in its full form and getting the lowest eigenvalue and eigenvector
algorithm_cfg = {
'name': 'ExactEigensolver',
}
params = {
'problem': {'name': 'ising'},
'algorithm': algorithm_cfg
}
result = run_algorithm(params,algo_input)
print('energy:', result['energy'])
#print('tsp objective:', result['energy'] + offset)
x = tsp.sample_most_likely(result['eigvecs'][0])
print('feasible:', tsp.tsp_feasible(x))
z = tsp.get_tsp_solution(x)
print('solution:', z)
print('solution objective:', tsp.tsp_value(z, ins.w))
draw_tsp_solution(G, z, colors, pos)
algorithm_cfg = {
'name': 'VQE',
'operator_mode': 'matrix'
}
optimizer_cfg = {
'name': 'SPSA',
'max_trials': 300
}
var_form_cfg = {
'name': 'RY',
'depth': 5,
'entanglement': 'linear'
}
params = {
'problem': {'name': 'ising', 'random_seed': 10598},
'algorithm': algorithm_cfg,
'optimizer': optimizer_cfg,
'variational_form': var_form_cfg,
'backend': {'name': 'statevector_simulator'}
}
result = run_algorithm(params,algo_input)
print('energy:', result['energy'])
print('time:', result['eval_time'])
#print('tsp objective:', result['energy'] + offset)
x = tsp.sample_most_likely(result['eigvecs'][0])
print('feasible:', tsp.tsp_feasible(x))
z = tsp.get_tsp_solution(x)
print('solution:', z)
print('solution objective:', tsp.tsp_value(z, ins.w))
draw_tsp_solution(G, z, colors, pos)
# run quantum algorithm with shots
params['algorithm']['operator_mode'] = 'grouped_paulis'
params['backend']['name'] = 'qasm_simulator'
params['backend']['shots'] = 1024
result = run_algorithm(params,algo_input)
print('energy:', result['energy'])
print('time:', result['eval_time'])
#print('tsp objective:', result['energy'] + offset)
x = tsp.sample_most_likely(result['eigvecs'][0])
print('feasible:', tsp.tsp_feasible(x))
z = tsp.get_tsp_solution(x)
print('solution:', z)
print('solution objective:', tsp.tsp_value(z, ins.w))
plot_histogram(result['eigvecs'][0])
draw_tsp_solution(G, z, colors, pos)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Your connection object is conn
Step4: SELECT query
|
<ASSISTANT_TASK:>
Python Code:
%run '00_database_connectivity_setup.ipynb'
IPython.display.clear_output()
%%execsql
drop table if exists gp_ds_sample_table;
create temp table gp_ds_sample_table
as
(
select
random() as x,
random() as y
from
generate_series(1, 10) x
) distributed randomly;
%%showsql
select
*
from
gp_ds_sample_table;
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Lists
Step2: Dictionaries
Step3: Comprehensions
Step4: Level 1
Step5: Mixed types
Step6: Grouping
Step7: Seaborn
Step8: 2D distributions
Step9: All pairwise combinations
Step10: Seaborn
Step11: Level 2
Step12: Advanced example
Step13: Level 3
Step14: Cython
Step15: Numba
Step17: Level 4
Step18: Interactive data visualization with Bokeh
|
<ASSISTANT_TASK:>
Python Code:
3 * 4
x = [1, 2, 3]
print(x)
x.append(4)
print(x)
measurements = {'height': [1.70, 1.80, 1.50], 'weight': [60, 120, 50]}
measurements
measurements['height']
x = [1, 2, 3, 4]
[i**2 for i in x]
def calc_bmi(weight, height):
return weight / height**2
[calc_bmi(w, h) for w, h in zip(measurements['weight'], measurements['height'])]
import pandas as pd
import numpy as np
s = pd.Series([1,3,5,np.nan,6,8])
s
dates = pd.date_range('20130101', periods=6)
df = pd.DataFrame(np.random.randn(6,4), index=dates, columns=list('ABCD'))
df
df[df.A > 0]
df.mean()
df.mean(axis='columns')
df2 = pd.DataFrame({ 'A' : 1.,
'B' : pd.Timestamp('20130102'),
'C' : pd.Series(1,index=list(range(4)),dtype='float32'),
'D' : np.array([3] * 4,dtype='int32'),
'E' : pd.Categorical(["test","train","test","train"]),
'F' : 'foo' })
df2
df2.dtypes
df = pd.DataFrame({'A' : ['foo', 'bar', 'foo', 'bar',
'foo', 'bar', 'foo', 'foo'],
'B' : ['one', 'one', 'two', 'three',
'two', 'two', 'one', 'three'],
'C' : np.random.randn(8),
'D' : np.random.randn(8)})
df
df.groupby('A').sum()
df.groupby(['A', 'B']).sum()
%matplotlib inline
import seaborn as sns
x = np.random.normal(size=100)
sns.distplot(x);
mean, cov = [0, 1], [(1, .5), (.5, 1)]
data = np.random.multivariate_normal(mean, cov, 200)
df = pd.DataFrame(data, columns=["x", "y"])
df
sns.jointplot(x="x", y="y", data=df, kind="kde");
iris = sns.load_dataset("iris")
sns.pairplot(iris);
tips = sns.load_dataset("tips")
tips.head()
sns.lmplot(x="total_bill", y="tip", hue="smoker", data=tips);
sns.lmplot(x="total_bill", y="tip", col="day", data=tips,
col_wrap=2, size=3);
sns.factorplot(x="time", y="total_bill", hue="smoker",
col="day", data=tips, kind="box", size=4, aspect=.5);
from sklearn import svm
X = [[0, 0], [1, 1]]
y = [0, 1]
clf = svm.SVC()
clf.fit(X, y)
clf.predict([[0, .5]])
from sklearn import datasets
from sklearn.cross_validation import train_test_split
from sklearn.grid_search import GridSearchCV
from sklearn.metrics import confusion_matrix
from sklearn.svm import SVC
digits = datasets.load_digits()
import matplotlib.pyplot as plt
# Display the last digit in the dataset
plt.figure(1, figsize=(3, 3))
plt.imshow(digits.images[-1], cmap=plt.cm.gray_r, interpolation='nearest')
plt.grid('off')
n_samples = len(digits.images)
X = digits.images.reshape((n_samples, -1))
y = digits.target
# Split the dataset in two equal parts
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.5, random_state=0)
# Set the parameters by cross-validation
tuned_parameters = [{'kernel': ['rbf'], 'gamma': [1e-3, 1e-4],
'C': [1, 10, 100, 1000]},
{'kernel': ['linear'], 'C': [1, 10, 100, 1000]}]
clf = GridSearchCV(SVC(C=1), tuned_parameters, cv=5)
clf.fit(X_train, y_train)
print(clf.best_params_)
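# Optional extra (not in the original notebook): mean cross-validated score of the best parameter set.
print(clf.best_score_)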
y_true, y_pred = y_test, clf.predict(X_test)
ax = sns.heatmap(confusion_matrix(y_true, y_pred))
ax.set(xlabel='true label', ylabel='predicted label');
import numpy as np
X = np.random.random((1000, 3))
def pairwise_python(X):
M = X.shape[0]
N = X.shape[1]
D = np.empty((M, M), dtype=np.float)
for i in range(M):
for j in range(M):
d = 0.0
for k in range(N):
tmp = X[i, k] - X[j, k]
d += tmp * tmp
D[i, j] = np.sqrt(d)
return D
%timeit pairwise_python(X)
%load_ext cython
%%cython
import numpy as np
cimport cython
from libc.math cimport sqrt
@cython.boundscheck(False)
@cython.wraparound(False)
def pairwise_cython(double[:, ::1] X):
cdef int M = X.shape[0]
cdef int N = X.shape[1]
cdef double tmp, d
cdef double[:, ::1] D = np.empty((M, M), dtype=np.float64)
for i in range(M):
for j in range(M):
d = 0.0
for k in range(N):
tmp = X[i, k] - X[j, k]
d += tmp * tmp
D[i, j] = sqrt(d)
return np.asarray(D)
%timeit pairwise_cython(X)
from numba.decorators import jit
pairwise_numba = jit(pairwise_python)
# Run once to compile before timing
pairwise_numba(X)
%timeit pairwise_numba(X)
!ls -lahL POIWorld.csv
from dask import dataframe as dd
columns = ["name", "amenity", "Longitude", "Latitude"]
data = dd.read_csv('POIWorld.csv', usecols=columns)
data
with_name = data[data.name.notnull()]
is_starbucks = with_name.name.str.contains('[Ss]tarbucks')
is_dunkin = with_name.name.str.contains('[Dd]unkin')
starbucks = with_name[is_starbucks]
dunkin = with_name[is_dunkin]
from dask.diagnostics import ProgressBar
with ProgressBar():
starbucks_count, dunkin_count = dd.compute(starbucks.name.count(), dunkin.name.count())
starbucks_count, dunkin_count
locs = dd.compute(starbucks.Longitude,
starbucks.Latitude,
dunkin.Longitude,
dunkin.Latitude)
# extract arrays of values from the series:
lon_s, lat_s, lon_d, lat_d = [loc.values for loc in locs]
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
def draw_USA():
    """initialize a basemap centered on the continental USA"""
plt.figure(figsize=(14, 10))
return Basemap(projection='lcc', resolution='l',
llcrnrlon=-119, urcrnrlon=-64,
llcrnrlat=22, urcrnrlat=49,
lat_1=33, lat_2=45, lon_0=-95,
area_thresh=10000)
m = draw_USA()
# Draw map background
m.fillcontinents(color='white', lake_color='#eeeeee')
m.drawstates(color='lightgray')
m.drawcoastlines(color='lightgray')
m.drawcountries(color='lightgray')
m.drawmapboundary(fill_color='#eeeeee')
# Plot the values in Starbucks Green and Dunkin Donuts Orange
style = dict(s=5, marker='o', alpha=0.5, zorder=2)
m.scatter(lon_s, lat_s, latlon=True,
label="Starbucks", color='#00592D', **style)
m.scatter(lon_d, lat_d, latlon=True,
label="Dunkin' Donuts", color='#FC772A', **style)
plt.legend(loc='lower left', frameon=False);
from bokeh.io import output_notebook
from bokeh.resources import CDN
from bokeh.plotting import figure, show
output_notebook(resources=CDN)
from __future__ import print_function
from math import pi
from bokeh.browserlib import view
from bokeh.document import Document
from bokeh.embed import file_html
from bokeh.models.glyphs import Circle, Text
from bokeh.models import (
BasicTicker, ColumnDataSource, Grid, GridPlot, LinearAxis,
DataRange1d, PanTool, Plot, WheelZoomTool
)
from bokeh.resources import INLINE
from bokeh.sampledata.iris import flowers
from bokeh.plotting import show
colormap = {'setosa': 'red', 'versicolor': 'green', 'virginica': 'blue'}
flowers['color'] = flowers['species'].map(lambda x: colormap[x])
source = ColumnDataSource(
data=dict(
petal_length=flowers['petal_length'],
petal_width=flowers['petal_width'],
sepal_length=flowers['sepal_length'],
sepal_width=flowers['sepal_width'],
color=flowers['color']
)
)
text_source = ColumnDataSource(
data=dict(xcenter=[125], ycenter=[135])
)
xdr = DataRange1d()
ydr = DataRange1d()
def make_plot(xname, yname, xax=False, yax=False, text=None):
plot = Plot(
x_range=xdr, y_range=ydr, background_fill="#efe8e2",
border_fill='white', title="", min_border=2, h_symmetry=False, v_symmetry=False,
plot_width=150, plot_height=150)
circle = Circle(x=xname, y=yname, fill_color="color", fill_alpha=0.2, size=4, line_color="color")
r = plot.add_glyph(source, circle)
xdr.renderers.append(r)
ydr.renderers.append(r)
xticker = BasicTicker()
if xax:
xaxis = LinearAxis()
plot.add_layout(xaxis, 'below')
xticker = xaxis.ticker
plot.add_layout(Grid(dimension=0, ticker=xticker))
yticker = BasicTicker()
if yax:
yaxis = LinearAxis()
plot.add_layout(yaxis, 'left')
yticker = yaxis.ticker
plot.add_layout(Grid(dimension=1, ticker=yticker))
plot.add_tools(PanTool(), WheelZoomTool())
if text:
text = " ".join(text.split('_'))
text = Text(
x={'field':'xcenter', 'units':'screen'},
y={'field':'ycenter', 'units':'screen'},
text=[text], angle=pi/4, text_font_style="bold", text_baseline="top",
text_color="#ffaaaa", text_alpha=0.7, text_align="center", text_font_size="28pt"
)
plot.add_glyph(text_source, text)
return plot
xattrs = ["petal_length", "petal_width", "sepal_width", "sepal_length"]
yattrs = list(reversed(xattrs))
plots = []
for y in yattrs:
row = []
for x in xattrs:
xax = (y == yattrs[-1])
yax = (x == xattrs[0])
text = x if (x==y) else None
plot = make_plot(x, y, xax, yax, text)
row.append(plot)
plots.append(row)
grid = GridPlot(children=plots, title="iris_splom")
show(grid)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Next we're going to write a polynomial function that takes an SArray and a maximal degree and returns an SFrame with columns containing the SArray to all the powers up to the maximal degree.
Step2: We can create an empty SFrame using graphlab.SFrame() and then add any columns to it with ex_sframe['column_name'] = value. For example, we create an empty SFrame and set the column 'power_1' to be the first power of tmp (i.e. tmp itself).
Step3: Polynomial_sframe function
Step4: To test your function consider the smaller tmp variable and what you would expect the outcome of the following call
Step5: Visualizing polynomial regression
Step6: As in Week 3, we will use the sqft_living variable. For plotting purposes (connecting the dots), you'll need to sort by the values of sqft_living. For houses with identical square footage, we break the tie by their prices.
Step7: Let's start with a degree 1 polynomial using 'sqft_living' (i.e. a line) to predict 'price' and plot what it looks like.
Step8: NOTE
Step9: Let's unpack that plt.plot() command. The first pair of SArrays we passed is the 1st power of sqft and the actual price; we ask for these to be plotted as dots '.'. The next pair we pass is the 1st power of sqft and the predicted values from the linear model; we ask for these to be plotted as a line '-'.
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
tmp = np.array([1., 2., 3.])
tmp_cubed = tmp**3
print(tmp)
print(tmp_cubed)
ex_dataframe = pd.DataFrame()
ex_dataframe['power_1'] = tmp
print(ex_dataframe)
import graphlab

def polynomial_sframe(feature, degree):
    # assume that degree >= 1
    # initialize the SFrame:
    poly_sframe = graphlab.SFrame()
    # and set poly_sframe['power_1'] equal to the passed feature
    poly_sframe['power_1'] = feature
    # first check if degree > 1
    if degree > 1:
        # then loop over the remaining degrees:
        # range usually starts at 0 and stops at the endpoint-1. We want it to start at 2 and stop at degree
        for power in range(2, degree+1):
            # first we'll give the column a name:
            name = 'power_' + str(power)
            # then assign poly_sframe[name] to the appropriate power of feature
            poly_sframe[name] = feature ** power
    return poly_sframe
print polynomial_sframe(tmp, 3)
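# For reference (worked by hand, not part of the original notebook): the call above should
# produce three columns, power_1 = [1., 2., 3.], power_2 = [1., 4., 9.] and power_3 = [1., 8., 27.].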
sales = graphlab.SFrame('kc_house_data.gl/')
sales = sales.sort(['sqft_living', 'price'])
poly1_data = polynomial_sframe(sales['sqft_living'], 1)
poly1_data['price'] = sales['price'] # add price to the data since it's the target
model1 = graphlab.linear_regression.create(poly1_data, target = 'price', features = ['power_1'], validation_set = None)
#let's take a look at the weights before we plot
model1.get("coefficients")
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(poly1_data['power_1'],poly1_data['price'],'.',
poly1_data['power_1'], model1.predict(poly1_data),'-')
poly2_data = polynomial_sframe(sales['sqft_living'], 2)
my_features = poly2_data.column_names() # get the name of the features
poly2_data['price'] = sales['price'] # add price to the data since it's the target
model2 = graphlab.linear_regression.create(poly2_data, target = 'price', features = my_features, validation_set = None)
model2.get("coefficients")
plt.plot(poly2_data['power_1'],poly2_data['price'],'.',
poly2_data['power_1'], model2.predict(poly2_data),'-')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Raspberry Pi using USB2Dynamixel
Step2: Plots
Step3: Baud rate
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
rdt_dict = {
50: [184, 184, 184, 184, 279, 184, 198, 279, 192, 326],
100: [345, 501, 350, 350, 492, 350, 350, 496, 495, 350],
150: [501, 648, 648, 648, 501, 648, 648, 648, 501, 567],
200: [800, 800, 800, 800, 690, 800, 800, 800, 660, 800],
250: [960, 960, 960, 960, 960, 960, 837, 856, 955, 955]
}
df_gpio = pd.DataFrame(rdt_dict)
rdt_dict = {
50: [190, 190, 185, 187, 246, 190, 190, 190, 190, 332],
100: [480, 492, 492, 346, 346, 346, 495, 450, 487, 421],
150: [510, 650, 650, 506, 645, 496, 496, 645, 525, 503],
200: [800, 660, 660, 800, 800, 800, 800, 742, 800, 800],
250: [965, 955, 955, 965, 816, 965, 965, 955, 955, 965]
}
df_usb2dynamixel = pd.DataFrame(rdt_dict)
fig, (ax1, ax2) = plt.subplots(nrows=2, ncols=1, figsize=(10, 8))
df_gpio.plot(ax=ax1,
kind="box",
color="r",
legend=True,
label="GPIO")
df_usb2dynamixel.plot(ax=ax2,
kind="box",
color="b",
legend=True,
label="USB2Dynamixel");
ax1.grid(True)
ax2.grid(True)
ax1.set_xlabel("Return Delay Time", fontsize=12)
ax1.set_ylabel("Measured Return Delay Time (µs)", fontsize=12)
ax2.set_xlabel("Return Delay Time", fontsize=12)
ax2.set_ylabel("Measured Return Delay Time (µs)", fontsize=12)
ax1.set_title("Return Delay Time (@57600bps)", fontsize=16)
fig, ax1 = plt.subplots(nrows=1, ncols=1, figsize=(10, 5))
df_gpio.mean().plot(ax=ax1,
yerr=df_gpio.std(),
legend=True,
label="GPIO",
color="r",
linewidth=1)
df_usb2dynamixel.mean().plot(ax=ax1,
yerr=df_usb2dynamixel.std(),
legend=True,
label="USB2Dynamixel",
color="b",
linewidth=1);
ax1.set_xlim(left=45, right=255)
ax1.grid(True)
ax1.set_xlabel("Return Delay Time", fontsize=12)
ax1.set_ylabel("Measured Return Delay Time (µs)", fontsize=12)
ax1.set_title("Return Delay Time (@57600bps)", fontsize=16)
br_array = np.array([
[9600, 1039, 1040],
[19200, 519, 520],
[57600, 173, 173],
[115200, 87, 86],
[230400, 1039, 43],
[460800, 1039, 21],
[921600, 1039, 10]
])
# Actual dt = 1 / baud_rate * (8 bits + 1 start bit + 1 stop bit) * 1000000 microsec
actual_dt = 1. / br_array[:,0] * 10. * 1000000.
df = pd.DataFrame(data=np.hstack([br_array[:,1:], actual_dt.reshape([-1, 1])]),
index=br_array[:,0],
columns=["delta time per byte using GPIO (µs)",
"delta time per byte using USB2Dynamixel (µs)",
"actual delta time per byte (µs)"])
df.index.name = "baud rate"
df
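# Quick sanity check (illustrative, not in the original notebook): one UART byte is 10 bits
# (8 data bits + 1 start + 1 stop), so at 57600 bps a byte takes 10 / 57600 s, i.e. about 173.6 us,
# which matches the measured per-byte delta times in the table above.
print(1. / 57600. * 10. * 1000000.)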
fig, ax1 = plt.subplots(nrows=1, ncols=1, figsize=(10, 5))
df.plot(ax=ax1,
color=["red", "blue", "green"],
logx=True,
logy=True,
legend=True);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We specify the check node degree distribution polynomial $\rho(Z)$ by fixing the average check node degree $d_{\mathtt{c},\text{avg}}$ and assuming that the code contains only check nodes with the two degrees $\tilde{d}_{\mathtt{c}} = \lfloor d_{\mathtt{c},\text{avg}} \rfloor$ and $\tilde{d}_{\mathtt{c}}+1$.
Step2: The following function solves the optimization problem that returns the best $\lambda(Z)$ for a given BEC erasure probability $\epsilon$, for an average check node degree $d_{\mathtt{c},\text{avg}}$, and for a maximum variable node degree $d_{\mathtt{v},\max}$. This optimization problem is derived in the lecture; a reconstruction of the resulting linear program is sketched at the end of this step list.
Step3: As an example, we consider the case of optimization carried out in the lecture after 9 iterations, where we have $\epsilon = 0.2949219$ and $d_{\mathtt{c},\text{avg}} = 12.98$ with $d_{\mathtt{v},\max}=16$
Step4: In the following, we provide an interactive widget that allows you to choose the parameters of the optimization yourself and get the best possible $\lambda(Z)$. Additionally, the EXIT chart is plotted to visualize the good fit of the obtained degree distribution.
Step5: Now, we carry out the optimization over a wide range of $d_{\mathtt{c},\text{avg}}$ values for a given $\epsilon$ and find the largest possible rate.
Step6: Run binary search to find best irregular code for a given target rate on the BEC.
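(Sketch, not quoted from the lecture: the linear program below is reconstructed from the find_best_lambda implementation that follows.) For fixed $\epsilon$ and $\rho(Z)$, the variable node degree distribution $\lambda(Z) = \sum_{i \geq 2} \lambda_i Z^{i-1}$ is obtained from
$$
\begin{aligned}
\text{maximize}\quad & \sum_{i=2}^{d_{\mathtt{v},\max}} \frac{\lambda_i}{i} \\
\text{subject to}\quad & \lambda_i \geq 0,\;\; \lambda_1 = 0,\;\; \sum_i \lambda_i = 1, \\
& \epsilon\,\lambda\bigl(1-\rho(1-\xi)\bigr) \leq \xi \quad \text{for } \xi \in \left\{\tfrac{1}{D},\tfrac{2}{D},\ldots,1\right\} \text{ (the code uses } D=500\text{)}, \\
& \lambda_2 \leq \frac{1}{\epsilon\,\rho'(1)} \quad \text{(stability condition)},
\end{aligned}
$$
where maximizing $\sum_i \lambda_i / i = \int_0^1 \lambda(Z)\,\mathrm{d}Z$ maximizes the design rate $r_{\mathrm{d}} = 1 - \int_0^1 \rho(Z)\,\mathrm{d}Z \,/\, \int_0^1 \lambda(Z)\,\mathrm{d}Z$.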
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plot
from ipywidgets import interactive
import ipywidgets as widgets
import math
from pulp import *
%matplotlib inline
# returns rho polynomial (highest exponents first) corresponding to average check node degree c_avg
def c_avg_to_rho(c_avg):
ct = math.floor(c_avg)
r1 = ct*(ct+1-c_avg)/c_avg
r2 = (c_avg - ct*(ct+1-c_avg))/c_avg
rho_poly = np.concatenate(([r2,r1], np.zeros(ct-1)))
return rho_poly
def find_best_lambda(epsilon, v_max, c_avg):
rho = c_avg_to_rho(c_avg)
# quantization of EXIT chart
D = 500
xi_range = np.arange(1.0, D+1, 1)/D
# Linear Programming model, maximize target expression
model = pulp.LpProblem("Finding best lambda problem", pulp.LpMaximize)
# definition of variables, v_max entries \lambda_i that are between 0 and 1 (implicit declaration of constraint 2)
v_lambda = pulp.LpVariable.dicts("lambda", range(v_max),0,1)
# objective function
cv = 1/np.arange(v_max,0,-1)
model += pulp.lpSum(v_lambda[i]*cv[i] for i in range(v_max))
# constraints
# constraint 1, no variable nodes of degree 1
model += v_lambda[v_max-1] == 0
# constraint 3, sum of lambda_i must be 1
model += pulp.lpSum(v_lambda[i] for i in range(v_max))==1
# constraints 4, fixed point condition for all the descrete xi values (a total number of D, for each \xi)
for xi in xi_range:
model += pulp.lpSum(v_lambda[j] * epsilon * (1-np.polyval(rho,1.0-xi))**(v_max-1-j) for j in range(v_max))-xi <= 0
# constraint 5, stability condition
model += v_lambda[v_max-2] <= 1/epsilon/np.polyval(np.polyder(rho),1.0)
model.solve()
if model.status != 1:
r_lambda = []
else:
r_lambda = [v_lambda[i].varValue for i in range(v_max)]
return r_lambda
best_lambda = find_best_lambda(0.2949219, 16, 12.98)
print(np.poly1d(best_lambda, variable='Z'))
def best_lambda_interactive(epsilon, c_avg, v_max):
# get lambda and rho polynomial from optimization and from c_avg, respectively
p_lambda = find_best_lambda(epsilon, v_max, c_avg)
p_rho = c_avg_to_rho(c_avg)
# if optimization successful, compute rate and show plot
if not p_lambda:
print('Optimization infeasible, no solution found')
else:
design_rate = 1 - np.polyval(np.polyint(p_rho),1)/np.polyval(np.polyint(p_lambda),1)
if design_rate <= 0:
print('Optimization feasible, but no code with positive rate found')
else:
print("Lambda polynomial:")
print(np.poly1d(p_lambda, variable='Z'))
print("Design rate r_d = %1.3f" % design_rate)
# Plot EXIT-Chart
print("EXIT Chart:")
plot.figure(3)
x = np.linspace(0, 1, num=100)
y_v = [1 - epsilon*np.polyval(p_lambda, 1-xv) for xv in x]
y_c = [np.polyval(p_rho,xv) for xv in x]
plot.plot(x, y_v, '#7030A0')
plot.plot(y_c, x, '#008000')
plot.axis('equal')
plot.gca().set_aspect('equal', adjustable='box')
plot.xlim(0,1)
plot.ylim(0,1)
plot.xlabel('$I^{[A,V]}$, $I^{[E,C]}$')
plot.ylabel('$I^{[E,V]}$, $I^{[A,C]}$')
plot.grid()
plot.show()
interactive_plot = interactive(best_lambda_interactive, \
epsilon=widgets.FloatSlider(min=0.01,max=1,step=0.001,value=0.5, continuous_update=False, description=r'\(\epsilon\)',layout=widgets.Layout(width='50%')), \
c_avg = widgets.FloatSlider(min=3,max=20,step=0.1,value=4, continuous_update=False, description=r'\(d_{\mathtt{c},\text{avg}}\)'), \
v_max = widgets.IntSlider(min=3, max=20, step=1, value=16, continuous_update=False, description=r'\(d_{\mathtt{v},\max}\)'))
output = interactive_plot.children[-1]
output.layout.height = '400px'
interactive_plot
def find_best_rate(epsilon, v_max, c_max):
c_range = np.linspace(3, c_max, num=100)
rates = np.zeros_like(c_range)
# loop over all c_avg, add progress bar
f = widgets.FloatProgress(min=0, max=np.size(c_range))
display(f)
for index,c_avg in enumerate(c_range):
f.value += 1
p_lambda = find_best_lambda(epsilon, v_max, c_avg)
p_rho = c_avg_to_rho(c_avg)
if p_lambda:
design_rate = 1 - np.polyval(np.polyint(p_rho),1)/np.polyval(np.polyint(p_lambda),1)
if design_rate >= 0:
rates[index] = design_rate
# find largest rate
largest_rate_index = np.argmax(rates)
best_lambda = find_best_lambda(epsilon, v_max, c_range[largest_rate_index])
print("Found best code of rate %1.3f for average check node degree of %1.2f" % (rates[largest_rate_index], c_range[largest_rate_index]))
print("Corresponding lambda polynomial")
print(np.poly1d(best_lambda, variable='Z'))
# Plot curve with all obtained results
plot.figure(4, figsize=(10,3))
plot.plot(c_range, rates, 'b')
plot.plot(c_range[largest_rate_index], rates[largest_rate_index], 'bs')
plot.xlim(3, c_max)
plot.ylim(0, (1.1*(1-epsilon)))
plot.xlabel('$d_{c,avg}$')
plot.ylabel('design rate $r_d$')
plot.grid()
plot.show()
return rates[largest_rate_index]
interactive_optim = interactive(find_best_rate, \
epsilon=widgets.FloatSlider(min=0.01,max=1,step=0.001,value=0.5, continuous_update=False, description=r'\(\epsilon\)',layout=widgets.Layout(width='50%')), \
v_max = widgets.IntSlider(min=3, max=20, step=1, value=16, continuous_update=False, description=r'\(d_{\mathtt{v},\max}\)'), \
c_max = widgets.IntSlider(min=3, max=40, step=1, value=22, continuous_update=False, description=r'\(d_{\mathtt{c},\max}\)'))
output = interactive_optim.children[-1]
output.layout.height = '400px'
interactive_optim
target_rate = 0.7
dv_max = 16
dc_max = 22
T_Delta = 0.001
epsilon = 0.5
Delta_epsilon = 0.5
while Delta_epsilon >= T_Delta:
print('Running optimization for epsilon = %1.5f' % epsilon)
rate = find_best_rate(epsilon, dv_max, dc_max)
if rate > target_rate:
epsilon = epsilon + Delta_epsilon / 2
else:
epsilon = epsilon - Delta_epsilon / 2
Delta_epsilon = Delta_epsilon / 2
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First let's load our data. In the VoxforgeDataPrep notebook, we created two arrays - inputs and outputs. The input has the dimensions (num_samples, num_features) and the output is simply a 1D vector of ints of length (num_samples). In this step, we split the training data into actual training (90%) and dev (10%) and merge that with the test data. Finally we save the indices for all the sets (instead of the actual arrays).
Step2: Next we define some constants for our program. Input and output dimensions can be inferred from the data, but the hidden layer size has to be defined manually.
Step3: Model definition
Step4: After defining the model and all its parameters, we can compile it. This literally means compiling, because the model is converted into C++ code in the background and compiled with lots of optimizations to work as efficiently as possible. The process can take a while, but is worth the added speed in training.
Step5: We can also try and visualize the model using the builtin Dot painter
Step6: Finally, we can start training the model. We provide the training function both training and validation data and define a few parameters
Step7: The training method returns an object that contains the trained model parameters and the training history
Step8: You can get better graphs and more data if you overload the training callback method, which will provide you with the model parameters after each epoch during training.
Step9: One other way to look at this is to check where the errors occur by looking at what's known as the confusion matrix. The confusion matrix counts the predicted outputs with respect to how they should have been predicted. All the values on the diagonal (so where the predicted class is equal to the reference) are correct results. Any values outside of the diagonal are the errors, or confusions of one class with another. For example, you can see that 'g' is confused with 'k' (same place of articulation, but different voicing), 'r' with 'er' (same thing, but the latter is a diphone), 't' with 'ch' (again the same place of articulation, but slightly different pronunciation) and so on...
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from keras.models import Sequential
from keras.layers.core import Dense, Activation
from keras.optimizers import SGD, Adadelta
from keras.callbacks import RemoteMonitor
import sys
sys.path.append('../python')
from data import Corpus
with Corpus('../data/mfcc_train_small.hdf5',load_normalized=True,merge_utts=True) as corp:
train,dev=corp.split(0.9)
test=Corpus('../data/mfcc_test.hdf5',load_normalized=True,merge_utts=True)
tr_in,tr_out_dec=train.get()
dev_in,dev_out_dec=dev.get()
tst_in,tst_out_dec=test.get()
input_dim=tr_in.shape[1]
output_dim=np.max(tr_out_dec)+1
hidden_num=256
batch_size=256
epoch_num=100
def dec2onehot(dec):
num=dec.shape[0]
ret=np.zeros((num,output_dim))
ret[range(0,num),dec]=1
return ret
tr_out=dec2onehot(tr_out_dec)
dev_out=dec2onehot(dev_out_dec)
tst_out=dec2onehot(tst_out_dec)
print 'Samples num: {}'.format(tr_in.shape[0]+dev_in.shape[0]+tst_in.shape[0])
print ' of which: {} in train, {} in dev and {} in test'.format(tr_in.shape[0],dev_in.shape[0],tst_in.shape[0])
print 'Input size: {}'.format(input_dim)
print 'Output size (number of classes): {}'.format(output_dim)
model = Sequential()
model.add(Dense(input_dim=input_dim,output_dim=hidden_num))
model.add(Activation('sigmoid'))
model.add(Dense(output_dim=output_dim))
model.add(Activation('softmax'))
#optimizer = SGD(lr=0.01, momentum=0.9, nesterov=True)
optimizer= Adadelta()
loss='categorical_crossentropy'
model.compile(loss=loss, optimizer=optimizer)
print model.summary()
from keras.utils import visualize_util
from IPython.display import SVG
SVG(visualize_util.to_graph(model,show_shape=True).create(prog='dot', format='svg'))
val=(dev_in,dev_out)
hist=model.fit(tr_in, tr_out, shuffle=True, batch_size=batch_size, nb_epoch=epoch_num, verbose=0, validation_data=val)
import matplotlib.pyplot as P
%matplotlib inline
P.plot(hist.history['loss'])
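# Sketch for Step 8 (illustrative, not from the original notebook): a minimal custom callback
# that records the loss and a snapshot of the weights after every epoch. Pass an instance via
# the callbacks=[...] argument of model.fit to collect more data than the default history object.
from keras.callbacks import Callback

class EpochLogger(Callback):
    def on_train_begin(self, logs={}):
        self.losses = []
        self.weights = []
    def on_epoch_end(self, epoch, logs={}):
        self.losses.append(logs.get('loss'))
        self.weights.append(self.model.get_weights())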
res=model.evaluate(tst_in,tst_out,batch_size=batch_size,show_accuracy=True,verbose=0)
print 'Loss: {}'.format(res[0])
print 'Accuracy: {:%}'.format(res[1])
out = model.predict_classes(tst_in,batch_size=256,verbose=0)
confusion=np.zeros((output_dim,output_dim))
for s in range(len(out)):
confusion[out[s],tst_out_dec[s]]+=1
#normalize by class - because some classes occur much more often than others
for c in range(output_dim):
confusion[c,:]/=np.sum(confusion[c,:])
with open('../data/phones.list') as f:
ph=f.read().splitlines()
P.figure(figsize=(15,15))
P.pcolormesh(confusion,cmap=P.cm.gray)
P.xticks(np.arange(0,output_dim)+0.5)
P.yticks(np.arange(0,output_dim)+0.5)
ax=P.axes()
ax.set_xticklabels(ph)
ax.set_yticklabels(ph)
print ''
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Chemistry Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 1.8. Coupling With Chemical Reactivity
Step12: 2. Key Properties --> Software Properties
Step13: 2.2. Code Version
Step14: 2.3. Code Languages
Step15: 3. Key Properties --> Timestep Framework
Step16: 3.2. Split Operator Advection Timestep
Step17: 3.3. Split Operator Physical Timestep
Step18: 3.4. Split Operator Chemistry Timestep
Step19: 3.5. Split Operator Alternate Order
Step20: 3.6. Integrated Timestep
Step21: 3.7. Integrated Scheme Type
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
Step23: 4.2. Convection
Step24: 4.3. Precipitation
Step25: 4.4. Emissions
Step26: 4.5. Deposition
Step27: 4.6. Gas Phase Chemistry
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Step30: 4.9. Photo Chemistry
Step31: 4.10. Aerosols
Step32: 5. Key Properties --> Tuning Applied
Step33: 5.2. Global Mean Metrics Used
Step34: 5.3. Regional Metrics Used
Step35: 5.4. Trend Metrics Used
Step36: 6. Grid
Step37: 6.2. Matches Atmosphere Grid
Step38: 7. Grid --> Resolution
Step39: 7.2. Canonical Horizontal Resolution
Step40: 7.3. Number Of Horizontal Gridpoints
Step41: 7.4. Number Of Vertical Levels
Step42: 7.5. Is Adaptive Grid
Step43: 8. Transport
Step44: 8.2. Use Atmospheric Transport
Step45: 8.3. Transport Details
Step46: 9. Emissions Concentrations
Step47: 10. Emissions Concentrations --> Surface Emissions
Step48: 10.2. Method
Step49: 10.3. Prescribed Climatology Emitted Species
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Step51: 10.5. Interactive Emitted Species
Step52: 10.6. Other Emitted Species
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
Step54: 11.2. Method
Step55: 11.3. Prescribed Climatology Emitted Species
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Step57: 11.5. Interactive Emitted Species
Step58: 11.6. Other Emitted Species
Step59: 12. Emissions Concentrations --> Concentrations
Step60: 12.2. Prescribed Upper Boundary
Step61: 13. Gas Phase Chemistry
Step62: 13.2. Species
Step63: 13.3. Number Of Bimolecular Reactions
Step64: 13.4. Number Of Termolecular Reactions
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Step67: 13.7. Number Of Advected Species
Step68: 13.8. Number Of Steady State Species
Step69: 13.9. Interactive Dry Deposition
Step70: 13.10. Wet Deposition
Step71: 13.11. Wet Oxidation
Step72: 14. Stratospheric Heterogeneous Chemistry
Step73: 14.2. Gas Phase Species
Step74: 14.3. Aerosol Species
Step75: 14.4. Number Of Steady State Species
Step76: 14.5. Sedimentation
Step77: 14.6. Coagulation
Step78: 15. Tropospheric Heterogeneous Chemistry
Step79: 15.2. Gas Phase Species
Step80: 15.3. Aerosol Species
Step81: 15.4. Number Of Steady State Species
Step82: 15.5. Interactive Dry Deposition
Step83: 15.6. Coagulation
Step84: 16. Photo Chemistry
Step85: 16.2. Number Of Reactions
Step86: 17. Photo Chemistry --> Photolysis
Step87: 17.2. Environmental Conditions
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cccma', 'sandbox-3', 'atmoschem')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Sample code and example runs for the ThinkStats2 Chapter 3 exercises
Step2: Exercise1
Step3: Exercise2
Step4: Check that the results agree with the corresponding methods built into the thinkstats2 module
Step5: Exercise3
Step6: Exercise4
|
<ASSISTANT_TASK:>
Python Code:
#!/usr/bin/python
#-*- encoding: utf-8 -*-
"""
Sample Codes for ThinkStats2 - Chapter3
Copyright 2015 @myuuuuun
URL: https://github.com/myuuuuun/ThinkStats2-Notebook
License: GNU GPLv3 http://www.gnu.org/licenses/gpl.html
"""
%matplotlib inline
from __future__ import division, print_function
import sys
sys.path.append('./code')
sys.path.append('../')
import pandas as pd
import nsfg
import relay
import custom_functions as cf
import sys
import math
import numpy as np
import thinkstats2
import thinkplot
# Compute the actual distribution of the number of children per household
def Pmf(data):
pmf = thinkstats2.Pmf(data, label='actual pmf')
return pmf
# Compute the size-biased distribution you get by asking a randomly chosen child how many children are in their household
def BiasedPmf(data):
pmf = Pmf(data)
new_pmf = pmf.Copy(label='biased pmf')
for x, p in pmf.Items():
new_pmf.Mult(x, x)
new_pmf.Normalize()
return new_pmf
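# The inverse operation (a sketch, not required by the exercise): dividing each
# probability by its value undoes the size bias; zero values are skipped since
# a size-biased sample can never observe them.
def UnbiasPmf(pmf, label='unbiased pmf'):
    new_pmf = pmf.Copy(label=label)
    for x, p in pmf.Items():
        if x > 0:
            new_pmf.Mult(x, 1.0 / x)
    new_pmf.Normalize()
    return new_pmf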
# Given a pmf, return its mean
def PmfMean(pmf):
pmf.Normalize()
average = sum([prob * value for value, prob in pmf.Items()])
return average
# Compare the two distributions
df = cf.ReadFemResp()
numkdhh = df.numkdhh
actual_pmf = Pmf(numkdhh)
biased_pmf = BiasedPmf(numkdhh)
thinkplot.PrePlot(2)
thinkplot.Pmfs([actual_pmf, biased_pmf])
thinkplot.Show(xlabel='class size', ylabel='PMF')
# Compare the means
print("Actual average: ", PmfMean(actual_pmf))
print("Biased average: ", PmfMean(biased_pmf))
# Given a pmf, return its mean
def PmfMean(pmf):
pmf.Normalize()
average = sum([prob * value for value, prob in pmf.Items()])
return average
# Given a pmf, return its variance
def PmfVar(pmf):
pmf.Normalize()
average = PmfMean(pmf)
    # This version is inefficient:
    #variance = sum([prob * pow(value - average, 2) for value, prob in pmf.Items()])
    # Better: use Var(x) = E[x^2] - (E[x])^2
    variance = sum([prob * pow(value, 2) for value, prob in pmf.Items()]) - pow(average, 2)
return variance
df = cf.ReadFemResp()
numkdhh = df.numkdhh
pmf = Pmf(numkdhh)
print("Average(by my func): ", PmfMean(pmf))
print("Average(by method): ", pmf.Mean())
print("Variance(by my func): ", PmfVar(pmf))
print("Variance(by method): ", pmf.Var())
def ex3():
df = nsfg.ReadFemPreg()
birthord = df['birthord']
prglngth = df['prglngth']
    # Convert to a dict of the form {caseid: [index, index, ...]}
pregmap = nsfg.MakePregMap(df)
weeks_first = []
weeks_others = []
for caseid, pregs in pregmap.items():
        # Excluding cases where birthord is nan, build a dict of {birthord: index}
live_pregs = {int(birthord.loc[preg]): preg for preg in pregs if not math.isnan(birthord.loc[preg])}
if len(live_pregs) > 1:
for order, preg_index in live_pregs.items():
if order == 1:
weeks_first.append(prglngth.loc[preg_index])
else:
weeks_others.append(prglngth.loc[preg_index])
return weeks_first, weeks_others
weeks_first, weeks_others = ex3()
first = sum(weeks_first) / len(weeks_first)
others = sum(weeks_others) / len(weeks_others)
print("1人目の妊娠期間の平均は: ", first, "weeks")
print("他の妊娠期間の平均は: ", others, "weeks")
print("Cohenのdは: ", cf.CohenEffectSize(np.array(weeks_first), np.array(weeks_others)))
def ObservedPmf(pmf, myspeed):
new_pmf = pmf.Copy(label='observed pmf')
average = pmf.Mean()
print(average)
for speed, prob in pmf.Items():
new_pmf.Mult(speed, pow(myspeed - speed, 2))
new_pmf.Normalize()
return new_pmf
def ex4():
pmf = relay.pmf()
observed = ObservedPmf(pmf, 7.5)
thinkplot.PrePlot(2)
thinkplot.Pmfs([pmf, observed])
thinkplot.Show(title='PMF of running speed',
xlabel='speed (mph)',
ylabel='probability')
ex4()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Linear regression algorithm in TensorFlow
Step2: Linear regression with degree-N polynomials
Step3: Regularization
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
# Return 101 evenly spaced numbers in the interval [-1, 1]
x_train = np.linspace(-1, 1, 101)
# Generate pseudo-random targets by multiplying x_train by 2 and adding noise
# to each element (a same-shaped matrix of standard normal random numbers)
y_train = 2 * x_train + np.random.randn(*x_train.shape) * 0.33
print(np.random.randn(*x_train.shape))
plt.scatter(x_train, y_train)
plt.show()
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
learning_rate = 0.01
training_epochs = 100
x_train = np.linspace(-1,1,101)
y_train = 2 * x_train + np.random.randn(*x_train.shape) * 0.33
X = tf.placeholder("float")
Y = tf.placeholder("float")
def model(X,w):
return tf.multiply(X,w)
w = tf.Variable(0.0, name="weights")
y_model = model(X,w)
cost = tf.square(Y-y_model)
train_op = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)
for epoch in range(training_epochs):
for (x,y) in zip(x_train, y_train):
sess.run(train_op, feed_dict={X:x, Y:y})
w_val = sess.run(w)
sess.close()
plt.scatter(x_train, y_train)
y_learned = x_train*w_val
plt.plot(x_train, y_learned, 'r')
plt.show()
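# Cross-check (illustrative): for the no-intercept model y = w*x, the
# closed-form least-squares slope should be close to the learned value.
w_closed_form = np.dot(x_train, y_train) / np.dot(x_train, x_train)
print('closed-form w:', w_closed_form, ' learned w:', w_val)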
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
learning_rate = 0.01
training_epochs = 40
trX = np.linspace(-1, 1, 101)
num_coeffs = 6
trY_coeffs = [1, 2, 3, 4, 5, 6]
trY = 0
# Build pseudo-random polynomial data to test the algorithm
for i in range(num_coeffs):
trY += trY_coeffs[i] * np.power(trX, i)
trY += np.random.randn(*trX.shape) * 1.5
plt.scatter(trX, trY)
plt.show()
# Build the TensorFlow graph
X = tf.placeholder("float")
Y = tf.placeholder("float")
def model(X, w):
terms = []
for i in range(num_coeffs):
term = tf.multiply(w[i], tf.pow(X, i))
terms.append(term)
return tf.add_n(terms)
w = tf.Variable([0.] * num_coeffs, name="parameters")
y_model = model(X, w)
cost = (tf.pow(Y-y_model, 2))
train_op = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
# Run the algorithm in TensorFlow
sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)
for epoch in range(training_epochs):
for (x, y) in zip(trX, trY):
sess.run(train_op, feed_dict={X: x, Y: y})
w_val = sess.run(w)
print(w_val)
sess.close()
# Plot the fitted model
plt.scatter(trX, trY)
trY2 = 0
for i in range(num_coeffs):
trY2 += w_val[i] * np.power(trX, i)
plt.plot(trX, trY2, 'r')
plt.show()
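# Cross-check (illustrative): NumPy's least-squares polynomial fit should give
# coefficients close to those learned above (np.polyfit returns the highest
# degree first, so reverse it to match w_val's ordering).
np_coeffs = np.polyfit(trX, trY, num_coeffs - 1)[::-1]
print('polyfit coefficients:', np_coeffs)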
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
def split_dataset(x_dataset, y_dataset, ratio):
arr = np.arange(x_dataset.size)
np.random.shuffle(arr)
num_train = int(ratio* x_dataset.size)
x_train = x_dataset[arr[0:num_train]]
y_train = y_dataset[arr[0:num_train]]
x_test = x_dataset[arr[num_train:x_dataset.size]]
y_test = y_dataset[arr[num_train:x_dataset.size]]
return x_train, x_test, y_train, y_test
learning_rate = 0.001
training_epochs = 1000
reg_lambda = 0.
x_dataset = np.linspace(-1, 1, 100)
num_coeffs = 9
y_dataset_params = [0.] * num_coeffs
y_dataset_params[2] = 1
y_dataset = 0
for i in range(num_coeffs):
y_dataset += y_dataset_params[i] * np.power(x_dataset, i)
y_dataset += np.random.randn(*x_dataset.shape) * 0.3
(x_train, x_test, y_train, y_test) = split_dataset(x_dataset, y_dataset, 0.7)
X = tf.placeholder("float")
Y = tf.placeholder("float")
# Feed the regularization strength at run time: a plain Python float would be
# frozen into the graph when `cost` is built below, so the lambda sweep at the
# bottom of this cell would otherwise have no effect.
reg_param = tf.placeholder(tf.float32)
def model(X, w):
terms = []
for i in range(num_coeffs):
term = tf.multiply(w[i], tf.pow(X,i))
terms.append(term)
return tf.add_n(terms)
w = tf.Variable([0.] * num_coeffs, name="parameters")
y_model = model(X, w)
cost = tf.div(tf.add(tf.reduce_sum(tf.square(Y-y_model)),
              tf.multiply(reg_param, tf.reduce_sum(tf.square(w)))),
2*x_train.size)
train_op = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)
i, stop_iters = 0, 15
for reg_lambda in np.linspace(0, 1, 100):
    i += 1
    for epoch in range(training_epochs):
        sess.run(train_op, feed_dict={X: x_train, Y: y_train, reg_param: reg_lambda})
    final_cost = sess.run(cost, feed_dict={X: x_test, Y: y_test, reg_param: reg_lambda})
    print('reg lambda', reg_lambda)
    print('final cost', final_cost)
    if i > stop_iters:
        break
sess.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Initialize weights
Step2: First convolutional layer
Step3: Second convolutional layer
Step4: First fully connected layer
Step5: Second fully connected layer
Step6: Output layer
Step7: Use in_top_k to report top-k accuracy; top 1 is the default, and top 5 is also commonly used.
Step8: Start the thread queues required by cifar10_input, used mainly for image data augmentation; a total of 16 threads process the images here.
Step9: Before each computation, fetch a batch_size-sized batch of training images and labels (X_train, y_train), then feed it to train_op and loss to train on those samples. Diagnostics are printed every 10 iterations.
|
<ASSISTANT_TASK:>
Python Code:
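# Imports assumed by the cells below (the original notebook defined these in
# earlier cells); cifar10_input comes from the TensorFlow models/tutorials repo.
import os
import math
import time
import numpy as np
import tensorflow as tf
import cifar10_input

# Sketch of the weight helper referenced in Step1 but missing from this excerpt:
# a truncated-normal variable with an optional L2 weight-loss term collected
# into the 'losses' collection (consumed later by the loss() function).
def variable_with_weight_loss(shape, stddev, lambda_value):
    var = tf.Variable(tf.truncated_normal(shape, stddev=stddev))
    if lambda_value is not None and lambda_value != 0:
        weight_loss = tf.multiply(tf.nn.l2_loss(var), lambda_value, name='weight_loss')
        tf.add_to_collection('losses', weight_loss)
    return var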
max_steps = 3000
batch_size = 128
data_dir = 'data/cifar10/cifar-10-batches-bin/'
model_dir = 'model/_cifar10_v2/'
X_train, y_train = cifar10_input.distorted_inputs(data_dir, batch_size)
X_test, y_test = cifar10_input.inputs(eval_data=True, data_dir=data_dir, batch_size=batch_size)
image_holder = tf.placeholder(tf.float32, [batch_size, 24, 24, 3])
label_holder = tf.placeholder(tf.int32, [batch_size])
weight1 = variable_with_weight_loss([5, 5, 3, 64], stddev=0.05, lambda_value=0)
kernel1 = tf.nn.conv2d(image_holder, weight1, [1, 1, 1, 1], padding='SAME')
bias1 = tf.Variable(tf.constant(0.0, shape=[64]))
conv1 = tf.nn.relu(tf.nn.bias_add(kernel1, bias1))
pool1 = tf.nn.max_pool(conv1, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1], padding='SAME')
norm1 = tf.nn.lrn(pool1, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75)
weight2 = variable_with_weight_loss(shape=[5, 5, 64, 64], stddev=5e-2, lambda_value=0.0)
kernel2 = tf.nn.conv2d(norm1, weight2, strides=[1, 1, 1, 1], padding='SAME')
bias2 = tf.Variable(tf.constant(0.1, shape=[64]))
conv2 = tf.nn.relu(tf.nn.bias_add(kernel2, bias2))
norm2 = tf.nn.lrn(conv2, 4, bias=1.0, alpha=0.001/9.0, beta=0.75)
pool2 = tf.nn.max_pool(norm2, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1], padding='SAME')
flattern = tf.reshape(pool2, [batch_size, -1])
dim = flattern.get_shape()[1].value
weight3 = variable_with_weight_loss(shape=[dim, 384], stddev=0.04, lambda_value=0.04)
bias3 = tf.Variable(tf.constant(0.1, shape=[384]))
local3 = tf.nn.relu(tf.matmul(flattern, weight3) + bias3)
weight4 = variable_with_weight_loss(shape=[384, 192], stddev=0.04, lambda_value=0.04)
bias4 = tf.Variable(tf.constant(0.1, shape=[192]))
local4 = tf.nn.relu(tf.matmul(local3, weight4) + bias4)
weight5 = variable_with_weight_loss(shape=[192, 10], stddev=1/192.0, lambda_value=0.0)
bias5 = tf.Variable(tf.constant(0.0, shape=[10]))
logits = tf.add(tf.matmul(local4, weight5), bias5)
def loss(logits, labels):
labels = tf.cast(labels, tf.int64)
cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(
logits=logits, labels=labels,
name = 'cross_entropy_per_example'
)
cross_entropy_mean = tf.reduce_mean(cross_entropy, name='cross_entropy')
tf.add_to_collection('losses', cross_entropy_mean)
return tf.add_n(tf.get_collection('losses'), name='total_loss')
loss = loss(logits, label_holder)
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)
top_k_op = tf.nn.in_top_k(logits, label_holder, 1)
sess = tf.InteractiveSession()
saver = tf.train.Saver()
tf.global_variables_initializer().run()
tf.train.start_queue_runners()
for step in range(max_steps):
start_time = time.time()
image_batch, label_batch = sess.run([X_train, y_train])
_, loss_value = sess.run([train_op, loss],
feed_dict={image_holder: image_batch, label_holder: label_batch})
duration = time.time() - start_time
if step % 10 == 0:
examples_per_sec = batch_size / duration
sec_this_batch = float(duration)
format_str = ('step %d, loss = %.2f (%.1f examples/sec; %.3f sec/batch)')
print(format_str % (step, loss_value, examples_per_sec, sec_this_batch))
saver.save(sess, save_path=os.path.join(model_dir, 'model.chpt'), global_step=max_steps)
num_examples = 10000
num_iter = int(math.ceil(num_examples / batch_size))
true_count = 0
total_sample_count = num_iter * batch_size
step = 0
while step < num_iter:
image_batch, label_batch = sess.run([X_test, y_test])
predictions = sess.run([top_k_op],
feed_dict={image_holder: image_batch, label_holder: label_batch})
true_count += np.sum(predictions)
step += 1
precision = true_count / total_sample_count
print("Precision @ 1 = %.3f" % precision)
sess.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Build LSI Model
Step2: LsiModel parameters
Step3: Build Word2Vec Model
Step4: Build Doc2Vec Model
Step5: Build Doc2Vec Model from 2013 USPTO Patents
|
<ASSISTANT_TASK:>
Python Code:
import re, json, os, nltk, string, gensim, bz2
from gensim import corpora, models, similarities, utils
from nltk.corpus import stopwords
from os import listdir
from datetime import datetime as dt
import numpy as np
import codecs
import sys
stdin, stdout, stderr = sys.stdin, sys.stdout, sys.stderr
reload(sys)
sys.stdin, sys.stdout, sys.stderr = stdin, stdout, stderr
sys.setdefaultencoding('utf-8')
import logging
fmtstr = '%(asctime)s [%(levelname)s][%(name)s] %(message)s'
datefmtstr = '%Y/%m/%d %H:%M:%S'
log_fn = str(dt.now().date()) + '.txt'
logger = logging.getLogger()
if len(logger.handlers) >= 1:
    logger.removeHandler(logger.handlers[0])
logger.addHandler(logging.FileHandler(log_fn))
logger.handlers[0].setFormatter(logging.Formatter(fmtstr, datefmtstr))
else:
logging.basicConfig(filename=log_fn, format=fmtstr,
datefmt=datefmtstr, level=logging.NOTSET)
stop_words = set(stopwords.words('english'))
def docs_out(line):
j = json.loads(line)
tmp = j.get('brief') + j.get('claim') + j.get('description')
tmp = re.sub('([,?!:;%$&*#~\<\>=+/"(){}\[\]\'])',' ',tmp)
tmp = tmp.replace(u"\u2018", " ").replace(u"\u2019", " ").replace(u"\u201c"," ").replace(u"\u201d", " ")
tmp = tmp.replace(u"\u2022", " ").replace(u"\u2013", " ").replace(u"\u2014", " ").replace(u"\u2026", " ")
tmp = tmp.replace(u"\u20ac", " ").replace(u"\u201a", " ").replace(u"\u201e", " ").replace(u"\u2020", " ")
tmp = tmp.replace(u"\u2021", " ").replace(u"\u02C6", " ").replace(u"\u2030", " ").replace(u"\u2039", " ")
tmp = tmp.replace(u"\u02dc", " ").replace(u"\u203a", " ").replace(u"\ufffe", " ").replace(u"\u00b0", " ")
tmp = tmp.replace(u"\u00b1", " ").replace(u"\u0020", " ").replace(u"\u00a0", " ").replace(u"\u1680", " ")
tmp = tmp.replace(u"\u2000", " ").replace(u"\u2001", " ").replace(u"\u2002", " ").replace(u"\u2003", " ")
tmp = tmp.replace(u"\u2004", " ").replace(u"\u2005", " ").replace(u"\u2006", " ").replace(u"\u2007", " ")
tmp = tmp.replace(u"\u2008", " ").replace(u"\u2009", " ").replace(u"\u200a", " ").replace(u"\u202f", " ")
tmp = tmp.replace(u"\u205f", " ").replace(u"\u3000", " ").replace(u"\u20ab", " ").replace(u"\u201b", " ")
tmp = tmp.replace(u"\u201f", " ").replace(u"\u2e02", " ").replace(u"\u2e04", " ").replace(u"\u2e09", " ")
tmp = tmp.replace(u"\u2e0c", " ").replace(u"\u2e1c", " ").replace(u"\u2e20", " ").replace(u"\u00bb", " ")
tmp = tmp.replace(u"\u2e03", " ").replace(u"\u2e05", " ").replace(u"\u2e0a", " ").replace(u"\u2e0d", " ")
tmp = tmp.replace(u"\u2e1d", " ").replace(u"\u2e21", " ").replace(u"\u2032", " ").replace(u"\u2031", " ")
tmp = tmp.replace(u"\u2033", " ").replace(u"\u2034", " ").replace(u"\u2035", " ").replace(u"\u2036", " ")
tmp = tmp.replace(u"\u2037", " ").replace(u"\u2038", " ")
tmp = re.sub('[.] ',' ',tmp)
return tmp, j.get('patentNumber')
documents = []
f = codecs.open('/share/USPatentData/tokenized_appDate_2013/2013USPTOPatents_by_skip_1.txt.tokenized','r', 'UTF-8')
for line in f:
documents.append(''.join(docs_out(line)[0]) + '\n')
dictionary = corpora.Dictionary([doc.split() for doc in documents])
stop_ids = [dictionary.token2id[stopword] for stopword in stop_words
if stopword in dictionary.token2id]
once_ids = [tokenid for tokenid, docfreq in dictionary.dfs.iteritems() if docfreq <= 1]
dictionary.filter_tokens(stop_ids + once_ids)
dictionary.compactify()
#dictionary.save('USPTO_2013.dict')
corpus = [dictionary.doc2bow(doc.split()) for doc in documents]
model_tfidf = models.TfidfModel(corpus)
corpus_tfidf = model_tfidf[corpus]
model_lsi = models.LsiModel(corpus_tfidf, id2word=dictionary, num_topics=200)
corpus_lsi = model_lsi[corpus_tfidf]
# One way to compute V; its rows can be used as document vectors
docvec_lsi = gensim.matutils.corpus2dense(corpus_lsi, len(model_lsi.projection.s)).T / model_lsi.projection.s
# Word vectors: use the column vectors of U directly
wordsim_lsi = similarities.MatrixSimilarity(model_lsi.projection.u, num_features=model_lsi.projection.u.shape[1])
# Second version: use U*S as the word vectors
wordsim_lsi2 = similarities.MatrixSimilarity(model_lsi.projection.u * model_lsi.projection.s,
num_features=model_lsi.projection.u.shape[1])
def lsi_query(query, use_ver2=False):
qvec = model_lsi[model_tfidf[dictionary.doc2bow(query.split())]]
if use_ver2:
s = wordsim_lsi2[qvec]
else:
s = wordsim_lsi[qvec]
return [dictionary[i] for i in s.argsort()[-10:]]
print lsi_query('energy')
print lsi_query('energy', True)
all_text = [doc.split() for doc in documents]
model_w2v = models.Word2Vec(size=200, sg=1)
%timeit model_w2v.build_vocab(all_text)
%timeit model_w2v.train(all_text)
model_w2v.most_similar_cosmul(['deep','learning'])
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
class PatentDocGenerator(object):
def __init__(self, filename):
self.filename = filename
def __iter__(self):
f = codecs.open(self.filename, 'r', 'UTF-8')
for line in f:
text, appnum = docs_out(line)
yield TaggedDocument(text.split(), appnum.split())
doc = PatentDocGenerator('/share/USPatentData/tokenized_appDate_2013/2013USPTOPatents_by_skip_1.txt.tokenized')
%timeit model_d2v = Doc2Vec(doc, size=200, window=8, sample=1e-5, hs=0, negative=5)
doc = PatentDocGenerator('/share/USPatentData/tokenized_appDate_2013/2013USPTOPatents_by_skip_1.txt.tokenized')
model_d2v = Doc2Vec(doc, size=200, window=8, sample=1e-5, hs=0, negative=5)
model_d2v.docvecs.most_similar(['20140187118'])
m = Doc2Vec(size=200, window=8, sample=1e-5, hs=0, negative=5)
m.build_vocab(doc)
m.train(doc)
m.docvecs.most_similar(['20140187118'])
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
class PatentDocGenerator(object):
def __init__(self, filename):
self.filename = filename
def __iter__(self):
f = codecs.open(self.filename, 'r', 'UTF-8')
for line in f:
text, appnum = docs_out(line)
yield TaggedDocument(text.split(), appnum.split())
model_d2v = Doc2Vec(size=200, window=8, sample=1e-5, hs=0, negative=5)
root = '/share/USPatentData/tokenized_appDate_2013/'
for fn in sorted(listdir(root)):
doc = PatentDocGenerator(os.path.join(root, fn))
start = dt.now()
model_d2v.build_vocab(doc)
model_d2v.train(doc)
logging.info('{} training time: {}'.format(fn, str(dt.now() - start)))
model_d2v.save("doc2vec_uspto_2013.model")
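# Example usage (illustrative; the query text is hypothetical): reload the
# saved model, infer a vector for unseen text, and find the most similar patents.
loaded = Doc2Vec.load("doc2vec_uspto_2013.model")
vec = loaded.infer_vector("wireless communication antenna array".split())
print(loaded.docvecs.most_similar([vec], topn=5))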
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the component using KFP SDK
Step2: Sample
Step3: Example pipeline that uses the component
Step4: Compile the pipeline
Step5: Submit the pipeline for execution
|
<ASSISTANT_TASK:>
Python Code:
%%capture --no-stderr
!pip3 install kfp --upgrade
import kfp.components as comp
dataproc_submit_pig_job_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.7.0-rc.3/components/gcp/dataproc/submit_pig_job/component.yaml')
help(dataproc_submit_pig_job_op)
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
QUERY = '''
natality_csv = load 'gs://public-datasets/natality/csv' using PigStorage(':');
top_natality_csv = LIMIT natality_csv 10;
dump natality_csv;'''
EXPERIMENT_NAME = 'Dataproc - Submit Pig Job'
import kfp.dsl as dsl
import json
@dsl.pipeline(
name='Dataproc submit Pig job pipeline',
description='Dataproc submit Pig job pipeline'
)
def dataproc_submit_pig_job_pipeline(
project_id = PROJECT_ID,
region = REGION,
cluster_name = CLUSTER_NAME,
queries = json.dumps([QUERY]),
query_file_uri = '',
script_variables = '',
pig_job='',
job='',
wait_interval='30'
):
dataproc_submit_pig_job_op(
project_id=project_id,
region=region,
cluster_name=cluster_name,
queries=queries,
query_file_uri=query_file_uri,
script_variables=script_variables,
pig_job=pig_job,
job=job,
wait_interval=wait_interval)
pipeline_func = dataproc_submit_pig_job_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
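# Optionally block until the run finishes and report its status (illustrative;
# assumes the standard kfp.Client API).
result = client.wait_for_run_completion(run_result.run_id, timeout=3600)
print(result.run.status)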
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Basic classification
Step2: Import the Fashion MNIST dataset
Step3: Calling load_data() returns four NumPy arrays
Step4: Explore the data
Step5: Likewise, there are 60,000 labels in the training set
Step6: Each label is an integer between 0 and 9
Step7: There are 10,000 images in the test set, each also represented as 28x28 pixels
Step8: The test set holds the labels for the 10,000 test images
Step9: Preprocess the data
Step10: Scale these values to the 0-1 range before feeding them to the neural network by dividing by 255. It is important to preprocess the training set and the test set in the same way
Step11: Display the first 25 training images with their class names below each one, to verify the data format is correct and get ready to build and train the network.
Step12: Build the model
Step13: The first layer of this network, tf.keras.layers.Flatten, converts the 2D image format (28 x 28 pixels) into a 1D array of 28 * 28 = 784 pixels, unstacking the rows of pixels and lining them up. This layer has no weights to learn; it only reformats the data.
Step14: Train the model
Step15: Loss and accuracy metrics are printed as the model trains; it reaches about 0.88 (88%) accuracy on the training set.
Step16: Test-set accuracy is a little lower than training-set accuracy. This gap is overfitting: a machine learning model performing worse on new data than on its training data.
Step17: The model has predicted a label for each image in the test set. Look at the first prediction
Step18: A prediction is an array of 10 numbers representing the model's confidence for each of the 10 clothing classes. Find the label with the highest confidence
Step19: The model is most confident this image is an ankle boot (class_names[9]). Check the test label to see whether that is correct
Step20: Graph the predictions for all 10 classes
Step21: Verify predictions
Step22: Plot several images with their predictions: correctly predicted labels are blue and incorrect ones are red, with the confidence percentage (out of 100) of the predicted label. Predictions can be wrong even at high confidence.
Step23: Use the trained model
Step24: tf.keras models are optimized to make predictions on a batch of samples at once, so even a single image must be wrapped in a 2D array
Step25: Now predict the label for this image
Step26: tf.keras.Model.predict returns a list of lists, one per image in the batch of data. Grab the prediction for the (only) image in the batch.
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
# TensorFlow and tf.keras
import tensorflow as tf
# Helper libraries
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
fashion_mnist = tf.keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
train_images.shape
len(train_labels)
train_labels
test_images.shape
len(test_labels)
plt.figure()
plt.imshow(train_images[0])
plt.colorbar()
plt.grid(False)
plt.show()
train_images = train_images / 255.0
test_images = test_images / 255.0
plt.figure(figsize=(10,10))
for i in range(25):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(train_images[i], cmap=plt.cm.binary)
plt.xlabel(class_names[train_labels[i]])
plt.show()
model = tf.keras.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10)
])
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
model.fit(train_images, train_labels, epochs=10)
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print('\nTest accuracy:', test_acc)
probability_model = tf.keras.Sequential([model,
tf.keras.layers.Softmax()])
predictions = probability_model.predict(test_images)
predictions[0]
np.argmax(predictions[0])
test_labels[0]
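# Quick check (illustrative): the overall accuracy computed directly from the
# prediction array should match the test accuracy reported by model.evaluate.
pred_labels = np.argmax(predictions, axis=1)
print('Accuracy from predictions:', np.mean(pred_labels == test_labels))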
def plot_image(i, predictions_array, true_label, img):
true_label, img = true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label:
color = 'blue'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
100*np.max(predictions_array),
class_names[true_label]),
color=color)
def plot_value_array(i, predictions_array, true_label):
true_label = true_label[i]
plt.grid(False)
plt.xticks(range(10))
plt.yticks([])
thisplot = plt.bar(range(10), predictions_array, color="#777777")
plt.ylim([0, 1])
predicted_label = np.argmax(predictions_array)
thisplot[predicted_label].set_color('red')
thisplot[true_label].set_color('blue')
i = 0
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions[i], test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions[i], test_labels)
plt.show()
i = 12
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions[i], test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions[i], test_labels)
plt.show()
# Plot the first X test images, their predicted labels, and the true labels.
# Color correct predictions in blue and incorrect predictions in red.
num_rows = 5
num_cols = 3
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
plot_image(i, predictions[i], test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
plot_value_array(i, predictions[i], test_labels)
plt.tight_layout()
plt.show()
# Grab an image from the test dataset.
img = test_images[1]
print(img.shape)
# Add the image to a batch where it's the only member.
img = (np.expand_dims(img,0))
print(img.shape)
predictions_single = probability_model.predict(img)
print(predictions_single)
plot_value_array(1, predictions_single[0], test_labels)
_ = plt.xticks(range(10), class_names, rotation=45)
plt.show()
np.argmax(predictions_single[0])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Construct LOW core configuration
Step2: We create the visibility. This just makes the uvw, time, antenna1, antenna2, weight columns in a table
Step3: Read the venerable test image, constructing an image
Step4: Predict the visibility from this image
Step5: Create a gain table with modest amplitude and phase errors, smoothed over 16 channels
Step6: Plot the gains applied
Step7: Solve for the gains
Step8: Plot the solved relative to the applied. Declare antenna 0 to be the reference.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import os
import sys
sys.path.append(os.path.join('..', '..'))
from data_models.parameters import arl_path
results_dir = arl_path('test_results')
from matplotlib import pylab
import numpy
from astropy.coordinates import SkyCoord
from astropy import units as u
from astropy.wcs.utils import pixel_to_skycoord
from matplotlib import pyplot as plt
from wrappers.serial.visibility.base import create_blockvisibility
from wrappers.serial.calibration.operations import apply_gaintable
from wrappers.serial.visibility.operations import copy_visibility
from wrappers.serial.calibration.calibration import solve_gaintable
from wrappers.serial.visibility.coalesce import convert_blockvisibility_to_visibility, \
convert_visibility_to_blockvisibility
from wrappers.serial.calibration.operations import create_gaintable_from_blockvisibility
from wrappers.serial.image.operations import show_image
from wrappers.serial.simulation.testing_support import create_test_image, simulate_gaintable
from wrappers.serial.simulation.configurations import create_named_configuration
from wrappers.serial.imaging.base import create_image_from_visibility
from workflows.serial.imaging.imaging_serial import predict_list_serial_workflow
from data_models.polarisation import PolarisationFrame
pylab.rcParams['figure.figsize'] = (8.0, 8.0)
pylab.rcParams['image.cmap'] = 'rainbow'
import logging
log = logging.getLogger()
log.setLevel(logging.DEBUG)
log.addHandler(logging.StreamHandler(sys.stdout))
lowcore = create_named_configuration('LOWBD2-CORE')
times = numpy.zeros([1])
vnchan = 128
frequency = numpy.linspace(0.8e8, 1.2e8, vnchan)
channel_bandwidth = numpy.array(vnchan*[frequency[1]-frequency[0]])
phasecentre = SkyCoord(ra=+15.0 * u.deg, dec=-45.0 * u.deg, frame='icrs', equinox='J2000')
bvt = create_blockvisibility(lowcore, times, frequency, channel_bandwidth=channel_bandwidth,
weight=1.0, phasecentre=phasecentre, polarisation_frame=PolarisationFrame('stokesI'))
m31image = create_test_image(frequency=frequency, cellsize=0.0005)
nchan, npol, ny, nx = m31image.data.shape
m31image.wcs.wcs.crval[0] = bvt.phasecentre.ra.deg
m31image.wcs.wcs.crval[1] = bvt.phasecentre.dec.deg
m31image.wcs.wcs.crpix[0] = float(nx // 2)
m31image.wcs.wcs.crpix[1] = float(ny // 2)
fig=show_image(m31image)
vt = convert_blockvisibility_to_visibility(bvt)
vt = predict_list_serial_workflow(bvt, m31image, context='timeslice')
bvt = convert_visibility_to_blockvisibility(vt)
gt = create_gaintable_from_blockvisibility(bvt)
gt = simulate_gaintable(gt, phase_error=1.0, amplitude_error=0.1, smooth_channels=16)
plt.clf()
for ant in range(4):
amp = numpy.abs(gt.gain[0,ant,:,0,0])
plt.plot(amp)
plt.title('Amplitude of bandpass')
plt.xlabel('channel')
plt.show()
plt.clf()
for ant in range(4):
phase = numpy.angle(gt.gain[0,ant,:,0,0])
plt.plot(phase)
plt.title('Phase of bandpass')
plt.xlabel('channel')
plt.show()
cbvt = copy_visibility(bvt)
cbvt = apply_gaintable(cbvt, gt)
gtsol=solve_gaintable(cbvt, bvt, phase_only=False)
plt.clf()
for ant in range(4):
amp = numpy.abs(gtsol.gain[0,ant,:,0,0]/gt.gain[0,ant,:,0,0])
plt.plot(amp)
plt.title('Relative amplitude of bandpass')
plt.xlabel('channel')
plt.show()
plt.clf()
for ant in range(4):
refphase = numpy.angle(gtsol.gain[0,0,:,0,0]/gt.gain[0,0,:,0,0])
phase = numpy.angle(gtsol.gain[0,ant,:,0,0]/gt.gain[0,ant,:,0,0])
plt.plot(phase-refphase)
plt.title('Relative phase of bandpass')
plt.xlabel('channel')
plt.show()
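# Optional check (a sketch; assumes apply_gaintable accepts inverse=True, as in
# recent ARL versions): applying the inverse of the solved table to the
# corrupted data should approximately recover the original visibilities.
cbvt_corrected = apply_gaintable(cbvt, gtsol, inverse=True)
residual = numpy.max(numpy.abs(cbvt_corrected.vis - bvt.vis))
print('Maximum residual visibility error:', residual)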
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now let's set up REBOUNDx and add radiation_forces. We also have to set the speed of light in the units we want to use.
Step2: By default, the radiation_forces effect assumes the particle at index 0 is the source of the radiation. If you'd like to use a different one, or it's possible that the radiation source might move to a different index (e.g. with a custom merger routine), you can add a radiation_source flag to the appropriate particle like this
Step3: We imagine a debris disk of dust particles all with a beta parameter of 0.1 (ratio of radiation pressure force to gravitational force) that have semimajor axes uniformly distributed between 100 and 120 AU. We initialize them all with 0.01 eccentricity, and random pericenters and azimuths. We further consider a 10 AU vertical thickness, which for simplicity we model as a uniform random inclination between 0 and $\sin^{-1}(10AU / 100 AU)$ with random longitudes of node.
Step4: Now we would run our REBOUND simulation as usual. Let's check that our disk looks how we'd expect
|
<ASSISTANT_TASK:>
Python Code:
import rebound
import reboundx
import numpy as np
sim = rebound.Simulation()
sim.G = 6.674e-11 # SI units
sim.integrator = "whfast"
sim.dt = 1.e8 # At ~100 AU, orbital periods ~1000 yrs, so use a timestep of 1% of that, in sec.
sim.N_active = 1 # Make it so dust particles don't interact with one another gravitationally
sim.add(m=1.99e30) # add Sun with mass in kg
ps = sim.particles
rebx = reboundx.Extras(sim)
rf = rebx.load_force("radiation_forces")
rebx.add_force(rf)
rf.params["c"] = 3.e8
ps[0].params["radiation_source"] = 1
AU = 1.5e11 # in m
amin = 100.*AU
awidth = 20.*AU
e = 0.01
incmax = np.arcsin(0.1)
beta = 0.1
Ndust = 1000
import random
seed = 3
random.seed(seed)
for i in range(1,Ndust+1):
a = amin + awidth*random.random() # Semimajor axis
pomega = 2*np.pi*random.random() # Longitude of pericenter
f = 2*np.pi*random.random() # True anomaly
Omega = 2*np.pi*random.random() # Longitude of node
inc = incmax*random.random() # Inclination
sim.add(a=a, e=e, inc=inc, Omega=Omega, pomega=pomega, f=f)
    sim.particles[i].params["beta"] = beta
xs = [ps[i].x for i in range(sim.N)]
ys = [ps[i].y for i in range(sim.N)]
zs = [ps[i].z for i in range(sim.N)]
%matplotlib inline
import matplotlib.pyplot as plt
fig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, figsize=(15,5))
ax1.scatter(xs, ys)
ax1.set_aspect('equal')
ax1.set_title('Top-down view')
ax2.scatter(xs, zs)
ax2.set_ylim(ax2.get_xlim())
ax2.set_title('Edge-on view')
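# A minimal integration sketch (illustrative; the integration time below is an
# arbitrary choice, not from the original setup). With beta > 0 the radiation
# force reduces the effective central gravity, so the initially near-circular
# orbits become eccentric.
sim.move_to_com()
sim.integrate(1.e10)  # about 300 years, in seconds
print('Eccentricity of the first dust particle:', ps[1].e)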
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Text
Step3: Text as output
Step4: Standard error
Step5: HTML
Step7: Markdown
Step9: LaTeX
Step11: SVG
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import display
from IPython.display import (
HTML, Image, Latex, Math, Markdown, SVG
)
text = """Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nullam urna
libero, dictum a egestas non, placerat vel neque. In imperdiet iaculis fermentum.
Vestibulum ante ipsum primis in faucibus orci luctus et ultrices posuere cubilia
Curae; Cras augue tortor, tristique vitae varius nec, dictum eu lectus. Pellentesque
id eleifend eros. In non odio in lorem iaculis sollicitudin. In faucibus ante ut
arcu fringilla interdum. Maecenas elit nulla, imperdiet nec blandit et, consequat
ut elit."""
print(text)
text
import sys; print('this is stderr', file=sys.stderr)
div = HTML('<div style="width:100px;height:100px;background:grey;" />')
div
for i in range(3):
print(7**10)
display(div)
md = Markdown("""
### Subtitle
This is some *markdown* text with math $F=ma$.
""")
md
display(md)
math = Latex("$F=ma$")
math
maxwells = Latex(r"""
\begin{align}
\nabla \times \vec{\mathbf{B}} -\, \frac1c\, \frac{\partial\vec{\mathbf{E}}}{\partial t} & = \frac{4\pi}{c}\vec{\mathbf{j}} \\
\nabla \cdot \vec{\mathbf{E}} & = 4 \pi \rho \\
\nabla \times \vec{\mathbf{E}}\, +\, \frac1c\, \frac{\partial\vec{\mathbf{B}}}{\partial t} & = \vec{\mathbf{0}} \\
\nabla \cdot \vec{\mathbf{B}} & = 0
\end{align}
""")
maxwells
svg_source = """
<svg width="400" height="110">
  <rect width="300" height="100" style="fill:#E0E0E0;" />
</svg>
"""
svg = SVG(svg_source)
svg
for i in range(3):
print(10**i)
display(svg)
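# Image was imported above but not demonstrated; a minimal example (the URL is
# illustrative and assumes network access):
logo = Image(url='https://www.python.org/static/community_logos/python-logo.png')
display(logo)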
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set parameters
Step2: Compute covariance using automated regularization
Step3: Show the evoked data
Step4: We can then show whitening for our various noise covariance estimates.
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
# Denis A. Engemann <denis.engemann@gmail.com>
#
# License: BSD (3-clause)
import mne
from mne import io
from mne.datasets import sample
from mne.cov import compute_covariance
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
raw = io.read_raw_fif(raw_fname, preload=True)
raw.filter(1, 40, n_jobs=1, fir_design='firwin')
raw.info['bads'] += ['MEG 2443'] # bads + 1 more
events = mne.read_events(event_fname)
# let's look at rare events, button presses
event_id, tmin, tmax = 2, -0.2, 0.5
picks = mne.pick_types(raw.info, meg=True, eeg=True, eog=True, exclude='bads')
reject = dict(mag=4e-12, grad=4000e-13, eeg=80e-6)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=None, reject=reject, preload=True)
# Uncomment next line to use fewer samples and study regularization effects
# epochs = epochs[:20] # For your data, use as many samples as you can!
noise_covs = compute_covariance(epochs, tmin=None, tmax=0, method='auto',
return_estimators=True, verbose=True, n_jobs=1,
projs=None)
# With "return_estimator=True" all estimated covariances sorted
# by log-likelihood are returned.
print('Covariance estimates sorted from best to worst')
for c in noise_covs:
print("%s : %s" % (c['method'], c['loglik']))
evoked = epochs.average()
evoked.plot(time_unit='s') # plot evoked response
evoked.plot_white(noise_covs, time_unit='s')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Automatically 'discovering' the heat equation with reverse finite differencing
Step3: Okay, so now that we have our data to work on, we need to form a system of equations $KM=0$ to solve for the coefficients $M$
Step6: This method tells us that our data fits the equation
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from numpy.testing import assert_almost_equal
# Specify diffusion coefficient
nu = 0.1
def analytical_soln(xmax=1.0, tmax=0.2, nx=1000, nt=1000):
    """Compute analytical solution."""
x = np.linspace(0, xmax, num=nx)
t = np.linspace(0, tmax, num=nt)
u = np.zeros((len(t), len(x))) # rows are timesteps
for n, t_ind in enumerate(t):
u[n, :] = np.sin(4*np.pi*x)*np.exp(-16*np.pi**2*nu*t_ind)
return u, x, t
u, x, t = analytical_soln()
# Create vectors for analytical partial derivatives
u_t = np.zeros(u.shape)
u_x = np.zeros(u.shape)
u_xx = np.zeros(u.shape)
for n in range(len(t)):
u_t[n, :] = -16*np.pi**2*nu*np.sin(4*np.pi*x)*np.exp(-16*np.pi**2*nu*t[n])
u_x[n, :] = 4*np.pi*np.cos(4*np.pi*x)*np.exp(-16*np.pi**2*nu*t[n])
u_xx[n, :] = -16*np.pi**2*np.sin(4*np.pi*x)*np.exp(-16*np.pi**2*nu*t[n])
# Compute the nonlinear convective term (that we know should have no effect)
uu_x = u*u_x
# Check to make sure some random point satisfies the PDE
i, j = 15, 21
assert_almost_equal(u_t[i, j] - nu*u_xx[i, j], 0.0)
# Create K matrix from the input data using random indices
nterms = 5 # total number of terms in the equation
ni, nj = u.shape
K = np.zeros((5, 5))
# Pick data from different times and locations for each row
for n in range(nterms):
i = int(np.random.rand()*(ni - 1)) # time index
j = int(np.random.rand()*(nj - 1)) # space index
K[n, 0] = u_t[i, j]
K[n, 1] = -u_x[i, j]
K[n, 2] = -u_xx[i, j]
K[n, 3] = -uu_x[i, j]
K[n, 4] = -1.0
# We can't solve this matrix because it's singular, but we can try singular value decomposition
# I found this solution somewhere on Stack Overflow but can't find the URL now; sorry!
def null(A, eps=1e-15):
    """Find the null space of a matrix using singular value decomposition."""
u, s, vh = np.linalg.svd(A)
null_space = np.compress(s <= eps, vh, axis=0)
return null_space.T
M = null(K, eps=1e-5)
coeffs = (M.T/M[0])[0]
for letter, coeff in zip("ABCDE", coeffs):
print(letter, "=", np.round(coeff, decimals=5))
# Create a helper function compute derivatives with the finite difference method
def diff(dept_var, indept_var, index=None, n_deriv=1):
    """Compute the derivative of the dependent variable w.r.t. the independent at the
    specified array index. Uses NumPy's `gradient` function, which uses second order
    central differences if possible, and can use second order forward or backward
    differences. Input values must be evenly spaced.

    Parameters
    ----------
    dept_var : array of floats
    indept_var : array of floats
    index : int
        Index at which to return the numerical derivative
    n_deriv : int
        Order of the derivative (not the numerical scheme)
    """
# Rename input variables
u = dept_var.copy()
x = indept_var.copy()
dx = x[1] - x[0]
for n in range(n_deriv):
dudx = np.gradient(u, dx, edge_order=2)
u = dudx.copy()
if index is not None:
return dudx[index]
else:
return dudx
# Test this with a sine
x = np.linspace(0, 6.28, num=1000)
u = np.sin(x)
dudx = diff(u, x)
d2udx2 = diff(u, x, n_deriv=2)
assert_almost_equal(dudx, np.cos(x), decimal=5)
assert_almost_equal(d2udx2, -u, decimal=2)
def detect_coeffs(noise_amplitude=0.0):
Detect coefficients from analytical solution.
u, x, t = analytical_soln(nx=500, nt=500)
# Add Gaussian noise to u
u += np.random.randn(*u.shape) * noise_amplitude
nterms = 5
ni, nj = u.shape
K = np.zeros((5, 5))
for n in range(nterms):
i = int(np.random.rand()*(ni - 1))
j = int(np.random.rand()*(nj - 1))
u_t = diff(u[:, j], t, index=i)
u_x = diff(u[i, :], x, index=j)
u_xx = diff(u[i, :], x, index=j, n_deriv=2)
uu_x = u[i, j] * u_x
K[n, 0] = u_t
K[n, 1] = -u_x
K[n, 2] = -u_xx
K[n, 3] = -uu_x
K[n, 4] = -1.0
M = null(K, eps=1e-3)
coeffs = (M.T/M[0])[0]
for letter, coeff in zip("ABCDE", coeffs):
print(letter, "=", np.round(coeff, decimals=3))
for noise_level in np.logspace(-10, -6, num=5):
print("Coefficients for noise amplitude:", noise_level)
try:
detect_coeffs(noise_amplitude=noise_level)
except ValueError:
print("FAILED")
print("")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Same viewport
Step2: Different viewports
|
<ASSISTANT_TASK:>
Python Code:
from cartoframes.auth import set_default_credentials
set_default_credentials('cartoframes')
from cartoframes.viz import Map, Layer, Layout, basic_style
Layout([
Map(Layer('select * from drought_wk_1 where dm = 3', basic_style(color='#e15383'))),
Map(Layer('select * from drought_wk_2 where dm = 3', basic_style(color='#e15383'))),
Map(Layer('select * from drought_wk_3 where dm = 3', basic_style(color='#e15383'))),
Map(Layer('select * from drought_wk_4 where dm = 3', basic_style(color='#e15383'))),
], is_static=True, viewport={'zoom': 3, 'lat': 33.4706, 'lng': -98.3457})
from cartoframes.viz import Map, Layer, Layout, basic_style
Layout([
Map(Layer('drought_wk_1'), viewport={ 'zoom': 0.5 }),
Map(Layer('select * from drought_wk_1 where dm = 1', basic_style(color='#ffc285'))),
Map(Layer('select * from drought_wk_1 where dm = 2', basic_style(color='#fa8a76'))),
Map(Layer('select * from drought_wk_1 where dm = 3', basic_style(color='#e15383'))),
], is_static=True, viewport={'zoom': 3, 'lat': 33.4706, 'lng': -98.3457})
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: If we have a specific point in space and time we wish to test, it can be
Step2: Absent specific hypotheses, we can also conduct an exploratory
Step3: The results of these mass univariate analyses can be visualised by plotting
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
from scipy.stats import ttest_ind
import numpy as np
import mne
from mne.channels import find_layout, find_ch_connectivity
from mne.stats import spatio_temporal_cluster_test
np.random.seed(0)
# Load the data
path = mne.datasets.kiloword.data_path() + '/kword_metadata-epo.fif'
epochs = mne.read_epochs(path)
name = "NumberOfLetters"
# Split up the data by the median length in letters via the attached metadata
median_value = str(epochs.metadata[name].median())
long = epochs[name + " > " + median_value]
short = epochs[name + " < " + median_value]
time_windows = ((200, 250), (350, 450))
elecs = ["Fz", "Cz", "Pz"]
# display the EEG data in Pandas format (first 5 rows)
print(epochs.to_data_frame()[elecs].head())
report = "{elec}, time: {tmin}-{tmax} msec; t({df})={t_val:.3f}, p={p:.3f}"
print("\nTargeted statistical test results:")
for (tmin, tmax) in time_windows:
for elec in elecs:
# extract data
time_win = "{} < time < {}".format(tmin, tmax)
A = long.to_data_frame().query(time_win)[elec].groupby("condition")
B = short.to_data_frame().query(time_win)[elec].groupby("condition")
# conduct t test
t, p = ttest_ind(A.mean(), B.mean())
# display results
format_dict = dict(elec=elec, tmin=tmin, tmax=tmax,
df=len(epochs.events) - 2, t_val=t, p=p)
print(report.format(**format_dict))
# Calculate statistical thresholds
con = find_ch_connectivity(epochs.info, "eeg")
# Extract data: transpose because the cluster test requires channels to be last
# In this case, inference is done over items. In the same manner, we could
# also conduct the test over, e.g., subjects.
X = [long.get_data().transpose(0, 2, 1),
short.get_data().transpose(0, 2, 1)]
tfce = dict(start=.2, step=.2)
t_obs, clusters, cluster_pv, h0 = spatio_temporal_cluster_test(
X, tfce, n_permutations=100)
significant_points = cluster_pv.reshape(t_obs.shape).T < .05
print(str(significant_points.sum()) + " points selected by TFCE ...")
# We need an evoked object to plot the image to be masked
evoked = mne.combine_evoked([long.average(), -short.average()],
weights='equal') # calculate difference wave
time_unit = dict(time_unit="s")
evoked.plot_joint(title="Long vs. short words", ts_args=time_unit,
topomap_args=time_unit) # show difference wave
# Create ROIs by checking channel labels
pos = find_layout(epochs.info).pos
rois = dict()
for pick, channel in enumerate(epochs.ch_names):
last_char = channel[-1] # for 10/20, last letter codes the hemisphere
roi = ("Midline" if last_char in "z12" else
("Left" if int(last_char) % 2 else "Right"))
rois[roi] = rois.get(roi, list()) + [pick]
# sort channels from front to center
# (y-coordinate of the position info in the layout)
rois = {roi: np.array(picks)[pos[picks, 1].argsort()]
for roi, picks in rois.items()}
# Visualize the results
fig, axes = plt.subplots(nrows=3, figsize=(8, 8))
vmax = np.abs(evoked.data).max() * 1e6
# Iterate over ROIs and axes
axes = axes.ravel().tolist()
for roi_name, ax in zip(sorted(rois.keys()), axes):
picks = rois[roi_name]
evoked.plot_image(picks=picks, axes=ax, colorbar=False, show=False,
clim=dict(eeg=(-vmax, vmax)), mask=significant_points,
**time_unit)
evoked.nave = None
ax.set_yticks((np.arange(len(picks))) + .5)
ax.set_yticklabels([evoked.ch_names[idx] for idx in picks])
if not ax.is_last_row(): # remove xticklabels for all but bottom axis
ax.set(xlabel='', xticklabels=[])
ax.set(ylabel='', title=roi_name)
fig.colorbar(ax.images[-1], ax=axes, fraction=.1, aspect=20,
pad=.05, shrink=2 / 3, label="uV", orientation="vertical")
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set parameters and read data
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Alexandre Barachant <alexandre.barachant@gmail.com>
#
# License: BSD (3-clause)
from mne import (io, compute_raw_covariance, read_events, pick_types, Epochs)
from mne.datasets import sample
from mne.preprocessing import Xdawn
from mne.viz import plot_epochs_image
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
tmin, tmax = -0.1, 0.3
event_id = dict(vis_r=4)
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname, preload=True)
raw.filter(1, 20, fir_design='firwin') # replace baselining with high-pass
events = read_events(event_fname)
raw.info['bads'] = ['MEG 2443'] # set bad channels
picks = pick_types(raw.info, meg=True, eeg=False, stim=False, eog=False,
exclude='bads')
# Epoching
epochs = Epochs(raw, events, event_id, tmin, tmax, proj=False,
picks=picks, baseline=None, preload=True,
verbose=False)
# Plot image epoch before xdawn
plot_epochs_image(epochs['vis_r'], picks=[230], vmin=-500, vmax=500)
# Estimates signal covariance
signal_cov = compute_raw_covariance(raw, picks=picks)
# Xdawn instance
xd = Xdawn(n_components=2, signal_cov=signal_cov)
# Fit xdawn
xd.fit(epochs)
# Denoise epochs
epochs_denoised = xd.apply(epochs)
# Plot image epoch after Xdawn
plot_epochs_image(epochs_denoised['vis_r'], picks=[230], vmin=-500, vmax=500)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the data
Step2: Setup source space and compute forward
Step3: From here on, standard inverse imaging methods can be used!
Step4: Get an infant MRI template
Step5: It comes with several helpful built-in files, including a 10-20 montage
Step6: There are also BEM and source spaces
Step7: You can ensure everything is as expected by plotting the result
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>
# Joan Massich <mailsik@gmail.com>
# Eric Larson <larson.eric.d@gmail.com>
#
# License: BSD-3-Clause
import os.path as op
import numpy as np
import mne
from mne.datasets import eegbci
from mne.datasets import fetch_fsaverage
# Download fsaverage files
fs_dir = fetch_fsaverage(verbose=True)
subjects_dir = op.dirname(fs_dir)
# The files live in:
subject = 'fsaverage'
trans = 'fsaverage' # MNE has a built-in fsaverage transformation
src = op.join(fs_dir, 'bem', 'fsaverage-ico-5-src.fif')
bem = op.join(fs_dir, 'bem', 'fsaverage-5120-5120-5120-bem-sol.fif')
raw_fname, = eegbci.load_data(subject=1, runs=[6])
raw = mne.io.read_raw_edf(raw_fname, preload=True)
# Clean channel names to be able to use a standard 1005 montage
new_names = dict(
(ch_name,
ch_name.rstrip('.').upper().replace('Z', 'z').replace('FP', 'Fp'))
for ch_name in raw.ch_names)
raw.rename_channels(new_names)
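# Illustrative sanity check (not part of the original tutorial): peek at a few
# of the rename pairs produced above.
print(list(new_names.items())[:3])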
# Read and set the EEG electrode locations, which are already in fsaverage's
# space (MNI space) for standard_1020:
montage = mne.channels.make_standard_montage('standard_1005')
raw.set_montage(montage)
raw.set_eeg_reference(projection=True) # needed for inverse modeling
# Check that the locations of EEG electrodes is correct with respect to MRI
mne.viz.plot_alignment(
raw.info, src=src, eeg=['original', 'projected'], trans=trans,
show_axes=True, mri_fiducials=True, dig='fiducials')
fwd = mne.make_forward_solution(raw.info, trans=trans, src=src,
bem=bem, eeg=True, mindist=5.0, n_jobs=1)
print(fwd)
ch_names = \
'Fz Cz Pz Oz Fp1 Fp2 F3 F4 F7 F8 C3 C4 T7 T8 P3 P4 P7 P8 O1 O2'.split()
data = np.random.RandomState(0).randn(len(ch_names), 1000)
info = mne.create_info(ch_names, 1000., 'eeg')
raw = mne.io.RawArray(data, info)
subject = mne.datasets.fetch_infant_template('6mo', subjects_dir, verbose=True)
fname_1020 = op.join(subjects_dir, subject, 'montages', '10-20-montage.fif')
mon = mne.channels.read_dig_fif(fname_1020)
mon.rename_channels(
{f'EEG{ii:03d}': ch_name for ii, ch_name in enumerate(ch_names, 1)})
trans = mne.channels.compute_native_head_t(mon)
raw.set_montage(mon)
print(trans)
bem_dir = op.join(subjects_dir, subject, 'bem')
fname_src = op.join(bem_dir, f'{subject}-oct-6-src.fif')
src = mne.read_source_spaces(fname_src)
print(src)
fname_bem = op.join(bem_dir, f'{subject}-5120-5120-5120-bem-sol.fif')
bem = mne.read_bem_solution(fname_bem)
fig = mne.viz.plot_alignment(
raw.info, subject=subject, subjects_dir=subjects_dir, trans=trans,
src=src, bem=bem, coord_frame='mri', mri_fiducials=True, show_axes=True,
surfaces=('white', 'outer_skin', 'inner_skull', 'outer_skull'))
mne.viz.set_3d_view(fig, 25, 70, focalpoint=[0, -0.005, 0.01])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's load up some data. In this tutorial we will be working with the emotions music data set.
Step2: The feature_names variable contains a list of (feature name, type) pairs that were provided with the original data set; for the emotions data these are audio features described by the authors.
Step3: The label_names variable contains a list of (label name, type) pairs for the emotion labels that were used to annotate the music.
Step4: On a side note, Binary Relevance trains one classifier per label; we can see that the classifiers haven't been trained yet
Step5: Scikit-learn introduces a convention for how classifiers are organized: the typical usage is to construct a classifier, fit it on training data, and then predict on new data.
Step6: The base classifiers have been trained now
Step7: Scikit-learn provides a set of metrics useful for evaluating the quality of the model. They are most often used by providing the true assignment matrix/array as the first argument, and the prediction matrix/array as the second argument.
|
<ASSISTANT_TASK:>
Python Code:
from skmultilearn.dataset import load_dataset
X_train, y_train, feature_names, label_names = load_dataset('emotions', 'train')
X_test, y_test, _, _ = load_dataset('emotions', 'test')
feature_names[:10]
label_names
from skmultilearn.problem_transform import BinaryRelevance
from sklearn.svm import SVC
clf = BinaryRelevance(
classifier=SVC(),
require_dense=[False, True]
)
clf.classifiers_
clf.fit(X_train, y_train)
clf.classifiers_
prediction = clf.predict(X_test)
prediction
## Measure the quality
import sklearn.metrics as metrics
metrics.hamming_loss(y_test, prediction)
metrics.accuracy_score(y_test, prediction)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: useful functions
Step2: .name
Step3: Find the Last Modified File
Step4: Create a Unique File Name
Step5: Check that a directory exists, then glob with multiple extensions
Step6: shutil
Step7: collections Counter
|
<ASSISTANT_TASK:>
Python Code:
from pathlib import Path
import pathlib
save_dir = "./test_dir"
Path(save_dir).mkdir(parents=True, exist_ok=True)
### get current directory
print(Path.cwd())
print(Path.home())
print(pathlib.Path.home().joinpath('python', 'scripts', 'test.py'))
# Reading and Writing Files
path = pathlib.Path.cwd() / 'test.txt'
with open(path, mode='r') as fid:
headers = [line.strip() for line in fid if line.startswith('#')]
print('\n'.join(headers))
print('full text', path.read_text())
print(path.resolve().parent == pathlib.Path.cwd())
print('path', path)
print('stem', path.stem)
print('suffix', path.suffix)
print('parent', path.parent)
print('parent of parent', path.parent.parent)
print('anchor', path.anchor)
# move or replace file
path.with_suffix('.py')
path.replace(path.with_suffix('.md')) # change the file suffix
path.with_suffix('.md').replace(path.with_suffix('.txt'))
# Display a Directory Tree
def tree(directory):
print(f'+ {directory}')
for path in sorted(directory.rglob('*')):
depth = len(path.relative_to(directory).parts)
spacer = ' ' * depth
print(f'{spacer}+ {path.name}')
tree(pathlib.Path.cwd())
from datetime import datetime
directory = pathlib.Path.cwd()
time, file_path = max((f.stat().st_mtime, f) for f in directory.iterdir())
print(datetime.fromtimestamp(time), file_path)
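# Equivalent one-liner (illustrative): max() with a key function avoids
# packing (mtime, path) tuples by hand.
latest = max(directory.iterdir(), key=lambda p: p.stat().st_mtime)
print(datetime.fromtimestamp(latest.stat().st_mtime), latest)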
directory = pathlib.Path.home()
file_list = list(directory.glob('*.*'))
print(file_list)
def unique_path(directory, name_pattern):
counter = 0
while True:
counter += 1
path = directory / name_pattern.format(counter)
if not path.exists():
return path
path = unique_path(pathlib.Path.cwd(), 'test{:03d}.txt')
print(path)
input_path = Path("/mnt/d/code/image/hedian-demo/data/test/220425")
file_list = []
if input_path.exists():
if input_path.is_dir():
# for a in input_path.glob("*"):
# print(a)
file_list = [p.resolve() for p in input_path.glob("*") if
p.suffix in {".png", ".jpg", ".JPG", ".PNG"}]
print(len(file_list), file_list)
else:
print(p)
# PosixPath as str: str(p.resolve())
# move all .txt file to achive fold
import glob
import os
import shutil
for file_name in glob.glob('*.txt'): # return a list of
new_path = os.path.join('archive', file_name)
shutil.move(file_name, new_path)
# counting files
import collections
print(collections.Counter(p.suffix for p in pathlib.Path.cwd().iterdir()))
print('漂亮', collections.Counter(p.suffix for p in pathlib.Path.cwd().glob('*.t*')))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Consider the lognormal distribution.
Step2: In scipy, you need an additional shape parameter (s), plus the usual loc and scale. Aside from the mystery behind what s might be, that seems straightforward enough.
Step3: A new challenger appears
Step4: Hopefully that's much more readable and straight-forward.
Step5: Other distributions
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
from scipy import stats
np.random.seed(0)
mu = 0
sigma = 1
N = 3
np.random.lognormal(mean=mu, sigma=sigma, size=N)
np.random.seed(0)
stats.lognorm(sigma, loc=0, scale=np.exp(mu)).rvs(size=N)
import paramnormal
np.random.seed(0)
paramnormal.lognormal(mu=mu, sigma=sigma).rvs(size=N)
np.random.seed(0)
paramnormal.lognormal(μ=mu, σ=sigma).rvs(size=N)
for d in paramnormal.dist.__all__:
print(d)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Process Model
Step3: Engine model
Step4: Torque curves for a typical car engine. The graph on the left shows the torque generated by the engine as a function of the angular velocity of the engine, while the curve on the right shows torque as a function of car speed for different gears.
Step5: Input/ouput model for the vehicle system
Step6: State space controller
Step7: Pole/zero cancellation
Step8: PI Controller
Step9: Robustness to change in mass
Step10: PI controller with antiwindup protection
Step11: Response to a small hill
Step12: Effect of Windup
Step13: PI controller with anti-windup compensation
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
from math import pi
import control as ct
def vehicle_update(t, x, u, params={}):
Vehicle dynamics for cruise control system.
Parameters
----------
x : array
System state: car velocity in m/s
u : array
System input: [throttle, gear, road_slope], where throttle is
a float between 0 and 1, gear is an integer between 1 and 5,
and road_slope is in rad.
Returns
-------
float
Vehicle acceleration
from math import copysign, sin
sign = lambda x: copysign(1, x) # define the sign() function
# Set up the system parameters
m = params.get('m', 1600.) # vehicle mass, kg
g = params.get('g', 9.8) # gravitational constant, m/s^2
Cr = params.get('Cr', 0.01) # coefficient of rolling friction
Cd = params.get('Cd', 0.32) # drag coefficient
rho = params.get('rho', 1.3) # density of air, kg/m^3
A = params.get('A', 2.4) # car area, m^2
alpha = params.get(
'alpha', [40, 25, 16, 12, 10]) # gear ratio / wheel radius
# Define variables for vehicle state and inputs
v = x[0] # vehicle velocity
throttle = np.clip(u[0], 0, 1) # vehicle throttle
gear = u[1] # vehicle gear
theta = u[2] # road slope
# Force generated by the engine
omega = alpha[int(gear)-1] * v # engine angular speed
F = alpha[int(gear)-1] * motor_torque(omega, params) * throttle
# Disturbance forces
#
# The disturbance force Fd has three major components: Fg, the forces due
# to gravity; Fr, the forces due to rolling friction; and Fa, the
# aerodynamic drag.
# Letting the slope of the road be \theta (theta), gravity gives the
# force Fg = m g sin \theta.
Fg = m * g * sin(theta)
# A simple model of rolling friction is Fr = m g Cr sgn(v), where Cr is
# the coefficient of rolling friction and sgn(v) is the sign of v (±1) or
# zero if v = 0.
Fr = m * g * Cr * sign(v)
# The aerodynamic drag is proportional to the square of the speed: Fa =
# 1/2 \rho Cd A |v| v, where \rho is the density of air, Cd is the
# shape-dependent aerodynamic drag coefficient, and A is the frontal area
# of the car.
Fa = 1/2 * rho * Cd * A * abs(v) * v
# Final acceleration on the car
Fd = Fg + Fr + Fa
dv = (F - Fd) / m
return dv
def motor_torque(omega, params={}):
# Set up the system parameters
Tm = params.get('Tm', 190.) # engine torque constant
omega_m = params.get('omega_m', 420.) # peak engine angular speed
beta = params.get('beta', 0.4) # peak engine rolloff
return np.clip(Tm * (1 - beta * (omega/omega_m - 1)**2), 0, None)
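# Quick sanity check (illustrative, not from the original text): by the
# formula above, torque peaks at omega = omega_m with value Tm.
print(motor_torque(420.))  # expect 190.0 Nm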
# Figure 4.2
fig, axes = plt.subplots(1, 2, figsize=(7, 3))
# (a) - single torque curve as function of omega
ax = axes[0]
omega = np.linspace(0, 700, 701)
ax.plot(omega, motor_torque(omega))
ax.set_xlabel(r'Angular velocity $\omega$ [rad/s]')
ax.set_ylabel('Torque $T$ [Nm]')
ax.grid(True, linestyle='dotted')
# (b) - torque curves in different gears, as function of velocity
ax = axes[1]
v = np.linspace(0, 70, 71)
alpha = [40, 25, 16, 12, 10]
for gear in range(5):
omega = alpha[gear] * v
T = motor_torque(omega)
plt.plot(v, T, color='#1f77b4', linestyle='solid')
# Set up the axes and style
ax.axis([0, 70, 100, 200])
ax.grid(True, linestyle='dotted')
# Add labels
plt.text(11.5, 120, '$n$=1')
ax.text(24, 120, '$n$=2')
ax.text(42.5, 120, '$n$=3')
ax.text(58.5, 120, '$n$=4')
ax.text(58.5, 185, '$n$=5')
ax.set_xlabel('Velocity $v$ [m/s]')
ax.set_ylabel('Torque $T$ [Nm]')
plt.suptitle('Torque curves for typical car engine')
plt.tight_layout()
vehicle = ct.NonlinearIOSystem(
vehicle_update, None, name='vehicle',
inputs = ('u', 'gear', 'theta'), outputs = ('v'), states=('v'))
# Define a function for creating a "standard" cruise control plot
def cruise_plot(sys, t, y, label=None, t_hill=None, vref=20, antiwindup=False,
linetype='b-', subplots=None, legend=None):
if subplots is None:
subplots = [None, None]
# Figure out the plot bounds and indices
v_min = vref - 1.2; v_max = vref + 0.5; v_ind = sys.find_output('v')
u_min = 0; u_max = 2 if antiwindup else 1; u_ind = sys.find_output('u')
# Make sure the upper and lower bounds on v are OK
while max(y[v_ind]) > v_max: v_max += 1
while min(y[v_ind]) < v_min: v_min -= 1
# Create arrays for return values
subplot_axes = list(subplots)
# Velocity profile
if subplot_axes[0] is None:
subplot_axes[0] = plt.subplot(2, 1, 1)
else:
plt.sca(subplots[0])
plt.plot(t, y[v_ind], linetype)
plt.plot(t, vref*np.ones(t.shape), 'k-')
if t_hill:
plt.axvline(t_hill, color='k', linestyle='--', label='t hill')
plt.axis([0, t[-1], v_min, v_max])
plt.xlabel('Time $t$ [s]')
plt.ylabel('Velocity $v$ [m/s]')
# Commanded input profile
if subplot_axes[1] is None:
subplot_axes[1] = plt.subplot(2, 1, 2)
else:
plt.sca(subplots[1])
plt.plot(t, y[u_ind], 'r--' if antiwindup else linetype, label=label)
# Applied input profile
if antiwindup:
plt.plot(t, np.clip(y[u_ind], 0, 1), linetype, label='Applied')
if t_hill:
plt.axvline(t_hill, color='k', linestyle='--')
if legend:
plt.legend(frameon=False)
plt.axis([0, t[-1], u_min, u_max])
plt.xlabel('Time $t$ [s]')
plt.ylabel('Throttle $u$')
return subplot_axes
def sf_update(t, z, u, params={}):
y, r = u[1], u[2]
return y - r
def sf_output(t, z, u, params={}):
# Get the controller parameters that we need
K = params.get('K', 0)
ki = params.get('ki', 0)
kf = params.get('kf', 0)
xd = params.get('xd', 0)
yd = params.get('yd', 0)
ud = params.get('ud', 0)
# Get the system state and reference input
x, y, r = u[0], u[1], u[2]
return ud - K * (x - xd) - ki * z + kf * (r - yd)
# Create the input/output system for the controller
control_sf = ct.NonlinearIOSystem(
sf_update, sf_output, name='control',
inputs=('x', 'y', 'r'),
outputs=('u'),
states=('z'))
# Create the closed loop system for the state space controller
cruise_sf = ct.InterconnectedSystem(
(vehicle, control_sf), name='cruise',
connections=(
('vehicle.u', 'control.u'),
('control.x', 'vehicle.v'),
('control.y', 'vehicle.v')),
inplist=('control.r', 'vehicle.gear', 'vehicle.theta'),
outlist=('control.u', 'vehicle.v'), outputs=['u', 'v'])
# Define the time and input vectors
T = np.linspace(0, 25, 501)
vref = 20 * np.ones(T.shape)
gear = 4 * np.ones(T.shape)
theta0 = np.zeros(T.shape)
# Find the equilibrium point for the system
Xeq, Ueq = ct.find_eqpt(
vehicle, [vref[0]], [0, gear[0], theta0[0]], y0=[vref[0]], iu=[1, 2])
print("Xeq = ", Xeq)
print("Ueq = ", Ueq)
# Compute the linearized system at the eq pt
cruise_linearized = ct.linearize(vehicle, Xeq, [Ueq[0], gear[0], 0])
# Construct the gain matrices for the system
A, B, C = cruise_linearized.A, cruise_linearized.B[0, 0], cruise_linearized.C
K = 0.5
kf = -1 / (C * np.linalg.inv(A - B * K) * B)
# Compute the steady state velocity and throttle setting
xd = Xeq[0]
ud = Ueq[0]
yd = vref[-1]
# Response of the system with no integral feedback term
plt.figure()
theta_hill = [
0 if t <= 5 else
4./180. * pi * (t-5) if t <= 6 else
4./180. * pi for t in T]
t, y_sfb = ct.input_output_response(
cruise_sf, T, [vref, gear, theta_hill], [Xeq[0], 0],
params={'K':K, 'ki':0.0, 'kf':kf, 'xd':xd, 'ud':ud, 'yd':yd})
subplots = cruise_plot(cruise_sf, t, y_sfb, t_hill=5, linetype='b--')
# Response of the system with state feedback + integral action
t, y_sfb_int = ct.input_output_response(
cruise_sf, T, [vref, gear, theta_hill], [Xeq[0], 0],
params={'K':K, 'ki':0.1, 'kf':kf, 'xd':xd, 'ud':ud, 'yd':yd})
cruise_plot(cruise_sf, t, y_sfb_int, t_hill=5, linetype='b-', subplots=subplots)
# Add title and legend
plt.suptitle('Cruise control with state feedback, integral action')
import matplotlib.lines as mlines
p_line = mlines.Line2D([], [], color='blue', linestyle='--', label='State feedback')
pi_line = mlines.Line2D([], [], color='blue', linestyle='-', label='w/ integral action')
plt.legend(handles=[p_line, pi_line], frameon=False, loc='lower right');
# Get the transfer function from throttle input + hill to vehicle speed
P = ct.ss2tf(cruise_linearized[0, 0])
# Construction a controller that cancels the pole
kp = 0.5
a = -P.pole()[0]
b = np.real(P(0)) * a
ki = a * kp
C = ct.tf2ss(ct.TransferFunction([kp, ki], [1, 0]))
control_pz = ct.LinearIOSystem(C, name='control', inputs='u', outputs='y')
print("system: a = ", a, ", b = ", b)
print("pzcancel: kp =", kp, ", ki =", ki, ", 1/(kp b) = ", 1/(kp * b))
print("sfb_int: K = ", K, ", ki = 0.1")
# Construct the closed loop system and plot the response
# Create the closed loop system for the state space controller
cruise_pz = ct.InterconnectedSystem(
(vehicle, control_pz), name='cruise_pz',
connections = (
('control.u', '-vehicle.v'),
('vehicle.u', 'control.y')),
inplist = ('control.u', 'vehicle.gear', 'vehicle.theta'),
inputs = ('vref', 'gear', 'theta'),
outlist = ('vehicle.v', 'vehicle.u'),
outputs = ('v', 'u'))
# Find the equilibrium point
X0, U0 = ct.find_eqpt(
cruise_pz, [vref[0], 0], [vref[0], gear[0], theta0[0]],
iu=[1, 2], y0=[vref[0], 0], iy=[0])
# Response of the system with PI controller canceling process pole
t, y_pzcancel = ct.input_output_response(
cruise_pz, T, [vref, gear, theta_hill], X0)
subplots = cruise_plot(cruise_pz, t, y_pzcancel, t_hill=5, linetype='b-')
cruise_plot(cruise_sf, t, y_sfb_int, t_hill=5, linetype='b--', subplots=subplots);
# Values of the first order transfer function P(s) = b/(s + a) are set above
# Define the input that we want to track
T = np.linspace(0, 40, 101)
vref = 20 * np.ones(T.shape)
gear = 4 * np.ones(T.shape)
theta_hill = np.array([
0 if t <= 5 else
4./180. * pi * (t-5) if t <= 6 else
4./180. * pi for t in T])
# Fix \omega_0 and vary \zeta
w0 = 0.5
subplots = [None, None]
for zeta in [0.5, 1, 2]:
# Create the controller transfer function (as an I/O system)
kp = (2*zeta*w0 - a)/b
ki = w0**2 / b
control_tf = ct.tf2io(
ct.TransferFunction([kp, ki], [1, 0.01*ki/kp]),
name='control', inputs='u', outputs='y')
# Construct the closed loop system by interconnecting process and controller
cruise_tf = ct.InterconnectedSystem(
(vehicle, control_tf), name='cruise',
connections = [('control.u', '-vehicle.v'), ('vehicle.u', 'control.y')],
inplist = ('control.u', 'vehicle.gear', 'vehicle.theta'),
inputs = ('vref', 'gear', 'theta'),
outlist = ('vehicle.v', 'vehicle.u'), outputs = ('v', 'u'))
# Plot the velocity response
X0, U0 = ct.find_eqpt(
cruise_tf, [vref[0], 0], [vref[0], gear[0], theta_hill[0]],
iu=[1, 2], y0=[vref[0], 0], iy=[0])
t, y = ct.input_output_response(cruise_tf, T, [vref, gear, theta_hill], X0)
subplots = cruise_plot(cruise_tf, t, y, t_hill=5, subplots=subplots)
# Fix \zeta and vary \omega_0
zeta = 1
subplots = [None, None]
for w0 in [0.2, 0.5, 1]:
# Create the controller transfer function (as an I/O system)
kp = (2*zeta*w0 - a)/b
ki = w0**2 / b
control_tf = ct.tf2io(
ct.TransferFunction([kp, ki], [1, 0.01*ki/kp]),
name='control', inputs='u', outputs='y')
# Construct the closed loop system by interconnecting process and controller
cruise_tf = ct.InterconnectedSystem(
(vehicle, control_tf), name='cruise',
connections = [('control.u', '-vehicle.v'), ('vehicle.u', 'control.y')],
inplist = ('control.u', 'vehicle.gear', 'vehicle.theta'),
inputs = ('vref', 'gear', 'theta'),
outlist = ('vehicle.v', 'vehicle.u'), outputs = ('v', 'u'))
# Plot the velocity response
X0, U0 = ct.find_eqpt(
cruise_tf, [vref[0], 0], [vref[0], gear[0], theta_hill[0]],
iu=[1, 2], y0=[vref[0], 0], iy=[0])
t, y = ct.input_output_response(cruise_tf, T, [vref, gear, theta_hill], X0)
subplots = cruise_plot(cruise_tf, t, y, t_hill=5, subplots=subplots)
# Nominal controller design for remaining analyses
# Construct a PI controller with rolloff, as a transfer function
Kp = 0.5 # proportional gain
Ki = 0.1 # integral gain
control_tf = ct.tf2io(
ct.TransferFunction([Kp, Ki], [1, 0.01*Ki/Kp]),
name='control', inputs='u', outputs='y')
cruise_tf = ct.InterconnectedSystem(
(vehicle, control_tf), name='cruise',
connections = [('control.u', '-vehicle.v'), ('vehicle.u', 'control.y')],
inplist = ('control.u', 'vehicle.gear', 'vehicle.theta'), inputs = ('vref', 'gear', 'theta'),
outlist = ('vehicle.v', 'vehicle.u'), outputs = ('v', 'u'))
# Define the time and input vectors
T = np.linspace(0, 25, 101)
vref = 20 * np.ones(T.shape)
gear = 4 * np.ones(T.shape)
theta0 = np.zeros(T.shape)
# Now simulate the effect of a hill at t = 5 seconds
plt.figure()
plt.suptitle('Response to change in road slope')
theta_hill = np.array([
0 if t <= 5 else
4./180. * pi * (t-5) if t <= 6 else
4./180. * pi for t in T])
subplots = [None, None]
linecolor = ['red', 'blue', 'green']
handles = []
for i, m in enumerate([1200, 1600, 2000]):
# Compute the equilibrium state for the system
X0, U0 = ct.find_eqpt(
cruise_tf, [vref[0], 0], [vref[0], gear[0], theta0[0]],
iu=[1, 2], y0=[vref[0], 0], iy=[0], params={'m':m})
t, y = ct.input_output_response(
cruise_tf, T, [vref, gear, theta_hill], X0, params={'m':m})
subplots = cruise_plot(cruise_tf, t, y, t_hill=5, subplots=subplots,
linetype=linecolor[i][0] + '-')
handles.append(mlines.Line2D([], [], color=linecolor[i], linestyle='-',
label="m = %d" % m))
# Add labels to the plots
plt.sca(subplots[0])
plt.ylabel('Speed [m/s]')
plt.legend(handles=handles, frameon=False, loc='lower right');
plt.sca(subplots[1])
plt.ylabel('Throttle')
plt.xlabel('Time [s]');
def pi_update(t, x, u, params={}):
# Get the controller parameters that we need
ki = params.get('ki', 0.1)
kaw = params.get('kaw', 2) # anti-windup gain
# Assign variables for inputs and states (for readability)
v = u[0] # current velocity
vref = u[1] # reference velocity
z = x[0] # integrated error
# Compute the nominal controller output (needed for anti-windup)
u_a = pi_output(t, x, u, params)
# Compute anti-windup compensation (scale by ki to account for structure)
u_aw = kaw/ki * (np.clip(u_a, 0, 1) - u_a) if ki != 0 else 0
# State is the integrated error, minus anti-windup compensation
return (vref - v) + u_aw
def pi_output(t, x, u, params={}):
# Get the controller parameters that we need
kp = params.get('kp', 0.5)
ki = params.get('ki', 0.1)
# Assign variables for inputs and states (for readability)
v = u[0] # current velocity
vref = u[1] # reference velocity
z = x[0] # integrated error
# PI controller
return kp * (vref - v) + ki * z
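# Illustrative check with made-up numbers: outside saturation the anti-windup
# term u_aw vanishes and pi_update reduces to the plain integrator error vref - v.
print(pi_update(0., [0.], [20., 20.5]))  # expect 0.5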
control_pi = ct.NonlinearIOSystem(
pi_update, pi_output, name='control',
inputs = ['v', 'vref'], outputs = ['u'], states = ['z'],
params = {'kp':0.5, 'ki':0.1})
# Create the closed loop system
cruise_pi = ct.InterconnectedSystem(
(vehicle, control_pi), name='cruise',
connections=(
('vehicle.u', 'control.u'),
('control.v', 'vehicle.v')),
inplist=('control.vref', 'vehicle.gear', 'vehicle.theta'),
outlist=('control.u', 'vehicle.v'), outputs=['u', 'v'])
# Compute the equilibrium throttle setting for the desired speed
X0, U0, Y0 = ct.find_eqpt(
cruise_pi, [vref[0], 0], [vref[0], gear[0], theta0[0]],
y0=[0, vref[0]], iu=[1, 2], iy=[1], return_y=True)
# Now simulate the effect of a hill at t = 5 seconds
plt.figure()
plt.suptitle('Car with cruise control encountering sloping road')
theta_hill = [
0 if t <= 5 else
4./180. * pi * (t-5) if t <= 6 else
4./180. * pi for t in T]
t, y = ct.input_output_response(
cruise_pi, T, [vref, gear, theta_hill], X0)
cruise_plot(cruise_pi, t, y, t_hill=5);
plt.figure()
plt.suptitle('Cruise control with integrator windup')
T = np.linspace(0, 50, 101)
vref = 20 * np.ones(T.shape)
theta_hill = [
0 if t <= 5 else
6./180. * pi * (t-5) if t <= 6 else
6./180. * pi for t in T]
t, y = ct.input_output_response(
cruise_pi, T, [vref, gear, theta_hill], X0,
params={'kaw':0})
cruise_plot(cruise_pi, t, y, label='Commanded', t_hill=5,
antiwindup=True, legend=True);
plt.figure()
plt.suptitle('Cruise control with integrator anti-windup protection')
t, y = ct.input_output_response(
cruise_pi, T, [vref, gear, theta_hill], X0,
params={'kaw':2.})
cruise_plot(cruise_pi, t, y, label='Commanded', t_hill=5,
antiwindup=True, legend=True);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TV Script Generation
Step3: Explore the Data
Step6: Implement Preprocessing Functions
Step9: Tokenize Punctuation
Step11: Preprocess all the data and save it
Step13: Check Point
Step15: Build the Neural Network
Step18: Input
Step21: Build RNN Cell and Initialize
Step24: Word Embedding
Step27: Build RNN
Step30: Build the Neural Network
Step33: Batches
Step35: Neural Network Training
Step37: Build the Graph
Step39: Train
Step41: Save Parameters
Step43: Checkpoint
Step46: Implement Generate Functions
Step49: Choose Word
Step51: Generate TV Script
|
<ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
# TODO: Implement Function
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
return vocab_to_int, int_to_vocab
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
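# Illustrative usage on a toy corpus (not part of the graded cells):
v2i, i2v = create_lookup_tables(['the', 'cat', 'sat', 'the'])
print(v2i, i2v)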
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
return {
'.': '||Period||',
',': '||Comma||',
'"': '||QuotationMark||',
';': '||Semicolon||',
'!': '||Exclamationmark||',
'?': '||Questionmark||',
'(': '||LeftParentheses||',
')': '||RightParentheses||',
'--': '||Dash||',
'\n': '||Return||',
}
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
inputs = tf.placeholder(tf.int32, shape=[None, None], name="input")
targets = tf.placeholder(tf.int32, shape=[None, None])
learning_rate = tf.placeholder(tf.float32)
return inputs, targets, learning_rate
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
def get_init_cell(batch_size, rnn_size):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
n_layers = 1
# keep_prob = 0.5
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
# dorp = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob = keep_prob)
cell = tf.contrib.rnn.MultiRNNCell([lstm] * n_layers)
initial_state = tf.identity(cell.zero_state(batch_size, tf.float32), name="initial_state")
return cell, initial_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
# TODO: Implement Function
embeddings = tf.Variable(tf.random_uniform([vocab_size, embed_dim], -1.0, 1.0))
embed = tf.nn.embedding_lookup(embeddings, input_data)
return embed
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
# TODO: Implement Function
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
final_state = tf.identity(final_state, name="final_state")
return outputs, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
embed = get_embed(input_data, vocab_size, embed_dim)
outputs, final_state = build_rnn(cell, embed)
logits = tf.contrib.layers.fully_connected(outputs, vocab_size,
weights_initializer=tf.truncated_normal_initializer(mean=0.0,stddev=0.01),
biases_initializer=tf.zeros_initializer(),
activation_fn=None)
# logits = tf.nn.sigmoid(logits)
return logits, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
n_batches = len(int_text) // (batch_size * seq_length)
# Drop the last few characters to make only full batches
xdata = np.array(int_text[: n_batches * batch_size * seq_length])
ydata = np.array(int_text[1: n_batches * batch_size * seq_length + 1])
x_batches = np.split(xdata.reshape(batch_size, -1), n_batches, 1)
y_batches = np.split(ydata.reshape(batch_size, -1), n_batches, 1)
return np.array(list(zip(x_batches, y_batches)))
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
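# Illustrative (not part of the graded cells): on a toy sequence the target
# batches are the input batches shifted by one word.
print(get_batches(list(range(13)), 2, 3))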
# Number of Epochs
num_epochs = 30
# Batch Size
batch_size = 500
# RNN Size
rnn_size = 500
# Embedding Dimension Size
embed_dim = 300
# Sequence Length
seq_length = 10
# Learning Rate
learning_rate = 0.03
# Show stats for every n number of batches
show_every_n_batches = 10
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
# TODO: Implement Function
return (loaded_graph.get_tensor_by_name('input:0'),
loaded_graph.get_tensor_by_name('initial_state:0'),
loaded_graph.get_tensor_by_name('final_state:0'),
loaded_graph.get_tensor_by_name('probs:0'))
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
def pick_word(probabilities, int_to_vocab):
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
choice = np.random.choice(len(int_to_vocab), 1, p=probabilities)
return int_to_vocab[choice[0]]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We demonstrate sampling from this as follows,
Step2: We now define our parametrized function $f(\theta, x)$, and choose a random initial value for the parameter $\theta$.
Step3: For the internal optimizer, which will fit $\theta$, we will use RMSProp. For the external optimizer, which we will use to fit the learning rate, we will use Adam. In optax, we must use optax.inject_hyperparams in order to allow the outer optimizer to modify the learning rate of the inner optimizer.
Step4: In the following code, we implement a step of gradient descent using the computed loss.
Step5: For the meta-learning part of the problem, we will use the inner step to compute an outer loss value, and an outer step.
Step6: In the following, we put all of the code above together in order to fit a value for $\theta$.
Step7: We can now plot the learning rates and values for $\theta$ that we computed during our optimization,
|
<ASSISTANT_TASK:>
Python Code:
from typing import Callable, Iterator, Tuple
import chex
import jax
import jax.numpy as jnp
import matplotlib.pyplot as plt
import numpy as np
import optax
def generator() -> Iterator[Tuple[chex.Array, chex.Array]]:
rng = jax.random.PRNGKey(0)
while True:
rng, k1, k2 = jax.random.split(rng, num=3)
x = jax.random.uniform(k1, minval=0.0, maxval=10.0)
y = 10.0 * x + jax.random.normal(k2)
yield x, y
g = generator()
for _ in range(5):
x, y = next(g)
print(f"Sampled y = {y:.3f}, x = {x:.3f}")
def f(theta: chex.Array, x: chex.Array) -> chex.Array:
return x * theta
theta = jax.random.normal(jax.random.PRNGKey(42))
init_learning_rate = jnp.array(0.1)
meta_learning_rate = jnp.array(0.03)
opt = optax.inject_hyperparams(optax.rmsprop)(learning_rate=init_learning_rate)
meta_opt = optax.adam(learning_rate=meta_learning_rate)
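# Illustrative: inject_hyperparams exposes the learning rate inside the
# optimizer state, which is what lets the outer loop overwrite it below.
print(opt.init(jnp.zeros(())).hyperparams['learning_rate'])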
def loss(theta, x, y):
return optax.l2_loss(y, f(theta, x))
def step(theta, state, x, y):
grad = jax.grad(loss)(theta, x, y)
updates, state = opt.update(grad, state)
theta = optax.apply_updates(theta, updates)
return theta, state
@jax.jit
def outer_loss(eta, theta, state, samples):
state.hyperparams['learning_rate'] = jax.nn.sigmoid(eta)
for x, y in samples[:-1]:
theta, state = step(theta, state, x, y)
x, y = samples[-1]
return loss(theta, x, y), (theta, state)
@jax.jit
def outer_step(eta, theta, meta_state, state, samples):
grad, (theta, state) = jax.grad(
outer_loss, has_aux=True)(eta, theta, state, samples)
meta_updates, meta_state = meta_opt.update(grad, meta_state)
eta = optax.apply_updates(eta, meta_updates)
return eta, theta, meta_state, state
state = opt.init(theta)
# inverse sigmoid, to match the value we initialized the inner optimizer with.
eta = -np.log(1. / init_learning_rate - 1)
meta_state = meta_opt.init(eta)
N = 7
learning_rates = []
thetas = []
for i in range(2000):
samples = [next(g) for i in range(N)]
eta, theta, meta_state, state = outer_step(eta, theta, meta_state, state, samples)
learning_rates.append(jax.nn.sigmoid(eta))
thetas.append(theta)
fig, (ax1, ax2) = plt.subplots(2);
fig.suptitle('Meta-learning RMSProp\'s learning rate');
plt.xlabel('Step');
ax1.semilogy(range(len(learning_rates)), learning_rates);
ax1.set(ylabel='Learning rate');
ax1.label_outer();
plt.xlabel('Number of updates');
ax2.semilogy(range(len(thetas)), thetas);
ax2.label_outer();
ax2.set(ylabel='Theta');
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let us import annotations
Step2: Candidates have two classes, one with nodules, one without
Step3: Classes are heavily unbalanced; barely 0.2% of candidates are positive.
Step4: Check if my class works
Step5: Try it on a test set you know works
Step6: Ok the class to get image data works
Step7: Now split it into train and test sets
Step8: Create a validation dataset
Step9: Focus on training data
Step10: There are 845 positive cases out of 5187 cases in the training set. We will need to augment the positive dataset like mad.
Step11: Preprocessing
Step12: Convnet stuff
Step13: Loading image data on the fly is inefficient, so I am using an HDF5 dataset instead.
Step14: loading tflearn packages
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import os
import glob
import SimpleITK as sitk
from PIL import Image
from scipy.misc import imread
%matplotlib inline
from IPython.display import clear_output
pd.options.mode.chained_assignment = None
annotations = pd.read_csv('../src/data/annotations.csv')
candidates = pd.read_csv('../src/data/candidates.csv')
annotations.head()
candidates['class'].sum()
len(annotations)
candidates.info()
print len(candidates[candidates['class'] == 1])
print len(candidates[candidates['class'] == 0])
import multiprocessing
num_cores = multiprocessing.cpu_count()
print num_cores
class CTScan(object):
def __init__(self, filename = None, coords = None):
self.filename = filename
self.coords = coords
self.ds = None
self.image = None
def reset_coords(self, coords):
self.coords = coords
def read_mhd_image(self):
path = glob.glob('../data/raw/*/'+ self.filename + '.mhd')
self.ds = sitk.ReadImage(path[0])
self.image = sitk.GetArrayFromImage(self.ds)
def get_resolution(self):
return self.ds.GetSpacing()
def get_origin(self):
return self.ds.GetOrigin()
def get_ds(self):
return self.ds
def get_voxel_coords(self):
origin = self.get_origin()
resolution = self.get_resolution()
voxel_coords = [np.absolute(self.coords[j]-origin[j])/resolution[j] \
for j in range(len(self.coords))]
return tuple(voxel_coords)
def get_image(self):
return self.image
def get_subimage(self, width):
self.read_mhd_image()
x, y, z = self.get_voxel_coords()
        # Cast voxel coordinates to ints before using them as array indices.
        x, y, z = int(x), int(y), int(z)
        subImage = self.image[z, y-width/2:y+width/2, x-width/2:x+width/2]
return subImage
def normalizePlanes(self, npzarray):
maxHU = 400.
minHU = -1000.
npzarray = (npzarray - minHU) / (maxHU - minHU)
npzarray[npzarray>1] = 1.
npzarray[npzarray<0] = 0.
return npzarray
def save_image(self, filename, width):
image = self.get_subimage(width)
image = self.normalizePlanes(image)
Image.fromarray(image*255).convert('L').save(filename)
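# Illustrative numeric check (made-up values) of the world->voxel mapping in
# get_voxel_coords: voxel index = |world - origin| / spacing per axis.
print np.absolute(100.0 - (-200.0)) / 0.7  # ~428.57 voxel units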
positives = candidates[candidates['class']==1].index
negatives = candidates[candidates['class']==0].index
scan = CTScan(np.asarray(candidates.iloc[negatives[600]])[0], \
np.asarray(candidates.iloc[negatives[600]])[1:-1])
scan.read_mhd_image()
x, y, z = scan.get_voxel_coords()
image = scan.get_image()
dx, dy, dz = scan.get_resolution()
x0, y0, z0 = scan.get_origin()
filename = '1.3.6.1.4.1.14519.5.2.1.6279.6001.100398138793540579077826395208'
coords = (70.19, -140.93, 877.68)#[877.68, -140.93, 70.19]
scan = CTScan(filename, coords)
scan.read_mhd_image()
x, y, z = scan.get_voxel_coords()
image = scan.get_image()
dx, dy, dz = scan.get_resolution()
x0, y0, z0 = scan.get_origin()
positives
np.random.seed(42)
negIndexes = np.random.choice(negatives, len(positives)*5, replace = False)
candidatesDf = candidates.iloc[list(positives)+list(negIndexes)]
from sklearn.cross_validation import train_test_split
X = candidatesDf.iloc[:,:-1]
y = candidatesDf.iloc[:,-1]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.20, random_state = 42)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size = 0.20, random_state = 42)
len(X_train)
X_train.to_pickle('traindata')
X_test.to_pickle('testdata')
X_val.to_pickle('valdata')
def normalizePlanes(npzarray):
maxHU = 400.
minHU = -1000.
npzarray = (npzarray - minHU) / (maxHU - minHU)
npzarray[npzarray>1] = 1.
npzarray[npzarray<0] = 0.
return npzarray
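# Illustrative: HU values are windowed to [-1000, 400] and rescaled to [0, 1],
# so air maps to 0 and dense bone saturates at 1.
print normalizePlanes(np.array([-1200., -1000., 0., 400., 1000.]))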
print 'number of positive cases are ' + str(y_train.sum())
print 'total set size is ' + str(len(y_train))
print 'percentage of positive cases are ' + str(y_train.sum()*1.0/len(y_train))
tempDf = X_train[y_train == 1]
tempDf = tempDf.set_index(X_train[y_train == 1].index + 1000000)
X_train_new = X_train.append(tempDf)
tempDf = tempDf.set_index(X_train[y_train == 1].index + 2000000)
X_train_new = X_train_new.append(tempDf)
ytemp = y_train.reindex(X_train[y_train == 1].index + 1000000)
ytemp.loc[:] = 1
y_train_new = y_train.append(ytemp)
ytemp = y_train.reindex(X_train[y_train == 1].index + 2000000)
ytemp.loc[:] = 1
y_train_new = y_train_new.append(ytemp)
print len(X_train_new), len(y_train_new)
X_train_new.index
from scipy.misc import imresize
from PIL import ImageEnhance
class PreProcessing(object):
def __init__(self, image = None):
self.image = image
def subtract_mean(self):
self.image = (self.image/255.0 - 0.25)*255
return self.image
def downsample_data(self):
self.image = imresize(self.image, size = (40, 40), interp='bilinear', mode='L')
return self.image
def enhance_contrast(self):
self.image = ImageEnhance.Contrast(self.image)
return self.image
dirName = '../src/data/train/'
plt.figure(figsize = (10,10))
inp = imread(dirName + 'image_'+ str(30517) + '.jpg')
plt.subplot(221)
plt.imshow(inp)
plt.grid(False)
Pp = PreProcessing(inp)
inp2 = Pp.subtract_mean()
plt.subplot(222)
plt.imshow(inp2)
plt.grid(False)
#inp4 = Pp.enhance_contrast()
#plt.subplot(224)
#plt.imshow(inp4)
#plt.grid(False)
inp3 = Pp.downsample_data()
plt.subplot(223)
plt.imshow(inp3)
plt.grid(False)
#inp4 = Pp.enhance_contrast()
#plt.subplot(224)
#plt.imshow(inp4)
#plt.grid(False)
dirName
import tflearn
y_train_new.values.astype(int)
train_filenames =\
X_train_new.index.to_series().apply(lambda x:\
'../src/data/train/image_'+str(x)+'.jpg')
train_filenames.values.astype(str)
dataset_file = 'traindatalabels.txt'
train_filenames =\
    X_train_new.index.to_series().apply(lambda x:\
    '../src/data/train/image_'+str(x)+'.jpg')
filenames = train_filenames.values.astype(str)
labels = y_train_new.values.astype(int)
traindata = np.zeros(filenames.size,\
dtype=[('var1', 'S36'), ('var2', int)])
traindata['var1'] = filenames
traindata['var2'] = labels
np.savetxt(dataset_file, traindata, fmt="%10s %d")
# Build a HDF5 dataset (only required once)
from tflearn.data_utils import build_hdf5_image_dataset
build_hdf5_image_dataset(dataset_file, image_shape=(50, 50), mode='file', output_path='traindataset.h5', categorical_labels=True, normalize=True)
# Load HDF5 dataset
import h5py
h5f = h5py.File('traindataset.h5', 'r')
X_train_images = h5f['X']
Y_train_labels = h5f['Y']
h5f2 = h5py.File('../src/data/valdataset.h5', 'r')
X_val_images = h5f2['X']
Y_val_labels = h5f2['Y']
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.conv import conv_2d, max_pool_2d
from tflearn.layers.estimator import regression
from tflearn.data_preprocessing import ImagePreprocessing
from tflearn.data_augmentation import ImageAugmentation
# Make sure the data is normalized
img_prep = ImagePreprocessing()
img_prep.add_featurewise_zero_center()
img_prep.add_featurewise_stdnorm()
# Create extra synthetic training data by flipping, rotating and blurring the
# images on our data set.
img_aug = ImageAugmentation()
img_aug.add_random_flip_leftright()
img_aug.add_random_rotation(max_angle=25.)
img_aug.add_random_blur(sigma_max=3.)
# Input is a 50x50 image with 1 color channels (grayscale)
network = input_data(shape=[None, 50, 50, 1],
data_preprocessing=img_prep,
data_augmentation=img_aug)
# Step 1: Convolution
network = conv_2d(network, 50, 3, activation='relu')
# Step 2: Max pooling
network = max_pool_2d(network, 2)
# Step 3: Convolution again
network = conv_2d(network, 64, 3, activation='relu')
# Step 4: Convolution yet again
network = conv_2d(network, 64, 3, activation='relu')
# Step 5: Max pooling again
network = max_pool_2d(network, 2)
# Step 6: Fully-connected 512 node neural network
network = fully_connected(network, 512, activation='relu')
# Step 7: Dropout - throw away some data randomly during training to prevent over-fitting
network = dropout(network, 0.5)
# Step 8: Fully-connected neural network with two outputs (0=isn't a nodule, 1=is a nodule) to make the final prediction
network = fully_connected(network, 2, activation='softmax')
# Tell tflearn how we want to train the network
network = regression(network, optimizer='adam',
loss='categorical_crossentropy',
learning_rate=0.001)
# Wrap the network in a model object
model = tflearn.DNN(network, tensorboard_verbose=0, checkpoint_path='nodule-classifier.tfl.ckpt')
# Train it! We'll do 100 training passes and monitor it as it goes.
model.fit(X_train_images, Y_train_labels, n_epoch=100, shuffle=True, validation_set=(X_val_images, Y_val_labels),
show_metric=True, batch_size=96,
snapshot_epoch=True,
run_id='nodule-classifier')
# Save model when training is complete to a file
model.save("nodule-classifier.tfl")
print("Network trained and saved as nodule-classifier.tfl!")
h5f2 = h5py.File('../src/data/testdataset.h5', 'r')
X_test_images = h5f2['X']
Y_test_labels = h5f2['Y']
model.predict(X_test_images)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Get Daily price data for Caltex.
Step2: Transform to Weekly
Step3: All Ords Moving Average
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import pandas.io.data as web
import datetime
# date ranges
end = datetime.date(2015, 1, 26)
start = end + datetime.timedelta(weeks=-21)
start
# Get daily price data for Caltex
ctx = web.DataReader('ctx.ax', 'yahoo', start, end)
ctx
# resample to weekly ohlc data starting on Monday
weekly = pd.DataFrame()
weekly["Open"] = ctx['Open'].resample('W-FRI', how='first')
weekly["High"] = ctx['High'].resample('W-FRI', how='max')
weekly["Low"] = ctx['Low'].resample('W-FRI', how='min')
weekly["Close"] = ctx['Close'].resample('W-FRI', how='last')
weekly["Volume"] = ctx['Volume'].resample('W-FRI', how='mean')
weekly
# Work out 20 week Rate of Change in price.
round((weekly["Close"][-1] - weekly["Close"][-21]) / weekly["Close"][-21], 2)
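# Equivalent (illustrative): pct_change computes the same 20-week rate of
# change directly.
round(weekly["Close"].pct_change(periods=20).iloc[-1], 2)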
# Work out 20 week high of closing prices
weekly["Close"][-21:-1]
weekly["Close"][-21:-1].max()
# date ranges
end = datetime.date(2015, 1, 26)
start = end + datetime.timedelta(weeks=-12)
all_ords = web.DataReader('^AORD', 'yahoo', start, end)
all_ords
weekly_ords = pd.DataFrame()
weekly_ords["Close"] = all_ords["Close"].resample("W-FRI", how="last")
weekly_ords
# 10 week moving average
pd.rolling_mean(weekly_ords["Close"], 10)
weekly_ords.plot()
nums = pd.Series([1,2,3,4,5,6,7,8,9,10,11])
pd.rolling_mean(nums,5)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We're going to use some examples from the Keras examples collection.
Step2: Typically it's good practice to specify your parameters together
Step3: In this case we already know something about the shape of the input data!
Step4: Keras has many different backends that can be used (we're using TensorFlow).
Step5: As before we'll set our data to be float32 and rescale
Step6: And yet again we're going to do the same thing with our $y$ labels
Step7: OK now we're going to define a model with some new layers
Step8: The Conv2D and MaxPooling2D layers are new.
Step9: Now we'll compile as before.
Step10: Now fit the model
Step11: Why was that so much slower?
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
import keras
from keras.datasets import mnist # load up the training data!
from keras.models import Sequential # our model
from keras.layers import Dense, Dropout, Flatten # layers we've seen
from keras.layers import Conv2D, MaxPooling2D # new layers
from keras import backend as K # see later
batch_size = 128
num_classes = 10
epochs = 12
img_rows, img_cols = 28, 28
(x_train, y_train), (x_test, y_test) = mnist.load_data()
if K.image_data_format() == 'channels_first':
x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
input_shape = (1, img_rows, img_cols)
else:
x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
activation='relu',
input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
SVG(model_to_dot(model).create(prog='dot', format='svg'))
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adadelta(),
metrics=['accuracy'])
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_data=(x_test, y_test))
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
model.summary()
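# Quick sanity check of the trained network (a small extra, not part of the
# original walkthrough): compare predicted and true labels for a few test digits.
probs = model.predict(x_test[:5])
print('predicted:', probs.argmax(axis=1))
print('actual:   ', y_test[:5].argmax(axis=1))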
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load a lens file
Step2: Perform a quick-focus
Step3: Example of a Layout plot
Step4: Why do we need to set gamma?
Step5: Now that we have the pixel array, we can either use the convenience function provided in PyZDDE to make a quick plot, or make our own figure and plot as we want it.
Step6: Next, we will create a figure and direct PyZDDE to render the Layout plot in the provided figure and axes. We can then annotate the figure as we like.
Step7: Example of Ray Fan plot
Step8: Example of Spot diagram
Step9: Examples of using ipzCaptureWindowLQ() function in Zemax 13.2 or earlier
Step10: Note that the above command didn't work, because we need to push the lens from the DDE server to the Zemax main window first. Then we also need to open each window.
Step11: Now open the layout analysis window in Zemax. Assuming that this is the first analysis window that has been opened, Zemax would have assigned the number 1 to it.
Step12: Open the MTF analysis window in Zemax now.
Step13: Examples of using ipzCaptureWindowLQ() function in Zemax 14 or later (OpticStudio)
Step14: Now open the layout analysis window in OpticStudio as before.
Step15: Open FFT MTF analysis window
Step16: Next, the FFT PSF analysis window was opened
Step17: A few others .... just for show
|
<ASSISTANT_TASK:>
Python Code:
import os
import matplotlib.pyplot as plt
import pyzdde.zdde as pyz
%matplotlib inline
l = pyz.createLink() # create a DDE link object for communication
zfile = os.path.join(l.zGetPath()[1], 'Sequential', 'Objectives', 'Cooke 40 degree field.zmx')
l.zLoadFile(zfile)
l.zQuickFocus()
l.ipzCaptureWindow('Lay', percent=15, gamma=0.4)
arr = l.ipzCaptureWindow('Lay', percent=15, gamma=0.08, retArr=True)
pyz.imshow(arr, cropBorderPixels=(5, 5, 1, 90), figsize=(10,10), title='Layout Plot')
l.ipzGetFirst()
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111)
# Render the array
pyz.imshow(arr, cropBorderPixels=(5, 5, 1, 90), fig=fig, faxes=ax)
ax.set_title('Layout plot', fontsize=16)
# Annotate Lens numbers
ax.text(41, 70, "L1", fontsize=12)
ax.text(98, 105, "L2", fontsize=12)
ax.text(149, 89, "L3", fontsize=12)
# Annotate the lens with radius of curvature information
col = (0.08,0.08,0.08)
s1_r = 1.0/l.zGetSurfaceData(1,2)
ax.annotate("{:0.2f}".format(s1_r), (37, 232), (8, 265), fontsize=12,
arrowprops=dict(arrowstyle="->", linewidth=0.45, color=col, relpos=(0.5,0.5)))
s2_r = 1.0/l.zGetSurfaceData(2,2)
ax.annotate("{:0.2f}".format(s2_r), (47, 232), (50, 265), fontsize=12,
arrowprops=dict(arrowstyle="->", linewidth=0.45, color=col, relpos=(0.5,0.5)))
s6_r = 1.0/l.zGetSurfaceData(6,2)
ax.annotate("{:0.2f}".format(s6_r), (156, 218), (160, 251), fontsize=12,
arrowprops=dict(arrowstyle="->", linewidth=0.45, color=col, relpos=(0.5,0.5)))
ax.text(5, 310, "Cooke Triplet, EFL = {} mm, F# = {}, Total track length = {} mm"
.format(50, 5, 60.177), fontsize=14)
plt.show()
l.ipzCaptureWindow('Ray', percent=17, gamma=0.55)
rarr = l.ipzCaptureWindow('Ray', percent=25, gamma=0.15, retArr=True)
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111)
pyz.imshow(rarr, cropBorderPixels=(5, 5, 48, 170), fig=fig, faxes=ax)
ax.set_title('Transverse Ray Fan Plot for OBJ: 20.00 (deg)', fontsize=14)
plt.show()
l.ipzCaptureWindow('Spt', percent=16, gamma=0.5)
sptd = l.ipzCaptureWindow('Spt', percent=25, gamma=0.15, retArr=True)
fig = plt.figure(figsize=(8,8))
ax = fig.add_subplot(111)
pyz.imshow(sptd, cropBorderPixels=(150, 150, 30, 180), fig=fig, faxes=ax)
ax.set_title('Spot diagram for OBJ: 20.00 (deg)', fontsize=14)
plt.show()
l.zSetMacroPath(r"C:\PROGRAMSANDEXPERIMENTS\ZEMAX\Macros")
l.ipzCaptureWindowLQ(1)
l.zPushLens()
l.ipzCaptureWindowLQ(1)
l.ipzCaptureWindowLQ(2)
pyz.closeLink()
l = pyz.createLink()
zfile = os.path.join(l.zGetPath()[1], 'Sequential', 'Objectives', 'Cooke 40 degree field.zmx')
l.zLoadFile(zfile)
l.zPushLens()
# Set the macro path
l.zSetMacroPath(r"C:\PROGRAMSANDEXPERIMENTS\ZEMAX\Macros")
l.ipzCaptureWindowLQ(1)
l.ipzCaptureWindowLQ(2)
l.ipzCaptureWindowLQ(3)
l.ipzCaptureWindowLQ(4) # Shaded Model
l.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: If we use a simple binomial model, which assumes independent samples from a binomial distribution with probability of mortality $p$, we can use MLE to obtain an estimate of this probability.
Step2: However, if we compare the variation of $y$ under this model, it is too small relative to the observed variation
Step3: Hence, the data are strongly overdispersed relative to what is predicted under a model with a fixed probability of death. A more realistic model would allow for these probabilities to vary among the cities. One way of representing this is conjugating the binomial distribution with another distribution that describes the variation in the binomial probability. A sensible choice for this is the beta distribution
Step4: Now, by multiplying these quantities together, we can obtain a non-normalized posterior.
Step5: To deal with the extreme skewness in the precision parameter $K$ and to facilitate modeling, we can transform the beta-binomial parameters to the real line via $\theta_1 = \log(K)$ and $\theta_2 = \text{logit}(\eta) = \log[\eta/(1-\eta)]$, including the corresponding Jacobian terms in the log-posterior
Step6: Approximation Methods
Step7: Thus, our approximated mode is $\log(K)=7.6$, $\text{logit}(\eta)=-6.8$. We can plug this value, along with the variance-covariance matrix, into a function that returns the kernel of a multivariate normal distribution, and use this to plot the approximate posterior
Step8: Along with this, we can estimate a 95% probability interval for the estimated mode
Step9: Of course, this approximation is only reasonable for posteriors that are not strongly skewed, bimodal, or leptokurtic (heavy-tailed).
Step10: This approach is useful, for example, in estimating the normalizing constant for posterior distributions.
Step11: Finally, we need an implementation of the multivariate T probability distribution function, which is as follows
Step12: The next step is to find the constant $c$ such that $\log f(\theta) - \log g(\theta) + c \le 0$ for all $\theta$, so that the acceptance probability $\exp[\log f(\theta) - \log g(\theta) + c]$ never exceeds one
Step13: We can calculate an appropriate value of $c'$ by simply using the approximation method described above on calc_diff (tweaked to produce a negative value for minimization)
Step14: Now we can execute a rejection sampling algorithm
Step15: Notice that the efficiency of rejection sampling is not very high for this problem.
Step16: Rejection sampling is usually subject to declining performance as the dimension of the parameter space increases. Further improvement is gained by using optimized algorithms such as importance sampling which, as the name implies, samples more frequently from important areas of the distribution.
Step17: We can obtain the probability of these values under the posterior density
Step18: and under the T distribution
Step19: This allows us to calculate the importance weights
Step20: Notice that we have subtracted the maximum value of the differences, which stabilizes the weights numerically; the normalization itself happens when we divide by their sum.
Step21: Finally, the standard error of the estimates
Step22: Sampling Importance Resampling
Step23: The choice function in numpy.random can be used to generate a random sample from an arbitrary 1-D array.
Step24: One advantage of this approach is that one can easily extract a posterior probability interval for each parameter, simply by extracting quantiles from the resampled values.
Step25: Exercise
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
cancer = pd.read_csv('../data/cancer.csv')
cancer
ytotal, ntotal = cancer.sum().astype(float)
p_hat = ytotal/ntotal
p_hat
p_hat*(1.-p_hat)*ntotal
cancer.y.var()
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
fig, axes = plt.subplots(1, 2, figsize=(10,4))
K_x = np.linspace(0, 10)
K_prior = lambda K: 1./(1. + K)**2
axes[0].plot(K_x, K_prior(K_x))
axes[0].set_xlabel('K')
axes[0].set_ylabel('p(K)')
eta_x = np.linspace(0, 1)
eta_prior = lambda eta: 1./(eta*(1.-eta))
axes[1].plot(eta_x, eta_prior(eta_x))
axes[1].set_xlabel(r'$\eta$')
axes[1].set_ylabel(r'p($\eta$)')
from scipy.special import betaln
def betabin_post(params, n, y):
K, eta = params
post = betaln(K*eta + y, K*(1.-eta) + n - y).sum()
post -= len(y)*betaln(K*eta, K*(1.-eta))
post -= np.log(eta*(1.-eta))
post -= 2.*np.log(1.+K)
return post
betabin_post((15000, 0.003), cancer.n, cancer.y)
# Create grid
K_x = np.linspace(1, 20000)
eta_x = np.linspace(0.0001, 0.003)
# Calculate posterior on grid
z = np.array([[betabin_post((K, eta), cancer.n, cancer.y)
for eta in eta_x] for K in K_x])
# Plot posterior
x, y = np.meshgrid(eta_x, K_x)
cplot = plt.contour(x, y, z-z.max(), [-0.5, -1, -2, -3, -4], cmap=plt.cm.RdBu)
plt.ylabel('K');plt.xlabel('$\eta$');
def betabin_trans(theta, n, y):
K = np.exp(theta[0])
eta = 1./(1. + np.exp(-theta[1]))
post = betaln(K*eta + y, K*(1.-eta) + n - y).sum()
post -= len(y)*betaln(K*eta, K*(1.-eta))
post += theta[0]
post -= 2.*np.log(1.+np.exp(theta[0]))
return post
betabin_trans((10, -7.5), cancer.n, cancer.y)
# Create grid
log_K_x = np.linspace(0, 20)
logit_eta_x = np.linspace(-8, -5)
# Calculate posterior on grid
z = np.array([[betabin_trans((t1, t2), cancer.n, cancer.y)
for t2 in logit_eta_x] for t1 in log_K_x])
# Plot posterior
x, y = np.meshgrid(logit_eta_x, log_K_x)
cplot = plt.contour(x, y, z - z.max(), [-0.5, -1, -2, -4, -8], cmap=plt.cm.RdBu)
plt.clabel(cplot, inline=1, fontsize=10, fmt='%1.1f')
plt.ylabel('log(K)');plt.xlabel('logit($\eta$)');
from scipy.optimize import fmin_bfgs
betabin_trans_min = lambda *args: -betabin_trans(*args)
init_value = (10, -7.5)
opt = fmin_bfgs(betabin_trans_min, init_value,
args=(cancer.n, cancer.y), full_output=True)
mode, var = opt[0], opt[3]
mode, var
det = np.linalg.det
inv = np.linalg.inv
def lmvn(value, mu, Sigma):
    # Log kernel of a multivariate normal density (up to an additive constant)
    delta = np.array(value) - mu
    return -0.5 * (np.log(det(Sigma)) + np.dot(delta.T, np.dot(inv(Sigma), delta)))
z = np.array([[lmvn((t1, t2), mode, var)
for t2 in logit_eta_x] for t1 in log_K_x])
x, y = np.meshgrid(logit_eta_x, log_K_x)
cplot = plt.contour(x, y, z - z.max(), cmap=plt.cm.RdBu)
plt.ylabel('log(K)');plt.xlabel('logit($\eta$)');
from scipy.stats.distributions import norm
se = np.sqrt(np.diag(var))
mode[0] + norm.ppf(0.025)*se[0], mode[0] + norm.ppf(0.975)*se[0]
mode[1] + norm.ppf(0.025)*se[1], mode[1] + norm.ppf(0.975)*se[1]
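# The intervals above live on the transformed (log K, logit eta) scale; as an
# added illustration, map them back to the original parameters with exp() and
# the inverse logit.
inv_logit = lambda t: 1./(1. + np.exp(-t))
logK_int = np.array([mode[0] + norm.ppf(0.025)*se[0], mode[0] + norm.ppf(0.975)*se[0]])
logit_eta_int = np.array([mode[1] + norm.ppf(0.025)*se[1], mode[1] + norm.ppf(0.975)*se[1]])
np.exp(logK_int), inv_logit(logit_eta_int)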
def rtriangle(low, high, mode):
alpha = -1
while np.random.random() > alpha:
u = np.random.uniform(low, high)
if u < mode:
alpha = (u - low) / (mode - low)
else:
alpha = (high - u) / (high - mode)
return(u)
_ = plt.hist([rtriangle(0, 7, 2) for t in range(10000)], bins=100)
chi2 = np.random.chisquare
mvn = np.random.multivariate_normal
rmvt = lambda nu, S, mu=0, size=1: (np.sqrt(nu) * (mvn(np.zeros(len(S)), S, size).T
/ chi2(nu, size))).T + mu
from scipy.special import gammaln
def mvt(x, nu, S, mu=0):
d = len(S)
n = len(x)
X = np.atleast_2d(x) - mu
Q = X.dot(np.linalg.inv(S)).dot(X.T).sum()
log_det = np.log(np.linalg.det(S))
log_pdf = gammaln((nu + d)/2.) - 0.5 * (d*np.log(np.pi*nu) + log_det) - gammaln(nu/2.)
log_pdf -= 0.5*(nu + d)*np.log(1 + Q/nu)
return(np.exp(log_pdf))
def calc_diff(theta, n, y, nu, S, mu):
return betabin_trans(theta, n, y) - np.log(mvt(theta, nu, S, mu))
calc_diff_min = lambda *args: -calc_diff(*args)
opt = fmin_bfgs(calc_diff_min,
(12, -7),
args=(cancer.n, cancer.y, 4, 2*var, mode),
full_output=True)
c = opt[1]
c
def reject(post, nu, S, mu, n, data, c):
k = len(mode)
# Draw samples from g(theta)
theta = rmvt(nu, S, mu, size=n)
# Calculate probability under g(theta)
gvals = np.array([np.log(mvt(t, nu, S, mu)) for t in theta])
# Calculate probability under f(theta)
fvals = np.array([post(t, data.n, data.y) for t in theta])
# Calculate acceptance probability
p = np.exp(fvals - gvals + c)
return theta[np.random.random(n) < p]
nsamples = 1000
sample = reject(betabin_trans, 4, var, mode, nsamples, cancer, c)
z = np.array([[betabin_trans((t1, t2), cancer.n, cancer.y)
for t2 in logit_eta_x] for t1 in log_K_x])
x, y = np.meshgrid(logit_eta_x, log_K_x)
cplot = plt.contour(x, y, z - z.max(), [-0.5, -1, -2, -4, -8], cmap=plt.cm.RdBu)
plt.clabel(cplot, inline=1, fontsize=10, fmt='%1.1f')
plt.ylabel('log(K)');plt.xlabel('logit($\eta$)')
plt.scatter(*sample.T[[1,0]])
float(sample.size)/nsamples
theta = rmvt(4, var, mode, size=1000)
f_theta = np.array([betabin_trans(t, cancer.n, cancer.y) for t in theta])
q_theta = mvt(theta, 4, var, mode)
w = np.exp(f_theta - q_theta - max(f_theta - q_theta))
theta_si = [(w*t).sum()/w.sum() for t in theta.T]
theta_si
se = [np.sqrt((((theta.T[i] - theta_si[i])* w)**2).sum()/w.sum()) for i in (0,1)]
se
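# A standard diagnostic for importance sampling (added here as an aside): the
# effective sample size implied by the weights, ESS = (sum w)^2 / sum(w^2).
# Values far below the number of draws mean a few draws dominate the estimate.
ess = w.sum()**2 / (w**2).sum()
ess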
p_sir = w/w.sum()
theta_sir = theta[np.random.choice(range(len(theta)), size=10000, p=p_sir)]
fig, axes = plt.subplots(2)
_ = axes[0].hist(theta_sir.T[0], bins=30)
_ = axes[1].hist(theta_sir.T[1], bins=30)
logK_sample = theta_sir[:,0]
logK_sample.sort()
logK_sample[[250, 9750]]
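# As an aside, the same slicing gives a 95% interval for logit(eta); the
# inverse logit maps it back to the probability scale.
logit_eta_sample = np.sort(theta_sir[:, 1])
interval = logit_eta_sample[[250, 9750]]
interval, 1./(1. + np.exp(-interval))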
# Write your answer here
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Get the Data
Step2: Check the head of customers, and check out its info() and describe() methods.
Step3: Exploratory Data Analysis
Step4: Do the same but with the Time on App column instead.
Step5: Use jointplot to create a 2D hex bin plot comparing Time on App and Length of Membership.
Step6: Let's explore these types of relationships across the entire data set. Use pairplot to recreate the plot below. (Don't worry about the colors)
Step7: Create a linear model plot (using seaborn's lmplot) of Yearly Amount Spent versus Length of Membership, then look at the available columns before building the feature matrix.
Step8: Training and Testing Data
Step9: Use model_selection.train_test_split from sklearn to split the data into training and testing sets. Set test_size=0.3 and random_state=101
Step10: Training the Model
Step11: Create an instance of a LinearRegression() model named lm.
Step12: Train/fit lm on the training data.
Step13: Print out the coefficients of the model
Step14: Predicting Test Data
Step15: Create a scatterplot of the real test values versus the predicted values.
Step16: Evaluating the Model
Step17: Residuals
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy, matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
customers = pd.read_csv('Ecommerce Customers')
customers.head()
customers.describe()
customers.info()
sns.jointplot(customers['Time on Website'], customers['Yearly Amount Spent'])
sns.jointplot(customers['Time on App'], customers['Yearly Amount Spent'])
sns.jointplot(customers['Time on App'], customers['Length of Membership'], kind='hex')
sns.pairplot(data=customers)
sns.lmplot('Length of Membership', 'Yearly Amount Spent', data=customers)
customers.columns
x = customers[['Avg. Session Length', 'Time on App',
'Time on Website', 'Length of Membership']]
y = customers['Yearly Amount Spent']
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x,y, test_size=0.3, random_state=101)
x_train.shape
y_test.shape
from sklearn.linear_model import LinearRegression
lm = LinearRegression()
lm.fit(x_train, y_train)
lm.coef_
pd.DataFrame(lm.coef_, index=x_train.columns, columns=['Coefficients'])
lm.intercept_
y_predicted = lm.predict(x_test)
plt.scatter(y_test, y_predicted)
plt.title('Fitted vs predicted')
plt.xlabel('Fitted - yearly purchases')
plt.ylabel('Predicted - yearly purchases')
from sklearn.metrics import mean_absolute_error, mean_squared_error
import numpy as np
print("MAE: " + str(mean_absolute_error(y_test, y_predicted)))
print("MSE: " + str(mean_squared_error(y_test, y_predicted)))
print("RMSE: " + str(np.sqrt(mean_squared_error(y_test, y_predicted))))
sns.distplot((y_test - y_predicted), bins=50)
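# One more evaluation metric (a small addition, not required by the exercise):
# the coefficient of determination R^2 on the held-out test set.
from sklearn.metrics import r2_score
print("R^2: " + str(r2_score(y_test, y_predicted)))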
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Variables
Step7: 3. Key Properties --> Seawater Properties
Step8: 3.2. Ocean Freezing Point Value
Step9: 4. Key Properties --> Resolution
Step10: 4.2. Canonical Horizontal Resolution
Step11: 4.3. Number Of Horizontal Gridpoints
Step12: 5. Key Properties --> Tuning Applied
Step13: 5.2. Target
Step14: 5.3. Simulations
Step15: 5.4. Metrics Used
Step16: 5.5. Variables
Step17: 6. Key Properties --> Key Parameter Values
Step18: 6.2. Additional Parameters
Step19: 7. Key Properties --> Assumptions
Step20: 7.2. On Diagnostic Variables
Step21: 7.3. Missing Processes
Step22: 8. Key Properties --> Conservation
Step23: 8.2. Properties
Step24: 8.3. Budget
Step25: 8.4. Was Flux Correction Used
Step26: 8.5. Corrected Conserved Prognostic Variables
Step27: 9. Grid --> Discretisation --> Horizontal
Step28: 9.2. Grid Type
Step29: 9.3. Scheme
Step30: 9.4. Thermodynamics Time Step
Step31: 9.5. Dynamics Time Step
Step32: 9.6. Additional Details
Step33: 10. Grid --> Discretisation --> Vertical
Step34: 10.2. Number Of Layers
Step35: 10.3. Additional Details
Step36: 11. Grid --> Seaice Categories
Step37: 11.2. Number Of Categories
Step38: 11.3. Category Limits
Step39: 11.4. Ice Thickness Distribution Scheme
Step40: 11.5. Other
Step41: 12. Grid --> Snow On Seaice
Step42: 12.2. Number Of Snow Levels
Step43: 12.3. Snow Fraction
Step44: 12.4. Additional Details
Step45: 13. Dynamics
Step46: 13.2. Transport In Thickness Space
Step47: 13.3. Ice Strength Formulation
Step48: 13.4. Redistribution
Step49: 13.5. Rheology
Step50: 14. Thermodynamics --> Energy
Step51: 14.2. Thermal Conductivity
Step52: 14.3. Heat Diffusion
Step53: 14.4. Basal Heat Flux
Step54: 14.5. Fixed Salinity Value
Step55: 14.6. Heat Content Of Precipitation
Step56: 14.7. Precipitation Effects On Salinity
Step57: 15. Thermodynamics --> Mass
Step58: 15.2. Ice Vertical Growth And Melt
Step59: 15.3. Ice Lateral Melting
Step60: 15.4. Ice Surface Sublimation
Step61: 15.5. Frazil Ice
Step62: 16. Thermodynamics --> Salt
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Step65: 17.2. Constant Salinity Value
Step66: 17.3. Additional Details
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Step68: 18.2. Constant Salinity Value
Step69: 18.3. Additional Details
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Step72: 20.2. Additional Details
Step73: 21. Thermodynamics --> Melt Ponds
Step74: 21.2. Formulation
Step75: 21.3. Impacts
Step76: 22. Thermodynamics --> Snow Processes
Step77: 22.2. Snow Aging Scheme
Step78: 22.3. Has Snow Ice Formation
Step79: 22.4. Snow Ice Formation Scheme
Step80: 22.5. Redistribution
Step81: 22.6. Heat Diffusion
Step82: 23. Radiative Processes
Step83: 23.2. Ice Radiation Transmission
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'inpe', 'sandbox-2', 'seaice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In this case study we'll develop a model of Spider-Man swinging from a springy cable of webbing attached to the top of the Empire State Building. Initially, Spider-Man is at the top of a nearby building, as shown in this diagram.
Step3: Compute the initial position
Step5: Now here's a version of make_system that takes a Params object as a parameter.
Step6: Let's make a System
Step8: Drag and spring forces
Step10: And here's the 2-D version of spring force. We saw the 1-D version in Chapter 21.
Step12: Here's the slope function, including acceleration due to gravity, drag, and the spring force of the webbing.
Step13: As always, let's test the slope function with the initial conditions.
Step14: And then run the simulation.
Step15: Visualizing the results
Step16: We can plot the velocities the same way.
Step17: Another way to visualize the results is to plot y versus x. The result is the trajectory through the plane of motion.
Step18: Letting go
Step19: The final conditions from Phase 1 are the initial conditions for Phase 2.
Step20: Here is the System for Phase 2. We can turn off the spring force by setting k=0, so we don't have to write a new slope function.
Step22: Here's an event function that stops the simulation when Spider-Man reaches the ground.
Step23: Run Phase 2.
Step24: Plot the results.
Step26: Now we can gather all that into a function that takes t_release and V_0, runs both phases, and returns the results.
Step27: And here's a test run.
Step28: Animation
Step30: Maximizing range
Step31: We can test it.
Step32: And run it for a few values.
Step33: Now we can use maximize_scalar to find the optimum.
Step34: Finally, we can run the simulation with the optimal value.
|
<ASSISTANT_TASK:>
Python Code:
# install Pint if necessary
try:
import pint
except ImportError:
!pip install pint
# download modsim.py if necessary
from os.path import exists
filename = 'modsim.py'
if not exists(filename):
from urllib.request import urlretrieve
url = 'https://raw.githubusercontent.com/AllenDowney/ModSim/main/'
local, _ = urlretrieve(url+filename, filename)
print('Downloaded ' + local)
# import functions from modsim
from modsim import *
params = Params(height = 381, # m,
g = 9.8, # m/s**2,
mass = 75, # kg,
area = 1, # m**2,
rho = 1.2, # kg/m**3,
v_term = 60, # m / s,
length = 100, # m,
angle = (270 - 45), # degree,
k = 40, # N / m,
t_0 = 0, # s,
t_end = 30, # s
)
def initial_condition(params):
Compute the initial position and velocity.
params: Params object
H⃗ = Vector(0, params.height)
theta = np.deg2rad(params.angle)
x, y = pol2cart(theta, params.length)
L⃗ = Vector(x, y)
P⃗ = H⃗ + L⃗
V⃗ = Vector(0, 0)
return State(x=P⃗.x, y=P⃗.y, vx=V⃗.x, vy=V⃗.y)
initial_condition(params)
def make_system(params):
Makes a System object for the given conditions.
params: Params object
returns: System object
init = initial_condition(params)
mass, g = params.mass, params.g
rho, area, v_term = params.rho, params.area, params.v_term
C_d = 2 * mass * g / (rho * area * v_term**2)
return System(params, init=init, C_d=C_d)
system = make_system(params)
system.init
def drag_force(V⃗, system):
Compute drag force.
V⃗: velocity Vector
system: `System` object
returns: force Vector
rho, C_d, area = system.rho, system.C_d, system.area
mag = rho * vector_mag(V⃗)**2 * C_d * area / 2
direction = -vector_hat(V⃗)
f_drag = direction * mag
return f_drag
V⃗_test = Vector(10, 10)
drag_force(V⃗_test, system)
def spring_force(L⃗, system):
Compute the spring force of the webbing.
L⃗: Vector representing the webbing
system: System object
returns: force Vector
extension = vector_mag(L⃗) - system.length
if extension < 0:
mag = 0
else:
mag = system.k * extension
direction = -vector_hat(L⃗)
f_spring = direction * mag
return f_spring
L⃗_test = Vector(0, -system.length-1)
f_spring = spring_force(L⃗_test, system)
f_spring
def slope_func(t, state, system):
Computes derivatives of the state variables.
state: State (x, y, x velocity, y velocity)
t: time
system: System object with g, rho, C_d, area, mass
returns: sequence (vx, vy, ax, ay)
x, y, vx, vy = state
P⃗ = Vector(x, y)
V⃗ = Vector(vx, vy)
g, mass = system.g, system.mass
H⃗ = Vector(0, system.height)
L⃗ = P⃗ - H⃗
a_grav = Vector(0, -g)
a_spring = spring_force(L⃗, system) / mass
a_drag = drag_force(V⃗, system) / mass
A⃗ = a_grav + a_drag + a_spring
return V⃗.x, V⃗.y, A⃗.x, A⃗.y
slope_func(0, system.init, system)
results, details = run_solve_ivp(system, slope_func)
details.message
def plot_position(results):
results.x.plot(label='x')
results.y.plot(label='y')
decorate(xlabel='Time (s)',
ylabel='Position (m)')
plot_position(results)
def plot_velocity(results):
results.vx.plot(label='vx')
results.vy.plot(label='vy')
decorate(xlabel='Time (s)',
ylabel='Velocity (m/s)')
plot_velocity(results)
def plot_trajectory(results, label):
x = results.x
y = results.y
make_series(x, y).plot(label=label)
decorate(xlabel='x position (m)',
ylabel='y position (m)')
plot_trajectory(results, label='trajectory')
params1 = params.set(t_end=9)
system1 = make_system(params1)
results1, details1 = run_solve_ivp(system1, slope_func)
plot_trajectory(results1, label='phase 1')
t_0 = results1.index[-1]
t_0
init = results1.iloc[-1]
init
t_end = t_0 + 10
system2 = system1.set(init=init, t_0=t_0, t_end=t_end, k=0)
def event_func(t, state, system):
Stops when y=0.
state: State object
t: time
system: System object
returns: height
x, y, vx, vy = state
return y
results2, details2 = run_solve_ivp(system2, slope_func,
events=event_func)
details2.message
plot_trajectory(results1, label='phase 1')
plot_trajectory(results2, label='phase 2')
def run_two_phase(t_release, params):
Run both phases.
t_release: time when Spider-Man lets go of the webbing
params1 = params.set(t_end=t_release)
system1 = make_system(params1)
results1, details1 = run_solve_ivp(system1, slope_func)
t_0 = results1.index[-1]
t_end = t_0 + 10
init = results1.iloc[-1]
system2 = system1.set(init=init, t_0=t_0, t_end=t_end, k=0)
results2, details2 = run_solve_ivp(system2, slope_func,
events=event_func)
return results1.append(results2)
t_release = 9
results = run_two_phase(t_release, params)
plot_trajectory(results, 'trajectory')
x_final = results.iloc[-1].x
x_final
from matplotlib.pyplot import plot
xlim = results.x.min(), results.x.max()
ylim = results.y.min(), results.y.max()
def draw_func(t, state):
plot(state.x, state.y, 'bo')
decorate(xlabel='x position (m)',
ylabel='y position (m)',
xlim=xlim,
ylim=ylim)
# animate(results, draw_func)
def range_func(t_release, params):
Compute the final value of x.
t_release: time to release web
params: Params object
results = run_two_phase(t_release, params)
x_final = results.iloc[-1].x
print(t_release, x_final)
return x_final
range_func(9, params)
for t_release in linrange(3, 15, 3):
range_func(t_release, params)
bounds = [6, 12]
res = maximize_scalar(range_func, params, bounds=bounds)
best_time = res.x
results = run_two_phase(best_time, params)
plot_trajectory(results, label='trajectory')
x_final = results.iloc[-1].x
x_final
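# Optional extra (a hedged sketch using modsim's SweepSeries and linrange
# helpers): sweep the release time and plot the resulting range, to see how
# flat the optimum is. This re-runs the simulation several times, so it is slow.
sweep = SweepSeries()
for t in linrange(6, 12, 1):
    sweep[t] = range_func(t, params)
sweep.plot(label='range')
decorate(xlabel='Release time (s)', ylabel='Final x position (m)')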
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Problem 2
Step2: A recursive Implementation of the knapsack algorithm with caching
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
file = "knapsack1.txt"
fp = open(file, 'r+')
data = fp.readlines()
W, n = data[0].split(" ")
W, n = int(W), int(n)
v = []
w = []
for r in data[1:]:
v_i, w_i = r.split(" ")
v.append(int(v_i))
w.append(int(w_i))
A = np.zeros([n, W+1])
for i in range(n):
for x in range(W+1):
if x >= w[i]:
A[i,x]= max(A[i-1,x], A[i-1,x-w[i]]+v[i])
else:
A[i,x]= A[i-1,x]
print (A)
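# Recover the optimal value and, as an illustrative extra, trace back through
# the table to list the selected items: item i was taken whenever A[i, x]
# differs from A[i-1, x].
print("optimal value:", A[n-1, W])
chosen = []
x = W
for i in range(n-1, 0, -1):
    if A[i, x] != A[i-1, x]:
        chosen.append(i)
        x -= w[i]
if A[0, x] > 0:
    chosen.append(0)
print("items chosen (0-indexed):", sorted(chosen))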
file = "knapsack_big.txt"
fp = open(file, 'r+')
data = fp.readlines()
W, n = data[0].split(" ")
W, n = int(W), int(n)
v = []
w = []
for r in data[1:]:
v_i, w_i = r.split(" ")
v.append(int(v_i))
w.append(int(w_i))
import sys
sys.setrecursionlimit(2500)
cache = dict()
def knap(i, _w):
# print (i, _w)
key = str(i)+"-"+str(_w)
    if i == 0:
        # base case: take item 0 if it fits in the remaining capacity
        cache[key] = v[0] if _w >= w[0] else 0
        return cache[key]
    if _w >= w[i]:  # item i fits (including an exact fit)
key1 = str(i-1)+"-"+str(_w - w[i])
key2 = str(i-1)+"-"+str(_w)
if key1 in cache and key2 in cache:
a1 = cache[key1]
a2 = cache[key2]
cache[key] = max(v[i]+a1, a2)
elif key1 in cache:
a1 = cache[key1]
cache[key] = max(v[i]+a1, knap(i-1, _w))
elif key2 in cache:
a2 = cache[key2]
cache[key] = max(v[i]+knap(i-1,_w-w[i]), a2)
else:
cache[key] = max(v[i]+knap(i-1,_w-w[i]), knap(i-1, _w))
else:
key2 = str(i-1)+"-"+str(_w)
if key2 in cache:
cache[key] = cache[key2]
else:
cache[key] = knap(i-1,_w)
return cache[key]
knap(n-1,W)
print (cache[str(n-1)+"-"+str(W)])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To simplify the system to be solved, we replace the double indices with linear indices via the conversion
Step2: Next we must build a matrix $A$ and a vector $b$ under this new numbering such that the system $Av=b$ is solvable and the result can be mapped back to the $w_{ij}$ system. This matrix will naturally be of size $mn \times mn$, and each grid point gets its own equation, as one would expect.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.mlab import griddata
m, n = 10, 4
xl, xr = (0.0, 1.0)
yb, yt = (0.0, 1.0)
h = (xr - xl) / (m - 1.0)
k = (yt - yb) / (n - 1.0)
xx = [xl + (i - 1)*h for i in range(1, m+1)]
yy = [yb + (i - 1)*k for i in range(1, n+1)]
plt.figure(figsize=(10, 5))
for y in yy:
plt.plot(xx, [y for _x in xx], 'co')
plt.xlim(xl-0.1, xr+0.1)
plt.ylim(yb-0.1, yt+0.1)
plt.xticks([xl, xr], ['$x_l$', '$x_r$'], fontsize=20)
plt.yticks([yb, yt], ['$y_b$', '$y_t$'], fontsize=20)
plt.text(xl, yb, "$w_{11}$", fontsize=20)
plt.text(xl+h, yb, "$w_{21}$", fontsize=20)
plt.text(xl, yb+k, "$w_{12}$", fontsize=20)
plt.text(xl, yt, "$w_{1n}$", fontsize=20)
plt.text(xr, yt, "$w_{mn}$", fontsize=20)
plt.text(xr, yb, "$w_{m1}$", fontsize=20)
plt.title("Mesh para coordenadas en dos dimensiones")
plt.show()
plt.figure(figsize=(10,5))
plt.title("Mesh para coordenadas lineales")
for y in yy:
plt.plot(xx, [y for _x in xx], 'co')
plt.xlim(xl-0.1, xr+0.1)
plt.ylim(yb-0.1, yt+0.1)
plt.xticks([xl, xr], ['$x_l$', '$x_r$'], fontsize=20)
plt.yticks([yb, yt], ['$y_b$', '$y_t$'], fontsize=20)
plt.text(xl, yb, "$v_{1}$", fontsize=20)
plt.text(xl+h, yb, "$v_{2}$", fontsize=20)
plt.text(xl, yb+k, "$v_{m+1}$", fontsize=20)
plt.text(xr, yb+k, "$v_{2m}$", fontsize=20)
plt.text(xl, yt, "$v_{(n-1)m+1}$", fontsize=20)
plt.text(xr, yt, "$v_{mn}$", fontsize=20)
plt.text(xr, yb, "$v_{m}$", fontsize=20)
plt.title("Mesh para coordenadas en dos dimensiones")
plt.show()
plt.show()
%matplotlib notebook
import numpy as np
import matplotlib.pyplot as plt
from scipy import linalg
from mpl_toolkits.mplot3d import Axes3D
def f(x,y):
return 0.0
# Condiciones de borde
def g1(x):
return np.log(x**2 + 1)
def g2(x):
return np.log(x**2 + 4)
def g3(y):
return 2*np.log(y)
def g4(y):
return np.log(y**2 + 1)
# Puntos de la grilla
m, n = 30, 30
# Precálculo de m*n
mn = m * n
# Cantidad de steps
M = m - 1
N = n - 1
# Limites del dominio, x_left, x_right, y_bottom, y_top
xl, xr = (0.0, 1.0)
yb, yt = (1.0, 2.0)
# Tamaño de stepsize por dimensión
h = (xr - xl) / float(M)
k = (yt - yb) / float(N)
# Precálculo de h**2 y k**2
h2 = h**2.0
k2 = k**2.0
# Generar arreglos para dimension...
x = [xl + (i - 1)*h for i in range(1, m+1)]
y = [yb + (i - 1)*k for i in range(1, n+1)]
A = np.zeros((mn, mn))
b = np.zeros((mn))
# Assemble A and b using 0-based linear indexing: the unknown u(x[i], y[j])
# corresponds to v[i + j*m]
for i in range(1, m-1):
    for j in range(1, n-1):
        row = i + j*m
        A[row, (i-1) + j*m] = 1.0/h2
        A[row, (i+1) + j*m] = 1.0/h2
        A[row, i + j*m] = -2.0/h2 - 2.0/k2
        A[row, i + (j-1)*m] = 1.0/k2
        A[row, i + (j+1)*m] = 1.0/k2
        b[row] = f(x[i], y[j])
# Bottom (y = yb) and top (y = yt) boundary conditions
for i in range(0, m):
    A[i, i] = 1.0
    b[i] = g1(x[i])
    row = i + (n-1)*m
    A[row, row] = 1.0
    b[row] = g2(x[i])
# Left (x = xl) and right (x = xr) boundary conditions
for j in range(1, n-1):
    row = j*m
    A[row, row] = 1.0
    b[row] = g3(y[j])
    row = (m-1) + j*m
    A[row, row] = 1.0
    b[row] = g4(y[j])
v = linalg.solve(A, b)
w = np.reshape(v, (n, m))  # w[j, i] holds the solution at the grid point (x[i], y[j])
fig = plt.figure(figsize=(10,7))
ax = fig.add_subplot(111, projection='3d')
xv, yv = np.meshgrid(x, y)
ax.plot_surface(xv, yv, w, rstride=1, cstride=1)
plt.xlabel("x")
plt.ylabel("y")
plt.show()
fig = plt.figure(figsize=(10,7))
ax = fig.add_subplot(111, projection='3d')
xv, yv = np.meshgrid(x, y)
zv = np.log(xv**2 + yv**2)
ax.plot_surface(xv, yv, zv, rstride=1, cstride=1)
plt.show()
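# Quick accuracy check (an added aside): compare the finite-difference solution
# with the exact solution u(x, y) = log(x^2 + y^2) on the same mesh.
print("maximum absolute error:", np.abs(w - zv).max())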
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Environment
Step3: Try out Environment
Step4: Baseline
Step5: Train model
Step7: Step 1
Step8: Step 2
Step9: Visualizing Results
Step10: Enjoy model
Step11: Evaluation
|
<ASSISTANT_TASK:>
Python Code:
!pip install git+https://github.com/openai/baselines >/dev/null
!pip install gym >/dev/null
import numpy as np
import random
import gym
from gym.utils import seeding
from gym import spaces
def state_name_to_int(state):
state_name_map = {
'S': 0,
'A': 1,
'B': 2,
'C': 3,
'D': 4,
'E': 5,
'F': 6,
'G': 7,
'H': 8,
'K': 9,
'L': 10,
'M': 11,
'N': 12,
'O': 13
}
return state_name_map[state]
def int_to_state_name(state_as_int):
state_map = {
0: 'S',
1: 'A',
2: 'B',
3: 'C',
4: 'D',
5: 'E',
6: 'F',
7: 'G',
8: 'H',
9: 'K',
10: 'L',
11: 'M',
12: 'N',
13: 'O'
}
return state_map[state_as_int]
class BeraterEnv(gym.Env):
The Berater Problem
Actions:
There are 4 discrete deterministic actions, each choosing one direction
metadata = {'render.modes': ['ansi']}
showStep = False
showDone = True
envEpisodeModulo = 100
def __init__(self):
# self.map = {
# 'S': [('A', 100), ('B', 400), ('C', 200 )],
# 'A': [('B', 250), ('C', 400), ('S', 100 )],
# 'B': [('A', 250), ('C', 250), ('S', 400 )],
# 'C': [('A', 400), ('B', 250), ('S', 200 )]
# }
self.map = {
'S': [('A', 300), ('B', 100), ('C', 200 )],
'A': [('S', 300), ('B', 100), ('E', 100 ), ('D', 100 )],
'B': [('S', 100), ('A', 100), ('C', 50 ), ('K', 200 )],
'C': [('S', 200), ('B', 50), ('M', 100 ), ('L', 200 )],
'D': [('A', 100), ('F', 50)],
'E': [('A', 100), ('F', 100), ('H', 100)],
'F': [('D', 50), ('E', 100), ('G', 200)],
'G': [('F', 200), ('O', 300)],
'H': [('E', 100), ('K', 300)],
'K': [('B', 200), ('H', 300)],
'L': [('C', 200), ('M', 50)],
'M': [('C', 100), ('L', 50), ('N', 100)],
'N': [('M', 100), ('O', 100)],
'O': [('N', 100), ('G', 300)]
}
max_paths = 4
self.action_space = spaces.Discrete(max_paths)
positions = len(self.map)
# observations: position, reward of all 4 local paths, rest reward of all locations
# non existing path is -1000 and no position change
# look at what #getObservation returns if you are confused
low = np.append(np.append([0], np.full(max_paths, -1000)), np.full(positions, 0))
high = np.append(np.append([positions - 1], np.full(max_paths, 1000)), np.full(positions, 1000))
self.observation_space = spaces.Box(low=low,
high=high,
dtype=np.float32)
self.reward_range = (-1, 1)
self.totalReward = 0
self.stepCount = 0
self.isDone = False
self.envReward = 0
self.envEpisodeCount = 0
self.envStepCount = 0
self.reset()
self.optimum = self.calculate_customers_reward()
def seed(self, seed=None):
self.np_random, seed = seeding.np_random(seed)
return [seed]
def iterate_path(self, state, action):
paths = self.map[state]
if action < len(paths):
return paths[action]
else:
# sorry, no such action, stay where you are and pay a high penalty
return (state, 1000)
def step(self, action):
destination, cost = self.iterate_path(self.state, action)
lastState = self.state
customerReward = self.customer_reward[destination]
reward = (customerReward - cost) / self.optimum
self.state = destination
self.customer_visited(destination)
done = destination == 'S' and self.all_customers_visited()
stateAsInt = state_name_to_int(self.state)
self.totalReward += reward
self.stepCount += 1
self.envReward += reward
self.envStepCount += 1
if self.showStep:
print( "Episode: " + ("%4.0f " % self.envEpisodeCount) +
" Step: " + ("%4.0f " % self.stepCount) +
lastState + ' --' + str(action) + '-> ' + self.state +
' R=' + ("% 2.2f" % reward) + ' totalR=' + ("% 3.2f" % self.totalReward) +
' cost=' + ("%4.0f" % cost) + ' customerR=' + ("%4.0f" % customerReward) + ' optimum=' + ("%4.0f" % self.optimum)
)
if done and not self.isDone:
self.envEpisodeCount += 1
if BeraterEnv.showDone:
episodes = BeraterEnv.envEpisodeModulo
if (self.envEpisodeCount % BeraterEnv.envEpisodeModulo != 0):
episodes = self.envEpisodeCount % BeraterEnv.envEpisodeModulo
print( "Done: " +
("episodes=%6.0f " % self.envEpisodeCount) +
("avgSteps=%6.2f " % (self.envStepCount/episodes)) +
("avgTotalReward=% 3.2f" % (self.envReward/episodes) )
)
if (self.envEpisodeCount%BeraterEnv.envEpisodeModulo) == 0:
self.envReward = 0
self.envStepCount = 0
self.isDone = done
observation = self.getObservation(stateAsInt)
info = {"from": self.state, "to": destination}
return observation, reward, done, info
def getObservation(self, position):
result = np.array([ position,
self.getPathObservation(position, 0),
self.getPathObservation(position, 1),
self.getPathObservation(position, 2),
self.getPathObservation(position, 3)
],
dtype=np.float32)
all_rest_rewards = list(self.customer_reward.values())
result = np.append(result, all_rest_rewards)
return result
def getPathObservation(self, position, path):
source = int_to_state_name(position)
paths = self.map[source]
if path < len(paths):
target, cost = paths[path]
reward = self.customer_reward[target]
result = reward - cost
else:
result = -1000
return result
def customer_visited(self, customer):
self.customer_reward[customer] = 0
def all_customers_visited(self):
return self.calculate_customers_reward() == 0
def calculate_customers_reward(self):
sum = 0
for value in self.customer_reward.values():
sum += value
return sum
def modulate_reward(self):
number_of_customers = len(self.map) - 1
number_per_consultant = int(number_of_customers/1.5)
self.customer_reward = {
'S': 0
}
for customer_nr in range(1, number_of_customers + 1):
self.customer_reward[int_to_state_name(customer_nr)] = 0
# every consultant only visits a few random customers
samples = random.sample(range(1, number_of_customers + 1), k=number_per_consultant)
key_list = list(self.customer_reward.keys())
for sample in samples:
self.customer_reward[key_list[sample]] = 1000
def reset(self):
self.totalReward = 0
self.stepCount = 0
self.isDone = False
self.modulate_reward()
self.state = 'S'
return self.getObservation(state_name_to_int(self.state))
def render(self):
print(self.customer_reward)
env = BeraterEnv()
print(env.reset())
print(env.customer_reward)
BeraterEnv.showStep = True
BeraterEnv.showDone = True
env = BeraterEnv()
print(env)
observation = env.reset()
print(observation)
for t in range(1000):
action = env.action_space.sample()
observation, reward, done, info = env.step(action)
if done:
print("Episode finished after {} timesteps".format(t+1))
break
env.close()
print(observation)
from copy import deepcopy
import json
class Baseline():
def __init__(self, env, max_reward, verbose=1):
self.env = env
self.max_reward = max_reward
self.verbose = verbose
self.reset()
def reset(self):
self.map = self.env.map
self.rewards = self.env.customer_reward.copy()
def as_string(self, state):
# reward/cost does not hurt, but is useless; path obscures an otherwise identical state
new_state = {
'rewards': state['rewards'],
'position': state['position']
}
return json.dumps(new_state, sort_keys=True)
def is_goal(self, state):
if state['position'] != 'S': return False
for reward in state['rewards'].values():
if reward != 0: return False
return True
def expand(self, state):
states = []
for position, cost in self.map[state['position']]:
new_state = deepcopy(state)
new_state['position'] = position
new_state['rewards'][position] = 0
reward = state['rewards'][position]
new_state['reward'] += reward
new_state['cost'] += cost
new_state['path'].append(position)
states.append(new_state)
return states
def search(self, root, max_depth = 25):
closed = set()
frontier = [root]  # renamed from "open" to avoid shadowing the builtin
while frontier:
state = frontier.pop(0)
if self.as_string(state) in closed: continue
closed.add(self.as_string(state))
depth = len(state['path'])
if depth > max_depth:
if self.verbose > 0:
print("Visited:", len(closed))
print("Reached max depth, without reaching goal")
return None
if self.is_goal(state):
scaled_reward = (state['reward'] - state['cost']) / self.max_reward
state['scaled_reward'] = scaled_reward
if self.verbose > 0:
print("Scaled reward:", scaled_reward)
print("Perfect path", state['path'])
return state
expanded = self.expand(state)
frontier += expanded
# make this best first
frontier.sort(key=lambda state: state['cost'])
def find_optimum(self):
initial_state = {
'rewards': self.rewards.copy(),
'position': 'S',
'reward': 0,
'cost': 0,
'path': ['S']
}
return self.search(initial_state)
def benchmark(self, model, sample_runs=100):
self.verbose = 0
BeraterEnv.showStep = False
BeraterEnv.showDone = False
perfect_rewards = []
model_rewards = []
for run in range(sample_runs):
observation = self.env.reset()
self.reset()
optimum_state = self.find_optimum()
perfect_rewards.append(optimum_state['scaled_reward'])
state = np.zeros((1, 2*128))
dones = np.zeros((1))
for t in range(1000):
actions, _, state, _ = model.step(observation, S=state, M=dones)
observation, reward, done, info = self.env.step(actions[0])
if done:
break
model_rewards.append(self.env.totalReward)
return perfect_rewards, model_rewards
def score(self, model, sample_runs=100):
perfect_rewards, model_rewards = self.benchmark(model, sample_runs=sample_runs)
perfect_score_mean, perfect_score_std = np.array(perfect_rewards).mean(), np.array(perfect_rewards).std()
test_score_mean, test_score_std = np.array(model_rewards).mean(), np.array(model_rewards).std()
return perfect_score_mean, perfect_score_std, test_score_mean, test_score_std
!rm -r logs
!mkdir logs
!mkdir logs/berater
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.ERROR)
print(tf.__version__)
# copied from https://github.com/openai/baselines/blob/master/baselines/a2c/utils.py
def ortho_init(scale=1.0):
def _ortho_init(shape, dtype, partition_info=None):
#lasagne ortho init for tf
shape = tuple(shape)
if len(shape) == 2:
flat_shape = shape
elif len(shape) == 4: # assumes NHWC
flat_shape = (np.prod(shape[:-1]), shape[-1])
else:
raise NotImplementedError
a = np.random.normal(0.0, 1.0, flat_shape)
u, _, v = np.linalg.svd(a, full_matrices=False)
q = u if u.shape == flat_shape else v # pick the one with the correct shape
q = q.reshape(shape)
return (scale * q[:shape[0], :shape[1]]).astype(np.float32)
return _ortho_init
def fc(x, scope, nh, *, init_scale=1.0, init_bias=0.0):
with tf.variable_scope(scope):
nin = x.get_shape()[1].value
w = tf.get_variable("w", [nin, nh], initializer=ortho_init(init_scale))
b = tf.get_variable("b", [nh], initializer=tf.constant_initializer(init_bias))
return tf.matmul(x, w)+b
# copied from https://github.com/openai/baselines/blob/master/baselines/common/models.py#L31
def mlp(num_layers=2, num_hidden=64, activation=tf.tanh, layer_norm=False):
"""
Stack of fully-connected layers to be used in a policy / q-function approximator
Parameters:
----------
num_layers: int number of fully-connected layers (default: 2)
num_hidden: int size of fully-connected layers (default: 64)
activation: activation function (default: tf.tanh)
Returns:
-------
function that builds fully connected network with a given input tensor / placeholder
"""
def network_fn(X):
# print('network_fn called')
# Tensor("ppo2_model_4/Ob:0", shape=(1, 19), dtype=float32)
# Tensor("ppo2_model_4/Ob_1:0", shape=(512, 19), dtype=float32)
# print (X)
h = tf.layers.flatten(X)
for i in range(num_layers):
h = fc(h, 'mlp_fc{}'.format(i), nh=num_hidden, init_scale=np.sqrt(2))
if layer_norm:
h = tf.contrib.layers.layer_norm(h, center=True, scale=True)
h = activation(h)
# Tensor("ppo2_model_4/pi/Tanh_2:0", shape=(1, 500), dtype=float32)
# Tensor("ppo2_model_4/pi_2/Tanh_2:0", shape=(512, 500), dtype=float32)
# print(h)
return h
return network_fn
# first the dense layer
def mlp(num_layers=2, num_hidden=64, activation=tf.tanh, layer_norm=False):
def network_fn(X):
h = tf.layers.flatten(X)
for i in range(num_layers):
h = tf.layers.dense(h, units=num_hidden, kernel_initializer=ortho_init(np.sqrt(2)))
# h = fc(h, 'mlp_fc{}'.format(i), nh=num_hidden, init_scale=np.sqrt(2))
if layer_norm:
h = tf.contrib.layers.layer_norm(h, center=True, scale=True)
h = activation(h)
return h
return network_fn
# then initializer, relu activations
def mlp(num_layers=2, num_hidden=64, activation=tf.nn.relu, layer_norm=False):
def network_fn(X):
h = tf.layers.flatten(X)
for i in range(num_layers):
h = tf.layers.dense(h, units=num_hidden, kernel_initializer=tf.initializers.glorot_uniform(seed=17))
if layer_norm:
# h = tf.layers.batch_normalization(h, center=True, scale=True)
h = tf.contrib.layers.layer_norm(h, center=True, scale=True)
h = activation(h)
return h
return network_fn
%%time
# https://github.com/openai/baselines/blob/master/baselines/deepq/experiments/train_pong.py
# log_dir = logger.get_dir()
log_dir = '/content/logs/berater/'
import gym
from baselines import bench
from baselines import logger
from baselines.common.vec_env.dummy_vec_env import DummyVecEnv
from baselines.common.vec_env.vec_monitor import VecMonitor
from baselines.ppo2 import ppo2
BeraterEnv.showStep = False
BeraterEnv.showDone = False
env = BeraterEnv()
wrapped_env = DummyVecEnv([lambda: BeraterEnv()])
monitored_env = VecMonitor(wrapped_env, log_dir)
# https://github.com/openai/baselines/blob/master/baselines/ppo2/ppo2.py
# https://github.com/openai/baselines/blob/master/baselines/common/models.py#L30
# https://arxiv.org/abs/1607.06450 for layer_norm
# lr linear from lr=1e-2 to lr=1e-4 (default lr=3e-4)
def lr_range(frac):
# we get the remaining updates between 1 and 0
start_lr = 1e-2
end_lr = 1e-4
diff_lr = start_lr - end_lr
lr = end_lr + diff_lr * frac
return lr
network = mlp(num_hidden=500, num_layers=3, layer_norm=True)
model = ppo2.learn(
env=monitored_env,
network=network,
lr=lr_range,
gamma=1.0,
ent_coef=0.05,
total_timesteps=1000000)
# model = ppo2.learn(
# env=monitored_env,
# network='mlp',
# num_hidden=500,
# num_layers=3,
# layer_norm=True,
# lr=lr_range,
# gamma=1.0,
# ent_coef=0.05,
# total_timesteps=500000)
# model.save('berater-ppo-v11.pkl')
monitored_env.close()
# !ls -l $log_dir
from baselines.common import plot_util as pu
results = pu.load_results(log_dir)
import matplotlib.pyplot as plt
import numpy as np
r = results[0]
plt.ylim(0, .75)
# plt.plot(np.cumsum(r.monitor.l), r.monitor.r)
plt.plot(np.cumsum(r.monitor.l), pu.smooth(r.monitor.r, radius=100))
import numpy as np
observation = env.reset()
env.render()
baseline = Baseline(env, max_reward=8000)
state = np.zeros((1, 2*128))
dones = np.zeros((1))
BeraterEnv.showStep = True
BeraterEnv.showDone = False
for t in range(1000):
actions, _, state, _ = model.step(observation, S=state, M=dones)
observation, reward, done, info = env.step(actions[0])
if done:
print("Episode finished after {} timesteps, reward={}".format(t+1, env.totalReward))
break
env.close()
%time baseline.find_optimum()
baseline = Baseline(env, max_reward=8000)
perfect_score_mean, perfect_score_std, test_score_mean, test_score_std = baseline.score(model, sample_runs=100)
# perfect scores
perfect_score_mean, perfect_score_std
# test scores for our model
test_score_mean, test_score_std
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: SystemML Read/Write data from local file system
Step3: Generate Data and write out to file.
Step4: As an alternative to passing in/out file names, use Python variables.
|
<ASSISTANT_TASK:>
Python Code:
!pip show systemml
import pandas as pd
from systemml import MLContext, dml
ml = MLContext(sc)
print(ml.info())
sc.version
FsPath = "/tmp/data/"
inp = FsPath + "Input/"
outp = FsPath + "Output/"
import numpy as np
X_pd = pd.DataFrame(np.arange(1,2001, dtype=float)).values.reshape(100,20)
# X_pd = pd.DataFrame(range(1, 2001,1),dtype=float).values.reshape(100,20)
script = """
write(X, $Xfile)
"""
prog = dml(script).input(X=X_pd).input(**{"$Xfile":inp+"X.csv"})
ml.execute(prog)
!ls -l /tmp/data/Input
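# Added sketch (not in the original notebook): read the file back through a DML
# script to confirm the round trip; read/nrow/ncol are standard DML builtins.
prog = dml('X = read($Xfile); print(nrow(X)); print(ncol(X));').input(**{"$Xfile": inp+"X.csv"})
ml.execute(prog)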
autoencoderURL = "https://raw.githubusercontent.com/apache/systemml/master/scripts/staging/autoencoder-2layer.dml"
rets = ("iter", "num_iters_per_epoch", "beg", "end", "o")
prog = dml(autoencoderURL).input(**{"$X":inp+"X.csv"}) \
.input(**{"$H1":500, "$H2":2, "$BATCH":36, "$EPOCH":5 \
, "$W1_out":outp+"W1_out", "$b1_out":outp+"b1_out" \
, "$W2_out":outp+"W2_out", "$b2_out":outp+"b2_out" \
, "$W3_out":outp+"W3_out", "$b3_out":outp+"b3_out" \
, "$W4_out":outp+"W4_out", "$b4_out":outp+"b4_out" \
}).output(*rets)
iter, num_iters_per_epoch, beg, end, o = ml.execute(prog).get(*rets)
print (iter, num_iters_per_epoch, beg, end, o)
!ls -l /tmp/data/Output
autoencoderURL = "https://raw.githubusercontent.com/apache/systemml/master/scripts/staging/autoencoder-2layer.dml"
rets = ("iter", "num_iters_per_epoch", "beg", "end", "o")
rets2 = ("W1", "b1", "W2", "b2", "W3", "b3", "W4", "b4")
prog = dml(autoencoderURL).input(X=X_pd) \
.input(**{ "$H1":500, "$H2":2, "$BATCH":36, "$EPOCH":5}) \
.output(*rets) \
.output(*rets2)
result = ml.execute(prog)
iter, num_iters_per_epoch, beg, end, o = result.get(*rets)
W1, b1, W2, b2, W3, b3, W4, b4 = result.get(*rets2)
print (iter, num_iters_per_epoch, beg, end, o)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First we start with a straightforward Fibonacci generator function.
Step2: Next, we unroll the loop. Note that there are no assignments that just move things around. There is no wasted motion inside the loop.
Step3: Next, we unroll the loop more and more to see if that makes the generator faster.
|
<ASSISTANT_TASK:>
Python Code:
from itertools import islice
def fibonacci():
a, b = 0, 1
while True:
yield a
a, b = b, a + b
n = 45
known_good_output = tuple(islice(fibonacci(), n))
# known_good_output
%timeit sum(islice(fibonacci(), n))
def fibonacci():
a, b = 0, 1
while True:
yield a
c = a + b
yield b
a = b + c
yield c
b = c + a
assert(known_good_output == tuple(islice(fibonacci(), n)))
%timeit sum(islice(fibonacci(), n))
def fibonacci():
a, b = 0, 1
while True:
yield a
c = a + b
yield b
a = b + c
yield c
b = c + a
yield a
c = a + b
yield b
a = b + c
yield c
b = c + a
assert(known_good_output == tuple(islice(fibonacci(), n)))
%timeit sum(islice(fibonacci(), n))
def fibonacci():
a, b = 0, 1
yield a
yield b
while True:
c = a + b
yield c
a = b + c
yield a
b = c + a
yield b
assert(known_good_output == tuple(islice(fibonacci(), n)))
%timeit sum(islice(fibonacci(), n))
def fibonacci():
a, b = 0, 1
yield a
yield b
while True:
c = a + b
yield c
a = b + c
yield a
b = c + a
yield b
c = a + b
yield c
a = b + c
yield a
b = c + a
yield b
assert(known_good_output == tuple(islice(fibonacci(), n)))
%timeit sum(islice(fibonacci(), n))
def fibonacci():
a, b = 0, 1
yield a
yield b
while True:
c = a + b
yield c
a = b + c
yield a
b = c + a
yield b
c = a + b
yield c
a = b + c
yield a
b = c + a
yield b
c = a + b
yield c
a = b + c
yield a
b = c + a
yield b
assert(known_good_output == tuple(islice(fibonacci(), n)))
%timeit sum(islice(fibonacci(), n))
def fibonacci():
a, b = 0, 1
yield a
yield b
while True:
c = a + b
yield c
a = b + c
yield a
b = c + a
yield b
c = a + b
yield c
a = b + c
yield a
b = c + a
yield b
c = a + b
yield c
a = b + c
yield a
b = c + a
yield b
c = a + b
yield c
a = b + c
yield a
b = c + a
yield b
assert(known_good_output == tuple(islice(fibonacci(), n)))
%timeit sum(islice(fibonacci(), n))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: PCA
Step2: PCA Algorithm Basics
Step3: Looks like we'll have to cheat a bit.
Step4: This... actually makes sense. Let's compare the other multiplication just to see what's going on.
Step5: Huh.... looks like they share some eigenvalues
Step6: Discussion
Step7: Recall the Covariance Formula For Two Variables
Step8: The Trace of a Matrix
|
<ASSISTANT_TASK:>
Python Code:
# Can't find good material for this...
# Can't find good material for this.
# Let us see what this would look like in numpy.
import numpy as np
# First choose m and n such that m != n
m = 5
n = 10
# Make the matrix A
A = np.random.rand(m, n)
print(A)
# Now compute its eigenvalues.
try:
vals, vecs = np.linalg.eig(A)
print(vals)
except:
print("Uh Oh we caused a linear algebra error!")
print("The last two dimensions must be square!")
print("This means we can't compute the eigenvalues of the matrix.")
# Let's double check that real fast.
print("The shape of A is: {}".format(A.shape))
print("A^T has shape: {}".format(A.transpose().shape))
# Let's see what the spectrum looks like.
A_T = A.transpose()
try:
vals, vecs = np.linalg.eig(A_T)
print(vals)
except np.linalg.LinAlgError:
print("Still not square, so eig fails on the transpose too!")
# Darn it, it still isn't square!
# What about.... A * A^T
A_AT = np.matmul(A, A_T)
vals, vecs = np.linalg.eig(A_AT)
print(vals)
AT_A = np.matmul(A_T, A)
vals, vecs = np.linalg.eig(AT_A)
print(vals)
# Exercise try it! Extract an eigenvector from A x A^T and left multiply it by A.
# Check the resulting eigenvector is in A^T x A.
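# A sketch of the exercise above (added). Note it is A^T (not A) that maps an
# eigenvector v of A A^T to an eigenvector A^T v of A^T A, with the same eigenvalue.
vals, vecs = np.linalg.eig(A_AT)
v = vecs[:, 0]          # an eigenvector of A x A^T
w = np.dot(A.T, v)      # carry it over to the other product
print(np.allclose(np.dot(AT_A, w), vals[0] * w))  # should print True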
# Why should the covariance matrix be a square matrix in the number of features?
# Is the name Covariance Matrix justified?
# What are the values on the Diagonal of the Covariance Matrix?
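# A quick numeric sketch (added) touching the questions above: with rows as
# features, the covariance matrix is square in the number of features, its
# diagonal holds the per-feature variances, and its trace is the total variance.
B = np.random.rand(3, 200)
B = B - B.mean(axis=1, keepdims=True)        # center each feature
S = np.dot(B, B.T) / (B.shape[1] - 1)        # 3 x 3 covariance matrix
print(S.shape)
print(np.diag(S))                            # variances on the diagonal
print(np.trace(S), np.var(B, axis=1, ddof=1).sum())  # these two should match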
def gen_noisy_line(n_samples=50):
'''
This function generates a noisy line of slope 1 and returns the
matrix associated with these n_samples, with noise +- 1 from a
straight line.
This matrix follows the convention that
rows are features, and columns are samples.
'''
# Minimal implementation sketch: x on [0, 10], y = x plus uniform noise in [-1, 1]
x = np.linspace(0, 10, n_samples)
y = x + np.random.uniform(-1, 1, n_samples)
matrix_A = np.vstack([x, y])
return matrix_A
def make_B_from_A(matrix_A):
'''
This function generates the B matrix from the sample matrix A.
'''
# Center each feature (row) by subtracting its mean across samples
matrix_B = matrix_A - matrix_A.mean(axis=1, keepdims=True)
return matrix_B
def make_S_from_B(matrix_B):
'''
This function generates the matrix S from B.
'''
# Covariance matrix S = B B^T / (n - 1), square in the number of features
n_samples = matrix_B.shape[1]
matrix_S = np.dot(matrix_B, matrix_B.T) / (n_samples - 1)
return matrix_S
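# Usage check for the helpers above (added): build the noisy line, center it,
# and inspect the covariance matrix; since y tracks x, strong covariance is expected.
A_line = gen_noisy_line(200)
B_line = make_B_from_A(A_line)
S_line = make_S_from_B(B_line)
print(S_line)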
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Thresholding
Step2: ITK has a number of histogram-based automatic thresholding filters including Huang, MaximumEntropy, Triangle, and the popular Otsu's method. These methods create a histogram and then use a heuristic to determine a threshold value.
Step3: Region Growing Segmentation
Step4: Improving upon this is the ConfidenceConnected filter, which uses the initial seed or current segmentation to estimate the threshold range.
Step5: Fast Marching Segmentation
Step6: The output of the FastMarchingImageFilter is a <b>time-crossing map</b> that indicates, for each pixel, how much time it would take for the front to arrive at the pixel location.
Step7: Level-Set Segmentation
Step8: Use the seed to estimate a reasonable threshold range.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
from ipywidgets import interact, FloatSlider
import SimpleITK as sitk
# Download data to work on
%run update_path_to_download_script
from downloaddata import fetch_data as fdata
from myshow import myshow, myshow3d
img_T1 = sitk.ReadImage(fdata("nac-hncma-atlas2013-Slicer4Version/Data/A1_grayT1.nrrd"))
img_T2 = sitk.ReadImage(fdata("nac-hncma-atlas2013-Slicer4Version/Data/A1_grayT2.nrrd"))
# To visualize the label image in RGB we need an image with a 0-255 range
img_T1_255 = sitk.Cast(sitk.RescaleIntensity(img_T1), sitk.sitkUInt8)
img_T2_255 = sitk.Cast(sitk.RescaleIntensity(img_T2), sitk.sitkUInt8)
myshow3d(img_T1)
seg = img_T1 > 200
myshow(sitk.LabelOverlay(img_T1_255, seg), "Basic Thresholding")
seg = sitk.BinaryThreshold(
img_T1, lowerThreshold=100, upperThreshold=400, insideValue=1, outsideValue=0
)
myshow(sitk.LabelOverlay(img_T1_255, seg), "Binary Thresholding")
otsu_filter = sitk.OtsuThresholdImageFilter()
otsu_filter.SetInsideValue(0)
otsu_filter.SetOutsideValue(1)
seg = otsu_filter.Execute(img_T1)
myshow(sitk.LabelOverlay(img_T1_255, seg), "Otsu Thresholding")
print(otsu_filter.GetThreshold())
seed = (132, 142, 96)
seg = sitk.Image(img_T1.GetSize(), sitk.sitkUInt8)
seg.CopyInformation(img_T1)
seg[seed] = 1
seg = sitk.BinaryDilate(seg, [3] * 3)
myshow(sitk.LabelOverlay(img_T1_255, seg), "Initial Seed")
seg = sitk.ConnectedThreshold(img_T1, seedList=[seed], lower=100, upper=190)
myshow(sitk.LabelOverlay(img_T1_255, seg), "Connected Threshold")
seg = sitk.ConfidenceConnected(
img_T1,
seedList=[seed],
numberOfIterations=1,
multiplier=2.5,
initialNeighborhoodRadius=1,
replaceValue=1,
)
myshow(sitk.LabelOverlay(img_T1_255, seg), "ConfidenceConnected")
img_multi = sitk.Compose(img_T1, img_T2)
seg = sitk.VectorConfidenceConnected(
img_multi,
seedList=[seed],
numberOfIterations=1,
multiplier=2.5,
initialNeighborhoodRadius=1,
)
myshow(sitk.LabelOverlay(img_T2_255, seg))
seed = (132, 142, 96)
feature_img = sitk.GradientMagnitudeRecursiveGaussian(img_T1, sigma=0.5)
speed_img = sitk.BoundedReciprocal(
feature_img
) # This is parameter free unlike the Sigmoid
myshow(speed_img)
fm_filter = sitk.FastMarchingBaseImageFilter()
fm_filter.SetTrialPoints([seed])
fm_filter.SetStoppingValue(1000)
fm_img = fm_filter.Execute(speed_img)
myshow(
sitk.Threshold(
fm_img,
lower=0.0,
upper=fm_filter.GetStoppingValue(),
outsideValue=fm_filter.GetStoppingValue() + 1,
)
)
def fm_callback(img, time, z):
seg = img < time
myshow(sitk.LabelOverlay(img_T1_255[:, :, z], seg[:, :, z]))
interact(
lambda **kwargs: fm_callback(fm_img, **kwargs),
time=FloatSlider(min=0.05, max=1000.0, step=0.05, value=100.0),
z=(0, fm_img.GetSize()[2] - 1),
)
seed = (132, 142, 96)
seg = sitk.Image(img_T1.GetSize(), sitk.sitkUInt8)
seg.CopyInformation(img_T1)
seg[seed] = 1
seg = sitk.BinaryDilate(seg, [3] * 3)
stats = sitk.LabelStatisticsImageFilter()
stats.Execute(img_T1, seg)
factor = 3.5
lower_threshold = stats.GetMean(1) - factor * stats.GetSigma(1)
upper_threshold = stats.GetMean(1) + factor * stats.GetSigma(1)
print(lower_threshold, upper_threshold)
init_ls = sitk.SignedMaurerDistanceMap(seg, insideIsPositive=True, useImageSpacing=True)
lsFilter = sitk.ThresholdSegmentationLevelSetImageFilter()
lsFilter.SetLowerThreshold(lower_threshold)
lsFilter.SetUpperThreshold(upper_threshold)
lsFilter.SetMaximumRMSError(0.02)
lsFilter.SetNumberOfIterations(1000)
lsFilter.SetCurvatureScaling(0.5)
lsFilter.SetPropagationScaling(1)
lsFilter.ReverseExpansionDirectionOn()
ls = lsFilter.Execute(init_ls, sitk.Cast(img_T1, sitk.sitkFloat32))
print(lsFilter)
myshow(sitk.LabelOverlay(img_T1_255, ls > 0))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Triangular mesh generation
Step2: This quad mesh is already able to accurately describe the free-surface topography.
Step3: Next, we compute the Voronoi diagram for the mesh points. This describes the partitioning of a plane with n points into convex polygons such that each polygon contains exactly one generating point and every point in a given polygon is closer to its generating point than to any other.
Step4: The Delaunay triangulation creates triangles by connecting the points in neighbouring Voronoi cells.
Step5: Let's take a look at the final mesh for the Yigma Tepe model
Step6: The regular triangulation within the tumulus looks reasonable. However, the Delaunay triangulation also added unwanted triangles above the topography. To solve this problem we have to use constrained Delaunay triangulation in order to restrict the triangulation to the model below the free-surface topography. Unfortunately, constrained Delaunay triangulation is not available within SciPy.
Step7: In order to use the constrained Delaunay triangulation, we obviously have to define the constraining vertex points lying on the boundaries of our model. In this case it is quite easy, because the TFI mesh is regular.
Step8: The above code looks a little bit chaotic, but you can check that the points in the resulting array model_bound are correctly sorted and that it contains no redundant points.
Step9: Good, now we have defined the model boundary points. Time for some constrained Delaunay triangulation ...
Step10: Very good, compared to the SciPy Delaunay triangulation, no triangles are added above the topography. However, most triangles have very small minimum angles, which would lead to serious numerical issues in later finite element modelling runs. So in the next step we restrict the minimum angle to 20° using the option q20.
Step11: Finally, we want a more even distribution of the triangle sizes. This can be achieved by imposing a maximum area on the triangles with the option a20.
|
<ASSISTANT_TASK:>
Python Code:
# Execute this cell to load the notebook's style sheet, then ignore it
from IPython.core.display import HTML
css_file = '../style/custom.css'
HTML(open(css_file, "r").read())
# Import Libraries
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
# Here, I introduce a new library, which is useful
# to define the fonts and size of a figure in a notebook
from pylab import rcParams
# Get rid of a Matplotlib deprecation warning
import warnings
warnings.filterwarnings("ignore")
# Load Yigma Tepe quad mesh created by TFI
X = np.loadtxt('data/yigma_tepe_TFI_mesh_X.dat', delimiter=' ', skiprows=0, unpack='True')
Z = np.loadtxt('data/yigma_tepe_TFI_mesh_Z.dat', delimiter=' ', skiprows=0, unpack='True')
# number of grid points in each spatial direction
NZ, NX = np.shape(X)
print("NX = ", NX)
print("NZ = ", NZ)
# Define figure size
rcParams['figure.figsize'] = 10, 7
# Plot Yigma Tepe TFI mesh
plt.plot(X, Z, 'k')
plt.plot(X.T, Z.T, 'k')
plt.plot(X, Z, 'bo', markersize=4)
plt.title("Yigma Tepe TFI mesh" )
plt.xlabel("x [m]")
plt.ylabel("z [m]")
plt.axes().set_aspect('equal')
#plt.savefig('yigma_tepe_TFI.pdf', bbox_inches='tight', format='pdf')
plt.show()
# Reshape X and Z vector
x = X.flatten()
z = Z.flatten()
# Assemble x and z vector into NX*NZ x 2 matrix
points = np.vstack([x,z]).T
# calculate and plot Voronoi diagram for mesh points
from scipy.spatial import Voronoi, voronoi_plot_2d
vor = Voronoi(points)
plt.figure(figsize=(12,6))
ax = plt.subplot(111, aspect='equal')
voronoi_plot_2d(vor, ax=ax)
plt.title("Part of Yigma Tepe (Voronoi diagram)" )
plt.xlabel("x [m]")
plt.ylabel("z [m]")
plt.xlim( 25, 75)
plt.ylim(10, 35)
plt.show()
# Apply Delaunay triangulation to the quad mesh node points
from scipy.spatial import Delaunay
tri = Delaunay(points)
plt.figure(figsize=(12,6))
ax = plt.subplot(111, aspect='equal')
voronoi_plot_2d(vor, ax=ax)
plt.triplot(points[:,0], points[:,1], tri.simplices.copy(), linewidth=3, color='b')
plt.title("Part of Yigma Tepe (Voronoi diagram & Delaunay triangulation)" )
plt.xlabel("x [m]")
plt.ylabel("z [m]")
plt.xlim( 25, 75)
plt.ylim(10, 35)
plt.show()
# Plot triangular mesh
plt.triplot(points[:,0], points[:,1], tri.simplices.copy())
plt.title("Yigma Tepe Delaunay mesh" )
plt.xlabel("x [m]")
plt.ylabel("z [m]")
plt.axes().set_aspect('equal')
plt.show()
# import triangulate library
from triangle import triangulate, show_data, plot as tplot
import triangle
# Estimate boundary points
# surface topography
surf = np.vstack([X[9,:-2],Z[9,:-2]]).T
# right model boundary
right = np.vstack([X[1:,69],Z[1:,69]]).T
# bottom model boundary
bottom = np.vstack([X[0,1:],Z[0,1:]]).T
# left model boundary
left = np.vstack([X[:-2,0],Z[:-2,0]]).T
# assemble model boundary
model_stack = np.vstack([surf,np.flipud(right)])
model_stack1 = np.vstack([model_stack,np.flipud(bottom)])
model_bound = np.vstack([model_stack1,left])
plt.plot(model_bound[:,0],model_bound[:,1],'bo')
plt.title("Yigma Tepe model boundary" )
plt.xlabel("x [m]")
plt.ylabel("z [m]")
plt.axes().set_aspect('equal')
plt.show()
# define vertices (no redundant points)
vert = model_bound
# apply Delaunay triangulation to vertices
tri = triangle.delaunay(vert)
# define vertex markers
vertm = np.array(np.zeros((len(vert),1)),dtype='int32')
# define how the vertices are connected, e.g. point 0 is connected to point 1,
# point 1 to point 2 and so on ...
points1 = np.arange(len(vert))
points2 = np.arange(len(vert))+1
# last point is connected to the first point
points2[-1] = 0
# define connectivity of boundary polygon
seg = np.array(np.vstack([points1,points2]).T,dtype='int32')
# define marker for boundary polygon
segm = np.array(np.ones((len(seg),1)),dtype='int32')
# assemble dictionary for triangle optimisation
A = dict(vertices=vert, vertex_markers=vertm, segments=seg, segment_markers=segm,triangles=tri)
# Optimise initial triangulation
cndt = triangle.triangulate(A,'pD')
ax = plt.subplot(111, aspect='equal')
tplot.plot(ax,**cndt)
cncfq20dt = triangulate(A,'pq20D')
ax = plt.subplot(111, aspect='equal')
tplot.plot(ax,**cncfq20dt)
cncfq20adt = triangulate(A,'pq20a20D')
ax = plt.subplot(111, aspect='equal')
tplot.plot(ax,**cncfq20adt)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import the BigBang modules as needed. These should be in your Python environment if you've installed BigBang correctly.
Step2: Now let's load the data for analysis.
Step3: For each of our lists, we'll clean up the names, find the first name if there is one, and guess its gender. Pandas groups the data together for comparison. We keep count of the names we find that are ambiguous, for the next step.
Step4: Let's quickly visualize the names that couldn't be guessed with our estimator and their distribution.
Step5: This distribution may vary by the particular list, but it seems to be a power distribution. That is, with a fairly small supplement of manually providing genders for the names/identities on the list, we can very significantly improve the fraction of messages with an estimated gender.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import bigbang.mailman as mailman
import bigbang.graph as graph
import bigbang.process as process
from bigbang.parse import get_date
from bigbang.archive import Archive
from importlib import reload  # reload was a builtin on Python 2
reload(process)
import pandas as pd
import datetime
import matplotlib.pyplot as plt
import numpy as np
import math
import pytz
import pickle
import os
from bigbang import parse
from gender_detector import GenderDetector
pd.options.display.mpl_style = 'default' # pandas has a set of preferred graph formatting options
urls = ["http://www.ietf.org/mail-archive/text/ietf-privacy/",
"http://lists.w3.org/Archives/Public/public-privacy/"]
mlists = [(url, mailman.open_list_archives(url,"../../archives")) for url in urls]
#activities = [Archive.get_activity(Archive(ml)) for ml in mlists]
detector = GenderDetector('us')
gender_ambiguous_names = {}
def guess_gender(name):
if not name:
return 'name unknown'
try:
guess = detector.guess(name)  # call the detector once and reuse the result
if guess == 'unknown':
if name in gender_ambiguous_names:
gender_ambiguous_names[name] += 1
else:
gender_ambiguous_names[name] = 1
return guess
except:
return 'error'
def ml_shortname(url):
return url.rstrip("/").split("/")[-1]
series = []
for (url, ml) in mlists:
activity = Archive.get_activity(Archive(ml)).sum(0)
activityFrame = pd.DataFrame(activity, columns=['Message Count'])
activityFrame['Name'] = activityFrame.index.map(lambda x: parse.clean_from(x))
activityFrame['First Name'] = activityFrame['Name'].map(lambda x: parse.guess_first_name(x))
activityFrame['Guessed Gender'] = activityFrame['First Name'].map(guess_gender)
activityFrame.to_csv(('senders_guessed_gender-%s.csv' % ml_shortname(url)),encoding='utf-8')
counts = activityFrame.groupby('Guessed Gender')['Message Count'].count()
counts.name=url
series.append(counts)
pd.DataFrame(series)
ser = pd.Series(gender_ambiguous_names)
ser.sort_values(ascending=False).plot(kind='bar')
url = "http://lists.w3.org/Archives/Public/public-privacy/"
csv_guessed = ('senders_guessed_gender-%s.csv' % ml_shortname(url))
csv_manual = ('senders_manual_gender-%s.csv' % ml_shortname(url))
guessed = pd.read_csv(csv_guessed)
manual = pd.read_csv(csv_manual)
def combined_gender(row):
if str(row['Manual Gender']) != 'nan':
return row['Manual Gender']
else:
return row['Guessed Gender']
manual['Combined Gender'] = manual.apply(combined_gender, axis=1)
combined_series = manual.groupby('Combined Gender')['Message Count'].sum()
guessed_series = manual.groupby('Guessed Gender')['Message Count'].sum()
compared_counts = pd.DataFrame({'Manual':combined_series, 'Guessed':guessed_series})
compared_counts.plot(kind='bar')
figure,axes = plt.subplots(ncols=2, figsize=(8,4))
guessed_series.rename("Guessed").plot(kind='pie', ax=axes[0])
combined_series.rename("Manual").plot(kind='pie', ax=axes[1])
axes[0].set_aspect('equal')
axes[1].set_aspect('equal')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Reset
Step2: we can add elements
Step3: or we can use the Numpy module for numeric arrays (Protip
Step4: and what about empty arrays? Here we create a 5 x 1 vector
Step5: we can change values using [ index ]
Step6: and what about arrays of random numbers?
Step7: For loops
Step8: And what if you want a random element from the list?
Step9: Videos?
Step10: External Websites, HTML?
|
<ASSISTANT_TASK:>
Python Code:
import time
time.sleep(1000)
lista = ['perro' ,'gato']
print(lista)
lista[1]
import numpy as np
array_1 = np.array([3,4,5])
array_2 = np.array([4,8,7])
array_1 + array_2
array_1 = np.zeros(5)
print(array_1)
array_1[0]
arreglo = np.random.randint(1,10,500000)
np.mean(arreglo)
arreglo = np.random.randint(1,10,5000)
for indice,animal in enumerate(arreglo):
print(indice,animal)
print("fin")
from IPython.display import Image
Image(filename='files/large-hadron-collider.jpg')
from IPython.display import YouTubeVideo
#https://www.youtube.com/watch?v=_6uKZWnJLCM
YouTubeVideo('_6uKZWnJLCM')
from IPython.display import HTML
HTML('<iframe src=http://ipython.org/ width=700 height=350></iframe>')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: set paths
Step2: do a dummy overview scan
Step3: corresponding points
Step4: save the calibration
Step5: test the transformation
|
<ASSISTANT_TASK:>
Python Code:
import os
import logging
import json
from nis_util import do_large_image_scan, set_optical_configuration, get_position
logging.basicConfig(format='%(asctime)s - %(levelname)s in %(funcName)s: %(message)s', level=logging.DEBUG)
logger = logging.getLogger(__name__)
path_to_nis = 'C:\\Program Files\\NIS-Elements\\nis_ar.exe'
save_base_path = 'C:\\Users\\Nikon\\Documents\\David\\overview_calibrations'
calibration_name = 'right_color_251019.json'
if not os.path.exists(save_base_path):
os.makedirs(save_base_path)
if os.path.exists(os.path.join(save_base_path, calibration_name)):
logger.warning('output file already exists, will be overwritten')
calibration_fov = (-26500,
-50000,
-21053,
18177)
calibration_oc = 'DIA4x'
left, right, top, bottom = calibration_fov
set_optical_configuration(path_to_nis, calibration_oc)
do_large_image_scan(path_to_nis, '', left, right, top, bottom, close=False)
pos = get_position(path_to_nis)
coords1px = (2071*3, 2273*3)
coords1st = (-29777, -17306)
coords2px = (11437*3, 6683*3)
coords2st = (-49516, -7747)
coords3px = (2873*3, 10802*3)
coords3st = (-30954, 1180)
res = {}
res['bbox'] = calibration_fov
res['coords_px'] = [coords1px, coords2px, coords3px]
res['coords_st'] = [coords1st, coords2st, coords3st]
res['zpos'] = pos[2]
with open(os.path.join(save_base_path, calibration_name), 'w') as fd:
json.dump(res, fd, indent=1)
print('saved calibration: \n\n{}'.format(json.dumps(res, indent=1)))
import numpy as np
from skimage.transform import AffineTransform
test_coords = (10116*3, 15814*3)
coords_px = np.array([coords1px, coords2px, coords3px], dtype=np.float)
coords_st = np.array([coords1st, coords2st, coords3st], dtype=np.float)
at = AffineTransform()
at.estimate(coords_px, coords_st)
at(np.array(test_coords)).ravel()
from resources import left_color_calib
import numpy as np
from skimage.transform import AffineTransform
test_coords = (11159*3, 19913*3)
field_def_file = left_color_calib
with open(field_def_file, 'r') as fd:
field_calib = json.load(fd)
coords_px = np.array(field_calib['coords_px'], dtype=np.float)
coords_st = np.array(field_calib['coords_st'], dtype=np.float)
at = AffineTransform()
at.estimate(coords_px, coords_st)
at.params, at(np.array(test_coords)).ravel()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: After initialization, the "print" method can be used to directly print molar, weight or atomic amounts. Optional variables control the print precision and normalization of amounts.
Step2: Let's do something a little more complicated.
Step3: However, this composition is not the composition we wish to make in the lab. We need to make the following changes
Step4: Then we can change the component set to the oxidised, carbonated compounds and print the desired starting compositions for 2 g total mass
|
<ASSISTANT_TASK:>
Python Code:
from burnman import Composition
olivine_composition = Composition({'MgO': 1.8,
'FeO': 0.2,
'SiO2': 1.}, 'weight')
olivine_composition.print('molar', significant_figures=4,
normalization_component='SiO2', normalization_amount=1.)
olivine_composition.print('weight', significant_figures=4,
normalization_component='total', normalization_amount=1.)
olivine_composition.print('atomic', significant_figures=4,
normalization_component='total', normalization_amount=7.)
KLB1 = Composition({'SiO2': 44.48,
'Al2O3': 3.59,
'FeO': 8.10,
'MgO': 39.22,
'CaO': 3.44,
'Na2O': 0.30}, 'weight')
CO2_molar = KLB1.molar_composition['CaO'] + KLB1.molar_composition['Na2O']
O_molar = KLB1.molar_composition['FeO']*0.5
KLB1.add_components(composition_dictionary = {'CO2': CO2_molar,
'O': O_molar},
unit_type = 'molar')
KLB1.change_component_set(['Na2CO3', 'CaCO3', 'Fe2O3', 'MgO', 'Al2O3', 'SiO2'])
KLB1.print('weight', significant_figures=4, normalization_amount=2.)
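# Added for comparison: the same starting mix expressed in molar amounts,
# using the print method already demonstrated above.
KLB1.print('molar', significant_figures=4, normalization_amount=1.)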
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Error plots for MiniZephyr vs. the AnalyticalHelmholtz response
Step2: Relative error of the MiniZephyr solution (in %)
|
<ASSISTANT_TASK:>
Python Code:
import sys
sys.path.append('../')
import numpy as np
from anemoi import MiniZephyr, SimpleSource, AnalyticalHelmholtz
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import matplotlib
%matplotlib inline
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('png')
matplotlib.rcParams['savefig.dpi'] = 150 # Change this to adjust figure size
systemConfig = {
'dx': 1., # m
'dz': 1., # m
'c': 2500., # m/s
'rho': 1., # kg/m^3
'nx': 100, # count
'nz': 200, # count
'freq': 2e2, # Hz
}
nx = systemConfig['nx']
nz = systemConfig['nz']
dx = systemConfig['dx']
dz = systemConfig['dz']
MZ = MiniZephyr(systemConfig)
AH = AnalyticalHelmholtz(systemConfig)
SS = SimpleSource(systemConfig)
xs, zs = 25, 25
sloc = np.array([xs, zs]).reshape((1,2))
q = SS(sloc)
uMZ = MZ*q
uAH = AH(sloc)
clip = 0.1
plotopts = {
'vmin': -np.pi,
'vmax': np.pi,
'extent': [0., dx * nx, dz * nz, 0.],
'cmap': cm.bwr,
}
fig = plt.figure()
ax1 = fig.add_subplot(1,4,1)
plt.imshow(np.angle(uAH.reshape((nz, nx))), **plotopts)
plt.title('AH Phase')
ax2 = fig.add_subplot(1,4,2)
plt.imshow(np.angle(uMZ.reshape((nz, nx))), **plotopts)
plt.title('MZ Phase')
plotopts.update({
'vmin': -clip,
'vmax': clip,
})
ax3 = fig.add_subplot(1,4,3)
plt.imshow(uAH.reshape((nz, nx)).real, **plotopts)
plt.title('AH Real')
ax4 = fig.add_subplot(1,4,4)
plt.imshow(uMZ.reshape((nz, nx)).real, **plotopts)
plt.title('MZ Real')
fig.tight_layout()
fig = plt.figure()
ax = fig.add_subplot(1,1,1, aspect=100)
plt.plot(uAH.real.reshape((nz, nx))[:,xs], label='AnalyticalHelmholtz')
plt.plot(uMZ.real.reshape((nz, nx))[:,xs], label='MiniZephyr')
plt.legend(loc=4)
plt.title('Real part of response through xs=%d'%xs)
uMZr = uMZ.reshape((nz, nx))
uAHr = uAH.reshape((nz, nx))
plotopts.update({
'cmap': cm.jet,
'vmin': 0.,
'vmax': 20.,
})
fig = plt.figure()
ax1 = fig.add_subplot(1,2,1)
plt.imshow(abs(uAHr - uMZr)/(abs(uAHr)+1e-15) * 100, **plotopts)
cb = plt.colorbar()
cb.set_label('Percent error')
plotopts.update({'vmax': 5.})
ax2 = fig.add_subplot(1,2,2)
plt.imshow(abs(uAHr - uMZr)/(abs(uAHr)+1e-15) * 100, **plotopts)
cb = plt.colorbar()
cb.set_label('Percent error')
fig.tight_layout()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Expected results
Step2: Techniques used
Step3: Data wrangling in action
Step4: The extracted tree still contains much noise
Step5: Use of lambda functions and piping
Step6: Further clean-up
|
<ASSISTANT_TASK:>
Python Code:
Image("img/init.png")
Image("img/target_result.png")
# FOR WEB SCRAPING
from lxml import html
import requests
# FOR FUNCTIONAL PROGRAMMING
import cytoolz # pipe
# FOR DATA WRANGLING
import pandas as pd # use of R like dataframes
import re #re for regular expressions
# TO INSERT IMAGES
from IPython.display import Image
### Target URL
outbreakNewsURL = "http://www.who.int/csr/don/archive/disease/zika-virus-infection/en/"
page = requests.get(outbreakNewsURL)
tree = html.fromstring(page.content)
newsXPath = '//li'
zikaNews = tree.xpath(newsXPath)
### Store the relevant news in a list
zikaNews_dirty = [p.text_content() for p in zikaNews]
# Printing the first 20 elements
zikaNews_dirty[1:20] # omitting first element
Image("img/flatten_tree_data.png")
# Extract only the items containing the pattern "Zika virus infection "
#sample= '\n22 April 2016\n\t\t\tZika virus infection – Papua New Guinea - USA\n'
keywdEN ="Zika virus infection "
zikaNews_content = [s for s in zikaNews_dirty if re.search(keywdEN, s)]
zikaNews_content[0:10] # first 11 elements
#### Use of lambdas (avoid creating verbose Python functions with def f():{})
substitudeUnicodeDash = lambda s : re.sub(u'–',"@", s)
substituteNonUnicode = lambda s : re.sub(r"\s"," ",s)
removeSpace = lambda s: s.strip()
# Use of pipe to chain lambda functions within a list comprehension
### Should be familiar to those using R dplyr %>%
zikaNews_dirty = [cytoolz.pipe(s,
removeSpace,
substituteNonUnicode)
for s in zikaNews_content]
# List comprehension
zikaNews_dirty = [s.split("Zika virus infection") for s in zikaNews_dirty ]
zikaNews_dirty[0:10]
# Structure data into a Pandas dataframe
zika = pd.DataFrame(zikaNews_dirty, columns = ["Date","Locations"])
zika.head(n=20)
### Removing the first dash sign / for zika["Locations"]
# Step 1 : transform in a list of strings, via str.split()
# Step 2 : copy the list, except the first element list[1:]
# Step 3 : reconstitute the entire string using ' '.join(list[1:])
# Step 1 : transform in a list of strings, via str.split()
zika["Split_Locations"] = pd.Series(zika["Locations"].iloc[i].split() for i in range(len(zika)))
# Step 2 : copy the list, except the first element list[1:]
zika["Split_Locations"] = pd.Series([s[1:] for s in zika["Split_Locations"]])
# Step 3 : reconstitute the entire string using ' '.join(list[1:])
zika["Split_Locations"] = pd.Series([" ".join(s) for s in zika["Split_Locations"]])
zika["Split_Locations"] = pd.Series([s.split("-") for s in zika["Split_Locations"]])
zika["Split_Date"] = pd.Series([s.split() for s in zika["Date"]])
# Show the first 10 rows using HEAD
zika.head(n=10)
### Extract Day / Month / Year in the Split_Date column, 1 row is of the form [21, January, 2016]
zika["Day"]= pd.Series(zika["Split_Date"].iloc[i][0] for i in range(len(zika)))
zika["Month"]= pd.Series(zika["Split_Date"].iloc[i][1] for i in range(len(zika)))
zika["Year"]= pd.Series(zika["Split_Date"].iloc[i][2] for i in range(len(zika)))
# Show the first 10 rows using HEAD
zika.head(n=10)
# Extract Country and Territory
zika["Country"] = pd.Series(zika["Split_Locations"].iloc[i][0] for i in range(len(zika)))
zika["Territory"] = pd.Series(zika["Split_Locations"].iloc[i][len(zika["Split_Locations"].iloc[i])-1] for i in range(len(zika)))
# Show the first 20 rows using HEAD
zika[['Split_Locations','Country','Territory']].head(20)
zika["Territory"] =pd.Series(zika["Territory"][i]
if zika["Territory"][i] != zika["Country"][i]
else " " for i in range(len(zika))
)
# Show the first 20 rows using HEAD
zika[['Split_Locations','Country','Territory']].head(20)
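# Added summary: a quick tally of Zika news items per country.
print(zika["Country"].value_counts().head(10))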
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Assuming the installation worked, you can now import the tweepy module.
Step2: The next step is to get a consumer_key, consumer_secret, access_token, and access_token_secret from your twitter account. This is not straightforward but fortunately you should only need to do it once. Here are the basic steps
Step3: Now we can check that it is working by pulling the timeline from your personal twitter feed
Step4: What we really want are the feeds from the presidential candidates. You need to search for the Twitter feed handle for each person. Put their name in the search menu and go to their twitter timeline. Their feed name is in the URL. For example here is Barack Obama's timeline
|
<ASSISTANT_TASK:>
Python Code:
!pip install tweepy
import tweepy
consumer_key=''
consumer_secret = ''
access_token = ''
access_token_secret = ''
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth)
public_tweets = api.home_timeline()
for tweet in public_tweets:
print(tweet.text)
#Test the code
public_tweets = api.user_timeline(screen_name = 'barackobama', count = 10, include_rts = True)
f = open('barackobama_tweets.txt', 'w')
for tweet in public_tweets:
f.write(tweet.text+'\n')
f.close()
'''
This chunk of code downloads a bunch of tweets. The tweepy API will only
download 200 tweets at a time, and you can only access the last ~3200 tweets.
This is a modified version of https://gist.github.com/yanofsky/5436496
'''
# whose tweets do I want to download?
twitter_names = [ 'BarackObama', 'realDonaldTrump','HillaryClinton','timkaine','mike_pence']
# how many do I want? Max = 3200 or so.
number_to_download = 2000
# loop over all of the twitter handles
for name in twitter_names:
print(name)
# initialize a list to hold all the tweepy tweets
alltweets = []
#make initial request for most recent tweets (200 is the maximum allowed count)
new_tweets = api.user_timeline(screen_name = name, count = 200, include_rts = True)
# save most recent tweets in our new list
alltweets.extend(new_tweets)
# save the ID of the oldest tweet
oldest = alltweets[-1].id - 1
# keep grabbing tweets until we've reached the number we want.
while len(alltweets) < number_to_download:
# all subsequent requests use the max_id param to prevent duplication
new_tweets = api.user_timeline(screen_name = name, count=200, max_id=oldest, include_rts=True)
# stop if the API returns nothing more, otherwise the loop never terminates
if len(new_tweets) == 0:
break
# extend the list again
alltweets.extend(new_tweets)
# save the ID of the oldest tweet again
oldest = alltweets[-1].id - 1
# give the user some sense of what's going on
print("...%s tweets downloaded so far" % (len(alltweets)) )
# write 'em out!
f = open(name+'_tweets.txt', 'w')
for tweet in alltweets:
f.write(tweet.text+'\n')
f.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: I'll skip ahead and use a pre-canned PCA routine from scikit-learn (but I'll dig into it a bit later!) Let's see what happens to the transformed variables, ${\bf S}$
Step2: Another way to look at ${\bf V}$ is to think of its rows as projections. Since the row vectors of ${\bf V}$ are orthogonal to each other, the projected data ${\bf S}$ lies in a new "coordinate system" specified by ${\bf V}$. Furthermore, the new coordinate system is sorted in the decreasing order of variance in the original data. So, PCA can be thought of as calculating a new coordinate system where the basis vectors point toward the directions of largest variance first.
Step3: Yet another use for ${\bf V}$ is to perform a dimensionality reduction. In many scenarios you encounter in image manipulation (as I'll see soon), I might want to have a more concise representation of the data ${\bf X}$. PCA with $K < P$ is one way to reduce the dimensionality
Step4: Preprocessing
Step5: Then I perform SVD to calculate the projection matrix $V$. By default, U,s,V=svd(...) returns full matrices, which will return $n \times n$ matrix U, $n$-dimensional vector of singular values s, and $d \times d$ matrix V. But here, I don't really need $d \times d$ matrix V; with full_matrices=False, svd only returns $n \times d$ matrix for V.
Step6: I can also plot how much each eigenvector in V contributes to the overall variance by plotting variance_ratio = $\frac{s^2}{\sum s^2}$. (Notice that s is already in the decreasing order.) The cumsum (cumulative sum) of variance_ratio then shows how much of the variance is explained by components up to n_components.
Step7: Since I'm dealing with face data, each row vector of ${\bf V}$ is called an "eigenface". The first "eigenface" is the one that explains a lot of variances in the data, whereas the last one explains the least.
Step8: Now I'll try reconstructing faces with different number of principal components (PCs)! Now, the transformed X is reconstructed by multiplying by the sample standard deviations for each dimension and adding the sample mean. For this reason, even for zero components, you get a face-like image!
Step9: Image morphing
|
<ASSISTANT_TASK:>
Python Code:
from numpy.random import standard_normal # Gaussian variables
N = 1000; P = 5
X = standard_normal((N, P))
W = X - X.mean(axis=0,keepdims=True)
print(dot(W[:,0], W[:,1]))
from sklearn.decomposition import PCA
S=PCA(whiten=True).fit_transform(X)
print(dot(S[:,0], S[:,1]))
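# Extra check (added): whitening should also give each component (near) unit variance.
print(S.std(axis=0))  # expect values close to 1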
from numpy.random import standard_normal
from matplotlib.patches import Ellipse
from numpy.linalg import svd
@interact
def plot_2d_pca(mu_x=FloatSlider(min=-3.0, max=3.0, value=0),
mu_y=FloatSlider(min=-3.0, max=3.0, value=0),
sigma_x=FloatSlider(min=0.2, max=1.8, value=1.8),
sigma_y=FloatSlider(min=0.2, max=1.8, value=0.3),
theta=FloatSlider(min=0.0, max=pi, value=pi/6), center=False):
mu=array([mu_x, mu_y])
sigma=array([sigma_x, sigma_y])
R=array([[cos(theta),-sin(theta)],[sin(theta),cos(theta)]])
X=dot(standard_normal((1000, 2)) * sigma[newaxis,:],R.T) + mu[newaxis,:]
# Plot the points and the ellipse
fig, ax = plt.subplots(figsize=(8,8))
ax.scatter(X[:200,0], X[:200,1], marker='.')
ax.grid()
M=8.0
ax.set_xlim([-M,M])
ax.set_ylim([-M,M])
e=Ellipse(xy=array([mu_x, mu_y]), width=sigma_x*3, height=sigma_y*3, angle=theta/pi*180,
facecolor=[1.0,0,0], alpha=0.3)
ax.add_artist(e)
# Perform PCA and plot the vectors
if center:
X_mean=X.mean(axis=0,keepdims=True)
else:
X_mean=zeros((1,2))
# Doing PCA here... I'm using svd instead of scikit-learn PCA, I'll come back to this.
U,s,V =svd(X-X_mean, full_matrices=False)
for v in dot(diag(s/sqrt(X.shape[0])),V): # Each eigenvector
ax.arrow(X_mean[0,0],X_mean[0,1],-v[0],-v[1],
head_width=0.5, head_length=0.5, fc='b', ec='b')
Ustd=U.std(axis=0)
ax.set_title('std(U*s) [%f,%f]' % (Ustd[0]*s[0],Ustd[1]*s[1]))
import pickle
dataset=pickle.load(open('data/cafe.pkl','r'))
disp('dataset.images shape is %s' % str(dataset.images.shape))
disp('dataset.data shape is %s' % str(dataset.data.shape))
@interact
def plot_face(image_id=(0, dataset.images.shape[0]-1)):
plt.imshow(dataset.images[image_id],cmap='gray')
plt.title('Image Id = %d, Gender = %d' % (dataset.target[image_id], dataset.gender[image_id]))
plt.axis('off')
X=dataset.data.copy() # So that Iwon't mess up the data in the dataset\
X_mean=X.mean(axis=0,keepdims=True) # Mean for each dimension across sample (centering)
X_std=X.std(axis=0,keepdims=True)
X-=X_mean
disp(all(abs(X.mean(axis=0))<1e-12)) # Are means for all dimensions very close to zero?
from numpy.linalg import svd
U,s,V=svd(X,compute_uv=True, full_matrices=False)
disp(str(U.shape))
disp(str(s.shape))
disp(str(V.shape))
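# Sanity check (added): the thin SVD factors should reproduce the centered data X.
disp(str(allclose(X, dot(U * s, V))))  # U * s scales each column of U by a singular value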
variance_ratio=s**2/(s**2).sum() # Normalized so that they add to one.
@interact
def plot_variance_ratio(n_components=(1, len(variance_ratio))):
n=n_components-1
fig, axs = plt.subplots(1, 2, figsize=(12, 5))
axs[0].plot(variance_ratio)
axs[0].set_title('Explained Variance Ratio')
axs[0].set_xlabel('n_components')
axs[0].axvline(n, color='r', linestyle='--')
axs[0].axhline(variance_ratio[n], color='r', linestyle='--')
axs[1].plot(cumsum(variance_ratio))
axs[1].set_xlabel('n_components')
axs[1].set_title('Cumulative Sum')
captured=cumsum(variance_ratio)[n]
axs[1].axvline(n, color='r', linestyle='--')
axs[1].axhline(captured, color='r', linestyle='--')
axs[1].annotate(s='%f%% with %d components' % (captured * 100, n_components), xy=(n, captured),
xytext=(10, 0.5), arrowprops=dict(arrowstyle="->"))
image_shape=dataset.images.shape[1:] # (H x W)
@interact
def plot_eigenface(eigenface=(0, V.shape[0]-1)):
v=V[eigenface]*X_std
plt.imshow(v.reshape(image_shape), cmap='gray')
plt.title('Eigenface %d (%f to %f)' % (eigenface, v.min(), v.max()))
plt.axis('off')
@interact
def plot_reconstruction(image_id=(0,dataset.images.shape[0]-1), n_components=(0, V.shape[0]-1),
pc1_multiplier=FloatSlider(min=-2,max=2, value=1)):
# This is where Iperform the projection and un-projection
Vn=V[:n_components]
M=ones(n_components)
if n_components > 0:
M[0]=pc1_multiplier
X_hat=dot(multiply(dot(X[image_id], Vn.T), M), Vn)
# Un-center
I=X[image_id] + X_mean
I_hat = X_hat + X_mean
D=multiply(I-I_hat,I-I_hat) / multiply(X_std, X_std)
# And plot
fig, axs = plt.subplots(1, 3, figsize=(10, 10))
axs[0].imshow(I.reshape(image_shape), cmap='gray', vmin=0, vmax=1)
axs[0].axis('off')
axs[0].set_title('Original')
axs[1].imshow(I_hat.reshape(image_shape), cmap='gray', vmin=0, vmax=1)
axs[1].axis('off')
axs[1].set_title('Reconstruction')
axs[2].imshow(1-D.reshape(image_shape), cmap='gray', vmin=0, vmax=1)
axs[2].axis('off')
axs[2].set_title('Difference^2 (mean = %f)' % sqrt(D.mean()))
plt.tight_layout()
def plot_morph(left=0, right=1, mix=0.5):
# Projected images
x_lft=dot(X[left], V.T)
x_rgt=dot(X[right], V.T)
# Mix
x_avg = x_lft * (1.0-mix) + x_rgt * (mix)
# Un-project
X_hat = dot(x_avg[newaxis,:], V)
I_hat = X_hat + X_mean
# And plot
fig, axs = plt.subplots(1, 3, figsize=(10, 10))
axs[0].imshow(dataset.images[left], cmap='gray', vmin=0, vmax=1)
axs[0].axis('off')
axs[0].set_title('Left')
axs[1].imshow(I_hat.reshape(image_shape), cmap='gray', vmin=0, vmax=1)
axs[1].axis('off')
axs[1].set_title('Morphed (%.2f %% right)' % (mix * 100))
axs[2].imshow(dataset.images[right], cmap='gray', vmin=0, vmax=1)
axs[2].axis('off')
axs[2].set_title('Right')
plt.tight_layout()
interact(plot_morph,
left=IntSlider(max=dataset.images.shape[0]-1),
right=IntSlider(max=dataset.images.shape[0]-1,value=1),
mix=FloatSlider(value=0.5, min=0, max=1.0))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
a = np.asarray([1,2,3,4])
pos = 2
element = 66
a = np.insert(a, pos, element)
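# Added check: the insert should place 66 at index 2
print(a)  # expected: [ 1  2 66  3  4]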
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Peak finding
Step3: Here is a string with the first 10000 digits of $\pi$ (after the decimal). Write code to perform the following
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import seaborn as sns
import numpy as np
def find_peaks(a):
"""Find the indices of the local maxima in a sequence."""
leest = []
if a[0] > a[1]:
leest.append(0)
for x in range(1,len(a)-1):
if (a[x-1]<a[x]) and (a[x]>a[x+1]):
leest.append(x)
if a[len(a)-1] > a[len(a)-2]:
leest.append(len(a)-1)
return leest
p1 = find_peaks([2,0,1,0,2,0,1])
assert np.allclose(p1, np.array([0,2,4,6]))
p2 = find_peaks(np.array([0,1,2,3]))
assert np.allclose(p2, np.array([3]))
p3 = find_peaks([3,2,1,0])
assert np.allclose(p3, np.array([0]))
from sympy import pi, N
pi_digits_str = str(N(pi, 10001))[2:]
x = np.zeros(10000, dtype=int )
for a in range(len(pi_digits_str)):
x[a] = int(pi_digits_str[a])
ind = find_peaks(x)
dis = np.diff(ind)
plt.hist(dis, bins=20, range=(0,15))
plt.title('Distribution of local Maxima in Pi')
plt.xlabel('Distance between local maxima')
plt.ylabel('Frequency')
plt.show()
assert True # use this for grading the pi digits histogram
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: For now, we are only interested in relatively small systems; we will try lattice sizes between $2\times 2$ and $5\times 5$. With this, we set the parameters for DMRG and ED
Step2: We will need a helper function to extract the ground state energy from the solutions
Step3: We invoke the solvers and extract the ground state energies from the solutions. First we use exact diagonalization, which, unfortunately, does not scale beyond a lattice size of $4\times 4$.
Step4: DMRG scales to all the lattice sizes we want
Step5: Calculating the ground state energy with SDP
Step6: We set the additional parameters for this formulation, including the order of the relaxation
Step7: Then we iterate over the lattice range, defining a new Hamiltonian and new constraints in each step
Step8: Comparison
|
<ASSISTANT_TASK:>
Python Code:
import pyalps
lattice_range = [2, 3, 4, 5]
parms = [{
'LATTICE' : "open square lattice", # Set up the lattice
'MODEL' : "spinless fermions", # Select the model
'L' : L, # Lattice dimension
't' : -1 , # This and the following
'mu' : 2, # are parameters to the
'U' : 0 , # Hamiltonian.
'V' : 0,
'Nmax' : 2 , # These parameters are
'SWEEPS' : 20, # specific to the DMRG
'MAXSTATES' : 300, # solver.
'NUMBER_EIGENVALUES' : 1,
'MEASURE_ENERGY' : 1
} for L in lattice_range ]
def extract_ground_state_energies(data):
E0 = []
for Lsets in data:
allE = []
for q in pyalps.flatten(Lsets):
allE.append(q.y[0])
E0.append(allE[0])
return sorted(E0, reverse=True)
prefix_sparse = 'comparison_sparse'
input_file_sparse = pyalps.writeInputFiles(prefix_sparse, parms[:-1])
res = pyalps.runApplication('sparsediag', input_file_sparse)
sparsediag_data = pyalps.loadEigenstateMeasurements(
pyalps.getResultFiles(prefix=prefix_sparse))
sparsediag_ground_state_energy = extract_ground_state_energies(sparsediag_data)
sparsediag_ground_state_energy.append(0)
prefix_dmrg = 'comparison_dmrg'
input_file_dmrg = pyalps.writeInputFiles(prefix_dmrg, parms)
res = pyalps.runApplication('dmrg',input_file_dmrg)
dmrg_data = pyalps.loadEigenstateMeasurements(
pyalps.getResultFiles(prefix=prefix_dmrg))
dmrg_ground_state_energy = extract_ground_state_energies(dmrg_data)
from sympy.physics.quantum.dagger import Dagger
from ncpol2sdpa import SdpRelaxation, generate_operators, \
fermionic_constraints, get_neighbors
level = 1
gam, lam = 0, 1
sdp_ground_state_energy = []
for lattice_dimension in lattice_range:
n_vars = lattice_dimension * lattice_dimension
C = generate_operators('C%s' % (lattice_dimension), n_vars)
hamiltonian = 0
for r in range(n_vars):
hamiltonian -= 2*lam*Dagger(C[r])*C[r]
for s in get_neighbors(r, lattice_dimension):
hamiltonian += Dagger(C[r])*C[s] + Dagger(C[s])*C[r]
hamiltonian -= gam*(Dagger(C[r])*Dagger(C[s]) + C[s]*C[r])
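    # The loop above builds the spinless-fermion Hamiltonian on the open square lattice:
    # a chemical-potential term -2*lam*C[r]^dagger*C[r] on every site, nearest-neighbour
    # hopping C[r]^dagger*C[s] + h.c., and a pairing term proportional to gam (gam = 0 here).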
substitutions = fermionic_constraints(C)
sdpRelaxation = SdpRelaxation(C)
sdpRelaxation.get_relaxation(level, objective=hamiltonian, substitutions=substitutions)
sdpRelaxation.solve()
sdp_ground_state_energy.append(sdpRelaxation.primal)
data = [dmrg_ground_state_energy,\
sparsediag_ground_state_energy,\
sdp_ground_state_energy]
labels = ["DMRG", "ED", "SDP"]
print ("{:>4} {:>9} {:>10} {:>10} {:>10}").format("", *lattice_range)
for label, row in zip(labels, data):
print ("{:>4} {:>7.6f} {:>7.6f} {:>7.6f} {:>7.6f}").format(label, *row)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: $\Rightarrow$ We get various different prices
Step2: $\Longrightarrow$ There was a 7:1 stock split
Step3: For the Apple chart one can see that price increments seem to correlate with the price level
Step4: Now the roughness of the chart looks more even
Step5: Call option
Step6: Option prices
Step7: This curve can also be calculated theoretically. Using stochastic calculus, one can deduce the famous Black-Scholes equation to calculate this curve. We will not go into detail ...
Step8: ... but will just state the final result!
Step9: For small prices we do not need to own shares to hedge the option. For high prices we need exactly one share. The interesting area is around the strike price.
Step10: Challenges
Step11: Proposed solution
|
<ASSISTANT_TASK:>
Python Code:
aapl = data.DataReader('AAPL', 'yahoo', '2000-01-01')
print(aapl.head())
plt.plot(aapl.Close)
print(aapl['Adj Close'].head())
%matplotlib inline
plt.plot(aapl['Adj Close'])
plt.ylabel('price')
plt.xlabel('year')
plt.title('Price history of Apple stock')
plt.show()
ibm = data.DataReader('IBM', 'yahoo', '2000-1-1')
print(ibm['Adj Close'].head())
%matplotlib inline
plt.plot(ibm['Adj Close'])
plt.ylabel('price')
plt.xlabel('year')
plt.title('Price history of IBM stock')
Log_Data = plt.figure()
%matplotlib inline
plt.plot(np.log(aapl['Adj Close']))
plt.ylabel('logarithmic price')
plt.xlabel('year')
plt.title('Logarithmic price history of Apple stock')
S0 = 1
sigma = 0.01
mu = 0
r = np.random.randn((1000))
S = S0 * np.cumprod(np.exp(sigma *r))
%matplotlib inline
plt.plot(S)
S0 = 1.5 # start price
K = 1.0 # strike price
mu = 0 # average growth
sigma = 0.2/np.sqrt(252) # volatility
N = 10000 # runs
M = 252*4 # length of each run (252 business days per year times 4 years)
def call_price(S, K):
return max(0.0, S-K)
def MC_call_price(S0, K, mu, sigma, N, M):
CSum = 0
SSum = 0
for n in range(N):
r = np.random.randn((M))
S = S0 * np.cumprod(np.exp(sigma *r))
SSum += S
CSum += call_price(S[M-1], K)
return CSum/N
S0 = np.linspace(0.0, 2.0,21)
C = []
for k in range(21):
C.append(MC_call_price(k*2/20, K, mu, sigma, N, M))
C
plt.plot(S0, C)
plt.ylabel('Call price')
plt.xlabel('Start price')
plt.title('Call price')
plt.show()
from IPython.display import Image
Image("Picture_Then_Miracle_Occurs.PNG")
d_1 = lambda σ, T, t, S, K: 1. / σ / np.sqrt(T - t) * (np.log(S / K) + 0.5 * (σ ** 2) * (T-t))
d_2 = lambda σ, T, t, S, K: 1. / σ / np.sqrt(T - t) * (np.log(S / K) - 0.5 * (σ ** 2) * (T-t))
call = lambda σ, T, t, S, K: S * sp.stats.norm.cdf( d_1(σ, T, t, S, K) ) - K * sp.stats.norm.cdf( d_2(σ, T, t, S, K) )
Delta = lambda σ, T, t, S, K: sp.stats.norm.cdf( d_1(σ, T, t, S, K) )
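# Reference for the closed-form expressions above (zero interest rate, as assumed here):
#   C(S, t) = S * N(d1) - K * N(d2)
#   d1 = [ln(S/K) + 0.5*sigma^2*(T - t)] / (sigma*sqrt(T - t)),  d2 = d1 - sigma*sqrt(T - t)
# and the hedge ratio Delta = dC/dS = N(d1), which is what the plots below visualise.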
plt.plot(np.linspace(sigma, 4., 100), call(1., 1., .9, np.linspace(0.1, 4., 100), 1.))
plt.plot(d_1(1., 1., 0., np.linspace(0.1, 2.9, 10), 1))
#plt.plot(np.linspace(sigma, 4., 100), Delta(1., 1., .9, np.linspace(0.1, 4., 100), 1.))
plt.plot(np.linspace(sigma, 1.9, 100), Delta(1., 1., 0.2, np.linspace(0.01, 1.9, 100), 1.))
plt.plot(np.linspace(sigma, 1.9, 100), Delta(1., 1., 0.6, np.linspace(0.01, 1.9, 100), 1.))
plt.plot(np.linspace(sigma, 1.9, 100), Delta(1., 1., 0.9, np.linspace(0.01, 1.9, 100), 1.))
plt.plot(np.linspace(sigma, 1.9, 100), Delta(1., 1., 0.99, np.linspace(0.01, 1.9, 100), 1.))
plt.plot(np.linspace(sigma, 1.9, 100), Delta(1., 1., 0.9999, np.linspace(0.01, 1.9, 100), 1.))
plt.xlabel("Price/strike price")
plt.ylabel("$\Delta$")
plt.legend(['t = 0.2','t = 0.6', 't = 0.9', 't = 0.99', 't = 0.9999'], loc='best')
def Simulate_Price_Series(S0, sigma, N, M):
for n in range(N):
r = np.random.randn((M))
S = S0 * np.cumprod(np.exp(sigma * r))
return S
plt.plot(1 + np.cumsum(np.diff(S) * Delta(sigma, 4, 0, S, K)[:-1]))  # delta-hedged portfolio value
plt.plot(S)
len(Delta(sigma, 4, 0, S, K)[1:999])
def Calculate_Portfolio(S0, K, mu, sigma, N, M):
    # Minimal sketch: simulate one price path and track the value of the
    # corresponding delta-hedged portfolio (start value normalised to 1).
    S = Simulate_Price_Series(S0, sigma, N, M)
    StockDelta = Delta(sigma, 4, 0, S, K)
    portfolio = 1 + np.cumsum(np.diff(S) * StockDelta[:-1])
    return portfolio
def MC_call_price_Loc_Vol(S0, K, mu, sigma, N, M, vol0=0.2/np.sqrt(252)):
    # Stochastic-volatility variant: here sigma acts as the vol-of-vol and vol0 is the
    # starting volatility level (the default reuses the daily volatility assumed above).
    CSum = 0
    SSum = 0
    for n in range(N):
        r = np.random.randn((M))
        r2 = np.random.randn((M))
        vol = vol0 * np.cumprod(np.exp(sigma*r2))
        S = S0 * np.cumprod(np.exp(vol * r))
        SSum += S
        CSum += call_price(S[M-1], K)
    return CSum/N
S0 = np.linspace(0.0, 2.0,21)
CLoc = []
for k in range(21):
CLoc.append(MC_call_price_Loc_Vol(k*2/20, K, mu, 0.1*sigma, N, M))
CLoc
plt.plot(S0, C)
plt.plot(S0, CLoc)
plt.ylabel('Call price')
plt.xlabel('Start price')
plt.title('Call price')
plt.show()
def iterate_series(n=1000, S0 = 1):
while True:
r = np.random.randn((n))
S = np.cumsum(r) + S0
yield S, r
for (s, r) in iterate_series():
t, t_0 = 0, 0
for t in np.linspace(0, len(s)-1, 100):
r = s[int(t)] / s[int(t_0)]
t_0 = t
break
state = (stock_val, besitz)
state = rel_stock_price, tau
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now that we have the library, let's start by creating a vector of 5 elements.
Step2: What is the difference between a vector and a list? The vector, being a numpy array, lets us perform many mathematical operations very simply.
Step3: Vector indexing and slices
Step4: Creating vectors filled with 0 or 1
Step5: Matrices
Step6: Accessing matrices
Step7: Modifying matrices
Step8: Review exercises (not covered during the workshop, they are for extra practice)
|
<ASSISTANT_TASK:>
Python Code:
# importamos la librería numpy, y le damos como nombre np dentro del programa
import numpy as np
lista=[25,12,15,66,12.5]
vector=np.array(lista)
print(vector)
print("- vector original")
print(vector)
print("- sumarle 1 a cada elemento del vector:")
print(vector+1)
print("- multiplicar por 5 cada elemento del vector:")
print(vector*5)
print("- suma de los elementos:")
print(np.sum(vector))
print("- promedio (media) de los elementos:")
print(np.mean(vector)) #
print("- el vector sumado a si mismo:")
print(vector+vector)
print("- suma de vectores vector1 y vector2 (mismo tamaño):")
vector2=np.array([11,55,1.2,7.4,-8])
print(vector+vector2)
print(vector[3])
print(vector[1:4])
print(vector[1:])
print(vector[:4])
print(vector[:])
print("- Vector de ceros:")
vector_ceros=np.zeros(5)
print(vector_ceros)
print("- Vector de unos:")
vector_unos=np.ones(5)
print(vector_unos)
#Combinando este tipo de creaciones con las operaciones aritméticas,
#podemos hacer varias inicializaciones muy rápidamente
# Por ejemplo, para crear un vector cuyos valores iniciales son todos 2.
print("- Vector con todos los elementos con valor 2:")
vector_dos=np.zeros(5)+2
print(vector_dos)
print("- Vector con todos los elementos con valor 2 (otra forma):")
vector_dos_otro=np.ones((5))*2
print(vector_dos_otro)
print("- Matriz creada con una lista de listas:")
lista_de_listas=[ [1 ,-4],
[12 , 3],
[7.2, 5]]
matriz = np.array(lista_de_listas)
print(matriz)
print("- Matriz creada con np.zeros:")
dimensiones=(2,3)
matriz_ceros = np.zeros(dimensiones)
print(matriz_ceros)
print("- Matriz creada con np.ones:")
dimensiones=(3,2)
matriz_unos = np.ones(dimensiones)
print(matriz_unos)
#también podemos usar np.copy para copiar una matriz
print("- Copia de la matriz creada con np.ones:")
matriz_unos_copia=np.copy(matriz_unos)
print(matriz_unos_copia)
# Ejercicio
# Crear una matriz de 4x9, que esté inicializada con el valor 0.5
#IMPLEMENTAR - COMIENZO
matriz=np.zeros((4,9))+0.5
# matriz=np.ones((4,9))*0.5 #(VERSION ALTERNATIVA)
#IMPLEMENTAR - FIN
print(matriz)
lista_de_listas=[ [1 ,-4],
[12 , 3],
[7.2, 5]]
a = np.array(lista_de_listas)
print("Elementos individuales")
print(a[0,1])
print(a[2,1])
print("Vector de elementos de la fila 1")
print(a[1,:])
print("Vector de elementos de la columna 0")
print(a[:,0])
print("Submatriz de 2x2 con las primeras dos filas")
print(a[0:2,:])
print("Submatriz de 2x2 con las ultimas dos filas")
print(a[1:3,:])
lista_de_listas=[ [1,-4],
[12,3],
[7, 5.0]]
a = np.array(lista_de_listas)
print("- Matriz original:")
print(a)
print("- Le asignamos el valor 4 a los elementos de la columna 0:")
a[:,0]=4
print(a)
print("- Dividimos por 3 la columna 1:")
a[:,1]=a[:,1]/3.0
print(a)
print("- Multiplicamos por 5 la fila 1:")
a[1,:]=a[1,:]*5
print(a)
print("- Le sumamos 1 a toda la matriz:")
a=a+1
print(a)
#Ejercicios
lista_de_listas=[ [-44,12],
[12.0,51],
[1300, -5.0]]
a = np.array(lista_de_listas)
print("Matriz original")
print(a)
# Restarle 5 a la fila 2 de la matriz
print("Luego de restarle 5 a la fila 2:")
#IMPLEMENTAR - COMIENZO
a[2,:]=a[2,:]-5
#IMPLEMENTAR - FIN
print(a)
# Multiplicar por 2 toda la matriz
print("Luego de multiplicar por 2 toda la matriz:")
#IMPLEMENTAR - COMIENZO
a = a * 2
#IMPLEMENTAR - FIN
print(a)
# Dividir por -5 las dos primeras filas de la matriz
print("Luego de dividir por -5 las primeras dos filas de la matriz:")
#IMPLEMENTAR - COMIENZO
a[0:2,:]=a[0:2,:]/(-5)
#IMPLEMENTAR - FIN
print(a)
#Imprimir la ultima fila de la matriz
print("La última fila de la matriz:")
#IMPLEMENTAR - COMIENZO
ultima_fila=a[2,:]
#IMPLEMENTAR - FIN
print(ultima_fila)
# Más ejercicios
lista_de_listas=[ [-44,12],
[12.0,51],
[1300, -5.0]]
a = np.array(lista_de_listas)
# Calcular la suma y el promedio de los elementos de a utilizando dos fors anidados
suma = 0
promedio= 0
#IMPLEMENTAR - COMIENZO
for i in range(3):
for j in range(2):
suma+=a[i,j]
promedio=suma/(3*2)
print("La suma de los elementos de A es:")
print(suma)
print("El promedio de los elementos de A es:")
print(promedio)
#IMPLEMENTAR - FIN
# Imprimir la suma de los elementos de a utilizando np.sum
#IMPLEMENTAR - COMIENZO
print("La suma de los elementos de A es:")
print(np.sum(a))
#IMPLEMENTAR - FIN
# Imprimir el promedio de los elementos de a utilizando slices y np.mean
#IMPLEMENTAR - COMIENZO
print("El promedio de los elementos de A es:")
print(np.mean(a))
#IMPLEMENTAR - FIN
# Generar una matriz de 7 por 9.
# Las primeras 3 columnas de la matriz tienen que tener el valor 0.
# La siguiente columna debe tener el valor 0.5, excepto por el último valor de esa columna, que tiene que ser 0.7.
# Las otras tres columnas deben tener el valor 1.
#IMPLEMENTAR - COMIENZO
a=np.zeros((7,9))
a[:,3]=0.5
a[6,3]=0.7
a[:,4:]=1
# Luego imprimir la matriz
print("La matriz generada:")
print(a)
# Imprimir también el promedio de la ultima fila.
print("Promedio de la ultima fila")
print(np.mean(a[6,:]))
#IMPLEMENTAR - FIN
#La siguiente linea crea una matriz aleatoria de 5 por 5 con valores entre 0 y 1
matriz_aleatoria=np.random.rand(5,5)
print("Valores de la matriz aleatoria:")
print(matriz_aleatoria)
#Imprimir las posiciones (Fila y columna) de los elementos de la matriz
# que son mayores que 0.5
#IMPLEMENTAR - COMIENZO
print("Posiciones con valor mayor a 0.5:")
for i in range(5):
for j in range(5):
if matriz_aleatoria[i,j]>0.5:
print(i,j)
#IMPLEMENTAR - FIN
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Looking at the representation of the resulting ndarray object, it looks just like a list except that it is wrapped in array(). A plain list with the elements 0, 1, 2, 3 is in fact created as follows.
Step2: However, the ndarray object a and the list object b differ in many ways. A list is internally organised like a linked list, so each element may have a different type. An ndarray, like a C array, occupies a contiguous block of memory, so every element must share the same type. In exchange for this restriction, element access and looping over the array are much faster.
Step3: For a list object, an explicit loop has to be used, as follows.
Step4: Measuring the run time of each cell with IPython's %time magic shows that the ndarray universal (vectorised) operation is faster than the list loop. Allocating the ndarray memory in one block is part of the reason, and universal operations run NumPy's internal loops, so the iteration itself is faster as well.
Step5: Creating multi-dimensional arrays
Step6: The number of dimensions and the shape of an array are available through the ndim and shape attributes.
Step7: Indexing multi-dimensional arrays
Step8: Slicing multi-dimensional arrays
Step9: Array indexing
Step10: This can also be written more simply, as follows.
Step11: For indices in two or more dimensions, as follows
Step12: In integer array indexing, every element of the index array must be an integer index pointing at a single element of the original ndarray.
Step13: The integer index array does not have to match the size of the original array; when the same element is referenced repeatedly, the result can even be larger than the original.
Step14: Array indexing
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
a = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
print(type(a))
a
L = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
print(type(L))
L
a = np.arange(1000)  # arange: just "array range" -- turns the range into an ndarray
%time a2 = a**2
a1 = np.arange(10)
print(a1)
print(2 * a1)
L = range(1000)
%time L2 = [i**2 for i in L]
L = range(10)
print(L)
print(2 * L)
a = np.array([0, 1, 2])
a
b = np.array([[0, 1, 2], [3, 4, 5]]) # 2 x 3 array
b
a = np.array([0, 0, 0, 1])
a
c = np.array([[[1,2],[3,4]],[[5,6],[7,8]]]) # 2 x 2 x 2 array
c
print(a.ndim)
print(a.shape)
a = np.array([[1,2,3 ],[3,4,5]])
a
a.ndim
a.shape
print(b.ndim)
print(b.shape)
print(c.ndim)
print(c.shape)
a = np.array([[0, 1, 2], [3, 4, 5]])
a
a[0,0] # 첫번째 행의 첫번째 열
a[0,1] # 첫번째 행의 두번째 열
a[-1, -1] # 마지막 행의 마지막 열
a = np.array([[0, 1, 2, 3], [4, 5, 6, 7]])
a
a[0, :] # 첫번째 행 전체
a[:, 1] # 두번째 열 전체
a[1, 1:] # 두번째 행의 두번째 열부터 끝열까지
a = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
idx = np.array([True, False, True, False, True, False, True, False, True, False])
a[idx]
a[a % 2 == 0]
a[a % 2] # 0 is True, 1 is False
a = np.array([[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]])
[a % 2 == 0]
a[[a % 2 == 0]]
a[a % 2]
a = np.array([0, 1, 2, 3, 4, 10, 6, 7, 8, 9]) * 10
idx = np.array([0, 5, 7, 9, 9])  # these denote positions (indices)
a[idx]
a = np.array([0, 1, 2, 3]) * 10
idx = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2])
a[idx]
a[0]
joobun = np.array(["BSY","PJY","PJG","BSJ"])
idx = np.array([0,0,0,1,1,1,2,2,2,3,3,3,0,1,2,3])
joobun[idx]
a = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
a[(a % 2 == 0) & (a % 3 == 1)]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Quick Start
Step2: 1. Loading the data
Step3: 2. Import prior-knowledge kinase-substrate relationships from PhosphoSitePlus
Step4: 3. KSEA
Step5: In de Graaf et al., they associated (amongst others) the Casein kinase II alpha (CSNK2A1) with higher activity after prolonged stimulation with prostaglandin E2. Here, we plot the activity scores of CSNK2A1 for all three methods of KSEA, which are in good agreement.
Step6: 3.2. KSEA in detail
Step7: 3.2.1. Mean method
Step8: 3.2.2. Alternative Mean method
Step9: 3.2.3. Delta Method
|
<ASSISTANT_TASK:>
Python Code:
# Import useful libraries
import numpy as np
import pandas as pd
# Import required libraries for data visualisation
import matplotlib.pyplot as plt
import seaborn as sns
# Import the package
import kinact
# Magic
%matplotlib inline
# import data
data_fc, data_p_value = kinact.get_example_data()
# import prior knowledge
adj_matrix = kinact.get_kinase_targets()
print data_fc.head()
print
print data_p_value.head()
# Perform ksea using the Mean method
score, p_value = kinact.ksea.ksea_mean(data_fc=data_fc['5min'].dropna(),
interactions=adj_matrix,
mP=data_fc['5min'].values.mean(),
delta=data_fc['5min'].values.std())
print pd.DataFrame({'score': score, 'p_value': p_value}).head()
# Perform ksea using the Alternative Mean method
score, p_value = kinact.ksea.ksea_mean_alt(data_fc=data_fc['5min'].dropna(),
p_values=data_p_value['5min'],
interactions=adj_matrix,
mP=data_fc['5min'].values.mean(),
delta=data_fc['5min'].values.std())
print pd.DataFrame({'score': score, 'p_value': p_value}).head()
# Perform ksea using the Delta method
score, p_value = kinact.ksea.ksea_delta(data_fc=data_fc['5min'].dropna(),
p_values=data_p_value['5min'],
interactions=adj_matrix)
print pd.DataFrame({'score': score, 'p_value': p_value}).head()
# Read data
data_raw = pd.read_csv('../kinact/data/deGraaf_2014_jurkat.csv', sep=',', header=0)
# Filter for those p-sites that were matched ambiguously
data_reduced = data_raw[~data_raw['Proteins'].str.contains(';')]
# Create identifier for each phosphorylation site, e.g. P06239_S59 for the Serine 59 in the protein Lck
data_reduced.loc[:, 'ID'] = data_reduced['Proteins'] + '_' + data_reduced['Amino acid'] + \
data_reduced['Positions within proteins']
data_indexed = data_reduced.set_index('ID')
# Extract only relevant columns
data_relevant = data_indexed[[x for x in data_indexed if x.startswith('Average')]]
# Rename columns
data_relevant.columns = [x.split()[-1] for x in data_relevant]
# Convert abundaces into fold changes compared to control (0 minutes after stimulation)
data_fc = data_relevant.sub(data_relevant['0min'], axis=0)
data_fc.drop('0min', axis=1, inplace=True)
# Also extract the p-values for the fold changes
data_p_value = data_indexed[[x for x in data_indexed if x.startswith('p value') and x.endswith('vs0min')]]
data_p_value.columns = [x.split('_')[-1].split('vs')[0] + 'min' for x in data_p_value]
data_p_value = data_p_value.astype('float') # Excel saved the p-values as strings, not as floating point numbers
print data_fc.head()
print data_p_value.head()
# Read data
ks_rel = pd.read_csv('../kinact/data/PhosphoSitePlus.txt', sep='\t')
# The data from the PhosphoSitePlus database is not provided as comma-separated value file (csv),
# but instead, a tab = \t delimits the individual cells
# Restrict the data on interactions in the organism of interest
ks_rel_human = ks_rel.loc[(ks_rel['KIN_ORGANISM'] == 'human') & (ks_rel['SUB_ORGANISM'] == 'human')]
# Create p-site identifier of the same format as before
ks_rel_human.loc[:, 'psite'] = ks_rel_human['SUB_ACC_ID'] + '_' + ks_rel_human['SUB_MOD_RSD']
# Create adjencency matrix (links between kinases (columns) and p-sites (rows) are indicated with a 1, NA otherwise)
ks_rel_human.loc[:, 'value'] = 1
adj_matrix = pd.pivot_table(ks_rel_human, values='value', index='psite', columns='GENE', fill_value=0)
print adj_matrix.head()
print adj_matrix.sum(axis=0).sort_values(ascending=False).head()
score, p_value = kinact.ksea.ksea_delta(data_fc=data_fc['5min'],
p_values=data_p_value['5min'],
interactions=adj_matrix,
)
print pd.DataFrame({'score': score, 'p_value': p_value}).head()
# Calculate the KSEA scores for all data with the ksea_mean method
activity_mean = pd.DataFrame({c: kinact.ksea.ksea_mean(data_fc=data_fc[c],
interactions=adj_matrix,
mP=data_fc.values.mean(),
delta=data_fc.values.std())[0]
for c in data_fc})
activity_mean = activity_mean[['5min', '10min', '20min', '30min', '60min']]
print activity_mean.head()
# Calculate the KSEA scores for all data with the ksea_mean method, using the median
activity_median = pd.DataFrame({c: kinact.ksea.ksea_mean(data_fc=data_fc[c],
interactions=adj_matrix,
mP=data_fc.values.mean(),
delta=data_fc.values.std(), median=True)[0]
for c in data_fc})
activity_median = activity_median[['5min', '10min', '20min', '30min', '60min']]
print activity_median.head()
# Calculate the KSEA scores for all data with the ksea_mean_alt method
activity_mean_alt = pd.DataFrame({c: kinact.ksea.ksea_mean_alt(data_fc=data_fc[c],
p_values=data_p_value[c],
interactions=adj_matrix,
mP=data_fc.values.mean(),
delta=data_fc.values.std())[0]
for c in data_fc})
activity_mean_alt = activity_mean_alt[['5min', '10min', '20min', '30min', '60min']]
print activity_mean_alt.head()
# Calculate the KSEA scores for all data with the ksea_mean method, using the median
activity_median_alt = pd.DataFrame({c: kinact.ksea.ksea_mean_alt(data_fc=data_fc[c],
p_values=data_p_value[c],
interactions=adj_matrix,
mP=data_fc.values.mean(),
delta=data_fc.values.std(),
median=True)[0]
for c in data_fc})
activity_median_alt = activity_median_alt[['5min', '10min', '20min', '30min', '60min']]
print activity_median_alt.head()
# Calculate the KSEA scores for all data with the ksea_delta method
activity_delta = pd.DataFrame({c: kinact.ksea.ksea_delta(data_fc=data_fc[c],
p_values=data_p_value[c],
interactions=adj_matrix)[0]
for c in data_fc})
activity_delta = activity_delta[['5min', '10min', '20min', '30min', '60min']]
print activity_delta.head()
sns.set(context='poster', style='ticks')
sns.heatmap(activity_mean_alt, cmap=sns.blend_palette([sns.xkcd_rgb['amber'],
sns.xkcd_rgb['almost black'],
sns.xkcd_rgb['bright blue']],
as_cmap=True))
plt.show()
kinase='CSNK2A1'
df_plot = pd.DataFrame({'mean': activity_mean.loc[kinase],
'delta': activity_delta.loc[kinase],
'mean_alt': activity_mean_alt.loc[kinase]})
df_plot['time [min]'] = [5, 10, 20, 30, 60]
df_plot = pd.melt(df_plot, id_vars='time [min]', var_name='method', value_name='activity score')
g = sns.FacetGrid(df_plot, col='method', sharey=False, size=3, aspect=1)
g = g.map(sns.pointplot, 'time [min]', 'activity score')
plt.subplots_adjust(top=.82)
plt.show()
data_condition = data_fc['60min'].copy()
p_values = data_p_value['60min']
kinase = 'CDK1'
substrates = adj_matrix[kinase].replace(0, np.nan).dropna().index
detected_p_sites = data_fc.index
intersect = list(set(substrates).intersection(detected_p_sites))
mS = data_condition.loc[intersect].mean()
mP = data_fc.values.mean()
m = len(intersect)
delta = data_fc.values.std()
z_score = (mS - mP) * np.sqrt(m) * 1/delta
from scipy.stats import norm
p_value_mean = norm.sf(abs(z_score))
print mS, p_value_mean
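# In other words, the mean method is a z-test of the substrate-set mean mS against the
# global mean mP, with standard error delta / sqrt(m); norm.sf gives the one-sided p-value.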
cut_off = -np.log10(0.05)
set_alt = data_condition.loc[intersect].where(p_values.loc[intersect] > cut_off).dropna()
mS_alt = set_alt.mean()
z_score_alt = (mS_alt - mP) * np.sqrt(len(set_alt)) * 1/delta
p_value_mean_alt = norm.sf(abs(z_score_alt))
print mS_alt, p_value_mean_alt
cut_off = -np.log10(0.05)
score_delta = len(data_condition.loc[intersect].where((data_condition.loc[intersect] > 0) &
(p_values.loc[intersect] > cut_off)).dropna()) -\
len(data_condition.loc[intersect].where((data_condition.loc[intersect] < 0) &
(p_values.loc[intersect] > cut_off)).dropna())
M = len(data_condition)
n = len(intersect)
N = len(np.where(p_values.loc[adj_matrix.index.tolist()] > cut_off)[0])
from scipy.stats import hypergeom
hypergeom_dist = hypergeom(M, n, N)
p_value_delta = hypergeom_dist.pmf(len(p_values.loc[intersect].where(p_values.loc[intersect] > cut_off).dropna()))
print score_delta, p_value_delta
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Cosmic-ray composition effective area analysis
Step2: Load simulation DataFrame and apply quality cuts
Step3: Define energy binning for this analysis
Step4: Define functions to be fit to effective area
Step5: Calculate effective areas
Step6: Fit functions to effective area data
Step7: Plot result
Step8: Effective area as quality cuts are sequentially applied
|
<ASSISTANT_TASK:>
Python Code:
%load_ext watermark
%watermark -u -d -v -p numpy,matplotlib,scipy,pandas,sklearn,mlxtend
%matplotlib inline
from __future__ import division, print_function
from collections import defaultdict
import os
import numpy as np
from scipy import optimize
from scipy.stats import chisquare
import pandas as pd
import matplotlib.pyplot as plt
import seaborn.apionly as sns
import comptools as comp
color_dict = comp.analysis.get_color_dict()
# config = 'IC79'
config = 'IC86.2012'
df_sim = comp.load_sim(config=config, test_size=0)
df_sim
# df_sim, cut_dict_sim = comp.load_dataframe(datatype='sim', config=config, return_cut_dict=True)
# selection_mask = np.array([True] * len(df_sim))
# # standard_cut_keys = ['IceTopQualityCuts', 'lap_InIce_containment',
# # # 'num_hits_1_60', 'max_qfrac_1_60',
# # 'InIceQualityCuts', 'num_hits_1_60']
# standard_cut_keys = ['passed_IceTopQualityCuts', 'FractionContainment_Laputop_InIce',
# 'passed_InIceQualityCuts', 'num_hits_1_60']
# # for cut in ['MilliNCascAbove2', 'MilliQtotRatio', 'MilliRloglBelow2', 'StochRecoSucceeded']:
# # standard_cut_keys += ['InIceQualityCuts_{}'.format(cut)]
# for key in standard_cut_keys:
# selection_mask *= cut_dict_sim[key]
# print(key, np.sum(selection_mask))
# df_sim = df_sim[selection_mask]
log_energy_bins = np.arange(5.0, 9.51, 0.05)
# log_energy_bins = np.arange(5.0, 9.51, 0.1)
energy_bins = 10**log_energy_bins
energy_midpoints = (energy_bins[1:] + energy_bins[:-1]) / 2
energy_min_fit, energy_max_fit = 5.8, 7.0
midpoints_fitmask = (energy_midpoints >= 10**energy_min_fit) & (energy_midpoints <= 10**energy_max_fit)
log_energy_bins
np.log10(energy_midpoints[midpoints_fitmask])
def constant(energy, c):
return c
def linefit(energy, m, b):
return m*np.log10(energy) + b
def sigmoid_flat(energy, p0, p1, p2):
return p0 / (1 + np.exp(-p1*np.log10(energy) + p2))
def sigmoid_slant(energy, p0, p1, p2, p3):
return (p0 + p3*np.log10(energy)) / (1 + np.exp(-p1*np.log10(energy) + p2))
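# In these parametrisations p0 is the plateau (saturation) effective area, p1 sets how
# sharply the acceptance turns on in log10(E), p2/p1 locates the threshold energy, and
# sigmoid_slant adds a linear tilt p3 to the plateau.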
def red_chisquared(obs, fit, sigma, n_params):
zero_mask = sigma != 0
return np.nansum(((obs[zero_mask] - fit[zero_mask])/sigma[zero_mask]) ** 2) / (len(obs[zero_mask]) - n_params)
# return np.sum(((obs - fit)/sigma) ** 2) / (len(obs) - 1 - n_params)
np.sum(midpoints_fitmask)-3
eff_area, eff_area_error, _ = comp.calculate_effective_area_vs_energy(df_sim, energy_bins)
eff_area_light, eff_area_error_light, _ = comp.calculate_effective_area_vs_energy(df_sim[df_sim.MC_comp_class == 'light'], energy_bins)
eff_area_heavy, eff_area_error_heavy, _ = comp.calculate_effective_area_vs_energy(df_sim[df_sim.MC_comp_class == 'heavy'], energy_bins)
eff_area, eff_area_error, _ = comp.analysis.get_effective_area(df_sim,
energy_bins, energy='MC')
eff_area_light, eff_area_error_light, _ = comp.analysis.get_effective_area(
df_sim[df_sim.MC_comp_class == 'light'],
energy_bins, energy='MC')
eff_area_heavy, eff_area_error_heavy, _ = comp.analysis.get_effective_area(
df_sim[df_sim.MC_comp_class == 'heavy'],
energy_bins, energy='MC')
eff_area_light
p0 = [1.5e5, 8.0, 50.0]
popt_light, pcov_light = optimize.curve_fit(sigmoid_flat, energy_midpoints[midpoints_fitmask],
eff_area_light[midpoints_fitmask], p0=p0,
sigma=eff_area_error_light[midpoints_fitmask])
popt_heavy, pcov_heavy = optimize.curve_fit(sigmoid_flat, energy_midpoints[midpoints_fitmask],
eff_area_heavy[midpoints_fitmask], p0=p0,
sigma=eff_area_error_heavy[midpoints_fitmask])
print(popt_light)
print(popt_heavy)
perr_light = np.sqrt(np.diag(pcov_light))
print(perr_light)
perr_heavy = np.sqrt(np.diag(pcov_heavy))
print(perr_heavy)
avg = (popt_light[0] + popt_heavy[0]) / 2
print('avg eff area = {}'.format(avg))
eff_area_light
light_chi2 = red_chisquared(eff_area_light, sigmoid_flat(energy_midpoints, *popt_light),
eff_area_error_light, len(popt_light))
print(light_chi2)
heavy_chi2 = red_chisquared(eff_area_heavy,
sigmoid_flat(energy_midpoints, *popt_heavy),
eff_area_error_heavy, len(popt_heavy))
print(heavy_chi2)
fig, ax = plt.subplots()
# plot effective area data points with poisson errors
ax.errorbar(np.log10(energy_midpoints), eff_area_light, yerr=eff_area_error_light,
ls='None', marker='.')
ax.errorbar(np.log10(energy_midpoints), eff_area_heavy, yerr=eff_area_error_heavy,
ls='None', marker='.')
# plot corresponding sigmoid fits to effective area
x = 10**np.arange(5.0, 9.5, 0.01)
ax.plot(np.log10(x), sigmoid_flat(x, *popt_light),
color=color_dict['light'], label='light', marker='None', ls='-')
ax.plot(np.log10(x), sigmoid_flat(x, *popt_heavy),
color=color_dict['heavy'], label='heavy', marker='None')
avg_eff_area = (sigmoid_flat(x, *popt_light) + sigmoid_flat(x, *popt_heavy)) / 2
ax.plot(np.log10(x), avg_eff_area,
color=color_dict['total'], label='avg', marker='None')
ax.fill_between(np.log10(x),
avg_eff_area-0.01*avg_eff_area,
avg_eff_area+0.01*avg_eff_area,
color=color_dict['total'], alpha=0.5)
ax.axvline(6.4, marker='None', ls='-.', color='k')
ax.set_ylabel('Effective area [m$^2$]')
ax.set_xlabel('$\mathrm{\log_{10}(E_{true}/GeV)}$')
# ax.set_title('$\mathrm{A_{eff} = 143177 \pm 1431.77 \ m^2}$')
ax.grid()
# ax.set_ylim([0, 180000])
ax.set_xlim([5.4, 8.1])
ax.set_title(config)
#set label style
ax.ticklabel_format(style='sci',axis='y')
ax.yaxis.major.formatter.set_powerlimits((0,0))
leg = plt.legend(title='True composition')
for legobj in leg.legendHandles:
legobj.set_linewidth(2.0)
# eff_area_outfile = os.path.join(comp.paths.figures_dir, 'effective-area-{}.png'.format(config))
# comp.check_output_dir(eff_area_outfile)
# plt.savefig(eff_area_outfile)
plt.show()
df_sim, cut_dict_sim = comp.load_dataframe(datatype='sim', config='IC79', return_cut_dict=True)
standard_cut_keys = ['num_hits_1_60', 'IceTopQualityCuts', 'lap_InIce_containment',
# 'num_hits_1_60', 'max_qfrac_1_60',
'InIceQualityCuts']
# for cut in ['MilliNCascAbove2', 'MilliQtotRatio', 'MilliRloglBelow2', 'StochRecoSucceeded']:
# standard_cut_keys += ['InIceQualityCuts_{}'.format(cut)]
eff_area_dict = {}
eff_area_err_dict = {}
selection_mask = np.array([True] * len(df_sim))
for key in standard_cut_keys:
selection_mask *= cut_dict_sim[key]
print(key, np.sum(selection_mask))
eff_area, eff_area_error, _ = comp.analysis.get_effective_area(df_sim[selection_mask],
energy_bins, energy='MC')
# eff_area, eff_area_error = comp.analysis.effective_area.effective_area(df_sim[selection_mask],
# np.arange(5.0, 9.51, 0.1))
eff_area_dict[key] = eff_area
eff_area_err_dict[key] = eff_area_error
fig, ax = plt.subplots()
cut_labels = {'num_hits_1_60': 'NStations/NChannels', 'IceTopQualityCuts': 'IceTopQualityCuts',
'lap_InIce_containment': 'InIce containment', 'InIceQualityCuts': 'InIceQualityCuts'}
for key in standard_cut_keys:
# plot effective area data points with poisson errors
ax.errorbar(np.log10(energy_midpoints), eff_area_dict[key], yerr=eff_area_err_dict[key],
ls='None', marker='.', label=cut_labels[key], alpha=0.75)
ax.set_ylabel('Effective area [m$^2$]')
ax.set_xlabel('$\log_{10}(E_{\mathrm{MC}}/\mathrm{GeV})$')
ax.grid()
# ax.set_ylim([0, 180000])
ax.set_xlim([5.4, 9.6])
#set label style
ax.ticklabel_format(style='sci',axis='y')
ax.yaxis.major.formatter.set_powerlimits((0,0))
leg = plt.legend()
plt.savefig('/home/jbourbeau/public_html/figures/effective-area-cuts.png')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Steps
Step1: 0. helper methods
Step2: 1. load text-format fragment mech
Step3: 2. get thermo and kinetics
Step4: 2.1 correct entropy for certain fragments
Step5: 2.2 correct kinetics for reactions with certain fragments
Step6: 3. save in chemkin format
Step7: 4. correct atom count in chemkin
|
<ASSISTANT_TASK:>
Python Code:
import os
from tqdm import tqdm
from rmgpy import settings
from rmgpy.data.rmg import RMGDatabase
from rmgpy.kinetics import KineticsData
from rmgpy.rmg.model import getFamilyLibraryObject
from rmgpy.data.kinetics.family import TemplateReaction
from rmgpy.data.kinetics.depository import DepositoryReaction
from rmgpy.data.kinetics.common import find_degenerate_reactions
from rmgpy.chemkin import saveChemkinFile, saveSpeciesDictionary
import afm
import afm.fragment
import afm.reaction
def read_frag_mech(frag_mech_path):
reaction_string_dict = {}
current_family = ''
with open(frag_mech_path) as f_in:
for line in f_in:
if line.startswith('#') and ':' in line:
_, current_family = [token.strip() for token in line.split(':')]
elif line.strip() and not line.startswith('#'):
reaction_string = line.strip()
if current_family not in reaction_string_dict:
reaction_string_dict[current_family] = [reaction_string]
else:
reaction_string_dict[current_family].append(reaction_string)
return reaction_string_dict
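# Illustrative layout of frag_mech.txt assumed by read_frag_mech (names are made up):
#   # Kinetics family: R_Recombination      <- a '#' line containing ':' sets the family
#   FragA + FragB == FragC                  <- one 'reactants == products' string per line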
def parse_reaction_string(reaction_string):
reactant_side, product_side = [token.strip() for token in reaction_string.split('==')]
reactant_strings = [token.strip() for token in reactant_side.split('+')]
product_strings = [token.strip() for token in product_side.split('+')]
return reactant_strings, product_strings
job_name = 'one-sided'
afm_base = os.path.dirname(afm.__path__[0])
working_dir = os.path.join(afm_base, 'examples', '2mobenzene', job_name)
# load RMG database to create reactions
database = RMGDatabase()
database.load(
path = settings['database.directory'],
thermoLibraries = ['primaryThermoLibrary'], # can add others if necessary
kineticsFamilies = 'all',
reactionLibraries = [],
kineticsDepositories = ''
)
thermodb = database.thermo
# Add training reactions
for family in database.kinetics.families.values():
family.addKineticsRulesFromTrainingSet(thermoDatabase=thermodb)
# average up all the kinetics rules
for family in database.kinetics.families.values():
family.fillKineticsRulesByAveragingUp()
# load fragment from smiles-like string
fragment_smiles_filepath = os.path.join(working_dir, 'fragment_smiles.txt')
fragments = []
with open(fragment_smiles_filepath) as f_in:
for line in f_in:
if line.strip() and not line.startswith('#') and ':' in line:
label, smiles = [token.strip() for token in line.split(":")]
frag = afm.fragment.Fragment(label=label).from_SMILES_like_string(smiles)
frag.assign_representative_species()
frag.species_repr.label = label
for prev_frag in fragments:
if frag.isIsomorphic(prev_frag):
raise Exception('Isomorphic duplicate found: {0} and {1}'.format(label, prev_frag.label))
fragments.append(frag)
# construct label-key fragment dictionary
fragment_dict = {}
for frag0 in fragments:
if frag0.label not in fragment_dict:
fragment_dict[frag0.label] = frag0
else:
raise Exception('Fragment with duplicated labels found: {0}'.format(frag0.label))
# put aromatic isomer in front of species.molecule
# 'cause that's the isomer we want to react
for frag in fragments:
species = frag.species_repr
species.generateResonanceIsomers()
for mol in species.molecule:
if mol.isAromatic():
species.molecule = [mol]
break
# load fragment mech in text
fragment_mech_filepath = os.path.join(working_dir, 'frag_mech.txt')
reaction_string_dict = read_frag_mech(fragment_mech_filepath)
# generate reactions
fragment_rxns = []
for family_label in reaction_string_dict:
# parse reaction strings
print "Processing {0}...".format(family_label)
for reaction_string in tqdm(reaction_string_dict[family_label]):
reactant_strings, product_strings = parse_reaction_string(reaction_string)
reactants = [fragment_dict[reactant_string].species_repr for reactant_string in reactant_strings]
products = [fragment_dict[product_string].species_repr.molecule[0] for product_string in product_strings]
for idx, reactant in enumerate(reactants):
for mol in reactant.molecule:
mol.props['label'] = reactant_strings[idx]
for idx, product in enumerate(products):
product.props['label'] = product_strings[idx]
# this script requires reactants to be a list of Species objects
# products to be a list of Molecule objects.
# returned rxns have reactants and products in Species type
new_rxns = database.kinetics.generate_reactions_from_families(reactants=reactants,
products=products,
only_families=[family_label],
resonance=True)
if len(new_rxns) != 1:
print reaction_string + family_label
raise Exception('Non-unique reaction is generated with {0}'.format(reaction_string))
# create fragment reactions
rxn = new_rxns[0]
fragrxn = afm.reaction.FragmentReaction(index=-1,
reversible=True,
family=rxn.family,
reaction_repr=rxn)
fragment_rxns.append(fragrxn)
from rmgpy.data.rmg import getDB
from rmgpy.thermo.thermoengine import processThermoData
from rmgpy.thermo import NASA
import rmgpy.constants as constants
import math
thermodb = getDB('thermo')
# calculate thermo for each species
for fragrxn in tqdm(fragment_rxns):
rxn0 = fragrxn.reaction_repr
for spe in rxn0.reactants + rxn0.products:
thermo0 = thermodb.getThermoData(spe)
if spe.label in ['RCCCCR', 'LCCCCR', 'LCCCCL']:
thermo0.S298.value_si += constants.R * math.log(2)
spe.thermo = processThermoData(spe, thermo0, NASA)
family = getFamilyLibraryObject(rxn0.family)
# Get the kinetics for the reaction
kinetics, source, entry, isForward = family.getKinetics(rxn0, \
templateLabels=rxn0.template, degeneracy=rxn0.degeneracy, \
estimator='rate rules', returnAllKinetics=False)
rxn0.kinetics = kinetics
if not isForward:
rxn0.reactants, rxn0.products = rxn0.products, rxn0.reactants
rxn0.pairs = [(p,r) for r,p in rxn0.pairs]
# convert KineticsData to Arrhenius forms
if isinstance(rxn0.kinetics, KineticsData):
rxn0.kinetics = rxn0.kinetics.toArrhenius()
# correct barrier heights of estimated kinetics
if isinstance(rxn0,TemplateReaction) or isinstance(rxn0,DepositoryReaction): # i.e. not LibraryReaction
rxn0.fixBarrierHeight() # also converts ArrheniusEP to Arrhenius.
fragrxts = [fragment_dict[rxt.label] for rxt in rxn0.reactants]
fragprds = [fragment_dict[prd.label] for prd in rxn0.products]
fragpairs = [(fragment_dict[p0.label],fragment_dict[p1.label]) for p0,p1 in rxn0.pairs]
fragrxn.reactants=fragrxts
fragrxn.products=fragprds
fragrxn.pairs=fragpairs
fragrxn.kinetics=rxn0.kinetics
for frag in fragments:
spe = frag.species_repr
thermo0 = thermodb.getThermoData(spe)
if spe.label in ['RCCCCR', 'LCCCCR', 'LCCCCL']:
thermo0.S298.value_si += constants.R * math.log(2)
spe.thermo = processThermoData(spe, thermo0, NASA)
if spe.label in ['RCCCCR', 'LCCCCR', 'LCCCCL']:
print spe.label
print spe.getFreeEnergy(670)/4184
for fragrxn in tqdm(fragment_rxns):
rxn0 = fragrxn.reaction_repr
if rxn0.family in ['R_Recombination', 'H_Abstraction', 'R_Addition_MultipleBond']:
for spe in rxn0.reactants + rxn0.products:
if spe.label in ['RCC*CCR', 'LCC*CCR', 'LCC*CCL']:
rxn0.kinetics.changeRate(4)
fragrxn.kinetics=rxn0.kinetics
species_list = []
for frag in fragments:
species = frag.species_repr
species_list.append(species)
len(fragments)
reaction_list = []
for fragrxn in fragment_rxns:
rxn = fragrxn.reaction_repr
reaction_list.append(rxn)
len(reaction_list)
# dump chemkin files
chemkin_path = os.path.join(working_dir, 'chem_annotated.inp')
dictionaryPath = os.path.join(working_dir, 'species_dictionary.txt')
saveChemkinFile(chemkin_path, species_list, reaction_list)
saveSpeciesDictionary(dictionaryPath, species_list)
def update_atom_count(tokens, parts, R_count):
# remove R_count*2 C and R_count*5 H
string = ''
if R_count == 0:
return 'G'.join(parts)
else:
H_count = int(tokens[2].split('C')[0])
H_count_update = H_count - 5*R_count
C_count = int(tokens[3])
C_count_update = C_count - 2*R_count
tokens = tokens[:2] + [str(H_count_update)+'C'] + [C_count_update]
# Line 1
string += '{0:<16} '.format(tokens[0])
string += '{0!s:<2}{1:>3d}'.format('H', H_count_update)
string += '{0!s:<2}{1:>3d}'.format('C', C_count_update)
string += ' ' * (4 - 2)
string += 'G' + parts[1]
return string
corrected_chemkin_path = os.path.join(working_dir, 'chem_annotated.inp')
output_string = ''
with open(chemkin_path) as f_in:
readThermo = False
for line in f_in:
if line.startswith('THERM ALL'):
readThermo = True
if not readThermo:
output_string += line
continue
if line.startswith('!'):
output_string += line
continue
if 'G' in line and '1' in line:
parts = [part for part in line.split('G')]
tokens = [token.strip() for token in parts[0].split()]
species_label = tokens[0]
R_count = species_label.count('R')
L_count = species_label.count('L')
updated_line = update_atom_count(tokens, parts, R_count+L_count)
output_string += updated_line
else:
output_string += line
with open(corrected_chemkin_path, 'w') as f_out:
f_out.write(output_string)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load a frame from a real simulation.
Step2: Load the governing properties from the frame.
Step3: Load the last midplane slice of the scalar field and manipulate it into a periodic box.
Step4: Make sure it looks OK.
Step5: Stokes is linear and we have periodic boundaries, so we can solve it directly using Fourier transforms and the frequency-space Green's function (which is diagonal).
Step6: Look ok?
Step7: Now we want to turn this into something Darcy-Weisbach-esque. We don't have uniform forcing, so we take an average.
Step8: Rayleigh-Taylor types like the Froude number, which doesn't really make sense
Step9: Instead, we normalize "the right way" using the viscosity
Step10: Print everything out.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib
matplotlib.rcParams['figure.figsize'] = (10.0, 16.0)
import matplotlib.pyplot as plt
import numpy as np
import scipy as sp
from scipy import fftpack
from numpy import fft
import json
from functools import partial
class Foo: pass
from chest import Chest
from slict import CachedSlict
from glopen import glopen, glopen_many
name = "HighAspect/HA_viscosity_4.0E-4/HA_viscosity_4.0E-4"
arch = "alcf#dtn_mira/projects/alpha-nek/experiments"
c = Chest(path="{:s}-results".format(name),
open=partial(glopen, endpoint=arch),
open_many = partial(glopen_many, endpoint=arch))
sc = CachedSlict(c)
c.prefetch(sc[:,'t_xy'].full_keys())
with glopen(
"{:s}.json".format(name), mode='r',
endpoint = arch,
) as f:
p = json.load(f)
L = 1./p["kmin"]
Atwood = p["atwood"]
g = p["g"]
viscosity = p["viscosity"]
T_end = sc[:,'t_xy'].keys()[-1]
phi_raw = sc[T_end, 't_xy']
phi_raw = np.concatenate((phi_raw, np.flipud(phi_raw)), axis=0)
phi_raw = np.concatenate((phi_raw, np.flipud(phi_raw)), axis=0)
phi_raw = np.concatenate((phi_raw, np.fliplr(phi_raw)), axis=1)
phi_raw = np.concatenate((phi_raw, np.fliplr(phi_raw)), axis=1)
raw_shape = phi_raw.shape
nx = raw_shape[0]
ny = raw_shape[0]
phi = phi_raw[nx/8:5*nx/8, ny/8:5*ny/8]
nx = phi.shape[0]
ny = phi.shape[1]
plt.figure()
plt.imshow(phi)
plt.colorbar();
# Setup the frequencies
dx = L / ny
X = np.tile(np.linspace(0, L, nx), (ny, 1))
Y = np.tile(np.linspace(0, L, ny), (nx, 1)).transpose()
rfreqs = fft.rfftfreq(nx, dx) * 2 * np.pi;
cfreqs = fft.fftfreq(nx, dx)* 2 * np.pi;
rones = np.ones(rfreqs.shape[0]);
cones = np.ones(cfreqs.shape[0]);
# RHS comes from the forcing
F = phi * Atwood * g / viscosity
# Transform forward
p1 = fft.rfftn(F)
# Green's function
p1 = p1 / (np.square(np.outer(cfreqs, rones)) + np.square(np.outer(cones, rfreqs)))
p1[0,0] = 0
# Transform back
w = fft.irfftn(p1)
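# What just happened: with F = Atwood*g*phi/viscosity, the Stokes balance for the vertical
# velocity reduces to a Poisson problem, viscosity*laplacian(w) = -Atwood*g*phi. In Fourier
# space that is |k|^2 * w_hat = F_hat, so dividing by (kx^2 + ky^2) applies the Green's
# function; the k = 0 mode (the mean of w) is undetermined and is set to zero.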
plt.figure()
plt.imshow(w)
plt.colorbar();
A_tilde = np.sum(np.abs(phi))/ (nx * ny)
Froude = np.sum(np.abs(w)) / np.sqrt(g * Atwood * A_tilde * L) / (nx * ny)
Right = np.sum(np.abs(w)) * viscosity / (g * Atwood * A_tilde * L**2)/ (nx*ny)
dff = 64 / 16 * 14.227
print("L={:f}, A={:f}, A_til={:f}, g={:f}, nu={:f}. D-W is {:f}".format(
L, Atwood, A_tilde, g, viscosity, 1./(dff*2)))
print(" Froude: {:10f} | Right: {:10f}".format(Froude, Right))
print(" C1 = {:f} * C0 ".format(1./Right))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Setup plotting
Step2: Load data
Step3: load and cache the raw data (we only load first 100 time points because we're on a single node)
Step4: load the sources
Step5: estimate the mean
Step6: Run a block algorithm
Step7: estimate score (fraction of matches based on centroid distance)
Step8: estimate overlap and exactness (based on degree of pixel overlap for matching sources)
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from scipy.stats import norm
from thunder import SourceExtraction
from thunder.extraction import OverlapBlockMerger
import matplotlib.pyplot as plt
%matplotlib inline
from thunder import Colorize
image = Colorize.image
path = 's3://neuro.datasets/challenges/neurofinder/02.00/'
data = tsc.loadImages(path + 'images', startIdx=0, stopIdx=100)
data.cache()
data.count();
truth = tsc.loadSources(path + 'sources/sources.json')
im = data.mean()
merger = OverlapBlockMerger(0.1)
model = SourceExtraction('nmf', merger=merger, componentsPerBlock=5, percentile=95, minArea=100, maxArea=500)
sources = model.fit(data, size=(32, 32), padding=8)
image(sources.masks(im.shape, base=truth, color='random', outline=True), size=10)
recall, precision, score = truth.similarity(sources, metric='distance', minDistance=5)
print('recall: %.2f' % recall)
print('precision: %.2f' % precision)
print('score: %.2f' % score)
overlap, exactness = tuple(np.nanmean(truth.overlap(sources, method='rates', minDistance=5), axis=0))
print('overlap: %.2f' % overlap)
print('exactness: %.2f' % exactness)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Raw data
Step2: Plotting the data
Step3: Calculate and plot velocity
|
<ASSISTANT_TASK:>
Python Code:
# First, we'll "import" the software packages needed.
import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
inline_rc = dict(mpl.rcParams)
# Starting a line with a hashtag tells the program not to read the line.
# That way we can write "comments" to humans trying to figure out what the code does.
# Blank lines don't do anything either, but they can make the code easier to read.
# Whenever you type "something =" it defines a new variable, "something",
# and sets it equal to whatever follows the equals sign. That could be a number,
# another variable, or in this case an entire table of numbers.
# enter raw data
data = pd.DataFrame.from_items([
('time (s)', [0,1,2,3]),
('position (m)', [0,2,4,6])
])
# display data table
data
# set variables = data['column label']
time = data['time (s)']
pos = data['position (m)']
# Uncomment the next line to make it look like a graph from xkcd.com
# plt.xkcd()
# to make normal-looking plots again execute:
# mpl.rcParams.update(inline_rc)
# this makes a scatterplot of the data
# plt.scatter(x values, y values)
plt.scatter(time, pos)
plt.title("Constant Speed?")
plt.xlabel("Time (s)")
plt.ylabel("Position (cm)")
plt.autoscale(tight=True)
# calculate a trendline equation
# np.polyfit( x values, y values, polynomial order)
trend = np.polyfit(time, pos, 1)
# plot trendline
# plt.plot(x values, y values, other parameters)
plt.plot(time, np.poly1d(trend)(time), label='trendline')
plt.legend(loc='upper left')
# display the trendline's coefficients (slope, y-int)
trend
# create a new empty column
data['velocity (m/s)'] = ''
data
# np.diff() calculates the difference between a value and the one after it
vel = np.diff(pos) / np.diff(time)
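# np.diff returns one fewer element than its input, so vel holds 3 average velocities
# for the 4 recorded time points -- hence the empty last cell handled below.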
# fill the velocity column with values from the formula
data['velocity (m/s)'] = pd.DataFrame.from_items([('', vel)])
# display the data table
data
# That last velocity value will cause problems for further coding
# Make a new table using only rows 0 through 2
data2 = data.loc[0:2,['time (s)', 'velocity (m/s)']]
data2
# set new variables to plot
time2 = data2['time (s)']
vel2 = data2['velocity (m/s)']
# plot data just like before
plt.scatter(time2, vel2)
plt.title("Constant Speed?")
plt.xlabel("Time (s)")
plt.ylabel("Velocity (m)")
plt.autoscale(tight=True)
# calculate trendline equation like before
trend2 = np.polyfit(time2, vel2, 1)
# plot trendline like before
plt.plot(time2, np.poly1d(trend2)(time2), label='trendline')
plt.legend(loc='lower left')
# display the trendline's coefficients (slope, y-int)
trend2
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set a valid date frame for building the network.
Step2: Filter data according to date frame and export to .gexf file
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from bigbang.archive import Archive
from bigbang.archive import load as load_archive
import bigbang.parse as parse
import bigbang.graph as graph
import bigbang.mailman as mailman
import bigbang.process as process
import networkx as nx
import matplotlib.pyplot as plt
import pandas as pd
from pprint import pprint as pp
import pytz
#Insert the list of urls (one or more) from which to gather the data
#e.g. urls = [urls = ["http://mm.icann.org/pipermail/cc-humanrights/",
# "http://mm.icann.org/pipermail/wp4/",
# "http://mm.icann.org/pipermail/ge/"]
urls = ["http://mm.icann.org/pipermail/cc-humanrights/",
"http://mm.icann.org/pipermail/wp4/",
"http://mm.icann.org/pipermail/wp1/"]
try:
arch_paths =[]
for url in urls:
arch_paths.append('../archives/'+url[:-1].replace('://','_/')+'.csv')
archives = [load_archive(arch_path).data for arch_path in arch_paths]
except:
arch_paths =[]
for url in urls:
arch_paths.append('../archives/'+url[:-1].replace('//','/')+'.csv')
archives = [load_archive(arch_path).data for arch_path in arch_paths]
archives_merged = pd.concat(archives)
archives_data = Archive(archives_merged).data
#The oldest date and more recent date for the whole mailing lists are displayed, so you WON't set an invalid time frame
print archives_data['Date'].min()
print archives_data['Date'].max()
#set the date frame
date_from = pd.datetime(2000,11,1,tzinfo=pytz.utc)
date_to = pd.datetime(2111,12,1,tzinfo=pytz.utc)
def filter_by_date(df,d_from,d_to):
return df[(df['Date'] > d_from) & (df['Date'] < d_to)]
#create filtered network
archives_data_filtered = filter_by_date(archives_data, date_from, date_to)
network = graph.messages_to_interaction_graph(archives_data_filtered)
#export the network in a format that you can open in Gephi.
#insert a valid path and file name (e.g. path = 'c:/bigbang/network.gexf')
path = 'c:/users/davide/bigbang/network_for_gephi.gexf'
nx.write_gexf(network, path)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Bonus2
Step3: Unit Tests
|
<ASSISTANT_TASK:>
Python Code:
def lstrip(iterable, obj):
stop = False
for item in iterable:
if stop:
yield item
elif item != obj:
yield item
stop = True
x = lstrip([0, 1, 2, 3, 0], 0)
x
list(x)
def lstrip(iterable, obj):
lstrip_stop = False
for item in iterable:
if lstrip_stop:
yield item
else:
if not callable(obj):
if item != obj:
yield item
lstrip_stop = True
else:
if not obj(item):
yield item
lstrip_stop = True
def is_falsey(value): return not bool(value)
list(lstrip(['', 0, 1, 0, 2, 'h', ''], is_falsey))
list(lstrip([-4, -2, 2, 4, -6], lambda n: n < 0))
numbers = [0, 2, 4, 1, 3, 5, 6]
def is_even(n): return n % 2 == 0
list(lstrip(numbers, is_even))
list(lstrip([0, 0, 1, 0, 2, 3], 0))
list(lstrip(' hello ', ' '))
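# A standard-library sketch of the same idea (kept commented out so the tests below still
# exercise the hand-written generator); it assumes the callable/plain-object split above:
# from itertools import dropwhile
# def lstrip(iterable, obj):
#     pred = obj if callable(obj) else (lambda item: item == obj)
#     return dropwhile(pred, iterable)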
import unittest
class LStripTests(unittest.TestCase):
    """Tests for lstrip."""
def assertIterableEqual(self, iterable1, iterable2):
self.assertEqual(list(iterable1), list(iterable2))
def test_list(self):
self.assertIterableEqual(lstrip([1, 1, 2, 3], 1), [2, 3])
def test_nothing_to_strip(self):
self.assertIterableEqual(lstrip([1, 2, 3], 0), [1, 2, 3])
def test_string(self):
self.assertIterableEqual(lstrip(' hello', ' '), 'hello')
def test_empty_iterable(self):
self.assertIterableEqual(lstrip([], 1), [])
def test_strip_all(self):
self.assertIterableEqual(lstrip([1, 1, 1], 1), [])
def test_none_values(self):
self.assertIterableEqual(lstrip([None, 1, 2, 3], 0), [None, 1, 2, 3])
def test_iterator(self):
squares = (n**2 for n in [0, 0, 1, 2, 3])
self.assertIterableEqual(lstrip(squares, 0), [1, 4, 9])
# To test the Bonus part of this exercise, comment out the following line
# @unittest.expectedFailure
def test_returns_iterator(self):
stripped = lstrip((1, 2, 3), 1)
self.assertEqual(iter(stripped), iter(stripped))
# To test the Bonus part of this exercise, comment out the following line
# @unittest.expectedFailure
def test_function_given(self):
numbers = [0, 2, 4, 1, 3, 5, 6]
def is_even(n): return n % 2 == 0
self.assertIterableEqual(lstrip(numbers, is_even), [1, 3, 5, 6])
if __name__ == "__main__":
unittest.main(argv=['ignore-first-arg'], exit=False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: For the record, here are our texts
Step2: Let's get some basic information about these texts
|
<ASSISTANT_TASK:>
Python Code:
import os
fileCount = len(os.walk('./texts').next()[2])
print(fileCount)
print(os.walk('./texts').next()[2])
import glob
import re
files = {}
for fpath in glob.glob("./texts/*.txt"):
with open(fpath) as f:
fixed_text = re.sub("[^a-zA-Z'-]"," ",f.read())
files[fpath] = (len(fixed_text.split()),len(set(fixed_text.split())))
for fname in sorted(files):
print fname + '\t' + str(files[fname][0]) + '\t' + str(files[fname][1])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: If the distribution of fluxes before transit is significantly different from the distribution of fluxes after transit, mask those results.
Step2: It seems that close-in, large exoplanets orbit more active stars (with larger in-transit RMS) than far out/small planets
Step3: Transit residuals are more asymmetric for far-out, small exoplanets.
Step4: Stars with short period planets have disproportionately larger scatter in transit
|
<ASSISTANT_TASK:>
Python Code:
[n for n in table.colnames if n.startswith('ks')]
p = table['ttest:out_of_transit&before_midtransit-vs-out_of_transit&after_midtransit']
poorly_normalized_oot_threshold = -1
mask_poorly_normalized_oot = np.log(p) > poorly_normalized_oot_threshold
plt.hist(np.log(p[~np.isnan(p)]))
plt.axvline(poorly_normalized_oot_threshold, color='r')
plt.ylabel('freq')
plt.xlabel('log( Ttest(before-transit, after-transit) )')
plt.show()
p = table['ks:out_of_transit&before_midtransit-vs-out_of_transit&after_midtransit']
mask_different_rms_before_vs_after_thresh = -1.5
mask_different_rms_before_vs_after = np.log(p) > mask_different_rms_before_vs_after_thresh
plt.hist(np.log(p[~np.isnan(p)]))
plt.axvline(mask_different_rms_before_vs_after_thresh, color='r')
plt.ylabel('freq')
plt.xlabel('log( KS(before-transit, after-transit) )')
plt.show()
combined_mask = mask_poorly_normalized_oot | mask_different_rms_before_vs_after
print("stars left after cuts:", np.count_nonzero(table['kepid'][combined_mask]))
ks_in_out = table['ks:in_transit-vs-out_of_transit']
b = table['B']
thresh = 0.001
mask_notable_intransit = ks_in_out < thresh
plt.scatter(np.log(ks_in_out), b)
plt.axvline(np.log(thresh), color='r')
ks_in_in = table['ks:in_transit&before_midtransit-vs-in_transit&after_midtransit']
anderson_in_in = table['anderson:in_transit&before_midtransit-vs-in_transit&after_midtransit']
b = table['B']
thresh = 0.05
mask_asymmetric_in = (ks_in_in < thresh) & (anderson_in_in < thresh)
print(table['kepid'][mask_asymmetric_in])
plt.scatter(np.log(ks_in_in), b)
plt.axvline(np.log(thresh), color='r')
large_planets = table['R'].data > 0.1
close_in_planets = table['PER'] < 10
close_in_large_planets = (large_planets & close_in_planets) & combined_mask
far_out_small_planets = np.logical_not(close_in_large_planets) & combined_mask
np.count_nonzero(close_in_large_planets.data), np.count_nonzero(far_out_small_planets)
plt.hist(np.log(table['ks:in_transit-vs-out_of_transit'])[close_in_large_planets],
label='close in/large', alpha=0.4, normed=True)
plt.hist(np.log(table['ks:in_transit-vs-out_of_transit'])[far_out_small_planets],
label='far out/small', alpha=0.4, normed=True)
plt.legend()
plt.xlabel('log( KS(in vs. out) )')
plt.ylabel('Fraction of stars')
plt.title("Total activity")
plt.show()
plt.hist(np.log(table['ks:in_transit&before_midtransit-vs-in_transit&after_midtransit'])[close_in_large_planets],
label='close in/large', alpha=0.4, normed=True)
plt.hist(np.log(table['ks:in_transit&before_midtransit-vs-in_transit&after_midtransit'])[far_out_small_planets],
label='far out/small', alpha=0.4, normed=True)
plt.legend()
plt.xlabel('log( KS(in-transit (first half) vs. in-transit (second half)) )')
plt.ylabel('Fraction of stars')
plt.title("Residual asymmetry")
plt.show()
plt.loglog(table['ks:in_transit-vs-out_of_transit'],
table['PER'], '.')
plt.xlabel('transit depth scatter: log(ks)')
plt.ylabel('period [d]')
plt.loglog(table['PER'][close_in_large_planets],
table['ks:in_transit-vs-out_of_transit'][close_in_large_planets], 'k.', label='close in & large')
plt.loglog(table['PER'][far_out_small_planets],
table['ks:in_transit-vs-out_of_transit'][far_out_small_planets], 'r.', label='far out | small')
plt.legend()
plt.ylabel('transit depth scatter: log(ks)')
plt.xlabel('period [d]')
ax = plt.gca()
ax.invert_yaxis()
plt.semilogx(table['ks:in_transit-vs-out_of_transit'][close_in_large_planets],
table['B'][close_in_large_planets], 'k.', label='close in/large')
plt.semilogx(table['ks:in_transit-vs-out_of_transit'][far_out_small_planets],
table['B'][far_out_small_planets], 'r.', label='far out/small')
plt.legend()
ax = plt.gca()
ax.set_xlabel('transit depth scatter: log(ks)')
ax.set_ylabel('impact parameter $b$')
ax2 = ax.twinx()
y2 = 1 - np.linspace(0, 1, 5)
y2labels = np.degrees(np.arccos(y2))[::-1]
ax2.set_yticks(y2)
ax2.set_yticklabels([int(round(i)) for i in y2labels])
#ax2.set_ylim([0, 90])
ax2.set_ylabel('abs( latitude )')
def b_to_latitude_deg(b):
return 90 - np.degrees(np.arccos(b))
abs_latitude = b_to_latitude_deg(table['B'])
plt.semilogx(table['ks:in_transit-vs-out_of_transit'][close_in_large_planets],
abs_latitude[close_in_large_planets], 'k.', label='close in/large')
plt.semilogx(table['ks:in_transit-vs-out_of_transit'][far_out_small_planets],
abs_latitude[far_out_small_planets], 'r.', label='far out/small')
plt.legend()
ax = plt.gca()
ax.set_xlabel('transit depth scatter: log(ks)')
ax.set_ylabel('stellar latitude (assume aligned)')
from scipy.stats import binned_statistic
bs = binned_statistic(abs_latitude[far_out_small_planets],
np.log(table['ks:in_transit-vs-out_of_transit'][far_out_small_planets]),
statistic='median', bins=10)
bincenter = 0.5 * (bs.bin_edges[:-1] + bs.bin_edges[1:])
fig, ax = plt.subplots(1, 2, figsize=(10, 5))
ax[0].plot(abs_latitude[far_out_small_planets],
np.log(table['ks:in_transit-vs-out_of_transit'][far_out_small_planets]),
'k.', label='far out/small')
ax[0].plot(bincenter, bs.statistic, label='median')
ax[0].invert_yaxis()
ax[0].set_ylabel('transit depth scatter: log(ks)')
ax[0].set_xlabel('stellar latitude (assume aligned)')
bs = binned_statistic(abs_latitude[close_in_large_planets],
np.log(table['ks:in_transit-vs-out_of_transit'][close_in_large_planets]),
statistic='median', bins=10)
bincenter = 0.5 * (bs.bin_edges[:-1] + bs.bin_edges[1:])
ax[1].plot(abs_latitude[close_in_large_planets],
np.log(table['ks:in_transit-vs-out_of_transit'][close_in_large_planets]),
'k.', label='close in/large')
ax[1].plot(bincenter, bs.statistic, label='median')
ax[1].invert_yaxis()
ax[1].set_ylabel('transit depth scatter: log(ks)')
ax[1].set_xlabel('stellar latitude (assume aligned)')
ax[0].set_title('Small | far out')
ax[1].set_title('large & close in')
from scipy.stats import binned_statistic
bs = binned_statistic(abs_latitude[far_out_small_planets],
np.log(table['ks:in_transit-vs-out_of_transit'][far_out_small_planets]),
statistic='median', bins=10)
bincenter = 0.5 * (bs.bin_edges[:-1] + bs.bin_edges[1:])
fig, ax = plt.subplots(1, 1, figsize=(10, 8))
ax.plot(abs_latitude[far_out_small_planets],
np.log(table['ks:in_transit-vs-out_of_transit'][far_out_small_planets]),
'k.', label='far out | small')
ax.plot(bincenter, bs.statistic, 'k', label='median(far out | small)')
ax.set_ylabel('transit depth scatter: log(ks)')
ax.set_xlabel('stellar latitude (assume aligned)')
bs = binned_statistic(abs_latitude[close_in_large_planets],
np.log(table['ks:in_transit-vs-out_of_transit'][close_in_large_planets]),
statistic='median', bins=10)
bincenter = 0.5 * (bs.bin_edges[:-1] + bs.bin_edges[1:])
ax.plot(abs_latitude[close_in_large_planets],
np.log(table['ks:in_transit-vs-out_of_transit'][close_in_large_planets]),
'r.', label='close in & large')
ax.plot(bincenter, bs.statistic, 'r', label='median(close in & large)')
# ax.set_ylabel('transit depth scatter: log(ks)')
# ax.set_xlabel('stellar latitude (assume aligned)')
ax.legend()
ax.invert_yaxis()
ax.set_ylim([0, -150])
plt.show()
#ax.set_title('Small | far out')
#ax.set_title('large & close in')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: Utility functions
Step4: Loading Data
Step5: Initial Alignment
Step6: Look at the transformation, what type is it?
Step7: Final registration
Step8: Look at the final transformation, what type is it?
Step9: Version 1.1
Step10: Look at the final transformation, what type is it? Why is it different from the previous example?
Step11: Version 2
Step12: Look at the final transformation, what type is it?
|
<ASSISTANT_TASK:>
Python Code:
import SimpleITK as sitk
# If the environment variable SIMPLE_ITK_MEMORY_CONSTRAINED_ENVIRONMENT is set, this will override the ReadImage
# function so that it also resamples the image to a smaller size (testing environment is memory constrained).
%run setup_for_testing
# Utility method that either downloads data from the network or
# if already downloaded returns the file name for reading from disk (cached data).
%run update_path_to_download_script
from downloaddata import fetch_data as fdata
# Always write output to a separate directory, we don't want to pollute the source directory.
import os
OUTPUT_DIR = "Output"
# GUI components (sliders, dropdown...).
from ipywidgets import interact, fixed
# Enable display of HTML.
from IPython.display import display, HTML
# Plots will be inlined.
%matplotlib inline
# Callbacks for plotting registration progress.
import registration_callbacks
def save_transform_and_image(transform, fixed_image, moving_image, outputfile_prefix):
Write the given transformation to file, resample the moving_image onto the fixed_images grid and save the
result to file.
Args:
transform (SimpleITK Transform): transform that maps points from the fixed image coordinate system to the moving.
fixed_image (SimpleITK Image): resample onto the spatial grid defined by this image.
moving_image (SimpleITK Image): resample this image.
outputfile_prefix (string): transform is written to outputfile_prefix.tfm and resampled image is written to
outputfile_prefix.mha.
resample = sitk.ResampleImageFilter()
resample.SetReferenceImage(fixed_image)
# SimpleITK supports several interpolation options, we go with the simplest that gives reasonable results.
resample.SetInterpolator(sitk.sitkLinear)
resample.SetTransform(transform)
sitk.WriteImage(resample.Execute(moving_image), outputfile_prefix + ".mha")
sitk.WriteTransform(transform, outputfile_prefix + ".tfm")
def DICOM_series_dropdown_callback(fixed_image, moving_image, series_dictionary):
Callback from dropbox which selects the two series which will be used for registration.
The callback prints out some information about each of the series from the meta-data dictionary.
For a list of all meta-dictionary tags and their human readable names see DICOM standard part 6,
Data Dictionary (http://medical.nema.org/medical/dicom/current/output/pdf/part06.pdf)
# The callback will update these global variables with the user selection.
global selected_series_fixed
global selected_series_moving
img_fixed = sitk.ReadImage(series_dictionary[fixed_image][0])
img_moving = sitk.ReadImage(series_dictionary[moving_image][0])
# There are many interesting tags in the DICOM data dictionary, display a selected few.
tags_to_print = {
"0010|0010": "Patient name: ",
"0008|0060": "Modality: ",
"0008|0021": "Series date: ",
"0008|0031": "Series time:",
"0008|0070": "Manufacturer: ",
}
html_table = []
html_table.append(
"<table><tr><td><b>Tag</b></td><td><b>Fixed Image</b></td><td><b>Moving Image</b></td></tr>"
)
for tag in tags_to_print:
fixed_tag = ""
moving_tag = ""
try:
fixed_tag = img_fixed.GetMetaData(tag)
except: # ignore if the tag isn't in the dictionary
pass
try:
moving_tag = img_moving.GetMetaData(tag)
except: # ignore if the tag isn't in the dictionary
pass
html_table.append(
"<tr><td>"
+ tags_to_print[tag]
+ "</td><td>"
+ fixed_tag
+ "</td><td>"
+ moving_tag
+ "</td></tr>"
)
html_table.append("</table>")
display(HTML("".join(html_table)))
selected_series_fixed = fixed_image
selected_series_moving = moving_image
data_directory = os.path.dirname(fdata("CIRS057A_MR_CT_DICOM/readme.txt"))
# 'selected_series_moving/fixed' will be updated by the interact function.
selected_series_fixed = ""
selected_series_moving = ""
# Directory contains multiple DICOM studies/series, store the file names
# in dictionary with the key being the series ID.
reader = sitk.ImageSeriesReader()
series_file_names = {}
series_IDs = list(reader.GetGDCMSeriesIDs(data_directory)) # list of all series
if series_IDs: # check that we have at least one series
for series in series_IDs:
series_file_names[series] = reader.GetGDCMSeriesFileNames(
data_directory, series
)
interact(
DICOM_series_dropdown_callback,
fixed_image=series_IDs,
moving_image=series_IDs,
series_dictionary=fixed(series_file_names),
)
else:
print("This is surprising, data directory does not contain any DICOM series.")
# Actually read the data based on the user's selection.
fixed_image = sitk.ReadImage(series_file_names[selected_series_fixed])
moving_image = sitk.ReadImage(series_file_names[selected_series_moving])
# Save images to file and view overlap using external viewer.
sitk.WriteImage(fixed_image, os.path.join(OUTPUT_DIR, "fixedImage.mha"))
sitk.WriteImage(moving_image, os.path.join(OUTPUT_DIR, "preAlignment.mha"))
initial_transform = sitk.CenteredTransformInitializer(
sitk.Cast(fixed_image, moving_image.GetPixelID()),
moving_image,
sitk.Euler3DTransform(),
sitk.CenteredTransformInitializerFilter.GEOMETRY,
)
# Save moving image after initial transform and view overlap using external viewer.
save_transform_and_image(
initial_transform,
fixed_image,
moving_image,
os.path.join(OUTPUT_DIR, "initialAlignment"),
)
print(initial_transform)
registration_method = sitk.ImageRegistrationMethod()
registration_method.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
registration_method.SetMetricSamplingStrategy(registration_method.RANDOM)
registration_method.SetMetricSamplingPercentage(0.01)
registration_method.SetInterpolator(sitk.sitkLinear)
# The learningRate parameter is always required. Using the default
# configuration this parameter is ignored because it is overridden
# by the default setting of the estimateLearningRate parameter which
# is sitk.ImageRegistrationMethod.Once. For the user selected
# learningRate to take effect you need to also set the
# estimateLearningRate parameter to sitk.ImageRegistrationMethod.Never
registration_method.SetOptimizerAsGradientDescent(
learningRate=1.0, numberOfIterations=100
)
# Scale the step size differently for each parameter, this is critical!!!
registration_method.SetOptimizerScalesFromPhysicalShift()
registration_method.SetInitialTransform(initial_transform, inPlace=False)
registration_method.AddCommand(
sitk.sitkStartEvent, registration_callbacks.metric_start_plot
)
registration_method.AddCommand(
sitk.sitkEndEvent, registration_callbacks.metric_end_plot
)
registration_method.AddCommand(
sitk.sitkIterationEvent,
lambda: registration_callbacks.metric_plot_values(registration_method),
)
final_transform_v1 = registration_method.Execute(
sitk.Cast(fixed_image, sitk.sitkFloat32), sitk.Cast(moving_image, sitk.sitkFloat32)
)
print(
f"Optimizer's stopping condition, {registration_method.GetOptimizerStopConditionDescription()}"
)
print(f"Final metric value: {registration_method.GetMetricValue()}")
# Save moving image after registration and view overlap using external viewer.
save_transform_and_image(
final_transform_v1,
fixed_image,
moving_image,
os.path.join(OUTPUT_DIR, "finalAlignment-v1"),
)
print(final_transform_v1)
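# A small hedged check (not in the original notebook): the resulting transform
# maps points from the fixed image's physical space into the moving image's,
# so we can probe it at the fixed image's center.
center_index = [s // 2 for s in fixed_image.GetSize()]
center_point = fixed_image.TransformIndexToPhysicalPoint(center_index)
print("fixed-space point: ", center_point)
print("moving-space point:", final_transform_v1.TransformPoint(center_point))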
registration_method = sitk.ImageRegistrationMethod()
registration_method.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
registration_method.SetMetricSamplingStrategy(registration_method.RANDOM)
registration_method.SetMetricSamplingPercentage(0.01)
registration_method.SetInterpolator(sitk.sitkLinear)
registration_method.SetOptimizerAsGradientDescent(
learningRate=1.0, numberOfIterations=100
)
registration_method.SetOptimizerScalesFromPhysicalShift()
# Set the initial moving and optimized transforms.
optimized_transform = sitk.Euler3DTransform()
registration_method.SetMovingInitialTransform(initial_transform)
registration_method.SetInitialTransform(optimized_transform)
registration_method.AddCommand(
sitk.sitkStartEvent, registration_callbacks.metric_start_plot
)
registration_method.AddCommand(
sitk.sitkEndEvent, registration_callbacks.metric_end_plot
)
registration_method.AddCommand(
sitk.sitkIterationEvent,
lambda: registration_callbacks.metric_plot_values(registration_method),
)
registration_method.Execute(
sitk.Cast(fixed_image, sitk.sitkFloat32), sitk.Cast(moving_image, sitk.sitkFloat32)
)
# Need to compose the transformations after registration.
final_transform_v11 = sitk.CompositeTransform(optimized_transform)
final_transform_v11.AddTransform(initial_transform)
print(
f"Optimizer's stopping condition, {registration_method.GetOptimizerStopConditionDescription()}"
)
print(f"Final metric value: {registration_method.GetMetricValue()}")
# Save moving image after registration and view overlap using external viewer.
save_transform_and_image(
final_transform_v11,
fixed_image,
moving_image,
os.path.join(OUTPUT_DIR, "finalAlignment-v1.1"),
)
print(final_transform_v11)
registration_method = sitk.ImageRegistrationMethod()
registration_method.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
registration_method.SetMetricSamplingStrategy(registration_method.RANDOM)
registration_method.SetMetricSamplingPercentage(0.01)
registration_method.SetInterpolator(sitk.sitkLinear)
registration_method.SetOptimizerAsGradientDescent(
learningRate=1.0, numberOfIterations=100
) # , estimateLearningRate=registration_method.EachIteration)
registration_method.SetOptimizerScalesFromPhysicalShift()
final_transform = sitk.Euler3DTransform(initial_transform)
registration_method.SetInitialTransform(final_transform)
registration_method.SetShrinkFactorsPerLevel(shrinkFactors=[4, 2, 1])
registration_method.SetSmoothingSigmasPerLevel(smoothingSigmas=[2, 1, 0])
registration_method.SmoothingSigmasAreSpecifiedInPhysicalUnitsOn()
registration_method.AddCommand(
sitk.sitkStartEvent, registration_callbacks.metric_start_plot
)
registration_method.AddCommand(
sitk.sitkEndEvent, registration_callbacks.metric_end_plot
)
registration_method.AddCommand(
sitk.sitkMultiResolutionIterationEvent,
registration_callbacks.metric_update_multires_iterations,
)
registration_method.AddCommand(
sitk.sitkIterationEvent,
lambda: registration_callbacks.metric_plot_values(registration_method),
)
registration_method.Execute(
sitk.Cast(fixed_image, sitk.sitkFloat32), sitk.Cast(moving_image, sitk.sitkFloat32)
)
print(
f"Optimizer's stopping condition, {registration_method.GetOptimizerStopConditionDescription()}"
)
print(f"Final metric value: {registration_method.GetMetricValue()}")
# Save moving image after registration and view overlap using external viewer.
save_transform_and_image(
final_transform,
fixed_image,
moving_image,
os.path.join(OUTPUT_DIR, "finalAlignment-v2"),
)
print(final_transform)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: GGA - GPS Fix Data
Step2: The lat and lon attributes are in NMEA degrees-and-decimal-minutes format (DDMM.MMMMM for latitude, DDDMM.MMMMM for longitude), while latitude and longitude are their signed decimal-degree float values.
Step3: VTG - Track Made Good and Ground Speed
Step4: ZDA - Time and Date
Step5: Generating NMEA messages
Step6: Notes
|
<ASSISTANT_TASK:>
Python Code:
import pynmea2
msg = pynmea2.parse("$GPGGA,184353.07,1929.045,S,02410.506,E,1,04,2.6,100.00,M,-33.9,M,,0000*6D", check=True)
msg
msg.lat, msg.latitude, msg.latitude_minutes, msg.latitude_seconds
msg.lon, msg.longitude, msg.longitude_minutes, msg.longitude_seconds
pynmea2.parse("$GPVTG,054.7,T,034.4,M,005.5,N,010.2,K*48")
pynmea2.parse("$GPZDA,201530.00,04,07,2002,00,00*60") # Time is in UTC, or GPS time if offset is not yet known
msg = pynmea2.GGA('GP', 'GGA', ('184353.07', '1929.045', 'S', '02410.506', 'E', '1', '04', '2.6',
'100.00', 'M', '-33.9', 'M', '', '0000'))
msg
str(msg)
def sd_to_dm(latitude, longitude):
if latitude < 0:
lat_dir = 'S'
else:
lat_dir = 'N'
lat = ('%010.5f' % (abs(int(latitude)) * 100 + (abs(latitude) % 1.0) * 60)).rstrip('0')
if longitude < 0:
lon_dir = 'W'
else:
lon_dir = 'E'
lon = ('%011.5f' % (abs(int(longitude)) * 100 + (abs(longitude) % 1.0) * 60)).rstrip('0')
return lat, lat_dir, lon, lon_dir
# 1929.045,S,02410.506,E
sd_to_dm(-19.484083333333334, 24.1751)
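# Hedged round-trip sketch (not from the original): feed sd_to_dm's output back
# into a GGA sentence and check that pynmea2 recovers the signed decimal degrees.
lat, lat_dir, lon, lon_dir = sd_to_dm(-19.484083333333334, 24.1751)
msg = pynmea2.GGA('GP', 'GGA', ('184353.07', lat, lat_dir, lon, lon_dir, '1', '04',
                                '2.6', '100.00', 'M', '-33.9', 'M', '', '0000'))
print(msg.latitude, msg.longitude)  # approximately -19.4841, 24.1751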
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Interact.jl provides interactive widgets for IJulia. Interaction relies on the React.jl reactive programming package. React provides the type Signal, which represents time-varying values. For example, a Slider widget can be turned into a "signal of numbers". Execute the following two cells, and then move the slider. You will see that the value of signal(slider) changes accordingly.
Step2: Let us now inspect the types of these entities.
Step3: You can have many instances of the same widget in a notebook, and they stay in sync
Step4: Using Widget Signals
Step5: Go ahead and vary the slider to see this in action.
Step6: You can of course use several inputs as arguments to lift, enabling you to combine many signals. Let's create a full color-picker.
Step7: The @lift macro provides useful syntactic sugar to do this.
Step8: We can use the HTML widget to write some text you can change the color of.
Step9: The @manipulate Macro
Step10: Signal of Widgets
Step11: Now vary the x slider to see sin(2x) and cos(2x) get set to their appropriate values.
|
<ASSISTANT_TASK:>
Python Code:
using React, Interact
s = slider(0:0.01:1, label="Slider X:")
signal(s)
display(typeof(s));
isa(s, Widget)
display(typeof(signal(s)));
isa(signal(s), Signal{Float64})
s
xsquared = lift(x -> x*x, signal(s))
using Color
lift(x -> RGB(x, 0.5, 0.5), signal(s))
r = slider(0:0.01:1, label="R")
g = slider(0:0.01:1, label="G")
b = slider(0:0.01:1, label="B")
map(display, [r,g,b]);
color = lift((x, y, z) -> RGB(x, y, z), r, g, b)
color = @lift RGB(r, g, b)
@lift html(string("<div style='color:#", hex(color), "'>Hello, World!</div>"))
@manipulate for r = 0:.05:1, g = 0:.05:1, b = 0:.05:1
html(string("<div style='color:#", hex(RGB(r,g,b)), "'>Color me concise</div>"))
end
x = slider(0:.1:2pi, label="x")
s = @lift slider(-1:.05:1, value=sin(2x), label="sin(2x)")
c = @lift slider(-1:.05:1, value=cos(2x), label="cos(2x)")
map(display, [x,s,c]);
fx = Input(0.0) # A float input
x = slider(0:.1:2pi, label="x")
y = lift(v -> slider(-1:.05:1, value=sin(v), input=fx, label="f(x)"), x)
map(display, (x,y));
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The Mexican hat function/wavelet is the rescaled negative second derivative of the Gaussian function (the probability density function of the normal distribution).
Step2: In fact, for the basic wavelet transform there is only one serious theoretical limitation on what a wavelet can be.
Step4: Wavelet theory and the Fourier transform
Step5: Now that we have our sample data, we can look at its power spectrum using the Fourier transform.
Step6: As expected, we see clear peaks at the frequencies that are present in the data (50 Hz, 100 Hz, 130 Hz, and 150 Hz).
Step9: As you can see from the plots above, the windowed Fourier transform preserves the spatial information at the cost of higher computational complexity.
Step10: Now, to reintroduce the time domain, we define a function that calculates a single coefficient of the windowed Fourier transform for a given time and frequency.
Step11: With this definition we have a function that can tell us, for each frequency at each point in time, how much of that frequency is present in our signal.
Step12: As you can see, the wavelet transform with the Mexican hat function also filters frequency information and retains spatial information.
Step13: With this representation we can view the Fourier transform as a special case of the wavelet transform with a "Fourier wavelet".
Step14: We can see that the parameter m dilates the wavelet and the parameter n translates the wavelet along the x-axis.
Step15: You can see both that the two implementations yield identical outputs and that twav_mn is much more efficient.
Step16: We will now try to approximate this function with Haar wavelets.
Step17: The essential trick for our approximation is to write our "function" rfunc as the sum of a coarser function and a delta function that holds the difference between this approximation and the original function.
Step18: The interesting part of this split is the $\delta_1$.
Step19: We can now exploit this property by "fitting" a Haar wavelet to these adjacent pairs.
Step20: For our whole $\delta_1$ the fit then looks as follows
Step21: In other words, we now have a description of $\delta_1$ in terms of coefficients for our Haar wavelets.
Step22: We are of course still left with $f_1$, which needs to be approximated.
Step23: As you can see, the coefficients of the DWT using a Haar wavelet describe the approximated function across different scales, i.e. in different levels of detail.
Step24: The first plot illustrates the fact that $\phi(x) = \sum_{n = -\infty}^{\infty} \alpha_n \phi(2x - n)$ while the second plot shows how the construction recipe really does produce the Haar wavelet.
Step26: This second implementation is more computationally efficient and generally applicable, at the cost of being a little less readable.
Step30: For the Haar basis we again see shapes similar to the mother and father wavelets.
Step32: This simple example shows how the subband filtering works.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
# we will use numpy and matplotlib for all the following examples
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
def mexican_hat(x, mu, sigma):
return 2 / (np.sqrt(3 * sigma) * np.pi**0.25) * (1 - x**2 / sigma**2) * np.exp(-x**2 / (2 * sigma**2) )
xvals = np.arange(-10,10,0.1)
plt.plot(xvals, mexican_hat(xvals, 0, 1))
plt.show()
def gauss(x, mu, sigma):
return 1.0 / (sigma * np.sqrt(2 * np.pi)) * np.exp(- (x - mu)**2 / (2 * sigma**2))
g = gauss(xvals, 0, 1)
m = mexican_hat(xvals, 0, 1)
dg = g[1:] - g[:-1] # linear approximation of first derivative
ddg = dg[1:] - dg[:-1] # linear approximation of second derivative
plt.plot(xvals, m, color="blue", lw=6, alpha=0.3)
fac = m[len(xvals)//2] / -ddg[len(xvals)//2] # scaling factor
plt.plot(xvals[1:-1], -ddg*fac, "r-")
plt.show()
np.sum(m)
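# Hedged aside: the sum above checks what is presumably the one theoretical
# limitation mentioned in the text -- a wavelet must have (approximately) zero
# mean. Multiplying by the sample spacing approximates the integrals: the
# wavelet integrates to ~0 while the Gaussian integrates to ~1.
dt = 0.1  # sample spacing of xvals above
print("integral of mexican hat:", np.sum(m) * dt)
print("integral of gaussian:   ", np.sum(g) * dt)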
def hamming(n):
Hamming window of size N for smoothing the edges of sound waves
return 0.54 - 0.46 * np.cos(2 * np.pi * np.arange(n) / (n-1))
def sound(freq, dur, res=10000):
ln = int(dur*res)
sound = np.zeros(ln)
sound = np.sin(np.arange(ln)*2*np.pi*freq/res)
return sound * hamming(ln)
def add_sound(audio, loc, freq, dur, res=10000):
audio[loc:loc+int(dur*res)] += sound(freq, dur, res=res)
res = 10000 # sound resolution in hz
plt.figure(figsize=(15,6))
plt.subplot(121)
snd = sound(10,1)
plt.plot(np.arange(len(snd),dtype="float32")/res,snd)
plt.xlabel("t[s]")
plt.title("sound window at 10hz")
plt.subplot(122)
audio = np.zeros(15000)
add_sound(audio, 1000, 100, 0.5)
add_sound(audio, 3000, 130, 0.5)
add_sound(audio, 2000, 50, 1)
add_sound(audio, 10000, 150, 0.5)
plt.plot(np.arange(len(audio),dtype="float32")/res,audio)
plt.xlabel("t[s]")
plt.title("audio signal with overlaying sound waves")
plt.show()
fourier = np.fft.fft(audio)
xvals = np.fft.fftfreq(len(audio))*res
idx = np.where(np.abs(xvals) < 200)
plt.plot(xvals[idx],np.abs(fourier)[idx])
plt.show()
# note: the execution of this code block might take a few seconds
def fourier_w(signal, window_size=1000):
out = np.zeros((len(signal),window_size))
window = hamming(window_size)
for i in range(window_size//2, len(signal)-window_size//2):
s = i - window_size//2
e = s + window_size
wsig = signal[s:e] * window
out[i,:] = np.abs(np.fft.fft(wsig))
return out
s,e = 4000,8000 # range of signal
wsize = 1000 # window size
fs,fe = 0,18 # range of frequencies to plot
fw = fourier_w(audio[s:e],window_size=wsize)
fwcut = fw[wsize//2:-wsize//2,fs:fe]
plt.figure(figsize=(20,10))
plt.subplot(211)
plt.pcolormesh(fwcut.T)
yt = np.arange(0,len(fwcut[0]),1)
plt.yticks(yt+0.5,(np.fft.fftfreq(wsize)*res)[yt+fs])
xt = np.arange(0,len(fwcut),wsize//2)
plt.xticks(xt,(xt+s+wsize//2)/res)
plt.ylabel("freq[hz]")
plt.ylim(0,len(fwcut[0]))
plt.subplot(212)
plt.plot(audio[s+wsize//2:e-wsize//2])
plt.xticks(xt, (xt+s+wsize//2)/res)
plt.xlabel("t[s]")
plt.show()
def fourier_coeff_i(signal, freq, res=10000):
calculates the imaginary fourier coefficient of signal at frequency freq
s = -np.sin(np.arange(len(signal))*2*np.pi*freq/res) # sine wave with given frequency
return np.sum(signal * s) # integral
def fourier_coeff_r(signal, freq, res=10000):
calculates the real fourier coefficient of signal at frequency freq
s = np.cos(np.arange(len(signal))*2*np.pi*freq/res) # sine wave with given frequency
return np.sum(signal * s) # integral
freqs = [50,70,100,110,120,125,130,140,150]
faudio = np.fft.fft(audio)
fbins = np.fft.fftfreq(len(audio))
coeff_lib = lambda f: faudio[int(np.floor(f/res*len(audio)))]
for f in freqs:
i = fourier_coeff_i(audio,f)
r = fourier_coeff_r(audio,f)
print("{0:3d}hz: {1:5.0f} + {2:5.0f}i (fft: {3.real:5.0f} {3.imag:+5.0f}i)".format(f,r,i,coeff_lib(f)))
def windowed_fourier(signal, freq, t, wsize=1000, res = 10000):
window = hamming(wsize)
s = int(np.floor(t * res - wsize//2))
wsig = signal[s:s+wsize] * window
return [f(wsig, freq, res=res) for f in [fourier_coeff_r, fourier_coeff_i]]
args = [
(50,0.5),
(50,0.7),
(150,0.6),
(150,1.2)
]
for f, t in args:
r, i = windowed_fourier(audio, f, t)
print("{0:3d}hz, {1:5.3f}s: {2:+5.0f} {3:+5.0f}i".format(f, t, r, i))
# note the response of a mexican hat wavelet of 1s length is highest for a frequency of approximately 4hz
def twav(signal, f):
wav = mexican_hat(np.arange(-5,5,10.0/10000 * f/4.0), 0, 1)
return np.convolve(signal, wav, "same")
# remember: the sound at 50hz starts at t = 0.2s and has a duration of 1s
plt.plot(np.arange(len(audio))/res,twav(audio,50))
plt.xlabel("t[s]")
plt.show()
def wfourier_conv(signal, freq, wsize=1000, res=10000):
window = hamming(wsize)
x = (wsize-1-np.arange(wsize)) * 2 * np.pi * freq / res
swindow = window * np.sin(x)
cwindow = window * np.cos(x)
sfft, cfft = [np.convolve(signal,x,"same") for x in [swindow, cwindow]]
return swindow, cwindow, cfft - sfft * 1j
# remember: we have issued a sound with 150hz at t = 1s with duration 5s
sw150, cw150, f150 = wfourier_conv(audio, 150, res=res)
for t in [6000, 12000]:
print("{0:3d}hz, {1:5.3f}s: {2.real:+5.0f} {2.imag:+5.0f}i".format(150, t/res, f150[t]))
plt.figure(figsize=(15,4))
plt.subplot(121)
plt.plot(np.arange(len(audio),dtype="float32")/res,np.abs(f150))
plt.title("windowed fourier transform at 150hz")
plt.xlabel("t[s]")
plt.subplot(122)
plt.plot(np.arange(len(sw150),dtype="float32")/res,sw150)
plt.plot(np.arange(len(cw150),dtype="float32")/res,cw150)
plt.title("fourier 'wavelets'")
plt.xlabel("t[s]")
plt.show()
# we assume a0 = 2 and b0 = 1
def psi_mn(psi, m, n):
a = 2**m
b = n*2**m
wav = np.zeros(len(psi)*a + b)
wav[b:b+len(psi)*a] = np.interp(np.arange(len(psi)*a)/a,np.arange(len(psi)),psi)
return wav
psi = mexican_hat(np.arange(-5,5,0.1),0,1)
xlim = (0,350)
ns = [1, 30, 60]
ms = [0, 1]
plt.figure(figsize=(15,4))
plt.subplot(121)
for mi in range(len(ms)):
m = ms[mi]
plt.subplot(1,len(ms),mi+1)
for n in ns:
plt.plot(psi_mn(psi, m, n), label="n="+str(n))
plt.title("m = "+str(m))
plt.legend(loc="best")
plt.xlim(xlim)
plt.show()
def twav_mn(f, psi, m, n):
f_scaled = f[::2**m]
# we have 2 scaling factors: 2**(-m/2.0) from the formula and 2**m from our step length
# => total scaling factor is 2**(-m/2.0) * 2**m = 2 ** (m - m/2.0) = 2**(m/2.0)
return 2**(m/2.0) * np.sum(f_scaled[n:n+len(psi)] * psi)
def twav_mn_naive(f, psi, m, n):
pmn = psi_mn(psi, m, n)
return 2**(-m/2.0) * np.sum(f[:len(pmn)] * pmn)
m = 3
ns = np.arange(1000,1500)
plt.plot([twav_mn(audio, psi, m, n) for n in ns],color="blue", lw=6, alpha=0.3)
plt.plot([twav_mn_naive(audio, psi, m, n) for n in ns], "r-")
plt.show()
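# Hedged timing sketch backing the efficiency claim from the text
# (absolute numbers will vary by machine).
import timeit
t_fast = timeit.timeit(lambda: twav_mn(audio, psi, 3, 1200), number=100)
t_naive = timeit.timeit(lambda: twav_mn_naive(audio, psi, 3, 1200), number=100)
print("twav_mn: %.4fs, twav_mn_naive: %.4fs (100 calls each)" % (t_fast, t_naive))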
# generate a random walk
# note: for reasons of simplicity we choose the length of our function to be 2^n
# all code below can be made to work with signals of arbitrary length, but it
# would make some examples less readable
rfunc = np.cumsum(np.random.random(128)-0.5)
plt.plot(rfunc)
plt.show()
def haar(width):
h = np.zeros(width)
h[:width//2] = 1
h[width//2:] = -1
return h
h50 = np.zeros(100)
h50[25:75] = haar(50)
plt.plot(h50)
plt.ylim(-1.1,1.1)
plt.show()
def haar_split(f):
approx = 0.5*(f[::2]+f[1::2])
detail = f - np.repeat(approx, 2)
return approx, detail
rfunc_1, delta_1 = haar_split(rfunc)
plt.plot(np.repeat(rfunc_1,2), label="$f_1$")
plt.plot(rfunc, label="$f$")
plt.legend()
plt.show()
delta_diff = np.abs(delta_1[::2])-np.abs(delta_1[1::2])
plt.plot(delta_1, label="$\delta_1$")
plt.plot(delta_diff, label="piecewise absolute difference")
plt.legend()
plt.show()
plt.plot(delta_1[:4],color="blue", lw=6, alpha=0.3)
plt.plot([0,1],haar(2)*delta_1[0],"r-")
plt.plot([2,3],haar(2)*delta_1[2],"k-")
plt.show()
def haar_fit(delta):
fit = np.zeros(len(delta))
for i in range(len(delta)//2):
fit[2*i:2*(i+1)] = haar(2) * delta[2*i]
return fit
plt.plot(delta_1)
plt.plot(haar_fit(delta_1),"r--")
plt.show()
def haar_coeff(delta):
return delta[::2]
def haar_reconst(coeff):
return np.tile(haar(2),len(coeff)) * np.repeat(coeff,2)
plt.plot(delta_1)
plt.plot(haar_reconst(haar_coeff(delta_1)),"r--")
plt.show()
def dwt_haar(signal):
approx, detail = haar_split(signal)
coeffs = []
while len(approx) > 1:
coeffs.extend(haar_coeff(detail))
approx, detail = haar_split(approx)
coeffs.extend(haar_coeff(detail))
coeffs.append(approx)
return coeffs
def inv_dwt_haar(coeffs, plot_steps=[]):
signal = np.array([coeffs[-1]]) # last coefficient is the mean
csize = 1
cidx = len(coeffs) - 1
while cidx > 0:
signal = np.repeat(signal, 2)
signal += haar_reconst(coeffs[cidx-csize:cidx])
cidx -= csize
csize *= 2
if csize in plot_steps:
plt.plot(np.repeat(signal,len(coeffs)//len(signal)), label="csize="+str(csize))
return signal
coeffs = dwt_haar(rfunc)
plt.plot(rfunc, label="original function",color="blue", lw=6, alpha=0.3)
plt.plot(inv_dwt_haar(coeffs),"r-", label="reconstructed function")
plt.legend()
plt.show()
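# Hedged numerical check: for a lossless transform the reconstruction should
# match the original up to floating point error.
print("max reconstruction error:", np.max(np.abs(rfunc - inv_dwt_haar(coeffs))))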
plt.plot(rfunc, label="original function")
inv_dwt_haar(coeffs, plot_steps=[2, 8, 32, 64])
plt.legend()
plt.show()
def phi_haar_f(x):
return 1 if x >= 0 and x < 1 else 0
def alpha_f(n, func, xvals=np.arange(-1,10,0.1), dt=0.1):
f = [2 * func(x) * func(2*x -n) for x in xvals]
return np.sum(f) * dt
def psi_f(x, phi, nvals=range(10)):
return sum([(-1)**n * alpha_f(-n + 1, phi) * phi(2*x - n) for n in nvals])
xvals = np.arange(-2,3,0.1)
plt.plot(xvals,[phi_haar_f(x) for x in xvals])
for n in range(4):
alpha_n = alpha_f(n, phi_haar_f)
phi_m1 = np.array([phi_haar_f(2*x - n) for x in xvals])
plt.plot(xvals,phi_m1*alpha_n)
plt.ylim(-0.1,1.1)
plt.show()
psi_haar = [psi_f(x, phi_haar_f) for x in xvals]
plt.plot(xvals, psi_haar)
plt.ylim(-1.1,1.1)
plt.show()
def phi_haar(width):
return np.ones(width)
def alpha(n, phi):
# blow up phi => phi[x] = phi2[2*x]
phi2 = np.repeat(phi,2)
n = 2*n
start = max(0, n//2)
end = min(len(phi2),(len(phi2) + n)//2)
xs = np.arange(start, end)
xs2 = 2*xs - n
return np.sum(phi2[xs] * phi2[xs2])
def psi(phi):
s = np.zeros(len(phi)*2)
ns = range(1-len(phi), len(s))
for n in ns:
a = alpha(-n + 1, phi)
before = min(max(0,n),len(phi))
after = len(phi)-before
phi_tmp = np.pad(phi,(before,after),"constant")
sign = -1 if n % 2 == 1 else 1
s += sign * a * phi_tmp
return s
for n in range(-1,4):
tmpl = "alpha_f({0:2d}): {1:.1f}, alpha({0:2d}): {2:.1f}"
print(tmpl.format(n,alpha_f(n, phi_haar_f),alpha(n,phi_haar(1))))
psi_haar_6 = np.zeros(6)
psi_haar_6[2:4] = psi(phi_haar(1))
plt.plot(np.arange(-1,2,0.5),psi_haar_6)
plt.show()
psi_haar_60 = np.repeat(psi_haar_6,10)
plt.plot(np.arange(-1,2,0.05),psi_haar_60)
plt.ylim(-1.1,1.1)
plt.show()
def filters_hg(phi):
Constructs the filters h and g
phi2 = np.repeat(phi,2)
# we only need the indices [-len(phi)+1, len(phi)*2) but we want the
# filter to be centered at index zero
ns_h = np.arange(-len(phi)+1,len(phi)*2)
ns_g = - ns_h + 1
ns_g = ns_g[::-1]
hs = np.zeros(len(ns_h), dtype="float32")
gs = np.zeros(len(ns_h), dtype="float32")
for i in range(len(ns_h)):
n_h = 2*ns_h[i]
n_g = 2*ns_g[i]
start = max(0, n_h//2)
end = min(len(phi2),(len(phi2) + n_h)//2)
xs = np.arange(start, end)
xs2 = 2*xs - n_h
# we need to divide by two because we operate on a "blown up" version of phi
hs[i] = np.sqrt(2) * np.sum(phi2[xs] * phi2[xs2]) / 2.0
gs[len(gs)-1-i] = -(n_g % 4 - 1) * hs[i]
# we want our filters to be centered at index zero => add zeros at front or back as required
add_front_h = len(phi) # len(phi)*2-1 = last index, -len(phi)+1 = first index => difference = len(phi)
add_back_g = len(phi)-2 # -len(phi)*2 + 2 = first index, len(phi) = last index => difference = len(phi)-2
hs = np.pad(hs, (add_front_h, 0), "constant")
gs = np.pad(gs, (max(0,-add_back_g), max(0, add_back_g)), "constant")
return hs, gs
hs, gs = filters_hg(phi_haar(1))
print("h", hs)
print("g", gs)
plt.plot(hs, label="$h_n$")
plt.plot(gs, label="$g_n$")
plt.ylim(-1,1)
plt.legend(loc="best")
plt.show()
def upsampleZero(ar,n):
upsampling function that adds zeros between the sample values
res = np.zeros(len(ar)*n)
res[::n] = ar
return res
def sbf_split(data, h, g):
decomposition by convolution and downsampling
# set starting points so that first filter position is centered at data[0]
sh, sg = (len(h)//2, len(g)//2)
approx = np.convolve(data, h[::-1], "full")[sh:sh+len(data):2]
detail = np.convolve(data, g[::-1], "full")[sg:sg+len(data):2]
return approx, detail
def sbf_reconst(approx, detail, h, g):
reconstruction by upsampling and convolution
# set starting points so that first filter position is centered at approx[0]/detail[0]
sh, sg = (len(h)//2, len(g)//2)
data = np.convolve(upsampleZero(approx, 2), h, "full")[sh:sh+len(approx)*2]
data += np.convolve(upsampleZero(detail, 2), g, "full")[sg:sg+len(detail)*2]
return data
h, g = filters_hg(phi_haar(1))
#h = [0, 0.7071, 0.7071]
#g = [0, -0.7071, 0.7071]
data = [1,2,3,4]
cs, ds = sbf_split(data, h, g)
recs = sbf_reconst(cs, ds, h, g)
print("c_0 (orig.) ",data)
print("c_1 ",cs)
print("d_1 ",ds)
print("c_0 (rec.) ",recs)
def approx_f(coeffs, phi, j=0):
reconstructs the actual function approximation from approximation coefficients
# TODO this has to be checked for correctness for any other wavelet basis than Haar wavelets
return 2**(-j/2.0) * np.correlate(coeffs, phi, "same")
def phi0(signal, phi):
return np.convolve(signal, phi, "same")
def dwt_subband(signal, phi, plot_steps=[], colors={}):
hs, gs = filters_hg(phi)
phi0n = phi0(signal, phi)
approx = phi0n
coeffs = []
while len(approx) > 1:
approx, detail = sbf_split(approx, hs, gs)
if len(approx) in plot_steps:
j = np.log2(len(signal)/len(approx))
l = "f^{:.0f} (dwt)".format(j)
plt.plot(np.repeat(approx_f(approx, phi, j), 2**j), lw=6, alpha=0.3, color=colors[len(approx)], label=l)
coeffs.append(detail)
coeffs.append(approx)
return coeffs
def inv_dwt_subband(coeffs, phi, plot_steps=[], colors={}):
hs, gs = filters_hg(phi)
psi_base = psi(phi)
approx = coeffs[-1]
idx = len(coeffs)-2
while idx >= 0:
detail = coeffs[idx]
approx = sbf_reconst(approx, detail, hs, gs)
if len(approx) in plot_steps:
l = "f^{:.0f} (idwt)".format(idx)
plt.plot(np.repeat(approx_f(approx, phi, idx), 2**idx), color=colors[len(approx)], label=l)
idx -= 1
return approx_f(approx, phi)
filter_len = 128
filter_signal = np.cumsum(np.random.random(filter_len)-0.5)
steps = [2, 8, 64]
colors = {2 : "red", 8: "blue", 64: "green"}
dsb = dwt_subband(filter_signal, phi_haar(1), plot_steps=steps, colors=colors)
dsbi = inv_dwt_subband(dsb, phi_haar(1), plot_steps=steps, colors=colors)
plt.legend(loc="best")
plt.show()
plt.plot(filter_signal, lw=6, alpha=0.3, label="original signal")
plt.plot(dsbi, label="reconstructed signal")
plt.legend(loc="best")
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We can get all data IDs/Dimensions.
Step2: In Gen3, we can also get the WCS and the file URI without dumping the images as Python objects, for example
Step3: With the DatasetRef, we may also use butler.getDirect
|
<ASSISTANT_TASK:>
Python Code:
exp = butler.get("calexp", {"visit":903334, "detector":22, "instrument":"HSC"})
print(exp.getWcs())
wcs = butler.get("calexp.wcs", {"visit":903334, "detector":22, "instrument":"HSC"})
print(wcs)
vinfo = butler.get("calexp.visitInfo", {"visit":903334, "detector":22, "instrument":"HSC"})
print(vinfo)
for ref in butler.registry.queryDatasets("calexp", collections=['shared/ci_hsc_output']):
print(ref.dataId)
for ref in butler.registry.queryDatasets("calexp.wcs", collections=['shared/ci_hsc_output']):
wcs = butler.get(ref)
uri = butler.datastore.getUri(ref)
print("calexp has ", wcs, "\nand the file is at \n", uri)
rows = butler.registry.queryDatasets("calexp", collections=['shared/ci_hsc_output'])
ref = list(rows)[0] # Just to get the first DatasetRef
exp = butler.getDirect(ref)
exp.getWcs()
import lsst.geom as geom
for ref in butler.registry.queryDatasets("calexp", collections=['shared/ci_hsc_output']):#, where="detector = 22"):
uri = butler.datastore.getUri(ref)
print("==== For the file of ", ref.dataId, "at \n", uri)
exp = butler.getDirect(ref)
wcs = exp.getWcs()
print("dimensions:", exp.getDimensions())
print("pixel scale:", wcs.getPixelScale().asArcseconds())
imageBox = geom.Box2D(exp.getBBox())
corners = [wcs.pixelToSky(pix) for pix in imageBox.getCorners()]
imageCenter = wcs.pixelToSky(imageBox.getCenter())
print("ra and dec for the center:", imageCenter.getRa().asDegrees(), imageCenter.getDec().asDegrees())
print("ra and dec for the corners:")
[print(corner) for corner in corners]
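# Hedged aside (not in the original): a rough field-of-view estimate for the
# last exposure handled in the loop above, assuming lsst.geom's Extent2I/Angle
# accessors (getX/getY, asDegrees).
dims = exp.getDimensions()
scale_deg = wcs.getPixelScale().asDegrees()
print("approx. field of view: {:.3f} x {:.3f} deg".format(dims.getX() * scale_deg,
                                                          dims.getY() * scale_deg))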
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Random numbers
Step2: 2. The hypergeometric distribution
Step3: 3. Continuous distributions
Step4: 3.2 The lognormal distribution
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
from matplotlib.pyplot import show, plot
import matplotlib.pyplot as plt
# Initialize an all-zero array to hold the remaining capital
# Call the binomial function with size 10000 to play 10000 rounds of the coin-flip game
cash = np.zeros(10000)
cash[0] = 1000
outcome = np.random.binomial(9, 0.5, size=len(cash))
# Simulate the outcome of each round of coin flips and update the cash array
# Print the min and max of outcome to check the output for anomalies
for i in range(1, len(cash)):
if outcome[i] < 5:
cash[i] = cash[i-1] - 1
elif outcome[i] < 10:
cash[i] = cash[i-1] + 1
else:
raise AssertionError("Unexpected outcome " + str(outcome[i]))
print(outcome.min(), outcome.max())
plot(np.arange(len(cash)), cash)
show()
points = np.zeros(100)
# The first argument is the number of ordinary balls in the jar
# The second argument is the number of unlucky ("bad") balls
# The third argument is the number of balls drawn each time (the sample size)
outcomes = np.random.hypergeometric(25, 1, 3, size=len(points))
for i in range(len(points)):
if outcomes[i] == 3:
points[i] = points[i-1] + 1
elif outcomes[i] == 2:
points[i] = points[i-1] - 6
else:
print(outcomes[i])
plot(points)
show()
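# Hedged aside (not in the original text): compare the simulated frequency of
# "all normal balls" draws with the exact hypergeometric probability
# 25/26 * 24/25 * 23/24 = 23/26.
p_all_normal = 23.0 / 26.0
print("P(outcome == 3) theory: %.3f, simulated: %.3f" % (p_all_normal, np.mean(outcomes == 3)))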
# Generate the specified number of random values
N = 10000
normal_values = np.random.normal(size=N)
# Plot a histogram of the distribution
dummy, bins, dummy = plt.hist(normal_values, np.sqrt(N), normed=True, lw=1)
sigma = 1
mu = 0
plot(bins, 1/(sigma*np.sqrt(2*np.pi)) * np.exp(-(bins-mu)**2 / (2*sigma**2)), lw=2)
show()
import matplotlib.gridspec as gridspec
gs = gridspec.GridSpec(2,2)
fig = plt.figure(figsize=(10,8))
ax = []
N = 10000
sigma = 1
mu = 0
for a in range(2):
for b in range(2):
ax.append(fig.add_subplot(gs[a,b]))
normal_values = np.random.normal(size=N)
dummy, bins, dummy = plt.hist(normal_values, np.sqrt(N), normed=True, lw=1)
ax[-1].plot(bins, 1/(sigma*np.sqrt(2*np.pi)) * np.exp(-(bins-mu)**2 / (2*sigma**2)), lw=2)
# Tighten the subplot spacing to fit the figure
fig.tight_layout()
show()
N = 10000
lognormal_values = np.random.lognormal(size=N)
dummy, bins, dummy = plt.hist(lognormal_values, np.sqrt(N), normed=True, lw=1)
sigma = 1
mu = 0
x = np.linspace(min(bins), max(bins), len(bins))
pdf = np.exp(-(np.log(x)-mu)**2 / (2*sigma**2)) / (x*sigma*np.sqrt(2*np.pi))
plot(x, pdf, lw=3)
show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First for the Obama Love and Iran Deal Approval
Step2: Now for Obama Love and Confidence in Negotiations with Iran
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
import statsmodels.formula.api as smf
df_Obama = pd.read_csv("data/Fox_polls - Obama Job.csv")
df_Iran_Deal = pd.read_csv("data/Fox_polls - Iran Deal.csv")
df_Iran_Nego = pd.read_csv("data/Fox_polls - Iran Nego.csv")
df_Obama.head(3)
df_Iran_Deal.head(3)
df_Obama_Iran_Deal = df_Obama.merge(df_Iran_Deal, left_on = 'Unnamed: 0', right_on='Unnamed: 0')
del df_Obama_Iran_Deal['Disapprove']
del df_Obama_Iran_Deal["(Don't know)_x"]
del df_Obama_Iran_Deal["Oppose"]
del df_Obama_Iran_Deal["(Don't know)_y"]
df_Obama_Iran_Deal.head(3)
df_Obama_Iran_Deal.columns = ['Group', 'Obama', 'Iran_Deal']
fig, ax = plt.subplots(figsize =(7,5))
#Font
csfont = {'fontname':'DIN Condensed'}
lm = smf.ols(formula='Iran_Deal~Obama',data=df_Obama_Iran_Deal).fit()
lm.params
Intercept, Obama_love = lm.params
df_Obama_Iran_Deal.plot(kind='scatter', x='Obama', y='Iran_Deal', ax= ax, color='tomato')
ax.plot(df_Obama_Iran_Deal["Obama"],Obama_love*df_Obama_Iran_Deal["Obama"]+Intercept,"-",color="green")
ax.set_axis_bgcolor("WhiteSmoke")
ax.set_ylabel('')
ax.xaxis.grid(color='darkgrey', linestyle=':', linewidth=0.5)
ax.yaxis.grid(color='darkgrey', linestyle=':', linewidth=0.5)
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['bottom'].set_visible(False)
ax.spines['left'].set_visible(False)
plt.tick_params(
#axis='x',
top='off',
which='off',
left='off',
right='off',
bottom='off',
labeltop='off',
labelbottom='off')
#labelling, getting rid of borders
ax.set_xlabel('Obama Love', **csfont, fontsize=12)
ax.set_title("Obama Love versus approval of Iran Deal", **csfont, fontsize=24)
ax.set_ylabel('Iran Deal Approval', **csfont, fontsize=12)
ax.set_axisbelow(True)
df_Obama.head()
df_Iran_Nego.head()
df_Iran_Nego['Confident'] = df_Iran_Nego['Very confident'] + df_Iran_Nego['Somewhat confident']
df_Obama_Iran_Nego = df_Obama.merge(df_Iran_Nego, left_on = 'Unnamed: 0', right_on='Unnamed: 0')
del df_Obama_Iran_Nego['Disapprove']
del df_Obama_Iran_Nego["(Don't know)_x"]
del df_Obama_Iran_Nego["Very confident"]
del df_Obama_Iran_Nego["Somewhat confident"]
del df_Obama_Iran_Nego['Not very confident']
del df_Obama_Iran_Nego["Not at all confident"]
del df_Obama_Iran_Nego["(Don't know)_y"]
df_Obama_Iran_Nego.head()
df_Obama_Iran_Nego.columns = ['Group', 'ObamaApp', 'Confidence']
df_Obama_Iran_Nego.head()
fig, ax = plt.subplots(figsize =(7,5))
#Font
csfont = {'fontname':'DIN Condensed'}
lm = smf.ols(formula='Confidence~ObamaApp',data=df_Obama_Iran_Nego).fit()
lm.params
Intercept, Obama_love = lm.params
df_Obama_Iran_Nego.plot(kind='scatter', x='ObamaApp', y='Confidence', ax= ax, color='tomato')
ax.plot(df_Obama_Iran_Nego["ObamaApp"],Obama_love*df_Obama_Iran_Nego["ObamaApp"]+Intercept,"-",color="green")
ax.set_axis_bgcolor("WhiteSmoke")
ax.set_ylabel('')
ax.xaxis.grid(color='darkgrey', linestyle=':', linewidth=0.5)
ax.yaxis.grid(color='darkgrey', linestyle=':', linewidth=0.5)
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['bottom'].set_visible(False)
ax.spines['left'].set_visible(False)
plt.tick_params(
#axis='x',
top='off',
which='off',
left='off',
right='off',
bottom='off',
labeltop='off',
labelbottom='off')
#labelling, getting rid of borders
ax.set_xlabel('Obama Love', **csfont, fontsize=12)
ax.set_title("Obama Love versus confidence in Admins Negotiations", **csfont, fontsize=24)
ax.set_ylabel('Confindence in Negotiations', **csfont, fontsize=12)
ax.set_axisbelow(True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Html is the default value (that can be configured), so the verbose form would be
Step2: You can also convert to latex, which will take care of extracting the embedded base64-encoded png, or the svg, and call inkscape to convert those svg to pdf if necessary
Step3: You should just have to compile the generated .tex file. If you have the required packages installed, it should compile out of the box.
Step4: Have a look at 04 - Custom Display Logic.pdf, toward the end, where we compared display() vs display_html() and returning the object.
Step5: We see that the non-code cells are exported to the file. To have a cleaner script, we will export only the code contained in the code cells.
|
<ASSISTANT_TASK:>
Python Code:
!ipython nbconvert 'Working With Markdown Cells.ipynb'
!ipython nbconvert --to=html 'Working With Markdown Cells.ipynb'
!ipython nbconvert --to=latex 'Working With Markdown Cells.ipynb'
!ipython nbconvert --to=latex 'Working With Markdown Cells.ipynb' --post=pdf
pyfile = !ipython nbconvert --to python 'Working With Markdown Cells.ipynb' --stdout
for l in pyfile[20:40]:
print l
%%writefile simplepython.tpl
{% extends 'python.tpl'%}
{% block markdowncell -%}
{% endblock markdowncell %}
## we also want to get rid of heading cells
{% block headingcell -%}
{% endblock headingcell %}
## and let's change the appearance of input prompt
{% block in_prompt %}
# This was input cell with prompt number : {{ cell.prompt_number if cell.prompt_number else ' ' }}
{%- endblock in_prompt %}
pyfile = !ipython nbconvert --to python 'Working With Markdown Cells.ipynb' --stdout --template=simplepython.tpl
for l in pyfile[4:40]:
print l
print '...'
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Knapsack problem
Step3: Early stopping using ftol
Step4: Extending property using ftol_iter
|
<ASSISTANT_TASK:>
Python Code:
# Import modules
import random
import numpy as np
# Import PySwarms
from pyswarms.single import GlobalBestPSO
# Algorithm paramters
random.seed(0)
# The weight capacity of the knapsack
capacity = 50
number_of_items = 10
item_range = range(number_of_items)
value = [random.randint(1,number_of_items) for i in item_range]
weight = [random.randint(1,number_of_items) for i in item_range]
# PSO paramters
n_particles = 2
n_processes = 2
iterations = 1000
options = {'c1': 2, 'c2': 2, 'w': 0.7}
dim = number_of_items
LB = [0] * dim
UB = [1] * dim
constraints = (np.array(LB), np.array(UB))
kwargs = {'value':value,
'weight': weight,
'capacity': capacity
}
# Helper function
def get_particle_obj(X, **kwargs):
Calculates the objective function value which is
total revenue minus penalty of capacity violations
# X is the decision variable. X is vector in the lenght of number of items
# $ value of items
value = kwargs['value']
# weight of items
weight = kwargs['weight']
# Total revenue
revenue = sum([value[i]*np.round(X[i]) for i in item_range])
# Total weight of selected items
used_capacity = sum([kwargs['weight'][i]*np.round(X[i]) for i in item_range])
# Total capacity violation with 100 as a penalty cofficient
capacity_violation = 100 * min(0,capacity - used_capacity)
# the objective function minimizes the negative revenue, which is the same
# as maximizing the positive revenue
return -1*(revenue + capacity_violation)
# Objective function
def objective_function(X, **kwargs):
n_particles_ = X.shape[0]
dist = [get_particle_obj(X[i], **kwargs) for i in range(n_particles_)]
return np.array(dist)
KP_optimizer = GlobalBestPSO(n_particles=n_particles,
dimensions=dim,
options=options,
bounds=constraints,
bh_strategy='periodic',
ftol = 1e-3,
velocity_clamp = (-0.5,0.5),
vh_strategy = 'invert')
best_cost, best_pos = KP_optimizer.optimize(objective_function,
iters=iterations,
n_processes= n_processes,
**kwargs)
print("\nThe total knapsack revenue is: "+str(-best_cost))
print("Indices of selected items:\t " + str(np.argwhere(np.round(best_pos)).flatten()))
KP_optimizer.ftol_iter = 20
best_cost, best_pos = KP_optimizer.optimize(objective_function,
iters=iterations,
n_processes= n_processes,
**kwargs)
print("\nThe total knapsack revenue is: "+str(-best_cost))
print("Indices of selected items:\t " + str(np.argwhere(np.round(best_pos)).flatten()))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J).
Step5: <img src="image/Mean_Variance_Image.png" style="height
Step6: Checkpoint
Step7: Problem 2
Step8: <img src="image/Learn_Rate_Tune_Image.png" style="height
Step9: Test
|
<ASSISTANT_TASK:>
Python Code:
import hashlib
import os
import pickle
from urllib.request import urlretrieve
import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
from sklearn.utils import resample
from tqdm import tqdm
from zipfile import ZipFile
print('All modules imported.')
def download(url, file):
Download file from <url>
:param url: URL to file
:param file: Local file path
if not os.path.isfile(file):
print('Downloading ' + file + '...')
urlretrieve(url, file)
print('Download Finished')
# Download the training and test dataset.
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_train.zip', 'notMNIST_train.zip')
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_test.zip', 'notMNIST_test.zip')
# Make sure the files aren't corrupted
assert hashlib.md5(open('notMNIST_train.zip', 'rb').read()).hexdigest() == 'c8673b3f28f489e9cdf3a3d74e2ac8fa',\
'notMNIST_train.zip file is corrupted. Remove the file and try again.'
assert hashlib.md5(open('notMNIST_test.zip', 'rb').read()).hexdigest() == '5d3c7e653e63471c88df796156a9dfa9',\
'notMNIST_test.zip file is corrupted. Remove the file and try again.'
# Wait until you see that all files have been downloaded.
print('All files downloaded.')
def uncompress_features_labels(file):
Uncompress features and labels from a zip file
:param file: The zip file to extract the data from
features = []
labels = []
with ZipFile(file) as zipf:
# Progress Bar
filenames_pbar = tqdm(zipf.namelist(), unit='files')
# Get features and labels from all files
for filename in filenames_pbar:
# Check if the file is a directory
if not filename.endswith('/'):
with zipf.open(filename) as image_file:
image = Image.open(image_file)
image.load()
# Load image data as 1 dimensional array
# We're using float32 to save on memory space
feature = np.array(image, dtype=np.float32).flatten()
# Get the the letter from the filename. This is the letter of the image.
label = os.path.split(filename)[1][0]
features.append(feature)
labels.append(label)
return np.array(features), np.array(labels)
# Get the features and labels from the zip files
train_features, train_labels = uncompress_features_labels('notMNIST_train.zip')
test_features, test_labels = uncompress_features_labels('notMNIST_test.zip')
# Limit the amount of data to work with a docker container
docker_size_limit = 150000
train_features, train_labels = resample(train_features, train_labels, n_samples=docker_size_limit)
# Set flags for feature engineering. This will prevent you from skipping an important step.
is_features_normal = False
is_labels_encod = False
# Wait until you see that all features and labels have been uncompressed.
print('All features and labels uncompressed.')
# Problem 1 - Implement Min-Max scaling for grayscale image data
def normalize_grayscale(image_data):
Normalize the image data with Min-Max scaling to a range of [0.1, 0.9]
:param image_data: The image data to be normalized
:return: Normalized image data
# TODO: Implement Min-Max scaling for grayscale image data
xmin = np.min(image_data)
xmax = np.max(image_data)
return (image_data-xmin)*(0.8)/(xmax-xmin) + 0.1
### DON'T MODIFY ANYTHING BELOW ###
# Test Cases
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 255])),
[0.1, 0.103137254902, 0.106274509804, 0.109411764706, 0.112549019608, 0.11568627451, 0.118823529412, 0.121960784314,
0.125098039216, 0.128235294118, 0.13137254902, 0.9],
decimal=3)
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 10, 20, 30, 40, 233, 244, 254,255])),
[0.1, 0.103137254902, 0.13137254902, 0.162745098039, 0.194117647059, 0.225490196078, 0.830980392157, 0.865490196078,
0.896862745098, 0.9])
if not is_features_normal:
train_features = normalize_grayscale(train_features)
test_features = normalize_grayscale(test_features)
is_features_normal = True
print('Tests Passed!')
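# Hedged illustration: Min-Max scaling maps the data's minimum to 0.1 and its
# maximum to 0.9, with everything else interpolated linearly in between.
sample = np.array([0., 128., 255.])
print(normalize_grayscale(sample))  # -> [0.1, ~0.502, 0.9]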
if not is_labels_encod:
# Turn labels into numbers and apply One-Hot Encoding
encoder = LabelBinarizer()
encoder.fit(train_labels)
train_labels = encoder.transform(train_labels)
test_labels = encoder.transform(test_labels)
# Change to float32, so it can be multiplied against the features in TensorFlow, which are float32
train_labels = train_labels.astype(np.float32)
test_labels = test_labels.astype(np.float32)
is_labels_encod = True
print('Labels One-Hot Encoded')
assert is_features_normal, 'You skipped the step to normalize the features'
assert is_labels_encod, 'You skipped the step to One-Hot Encode the labels'
# Get randomized datasets for training and validation
train_features, valid_features, train_labels, valid_labels = train_test_split(
train_features,
train_labels,
test_size=0.05,
random_state=832289)
print('Training features and labels randomized and split.')
# Save the data for easy access
pickle_file = 'notMNIST.pickle'
if not os.path.isfile(pickle_file):
print('Saving data to pickle file...')
try:
with open('notMNIST.pickle', 'wb') as pfile:
pickle.dump(
{
'train_dataset': train_features,
'train_labels': train_labels,
'valid_dataset': valid_features,
'valid_labels': valid_labels,
'test_dataset': test_features,
'test_labels': test_labels,
},
pfile, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
print('Data cached in pickle file.')
%matplotlib inline
# Load the modules
import pickle
import math
import numpy as np
import tensorflow as tf
from tqdm import tqdm
import matplotlib.pyplot as plt
# Reload the data
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
pickle_data = pickle.load(f)
train_features = pickle_data['train_dataset']
train_labels = pickle_data['train_labels']
valid_features = pickle_data['valid_dataset']
valid_labels = pickle_data['valid_labels']
test_features = pickle_data['test_dataset']
test_labels = pickle_data['test_labels']
del pickle_data # Free up memory
print('Data and modules loaded.')
# All the pixels in the image (28 * 28 = 784)
features_count = 784
# All the labels
labels_count = 10
# TODO: Set the features and labels tensors
features = tf.placeholder(tf.float32)
labels = tf.placeholder(tf.float32)
# TODO: Set the weights and biases tensors
weights = tf.Variable(tf.truncated_normal((features_count, labels_count)))
biases = tf.Variable(tf.zeros(labels_count))
### DON'T MODIFY ANYTHING BELOW ###
#Test Cases
from tensorflow.python.ops.variables import Variable
assert features._op.name.startswith('Placeholder'), 'features must be a placeholder'
assert labels._op.name.startswith('Placeholder'), 'labels must be a placeholder'
assert isinstance(weights, Variable), 'weights must be a TensorFlow variable'
assert isinstance(biases, Variable), 'biases must be a TensorFlow variable'
assert features._shape == None or (\
features._shape.dims[0].value is None and\
features._shape.dims[1].value in [None, 784]), 'The shape of features is incorrect'
assert labels._shape == None or (\
labels._shape.dims[0].value is None and\
labels._shape.dims[1].value in [None, 10]), 'The shape of labels is incorrect'
assert weights._variable._shape == (784, 10), 'The shape of weights is incorrect'
assert biases._variable._shape == (10,), 'The shape of biases is incorrect'
assert features._dtype == tf.float32, 'features must be type float32'
assert labels._dtype == tf.float32, 'labels must be type float32'
# Feed dicts for training, validation, and test session
train_feed_dict = {features: train_features, labels: train_labels}
valid_feed_dict = {features: valid_features, labels: valid_labels}
test_feed_dict = {features: test_features, labels: test_labels}
# Linear Function WX + b
logits = tf.matmul(features, weights) + biases
prediction = tf.nn.softmax(logits)
# Cross entropy
cross_entropy = -tf.reduce_sum(labels * tf.log(prediction), reduction_indices=1)
# Training loss
loss = tf.reduce_mean(cross_entropy)
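# (Added worked example) with a one-hot label [0, 1, 0] and softmax prediction
# [0.1, 0.7, 0.2], the cross entropy is -log(0.7) ~= 0.357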
# Create an operation that initializes all variables
init = tf.global_variables_initializer()
# Test Cases
with tf.Session() as session:
session.run(init)
session.run(loss, feed_dict=train_feed_dict)
session.run(loss, feed_dict=valid_feed_dict)
session.run(loss, feed_dict=test_feed_dict)
biases_data = session.run(biases)
assert not np.count_nonzero(biases_data), 'biases must be zeros'
print('Tests Passed!')
# Determine if the predictions are correct
is_correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(labels, 1))
# Calculate the accuracy of the predictions
accuracy = tf.reduce_mean(tf.cast(is_correct_prediction, tf.float32))
print('Accuracy function created.')
# Change if you have memory restrictions
batch_size = 128
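# (Added note) with ~142,500 training samples left after the 5% validation
# split, each epoch runs ceil(142500 / 128) = 1114 batches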
# TODO: Find the best parameters for each configuration
epochs = 5
learning_rate = 0.05
### DON'T MODIFY ANYTHING BELOW ###
# Gradient Descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
# The accuracy measured against the validation set
validation_accuracy = 0.0
# Measurements use for graphing loss and accuracy
log_batch_step = 50
batches = []
loss_batch = []
train_acc_batch = []
valid_acc_batch = []
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer and get loss
_, l = session.run(
[optimizer, loss],
feed_dict={features: batch_features, labels: batch_labels})
# Log every 50 batches
if not batch_i % log_batch_step:
# Calculate Training and Validation accuracy
training_accuracy = session.run(accuracy, feed_dict=train_feed_dict)
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
# Log batches
previous_batch = batches[-1] if batches else 0
batches.append(log_batch_step + previous_batch)
loss_batch.append(l)
train_acc_batch.append(training_accuracy)
valid_acc_batch.append(validation_accuracy)
# Check accuracy against Validation data
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
loss_plot = plt.subplot(211)
loss_plot.set_title('Loss')
loss_plot.plot(batches, loss_batch, 'g')
loss_plot.set_xlim([batches[0], batches[-1]])
acc_plot = plt.subplot(212)
acc_plot.set_title('Accuracy')
acc_plot.plot(batches, train_acc_batch, 'r', label='Training Accuracy')
acc_plot.plot(batches, valid_acc_batch, 'x', label='Validation Accuracy')
acc_plot.set_ylim([0, 1.0])
acc_plot.set_xlim([batches[0], batches[-1]])
acc_plot.legend(loc=4)
plt.tight_layout()
plt.show()
print('Validation accuracy for {} epochs at learning rate {}: {}'.format(epochs, learning_rate, validation_accuracy))
### DON'T MODIFY ANYTHING BELOW ###
# The accuracy measured against the test set
test_accuracy = 0.0
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer
_ = session.run(optimizer, feed_dict={features: batch_features, labels: batch_labels})
# Check accuracy against Test data
test_accuracy = session.run(accuracy, feed_dict=test_feed_dict)
assert test_accuracy >= 0.80, 'Test accuracy at {}, should be equal to or greater than 0.80'.format(test_accuracy)
print('Nice Job! Test Accuracy is {}'.format(test_accuracy))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: REINFORCE agent
Step2: Hyperparameters
Step3: Environment
Step4: We can render this environment to see how it looks. A free-swinging pole is attached to a cart. The goal is to move the cart right or left in order to keep the pole pointing up.
Step5: The time_step = environment.step(action) statement takes action in the environment. The TimeStep tuple returned contains the environment's next observation and reward for that action. The time_step_spec() and action_spec() methods in the environment return the specifications (types, shapes, bounds) of the time_step and action respectively.
Step6: So, we see that observation is an array of 4 floats: the position and velocity of the cart, and the angular position and velocity of the pole.
Step7: Usually we create two environments: one for training and one for evaluation.
Step8: Agent
Step9: We also need an optimizer to train the network we just created, and a train_step_counter variable to keep track of how many times the network was updated.
Step10: Policies
Step11: Metrics and Evaluation
Step12: Replay Buffer
Step13: For most agents, the collect_data_spec is a Trajectory named tuple containing the observation, action, reward etc.
Step14: Training the agent
Step15: Visualization
Step17: Videos
Step18: The following code visualizes the agent's policy for a few episodes
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!sudo apt-get update
!sudo apt-get install -y xvfb ffmpeg freeglut3-dev
!pip install 'imageio==2.4.0'
!pip install pyvirtualdisplay
!pip install tf-agents[reverb]
!pip install pyglet xvfbwrapper
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import base64
import imageio
import IPython
import matplotlib.pyplot as plt
import numpy as np
import PIL.Image
import pyvirtualdisplay
import reverb
import tensorflow as tf
from tf_agents.agents.reinforce import reinforce_agent
from tf_agents.drivers import py_driver
from tf_agents.environments import suite_gym
from tf_agents.environments import tf_py_environment
from tf_agents.networks import actor_distribution_network
from tf_agents.policies import py_tf_eager_policy
from tf_agents.replay_buffers import reverb_replay_buffer
from tf_agents.replay_buffers import reverb_utils
from tf_agents.specs import tensor_spec
from tf_agents.trajectories import trajectory
from tf_agents.utils import common
# Set up a virtual display for rendering OpenAI gym environments.
display = pyvirtualdisplay.Display(visible=0, size=(1400, 900)).start()
env_name = "CartPole-v0" # @param {type:"string"}
num_iterations = 250 # @param {type:"integer"}
collect_episodes_per_iteration = 2 # @param {type:"integer"}
replay_buffer_capacity = 2000 # @param {type:"integer"}
fc_layer_params = (100,)
learning_rate = 1e-3 # @param {type:"number"}
log_interval = 25 # @param {type:"integer"}
num_eval_episodes = 10 # @param {type:"integer"}
eval_interval = 50 # @param {type:"integer"}
env = suite_gym.load(env_name)
#@test {"skip": true}
env.reset()
PIL.Image.fromarray(env.render())
print('Observation Spec:')
print(env.time_step_spec().observation)
print('Action Spec:')
print(env.action_spec())
time_step = env.reset()
print('Time step:')
print(time_step)
action = np.array(1, dtype=np.int32)
next_time_step = env.step(action)
print('Next time step:')
print(next_time_step)
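# (Added note) a TimeStep is a named tuple of (step_type, reward, discount,
# observation); step() applies the action and returns the next TimeStep.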
train_py_env = suite_gym.load(env_name)
eval_py_env = suite_gym.load(env_name)
train_env = tf_py_environment.TFPyEnvironment(train_py_env)
eval_env = tf_py_environment.TFPyEnvironment(eval_py_env)
actor_net = actor_distribution_network.ActorDistributionNetwork(
train_env.observation_spec(),
train_env.action_spec(),
fc_layer_params=fc_layer_params)
optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)
train_step_counter = tf.Variable(0)
tf_agent = reinforce_agent.ReinforceAgent(
train_env.time_step_spec(),
train_env.action_spec(),
actor_network=actor_net,
optimizer=optimizer,
normalize_returns=True,
train_step_counter=train_step_counter)
tf_agent.initialize()
eval_policy = tf_agent.policy
collect_policy = tf_agent.collect_policy
#@test {"skip": true}
def compute_avg_return(environment, policy, num_episodes=10):
total_return = 0.0
for _ in range(num_episodes):
time_step = environment.reset()
episode_return = 0.0
while not time_step.is_last():
action_step = policy.action(time_step)
time_step = environment.step(action_step.action)
episode_return += time_step.reward
total_return += episode_return
avg_return = total_return / num_episodes
return avg_return.numpy()[0]
# Please also see the metrics module for standard implementations of different
# metrics.
table_name = 'uniform_table'
replay_buffer_signature = tensor_spec.from_spec(
tf_agent.collect_data_spec)
replay_buffer_signature = tensor_spec.add_outer_dim(
replay_buffer_signature)
table = reverb.Table(
table_name,
max_size=replay_buffer_capacity,
sampler=reverb.selectors.Uniform(),
remover=reverb.selectors.Fifo(),
rate_limiter=reverb.rate_limiters.MinSize(1),
signature=replay_buffer_signature)
reverb_server = reverb.Server([table])
replay_buffer = reverb_replay_buffer.ReverbReplayBuffer(
tf_agent.collect_data_spec,
table_name=table_name,
sequence_length=None,
local_server=reverb_server)
rb_observer = reverb_utils.ReverbAddEpisodeObserver(
replay_buffer.py_client,
table_name,
replay_buffer_capacity
)
#@test {"skip": true}
def collect_episode(environment, policy, num_episodes):
driver = py_driver.PyDriver(
environment,
py_tf_eager_policy.PyTFEagerPolicy(
policy, use_tf_function=True),
[rb_observer],
max_episodes=num_episodes)
initial_time_step = environment.reset()
driver.run(initial_time_step)
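# (Added usage sketch, assuming the environments and agent defined above)
# e.g.: collect_episode(train_py_env, tf_agent.collect_policy, num_episodes=2)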
#@test {"skip": true}
# %%time (cell magic kept as a comment: it is only valid as the very first line
# of a notebook cell, so wrapping it in try/except is a SyntaxError in plain Python)
# (Optional) Optimize by wrapping some of the code in a graph using TF function.
tf_agent.train = common.function(tf_agent.train)
# Reset the train step
tf_agent.train_step_counter.assign(0)
# Evaluate the agent's policy once before training.
avg_return = compute_avg_return(eval_env, tf_agent.policy, num_eval_episodes)
returns = [avg_return]
for _ in range(num_iterations):
# Collect a few episodes using collect_policy and save to the replay buffer.
collect_episode(
train_py_env, tf_agent.collect_policy, collect_episodes_per_iteration)
# Use data from the buffer and update the agent's network.
iterator = iter(replay_buffer.as_dataset(sample_batch_size=1))
trajectories, _ = next(iterator)
train_loss = tf_agent.train(experience=trajectories)
replay_buffer.clear()
step = tf_agent.train_step_counter.numpy()
if step % log_interval == 0:
print('step = {0}: loss = {1}'.format(step, train_loss.loss))
if step % eval_interval == 0:
avg_return = compute_avg_return(eval_env, tf_agent.policy, num_eval_episodes)
print('step = {0}: Average Return = {1}'.format(step, avg_return))
returns.append(avg_return)
#@test {"skip": true}
steps = range(0, num_iterations + 1, eval_interval)
plt.plot(steps, returns)
plt.ylabel('Average Return')
plt.xlabel('Step')
plt.ylim(top=250)
def embed_mp4(filename):
"""Embeds an mp4 file in the notebook."""
video = open(filename,'rb').read()
b64 = base64.b64encode(video)
tag = '''
<video width="640" height="480" controls>
<source src="data:video/mp4;base64,{0}" type="video/mp4">
Your browser does not support the video tag.
</video>'''.format(b64.decode())
return IPython.display.HTML(tag)
num_episodes = 3
video_filename = 'imageio.mp4'
with imageio.get_writer(video_filename, fps=60) as video:
for _ in range(num_episodes):
time_step = eval_env.reset()
video.append_data(eval_py_env.render())
while not time_step.is_last():
action_step = tf_agent.policy.action(time_step)
time_step = eval_env.step(action_step.action)
video.append_data(eval_py_env.render())
embed_mp4(video_filename)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: One of the key assumptions of (Static) Nested Sampling is that we "shrink" the prior volume at each iteration according to a known statistical distribution, which we can test empirically.
Step2: We will now sample from this distribution using 'multi'. We will change the defaults so that our bounding updates begin immediately.
Step3: Let's now compare the set of samples with the expected theoretical shrinkage. The contours for the bounding volumes should not bias this shrinkage if the bounds are behaving properly.
Step4: Now let's turn bootstrapping off.
Step5: We see that without incorporating the bootstrap expansion factors the ellipsoids have a tendency to over-constrain the remaining prior volume and shrink too quickly. What happens if we increase the number of dimensions?
Step6: As expected, these trends get substantially worse as we move to higher dimensions. To mitigate this trend, in addition to bootstrapping dynesty also incorporates a built-in enlargement factor to increase the size of the bounding ellipsoids, as well as a more conservative decomposition algorithm. Ultimately, however, the better approach is to use a sampling method that is less sensitive to the bounding distributions, as shown below.
|
<ASSISTANT_TASK:>
Python Code:
# system functions that are always useful to have
import time, sys, os
# basic numeric setup
import numpy as np
# inline plotting
%matplotlib inline
# plotting
import matplotlib
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
# seed the random number generator
rstate = np.random.default_rng(121)
# re-defining plotting defaults
from matplotlib import rcParams
rcParams.update({'xtick.major.pad': '7.0'})
rcParams.update({'xtick.major.size': '7.5'})
rcParams.update({'xtick.major.width': '1.5'})
rcParams.update({'xtick.minor.pad': '7.0'})
rcParams.update({'xtick.minor.size': '3.5'})
rcParams.update({'xtick.minor.width': '1.0'})
rcParams.update({'ytick.major.pad': '7.0'})
rcParams.update({'ytick.major.size': '7.5'})
rcParams.update({'ytick.major.width': '1.5'})
rcParams.update({'ytick.minor.pad': '7.0'})
rcParams.update({'ytick.minor.size': '3.5'})
rcParams.update({'ytick.minor.width': '1.0'})
rcParams.update({'font.size': 30})
import dynesty
# define the log-likelihood (a "hyper-pyramid" whose iso-likelihood contours are cubes)
s, sigma = 100., 1.
def loglike(x):
return -max(abs((x - 0.5) / sigma))**(1. / s)
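# (Added note) the iso-likelihood contours of this function are cubes centred
# on x = 0.5, so the prior volume enclosed at a given logL is analytic:
# vol = (2 * (-logL)**s)**ndim, which is what the shrinkage checks below use.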
# define the prior transform
def prior_transform(x):
return x
# plot the log-likelihood surface
plt.figure(figsize=(10., 10.))
axes = plt.axes(aspect=1)
xx, yy = np.meshgrid(np.linspace(0., 1., 200),
np.linspace(0., 1., 200))
L = np.array([loglike(np.array([x, y]))
for x, y in zip(xx.flatten(), yy.flatten())])
L = L.reshape(xx.shape)
axes.contourf(xx, yy, L, 200, cmap=plt.cm.Purples_r)
plt.title('Log-Likelihood Surface', y=1.01)
plt.xlabel(r'$x$')
plt.ylabel(r'$y$')
plt.tight_layout()
ndim = 2
nlive = 500
sampler = dynesty.NestedSampler(loglike, prior_transform, ndim=ndim, bootstrap=50,
first_update={'min_ncall': 0, 'min_eff': 100.},
bound='multi', sample='unif', nlive=nlive,
rstate=rstate)
sampler.run_nested(dlogz=0.01, maxiter=1500, add_live=False)
res = sampler.results
from scipy.stats import kstest
vol = (2 * (-res['logl'])**s)**ndim # real volumes
t = vol[1:] / vol[:-1] # shrinkage
S = 1 - t**(1. / ndim) # slice
# define our PDF/CDF
def pdf(s):
return ndim * nlive * (1. - s)**(ndim * nlive - 1.)
def cdf(s):
return 1. - (1. - s)**(ndim * nlive)
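# (Added sanity check, not in the original demo) each slice's shrinkage follows
# a Beta(1, ndim * nlive) distribution, whose mean is 1 / (ndim * nlive + 1):
print("Expected mean shrinkage: {:.3e}, observed: {:.3e}".format(1. / (ndim * nlive + 1), S.mean()))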
# check whether the two distributions are consistent
k_dist, k_pval = kstest(S, cdf)
# plot results
xgrid = np.linspace(0., 0.1, 10000)
# PDF
fig, axes = plt.subplots(1, 2, figsize=(20, 6))
ax = axes[0]
pdfgrid = pdf(xgrid)
n, b, p = ax.hist(S * 1e3, bins=50, histtype='step',
color='navy', density=True, lw=4, label='Samples')
ax.hist(xgrid * 1e3, bins=b, color='red', density=True,
weights=pdfgrid, lw=4, histtype='step', label='Theory')
ax.set_xlabel('Shrinkage [1e-3]')
ax.set_xlim([0., 5])
ax.set_ylabel('PDF')
ax.legend()
# CDF
ax = axes[1]
cdfgrid = cdf(xgrid)
n, b, p = ax.hist(S * 1e3, bins=50, histtype='step',
cumulative=True, color='navy',
density=True, lw=4, label='Samples')
ax.hist(xgrid * 1e3, bins=b, color='red', density=True,
weights=pdfgrid, lw=4, cumulative=True,
histtype='step', label='Theory')
ax.set_xlabel('Shrinkage [1e-3]')
ax.set_xlim([0., 5])
ax.set_ylabel('CDF')
ax.text(0.95, 0.2, 'dist: {:6.3}'.format(k_dist),
horizontalalignment='right', verticalalignment='center',
transform=ax.transAxes)
ax.text(0.95, 0.1, 'p-value: {:6.3}'.format(k_pval),
horizontalalignment='right', verticalalignment='center',
transform=ax.transAxes)
plt.tight_layout()
ndim = 2
sampler = dynesty.NestedSampler(loglike, prior_transform, ndim=ndim,
bootstrap=0,
enlarge=1,
bound='multi', sample='unif', nlive=nlive,
first_update={'min_ncall': 0, 'min_eff': 100.},
rstate=rstate)
sampler.run_nested(dlogz=0.01, maxiter=1500, add_live=False)
res = sampler.results
vol = (2 * (-res['logl'])**s)**ndim # real volumes
t = vol[1:] / vol[:-1] # shrinkage
S = 1 - t**(1. / ndim) # slice
# check whether the two distributions are consistent
k_dist, k_pval = kstest(S, cdf)
# PDF
fig, axes = plt.subplots(1, 2, figsize=(20, 6))
ax = axes[0]
pdfgrid = pdf(xgrid)
n, b, p = ax.hist(S * 1e3, bins=50, histtype='step',
color='dodgerblue', density=True, lw=4, label='Samples')
ax.hist(xgrid * 1e3, bins=b, color='red', density=True,
weights=pdfgrid, lw=4, histtype='step', label='Theory')
ax.set_xlabel('Shrinkage [1e-3]')
ax.set_xlim([0., 5])
ax.set_ylabel('PDF')
ax.legend()
# CDF
ax = axes[1]
cdfgrid = cdf(xgrid)
n, b, p = ax.hist(S * 1e3, bins=50, histtype='step',
cumulative=True, color='dodgerblue',
density=True, lw=4, label='Samples')
ax.hist(xgrid * 1e3, bins=b, color='red', density=True,
weights=pdfgrid, lw=4, cumulative=True,
histtype='step', label='Theory')
ax.set_xlabel('Shrinkage [1e-3]')
ax.set_xlim([0., 5])
ax.set_ylabel('CDF')
ax.text(0.95, 0.2, 'dist: {:6.3}'.format(k_dist),
horizontalalignment='right', verticalalignment='center',
transform=ax.transAxes)
ax.text(0.95, 0.1, 'p-value: {:6.3}'.format(k_pval),
horizontalalignment='right', verticalalignment='center',
transform=ax.transAxes)
plt.tight_layout()
ndim = 7
sampler = dynesty.NestedSampler(loglike, prior_transform, ndim=ndim, bootstrap=50,
bound='multi', sample='unif', nlive=nlive,
first_update={'min_ncall': 0, 'min_eff': 100.},
rstate=rstate)
sampler.run_nested(dlogz=0.01, maxiter=1500, add_live=False)
res = sampler.results
vol = (2 * (-res['logl'])**s)**ndim # real volumes
t = vol[1:] / vol[:-1] # shrinkage
S = 1 - t**(1. / ndim) # slice
# check whether the two distributions are consistent
k_dist, k_pval = kstest(S, cdf)
# PDF
fig, axes = plt.subplots(1, 2, figsize=(20, 6))
ax = axes[0]
pdfgrid = pdf(xgrid)
n, b, p = ax.hist(S * 1e3, bins=50, histtype='step',
color='navy', density=True, lw=4, label='Samples')
ax.hist(xgrid * 1e3, bins=b, color='red', density=True,
weights=pdfgrid, lw=4, histtype='step', label='Theory')
ax.set_xlabel('Shrinkage [1e-3]')
ax.set_xlim([0., 1.5])
ax.set_ylabel('PDF')
ax.legend()
# CDF
ax = axes[1]
cdfgrid = cdf(xgrid)
n, b, p = ax.hist(S * 1e3, bins=50, histtype='step',
cumulative=True, color='navy',
density=True, lw=4, label='Samples')
ax.hist(xgrid * 1e3, bins=b, color='red', density=True,
weights=pdfgrid, lw=4, cumulative=True,
histtype='step', label='Theory')
ax.set_xlabel('Shrinkage [1e-3]')
ax.set_xlim([0., 1.5])
ax.set_ylabel('CDF')
ax.text(0.95, 0.2, 'dist: {:6.3}'.format(k_dist),
horizontalalignment='right', verticalalignment='center',
transform=ax.transAxes)
ax.text(0.95, 0.1, 'p-value: {:6.3}'.format(k_pval),
horizontalalignment='right', verticalalignment='center',
transform=ax.transAxes)
plt.tight_layout()
ndim = 7
sampler = dynesty.NestedSampler(loglike, prior_transform, ndim=ndim,
bound='multi', sample='unif', nlive=nlive,
first_update={'min_ncall': 0, 'min_eff': 100.},
rstate=rstate)
sampler.run_nested(dlogz=0.01, maxiter=1500, add_live=False)
res = sampler.results
vol = (2 * (-res['logl'])**s)**ndim # real volumes
t = vol[1:] / vol[:-1] # shrinkage
S = 1 - t**(1. / ndim) # slice
# check whether the two distributions are consistent
k_dist, k_pval = kstest(S, cdf)
# PDF
fig, axes = plt.subplots(1, 2, figsize=(20, 6))
ax = axes[0]
pdfgrid = pdf(xgrid)
n, b, p = ax.hist(S * 1e3, bins=50, histtype='step',
color='dodgerblue', density=True, lw=4, label='Samples')
ax.hist(xgrid * 1e3, bins=b, color='red', density=True,
weights=pdfgrid, lw=4, histtype='step', label='Theory')
ax.set_xlabel('Shrinkage [1e-3]')
ax.set_xlim([0., 1.5])
ax.set_ylabel('PDF')
ax.legend()
# CDF
ax = axes[1]
cdfgrid = cdf(xgrid)
n, b, p = ax.hist(S * 1e3, bins=50, histtype='step',
cumulative=True, color='dodgerblue',
density=True, lw=4, label='Samples')
ax.hist(xgrid * 1e3, bins=b, color='red', density=True,
weights=pdfgrid, lw=4, cumulative=True,
histtype='step', label='Theory')
ax.set_xlabel('Shrinkage [1e-3]')
ax.set_xlim([0., 1.5])
ax.set_ylabel('CDF')
ax.text(0.95, 0.2, 'dist: {:6.3}'.format(k_dist),
horizontalalignment='right', verticalalignment='center',
transform=ax.transAxes)
ax.text(0.95, 0.1, 'p-value: {:6.3}'.format(k_pval),
horizontalalignment='right', verticalalignment='center',
transform=ax.transAxes)
plt.tight_layout()
ndim = 7
sampler = dynesty.NestedSampler(loglike, prior_transform, ndim=ndim,
bound='multi', sample='rslice', nlive=nlive,
first_update={'min_ncall': 0, 'min_eff': 100.},
rstate=rstate)
sampler.run_nested(dlogz=0.01, maxiter=1500, add_live=False)
res = sampler.results
vol = (2 * (-res['logl'])**s)**ndim # real volumes
t = vol[1:] / vol[:-1] # shrinkage
S = 1 - t**(1. / ndim) # slice
# check whether the two distributions are consistent
k_dist, k_pval = kstest(S, cdf)
# PDF
fig, axes = plt.subplots(1, 2, figsize=(20, 6))
ax = axes[0]
pdfgrid = pdf(xgrid)
n, b, p = ax.hist(S * 1e3, bins=50, histtype='step',
color='gray', density=True, lw=4, label='Samples')
ax.hist(xgrid * 1e3, bins=b, color='red', density=True,
weights=pdfgrid, lw=4, histtype='step', label='Theory')
ax.set_xlabel('Shrinkage [1e-3]')
ax.set_xlim([0., 1.5])
ax.set_ylabel('PDF')
ax.legend()
# CDF
ax = axes[1]
cdfgrid = cdf(xgrid)
n, b, p = ax.hist(S * 1e3, bins=50, histtype='step',
cumulative=True, color='gray',
density=True, lw=4, label='Samples')
ax.hist(xgrid * 1e3, bins=b, color='red', density=True,
weights=pdfgrid, lw=4, cumulative=True,
histtype='step', label='Theory')
ax.set_xlabel('Shrinkage [1e-3]')
ax.set_xlim([0., 1.5])
ax.set_ylabel('CDF')
ax.text(0.95, 0.2, 'dist: {:6.3}'.format(k_dist),
horizontalalignment='right', verticalalignment='center',
transform=ax.transAxes)
ax.text(0.95, 0.1, 'p-value: {:6.3}'.format(k_pval),
horizontalalignment='right', verticalalignment='center',
transform=ax.transAxes)
plt.tight_layout()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now we need the particle physics specific libraries. If you installed the libraries from the command shell properly as shown in the local setup tutorial, then most of the work needed to use these should already be done. From here it should be as simple as using the import command like for the included libraries.
Step2: With all of the tools imported, we now need our data files. Included in pps_tools are several functions for this purpose. Currently all of the files for the activities are located in this Google Drive folder. The function you will need to download files from this Drive is download_drive_file, for which the argument is simply the file name. You can also use the function download_file_from_google_drive if you want to save the file you download under a different name, or if for some reason you need to download a file from Google Drive that is not included, in which case you will need to put the file's id tag instead of the filename.
Step3: With the tools imported and files downloaded, you should now have everything you need to start interfacing with the data!
Step4: To organize the data in a way that makes it easy to find what you need, you will need to use the hep tools we imported. This can be done in several ways.
Step5: This returns a list called collisions which has all of the collision events as entries. Each event is in turn its own list whose entries are the different types of particles involved in that collision. These are also lists, containing each individual particle of that particular type as entries, which are also lists of the four-momentum and other characteristics of each particle.
Step6: You might notice that each individual event is callable from all collisions by its entry number, as are the individual particles from within their lists of particle types. However, the particle types themselves are only callable from the event list by their names. The characteristics of each particle are also only callable from their lists by the name of the characteristic. The exact dictionary entry needed to call them can be referenced by printing event.keys as above.
Step7: More involved way
Step8: The above commands do not actually make the data directly usable; we need one more step for that, which is the get_collision function. This function is different from the get_collisions function used in the simpler method in that it only pulls out the information of a single event rather than all of them. This means that to get information from multiple events, you will need to use this command in a loop, for which you can define a range that determines what events you actually want to use.
Step9: Other than that get_collision only gets the information from one event rather than all of them, it essentially organizes the information in the same way that get_collisions does. You can interact with this data the same way you would for any individual event from the big list of events that get_collisions would give you.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pylab as plt
%matplotlib notebook
import h5hep
import pps_tools as pps
filename = 'dimuons_1000_collisions.hdf5'
pps.download_drive_file(filename)
### Other examples: ###
#pps.download_file_from_google_drive('dimuons_1000_collisions.hdf5','data/file.hdf5')
#pps.download_file_from_google_drive('<google drive file id>','data/file.hdf5')
#pps.download_file('https://github.com/particle-physics-playground/playground/blob/master/data/dimuons_1000_collisions.hdf5')
# Print the keys to see what is in the dictionary (OPTIONAL; assumes that
# event is a single collision from the list loaded below, e.g. event = collisions[0])
for key in event.keys():
print(key)
infile = '../data/dimuons_1000_collisions.hdf5'
collisions = pps.get_collisions(infile,experiment='CMS',verbose=False)
print(len(collisions), " collisions") # This line is optional, and simply tells you how many events are in the file.
second_collision = collisions[1] # the second event (list indexes start at 0)
print("Second event: ",second_collision)
all_muons = second_collision['muons'] # all of the muons in the second event
print("All muons: ",all_muons)
first_muon = all_muons[0] # the first muon in the second event
print("First muon: ",first_muon)
muon_energy = first_muon['e'] # the energy of the first muon
print("First muon's energy: ",muon_energy)
energies = []
for collision in collisions: # loops over all the events in the file
jets = collision['jets'] # gets the list of all jets in the event
for jet in jets: # loops over each jet in the current event
e = jet['e'] # gets the energy of the jet
energies.append(e) # puts the energy in a list
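# (Added illustration, not in the original activity) visualize the jet energies:
plt.figure()
plt.hist(energies, bins=50)
plt.xlabel('Jet energy')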
infile = '../data/dimuons_1000_collisions.hdf5'
alldata = pps.get_all_data(infile,verbose=False)
nentries = pps.get_number_of_entries(alldata)
print("# entries: ",nentries) # This optional line tells you how many events are in the file
for entry in range(nentries): # This range will loop over ALL of the events
collision = pps.get_collision(alldata,entry_number=entry,experiment='CMS')
for entry in range(0,int(nentries/2)): # This range will loop over the first half of the events
collision = pps.get_collision(alldata,entry_number=entry,experiment='CMS')
for entry in range(int(nentries/2),nentries): # This range will loop over the second half of the events
collision = pps.get_collision(alldata,entry_number=entry,experiment='CMS')
energies = []
for event in range(0,int(nentries/3)): # Loops over first 3rd of all events
collision = pps.get_collision(alldata,entry_number=event,experiment='CMS') # organizes the data so you can interface with it
jets = collision['jets'] # gets the list of all jets in the current event
for jet in jets: # loops over all jets in the event
e = jet['e'] # gets the energy of the jet
energies.append(e) # adds the energy to a list
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Download raw data
Step3: Itemized receipts reported by campaigns
Step5: Hack down RCPT_CD table outside of Python
Step6: Concatenate them together into one DataFrame.
Step8: Remove amended filings
Step9: Filter the table down to monetary contributions reported via Schedule A.
Step10: Reduce the number of the headers down to the ones we want to keep
Step11: Rename the ugly ones
Step12: Import and trim the FILER_FILINGS_CD table
Step13: Since this table does not indicate if the filing is an amendment, let's just reduce it to the distinct connections between filers and filings.
Step14: Import the shortlist of Prop. 64 committees we want to study
Step15: Here are the committees the state lists as opposing the measure.
Step16: Join Prop. 64 committees to the contributions they've received
Step17: Export the data to a CSV file
|
<ASSISTANT_TASK:>
Python Code:
import os
import requests
from datetime import datetime
from clint.textui import progress
import pandas
pandas.set_option('display.float_format', lambda x: '%.2f' % x)
pandas.set_option('display.max_columns', None)
def download_csv(name):
"""Accepts the name of a calaccess.download CSV and returns its path."""
path = os.path.join(os.getcwd(), '{}.csv'.format(name))
if not os.path.exists(path):
url = "http://calaccess.download/latest/{}.csv".format(name)
r = requests.get(url, stream=True)
with open(path, 'w') as f:
total_length = int(r.headers.get('content-length'))
for chunk in progress.bar(r.iter_content(chunk_size=1024), expected_size=(total_length/1024) + 1):
if chunk:
f.write(chunk)
f.flush()
return path
rcpt_path = download_csv("rcpt_cd")
ff_path = download_csv("filer_filings_cd")
def rcpt_part_to_dataframe(part_name):
"""Import a slice of the RCPT_CD table prepared for this notebook."""
file_name = "rcpt_cd_parta{}.csv".format(part_name)
path = os.path.join(os.getcwd(), file_name)
return pandas.read_csv(path, sep=',', dtype="unicode")
itemized_receipts_df_h = rcpt_part_to_dataframe("h")
itemized_receipts_df_i = rcpt_part_to_dataframe("i")
itemized_receipts_df_j = rcpt_part_to_dataframe("j")
recent_itemized_receipts = pandas.concat([
itemized_receipts_df_h,
itemized_receipts_df_i,
itemized_receipts_df_j
])
def remove_amended_filings(df):
"""Accepts a dataframe with FILING_ID and AMEND_ID fields.
Returns only the highest amendment for each unique filing id.
"""
max_amendments = df.groupby('FILING_ID')['AMEND_ID'].agg("max").reset_index()
merged_df = pandas.merge(df, max_amendments, how='inner', on=['FILING_ID', 'AMEND_ID'])
print "Removed {} amendments".format(len(df)-len(merged_df))
print "DataFrame now contains {} rows".format(len(merged_df))
return merged_df
real_recent_itemized_receipts = remove_amended_filings(recent_itemized_receipts)
real_sked_a = real_recent_itemized_receipts[
real_recent_itemized_receipts['FORM_TYPE'] == 'A'
]
trimmed_itemized = real_sked_a[[
'FILING_ID',
'AMEND_ID',
'CTRIB_NAMF',
'CTRIB_NAML',
'CTRIB_CITY',
'CTRIB_ST',
'CTRIB_ZIP4',
'CTRIB_EMP',
'CTRIB_OCC',
'RCPT_DATE',
'AMOUNT',
]]
clean_itemized = trimmed_itemized.rename(
index=str,
columns={
"CTRIB_NAMF": "FIRST_NAME",
"CTRIB_NAML": "LAST_NAME",
"CTRIB_CITY": "CITY",
"CTRIB_ST": "STATE",
"CTRIB_ZIP4": "ZIPCODE",
"CTRIB_EMP": "EMPLOYER",
"CTRIB_OCC": "OCCUPATION",
"RCPT_DATE": "DATE"
}
)
filer_filings_df = pandas.read_csv(ff_path, sep=',', index_col=False, dtype='unicode')
filer_to_filing = filer_filings_df[['FILER_ID', 'FILING_ID']].drop_duplicates()
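# (Added sanity check) how many distinct filer-filing pairs remain:
print("{} distinct filer-filing pairs".format(len(filer_to_filing)))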
supporting_committees = pandas.DataFrame([
{"COMMITTEE_ID":"1343793","COMMITTEE_NAME":"Californians for Responsible Marijuana Reform, Sponsored by Drug Policy Action, Yes on Prop. 64"},
{"COMMITTEE_ID":"1376077","COMMITTEE_NAME":"Californians for Sensible Reform, Sponsored by Ghost Management Group, LLC dba Weedmaps"},
{"COMMITTEE_ID":"1385506","COMMITTEE_NAME":"Drug Policy Action - Non Profit 501c4, Yes on Prop. 64"},
{"COMMITTEE_ID":"1385745","COMMITTEE_NAME":"Fund for Policy Reform (Nonprofit 501(C)(4))"},
{"COMMITTEE_ID":"1371855","COMMITTEE_NAME":"Marijuana Policy Project of California"},
{"COMMITTEE_ID":"1382525","COMMITTEE_NAME":"New Approach PAC (MPO)"},
{"COMMITTEE_ID":"1386560","COMMITTEE_NAME":"The Adult Use Campaign for Proposition 64"},
{"COMMITTEE_ID":"1381808","COMMITTEE_NAME":"Yes on 64, Californians to Control, Regulate and Tax Adult Use of Marijuana While Protecting Children, Sponsored by Business, Physicians, Environmental and Social-Justice Advocate Organizations"}
])
supporting_committees['COMMITTEE_POSITION'] = 'SUPPORT'
opposing_committees = pandas.DataFrame([
{"COMMITTEE_ID":"1382568","COMMITTEE_NAME":"No on Prop. 64, Sponsored by California Public Safety Institute"},
{"COMMITTEE_ID":"1387789","COMMITTEE_NAME":"Sam Action, Inc., a Committee Against Proposition 64 with Help from Citizens (NonProfit 501(C)(4))"}
])
opposing_committees['COMMITTEE_POSITION'] = 'OPPOSE'
prop_64_committees = pandas.concat([supporting_committees, opposing_committees])
prop_64_filings = filer_to_filing.merge(
prop_64_committees,
how="inner",
left_on='FILER_ID',
right_on="COMMITTEE_ID"
)
prop_64_itemized = prop_64_filings.merge(
clean_itemized,
how="inner",
left_on="FILING_ID",
right_on="FILING_ID"
)
print(len(prop_64_itemized))
prop_64_itemized.drop('FILER_ID', axis=1, inplace=True)
prop_64_itemized.to_csv("./prop_64_contributions.csv", index=False)
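# (Added illustration, not part of the original analysis) AMOUNT was read in as
# a string (dtype='unicode'), so cast it to float before aggregating, e.g. total
# contributions by committee position:
amounts = prop_64_itemized['AMOUNT'].astype(float)
print(amounts.groupby(prop_64_itemized['COMMITTEE_POSITION']).sum())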
<END_TASK>
|